If you are shopping for colocation space today, the spec sheet you used three years ago is obsolete.
For the last decade, buying data center space was mostly about square footage and connectivity. Power density per rack typically ran 4 to 8 kW, and almost any Tier III facility could handle it.
But AI training and inference workloads place unprecedented demands on physical infrastructure. A new generation of GPU clusters demands much higher power density per rack and different approaches to cooling and airflow.
For procurement teams and CIOs evaluating colocation partners, this creates a challenge. Nearly every provider now claims to be "AI-ready," but very few are built to support high-density workloads at scale. The only reliable way to tell the difference is to ask questions that expose what a facility can actually handle.
Here are the three most important questions to ask during your next site tour, and the answers to look for.
Question #1: Can You Support 50 kW+ Per Cabinet?
Most traditional colocation environments were designed for workloads drawing between 4 and 8 kW per rack. AI training and inference clusters built around advanced GPUs routinely exceed 50 kW per cabinet, and they do so continuously. This sustained demand stresses power distribution, cooling, and airflow simultaneously.
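To make that density gap concrete, here is a back-of-envelope sketch (illustrative, not from any specific facility) using the standard conversion that essentially all electrical power drawn by IT equipment is rejected as heat, at roughly 3,412 BTU/hr per kW:

```python
# Back-of-envelope heat load per cabinet. IT equipment converts nearly
# all the power it draws into heat: 1 kW = 3,412 BTU/hr.
KW_TO_BTU_HR = 3412

def rack_heat_btu_hr(rack_kw: float) -> float:
    """Heat a cabinet rejects, in BTU/hr, for a given sustained draw in kW."""
    return rack_kw * KW_TO_BTU_HR

legacy = rack_heat_btu_hr(8)   # traditional colocation rack
ai = rack_heat_btu_hr(50)      # modern GPU cluster cabinet

print(f"8 kW rack:  {legacy:,.0f} BTU/hr")
print(f"50 kW rack: {ai:,.0f} BTU/hr ({ai / legacy:.2f}x the heat)")
```

A 50 kW cabinet rejects over six times the heat of a legacy 8 kW rack, continuously, from the same physical footprint, which is why power distribution, cooling, and airflow are all stressed at once.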
When evaluating a provider, the key word is sustained. Can they support 50 kW and beyond without hot spots, throttling, or special exceptions? Or does that number only exist in theory, dependent on neighboring cabinets staying empty or workloads being carefully staged?
If the answer comes with qualifiers, workarounds, or hesitation, that facility was likely not built for AI workloads from the ground up.
Question #2: Do You Use Containment Pods to Manage Airflow?
As rack density increases, airflow becomes just as critical as raw power delivery.
In legacy data halls, cold air is pushed into the room and hot air is extracted at a distance, relying on mixing and volume to regulate temperature. At sustained rack densities of 50 kW and beyond, that approach breaks down quickly. Hot and cold air mix unpredictably and cooling systems are forced to work harder to maintain safe operating temperatures.
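The breakdown can be quantified with the standard HVAC rule of thumb CFM = BTU/hr / (1.08 x delta-T in Fahrenheit), where the 1.08 factor bundles air density and specific heat at sea level. The rack figures and 20 F temperature rise below are illustrative assumptions:

```python
# Rough airflow required to carry a rack's heat away with air cooling:
#   CFM = (heat in BTU/hr) / (1.08 * delta_T_F)
# delta_T_F is the air temperature rise across the rack; 1.08 bundles
# air density and specific heat at sea-level conditions.
def required_cfm(rack_kw: float, delta_t_f: float = 20.0) -> float:
    """Approximate airflow in CFM needed to remove a rack's heat load."""
    btu_hr = rack_kw * 3412
    return btu_hr / (1.08 * delta_t_f)

print(f"8 kW rack:  {required_cfm(8):,.0f} CFM")
print(f"50 kW rack: {required_cfm(50):,.0f} CFM")
```

Pushing nearly 8,000 CFM through a single cabinet, and guaranteeing none of it short-circuits or mixes with hot exhaust, is what open-floor designs cannot reliably do.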
Cold-air containment pods (physical enclosures built around rows of racks) address this problem directly. By isolating cold supply air and controlling its path through each cabinet, containment prevents hot and cold air from mixing, stabilizes temperatures, and enables consistent performance even at very high rack densities.
Facilities that rely on open-floor airflow or ad hoc containment strategies will struggle to scale AI deployments reliably. If a provider cannot clearly explain how containment is implemented and why it matters, they are unlikely to support high-density AI workloads at scale. Hesitation signals that airflow is being managed by workaround rather than by design.
Question #3: Is Your Cooling Localized or Room-Based?
Room-based HVAC systems were designed for environments where heat is spread evenly across the data hall. AI workloads don't behave that way. They generate intense, sustained heat at the rack level, concentrated in a small physical footprint.
Localized cooling solutions, such as rear-door heat exchangers mounted directly on the back of server cabinets, capture and dissipate heat immediately as it exits the server. Removing heat where it's generated reduces strain on the broader cooling environment and makes sustained, high-density deployments viable.
Simply increasing airflow in a room designed decades ago is not sufficient. If a provider's cooling strategy is still centered on the room rather than the rack, they are operating at the limits of what their infrastructure can support. An inability to clearly explain how heat is removed at the cabinet level is a strong indicator that the facility was not designed for modern AI workloads.
What These Answers Reveal, and Why Location Matters
Taken together, these three questions reveal whether a colocation facility was designed for high-density AI workloads or is attempting to retrofit legacy infrastructure to meet modern demands.
That distinction matters everywhere, but it matters even more in dense urban markets like New York, where space, power, and cooling constraints are amplified. Facilities must deliver hyperscale-level density within city limits, without the luxury of sprawling campuses or unlimited utility capacity.
DataVergeโs facility at Industry City is designed to support 50+ kW high-density cabinets, uses cold-air containment pods to control airflow, and deploys rack-level heat capture to manage sustained AI workloads efficiently. As one of the few carrier-neutral colocation data centers in Brooklyn, we enable customers to deploy high-performance infrastructure close to users, networks, and cloud on-ramps without compromising on density or reliability.
If your current provider struggles to answer these questions clearly, it may be time to reassess what "AI-ready" really means, and start a different conversation.