Air cooling tops out at ~20 kW/rack. AI/HPC workloads need 30-100+ kW/rack. Liquid cooling becomes mandatory — and it changes the electrical design significantly: branch circuits become busway, UPS topology shifts, and per-rack power density climbs.
Why Liquid Cooling Now
Air cooling worked great when servers drew 5-10 kW per rack. Modern AI/HPC workloads (GPU clusters) push densities to 30-100+ kW per rack. Air cannot remove heat at this density without impractical airflow rates. Liquid cooling becomes mandatory.
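To put "impractical airflow rates" in numbers, the sensible-heat relation Q = ṁ · cp · ΔT gives the airflow a rack would need. A minimal sketch, assuming a 12 K server temperature rise and standard air properties (illustrative values, not figures from this text):

```python
# Rough airflow needed to carry a rack's heat in air (Q = m_dot * cp * dT).
# Assumptions: air density 1.2 kg/m^3, cp 1.005 kJ/kg-K, 12 K rise across
# the servers. Illustrative values only.
AIR_DENSITY = 1.2      # kg/m^3
AIR_CP = 1.005         # kJ/(kg*K)
DELTA_T = 12           # K rise from cold aisle to hot aisle
M3S_TO_CFM = 2118.9    # cubic meters/second -> cubic feet/minute

for rack_kw in (10, 20, 50, 100):
    m3_per_s = rack_kw / (AIR_DENSITY * AIR_CP * DELTA_T)
    print(f"{rack_kw:>4} kW rack -> {m3_per_s:5.2f} m^3/s "
          f"(~{m3_per_s * M3S_TO_CFM:,.0f} CFM)")
```

At 50-100 kW the answer is several thousand CFM per rack, which is why the argument stops at "impractical."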
| Cooling type | Max rack density | Typical PUE | Industry use |
| --- | --- | --- | --- |
| Traditional CRAC + raised floor | ~10 kW/rack | 1.6-2.0 | Legacy DCs |
| Hot aisle / cold aisle containment | ~20 kW/rack | 1.4-1.6 | Standard modern DCs (Atlas DC1) |
| In-row cooling | ~30 kW/rack | 1.3-1.5 | Mid-density colocation |
| Rear-door heat exchanger (RDHx) | 40-50 kW/rack | 1.2-1.3 | Higher-density traditional + early AI |
| Direct liquid cooling (DLC) — cold plates | 50-100+ kW/rack | 1.1-1.2 | NVIDIA H100/H200 clusters, custom AI accelerators |
| Immersion cooling — single-phase | 100-200+ kW/rack | 1.05-1.10 | Hyperscale AI training (Microsoft, Meta) |
| Immersion cooling — two-phase | 200-400+ kW/rack | 1.02-1.05 | Cutting-edge research (3M Novec) |
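Those PUE differences translate directly into facility power: total facility power = IT power × PUE. A small sketch using roughly the midpoints of the PUE ranges above and an assumed 1 MW IT load:

```python
# Facility power overhead implied by the PUE column: facility = IT * PUE.
# PUE values are roughly the midpoints of the table's ranges; 1 MW IT load
# is an assumed, illustrative figure.
IT_KW = 1000
for cooling, pue in [("Hot/cold aisle containment", 1.50),
                     ("Rear-door heat exchanger",   1.25),
                     ("Direct liquid cooling",      1.15),
                     ("Two-phase immersion",        1.03)]:
    overhead_kw = IT_KW * (pue - 1)
    print(f"{cooling:<28} PUE {pue:.2f} -> "
          f"{overhead_kw:,.0f} kW of cooling/overhead per MW of IT")
```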
Liquid Cooling Architectures
Rear-Door Heat Exchanger (RDHx)
A liquid-cooled coil mounted on the back of the rack. Hot exhaust air passes through the coil before returning to the room — heat transferred to chilled water. Server fans still push air; servers remain air-cooled.
| Aspect | RDHx detail |
| --- | --- |
| Cooling capacity | 30-50 kW per rack typical |
| Server modifications | None — works with stock air-cooled servers |
| Plumbing | Each rack needs supply + return chilled water connections |
| Failure mode | If the coil fails, hot air dumps into the room — adjacent racks may overheat |
| Water leakage protection | Drip pans + leak detection sensors at rack level |
| Electrical impact | None directly — server power same as air-cooled |
| Best for | Density bumps without liquid in the IT room (water stays in the coil only) |
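The chilled-water side of the coil follows the same heat balance as the air-side calculation earlier, Q = ṁ · cp · ΔT, just with water's much higher heat capacity. A sketch assuming a 6 K water temperature rise across the coil (an illustrative figure, not from the text):

```python
# Chilled-water flow needed for an RDHx coil: Q = m_dot * cp * dT.
# Assumptions: water cp 4.186 kJ/kg-K, 6 K rise across the coil.
WATER_CP = 4.186   # kJ/(kg*K)
DELTA_T = 6        # K rise across the coil

for coil_kw in (30, 40, 50):
    kg_per_s = coil_kw / (WATER_CP * DELTA_T)   # ~= liters per second
    print(f"{coil_kw} kW coil -> {kg_per_s:.1f} L/s "
          f"(~{kg_per_s * 60:.0f} L/min) of chilled water")
```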
Direct Liquid Cooling (DLC) — Cold Plates
Coolant circulated through metal cold plates mounted directly on hot components (CPU, GPU, memory). Coolant absorbs heat at the chip and carries it to a CDU (Coolant Distribution Unit) that exchanges with facility chilled water.
| Aspect | DLC detail |
| --- | --- |
| Cooling capacity | 50-100+ kW per rack |
| Server modifications | Required — server vendor builds with a DLC option (NVIDIA HGX H100, Intel Xeon Max, AMD EPYC liquid) |
| Coolant types | Treated water (most common), water-glycol, dielectric fluids (3M Novec 7000) |
| CDU (Coolant Distribution Unit) | Heat exchanger between the server-side coolant loop and facility chilled water; pumps + filtration |
| Manifolds | Plumbing inside each rack distributes coolant to server cold plates |
| Quick disconnects | Drip-free quick disconnects allow server pull/swap without draining the loop |
| Electrical impact | Reduces server fan power → ~5-10% IT power reduction → also reduces total facility power |
| Adoption (2026) | Standard for new AI deployments; retrofit of air-cooled facilities is complex |
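The "Electrical impact" row compounds: removing most server fan power shrinks the IT load, and the lower PUE then applies to that smaller base. A rough sketch with assumed numbers (1 MW of air-cooled IT load, ~8% of it in server fans, PUE 1.5 air vs 1.15 DLC — all illustrative):

```python
# How DLC's fan-power saving propagates to total facility power.
# Assumptions: 1,000 kW air-cooled IT load, ~8% of it spent on server fans
# that DLC mostly eliminates, PUE 1.5 (air) vs 1.15 (DLC). Illustrative only.
AIR_IT_KW, FAN_FRACTION = 1000, 0.08
PUE_AIR, PUE_DLC = 1.5, 1.15

dlc_it_kw = AIR_IT_KW * (1 - FAN_FRACTION)     # fans removed from IT load
facility_air = AIR_IT_KW * PUE_AIR
facility_dlc = dlc_it_kw * PUE_DLC
print(f"Air-cooled facility power: {facility_air:,.0f} kW")
print(f"DLC facility power:        {facility_dlc:,.0f} kW "
      f"({(1 - facility_dlc / facility_air) * 100:.0f}% lower)")
```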
Immersion Cooling
Servers fully submerged in dielectric fluid. Fluid removes heat directly from all components. No fans, no dust, no humidity issues. Single-phase keeps fluid liquid throughout; two-phase boils at chip temperature for higher heat transfer.
| Aspect | Single-phase immersion | Two-phase immersion |
| --- | --- | --- |
| Coolant | Mineral oil, synthetic dielectric (Engineered Fluids ElectroSafe) | Engineered fluorocarbon fluid (3M Novec) |
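A back-of-the-envelope view of why two-phase moves more heat per kilogram of fluid: single-phase relies on sensible heat (ṁ · cp · ΔT), two-phase on the latent heat of vaporization. All fluid properties below are rough, assumed values for illustration only:

```python
# Coolant mass flow to remove 100 kW: single-phase (sensible heat) vs
# two-phase (latent heat). Property values are rough assumptions:
# dielectric fluid cp ~2.0 kJ/kg-K with a 10 K allowable rise;
# fluorocarbon heat of vaporization ~140 kJ/kg.
HEAT_KW = 100
CP, DELTA_T = 2.0, 10        # single-phase: kJ/(kg*K), K
H_VAP = 140                  # two-phase: kJ/kg

single_phase_kg_s = HEAT_KW / (CP * DELTA_T)   # pumped flow, ~5 kg/s
two_phase_kg_s = HEAT_KW / H_VAP               # fluid boiled off, ~0.7 kg/s
print(f"Single-phase: {single_phase_kg_s:.1f} kg/s circulated")
print(f"Two-phase:    {two_phase_kg_s:.2f} kg/s vaporized (and recondensed)")
```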
Electrical changes required
- Power distribution: a 2,000 A busway down the row instead of a 42-circuit panel.
- Replace RPP with busway. Existing 400 A panelboard → install 2,000 A busway (Square D Powerlink or Eaton Pow-R-Line) along the ceiling of the row.
- Plug-in switches per rack. Each rack gets a 100 A plug-in fused disconnect (60 kW × 1.25 = 75 kVA; 75,000 / (415 V × √3) ≈ 104 A).
- Re-route CDU power. Each CDU draws ~20-30 kW; fed from the same busway or a separate PDU.
- UPS sizing. Total row load = 21 racks × 60 kW + 11 CDUs × 25 kW = 1,535 kW. Existing UPS-A1 is sized for 1,250 kVA — undersized for the AI conversion. Upsize the UPS or split the row across both UPS sides. (Both calculations are reproduced in the sketch below.)
- Plumbing. Run chilled water mains with 30°C supply (warmer than the existing 7°C — can reduce chiller energy).
- Leak detection. Add water sensors under the raised floor at each rack. Auto-shutoff valves at row level.

Cost estimate for one row conversion: $500K-1M (busway, plumbing, CDUs, leak detection, network upgrade, sub-flooring).
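Both calculations in the list above — the per-rack disconnect ampacity and the row load vs. the existing UPS — are easy to sanity-check. A minimal sketch, assuming unity power factor and the example's counts of 21 racks and 11 CDUs:

```python
# Sizing sketch for the row conversion above. Numbers mirror the example
# (21 racks x 60 kW, 11 CDUs at ~25 kW, 415 V three-phase, 1,250 kVA
# existing UPS); unity power factor is assumed for simplicity.
from math import sqrt

RACKS, RACK_KW = 21, 60
CDUS, CDU_KW = 11, 25
V_LL = 415                 # line-to-line volts
CONTINUOUS = 1.25          # 125% continuous-load factor

# Per-rack plug-in disconnect: kW -> kVA (PF ~ 1.0) -> amps at 415 V, 3-phase
rack_kva = RACK_KW * CONTINUOUS                   # 75 kVA
rack_amps = rack_kva * 1000 / (V_LL * sqrt(3))    # ~104 A

# Row-level load vs. the existing UPS rating
row_kw = RACKS * RACK_KW + CDUS * CDU_KW          # 1,260 + 275 = 1,535 kW
ups_kva = 1250
print(f"Per-rack feeder: {rack_amps:.0f} A")
print(f"Row load: {row_kw} kW vs UPS {ups_kva} kVA "
      f"-> {'undersized' if row_kw > ups_kva else 'OK'}")
```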
Why retrofitting is expensive vs greenfield AI DC
A purpose-built AI data center is designed from day one for liquid cooling. Atlas DC1 was built for air. Retrofitting requires running plumbing through existing finished space, replacing distribution equipment, and disrupting operations. Modern hyperscale AI campuses are built liquid-cooled from the start.
If You See THIS, Think THAT
| If you see… | Think / use… |
| --- | --- |
| "DLC" (Direct Liquid Cooling) | Cold plates on chips. 50-100 kW/rack. Most common modern AI cooling. |
| "RDHx" (Rear-door heat exchanger) | Coil on back of rack. 30-50 kW/rack. Air still flows through servers. |
| "Immersion cooling" | Servers submerged in dielectric fluid. 100-400 kW/rack. |
| "CDU" (Coolant Distribution Unit) | Heat exchanger + pump between server-side loop and facility chilled water |
| "Quick disconnect" | Drip-free coupling allowing server pull without draining the loop |
| "Two-phase immersion" | Coolant boils at chip surface (Novec). Highest density. Newest. |
| "Bus duct" / "busway" in IT halls | For 30+ kW/rack. Branch circuits don't scale that high. |
| "Chilled water 30°C return" | DLC enables this. Massive PUE improvement vs traditional 7°C. |