PART X Data Center Specific
§35

Liquid Cooling Electrical

DLC · RDHx · immersion · CDU · busway · UPS implications

Air cooling stops at ~20 kW/rack. AI/HPC workloads need 30-100+ kW/rack. Liquid cooling becomes mandatory, and it changes the electrical design significantly: branch circuits become busway, UPS topology shifts, and per-rack distribution is sized for far higher power.

Why Liquid Cooling Now

Air cooling worked great when servers drew 5-10 kW per rack. Modern AI/HPC workloads (GPU clusters) push densities to 30-100+ kW per rack. Air cannot remove heat at this density without impractical airflow rates. Liquid cooling becomes mandatory.
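
The limit is easier to see with numbers: compare the airflow a rack would need if air alone carried the heat. A minimal sketch in Python, assuming a 12 K air temperature rise across the rack and standard air properties (the rack powers are illustrative):

```python
# Sketch: airflow needed to carry rack heat with air alone, assuming a
# typical 12 K cold-aisle-to-hot-aisle rise. Constants and rack powers
# are illustrative assumptions, not measured values.

RHO_AIR = 1.2        # kg/m^3, air density near sea level
CP_AIR = 1005.0      # J/(kg*K), specific heat of air
DELTA_T = 12.0       # K, assumed air temperature rise across the rack
M3S_TO_CFM = 2118.88 # m^3/s to cubic feet per minute

def required_airflow_cfm(rack_kw: float) -> float:
    """Volumetric airflow (CFM) to remove rack_kw of heat: Q = m_dot * cp * dT."""
    mass_flow = rack_kw * 1000.0 / (CP_AIR * DELTA_T)   # kg/s
    return mass_flow / RHO_AIR * M3S_TO_CFM

for kw in (10, 20, 50, 100):
    print(f"{kw:>4} kW/rack -> {required_airflow_cfm(kw):,.0f} CFM")
# ~1,500 CFM at 10 kW is routine; ~15,000 CFM per rack at 100 kW is not,
# which is why air cooling tops out around 20-30 kW/rack.
```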

| Cooling type | Max rack density | Typical PUE | Industry use |
|---|---|---|---|
| Traditional CRAC + raised floor | ~10 kW/rack | 1.6-2.0 | Legacy DCs |
| Hot aisle / cold aisle containment | ~20 kW/rack | 1.4-1.6 | Standard modern DCs (Atlas DC1) |
| In-row cooling | ~30 kW/rack | 1.3-1.5 | Mid-density colocation |
| Rear-door heat exchanger (RDHx) | 40-50 kW/rack | 1.2-1.3 | Higher-density traditional + early AI |
| Direct liquid cooling (DLC), cold plates | 50-100+ kW/rack | 1.1-1.2 | NVIDIA H100/H200 clusters, custom AI accelerators |
| Immersion cooling, single-phase | 100-200+ kW/rack | 1.05-1.10 | Hyperscale AI training (Microsoft, Meta) |
| Immersion cooling, two-phase | 200-400+ kW/rack | 1.02-1.05 | Cutting-edge research (3M Novec) |
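
The PUE column translates directly into facility power and cost. A rough sketch, using mid-range PUE values taken from the table above and assuming a constant 1 MW IT load and a $0.08/kWh tariff (both illustrative):

```python
# Sketch: what the PUE column means in facility power for a fixed IT load.
# PUE values are mid-points of the table ranges; the 1 MW IT load and
# $0.08/kWh tariff are illustrative assumptions.

IT_LOAD_MW = 1.0
TARIFF_USD_PER_KWH = 0.08
HOURS_PER_YEAR = 8760

pue_by_cooling = {
    "CRAC + raised floor":          1.80,
    "Hot/cold aisle containment":   1.50,
    "Rear-door heat exchanger":     1.25,
    "Direct liquid cooling":        1.15,
    "Single-phase immersion":       1.075,
}

for cooling, pue in pue_by_cooling.items():
    facility_mw = IT_LOAD_MW * pue                        # PUE = facility / IT
    overhead_mwh = (facility_mw - IT_LOAD_MW) * HOURS_PER_YEAR
    cost = overhead_mwh * 1000 * TARIFF_USD_PER_KWH       # annual overhead cost
    print(f"{cooling:<30} {facility_mw:.2f} MW facility, ${cost:,.0f}/yr overhead")
```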

Liquid Cooling Architectures

Rear-Door Heat Exchanger (RDHx)

A liquid-cooled coil mounted on the back of the rack. Hot exhaust air passes through the coil before returning to the room, transferring its heat to chilled water. Server fans still push the air; the servers themselves remain air-cooled.

| Aspect | RDHx detail |
|---|---|
| Cooling capacity | 30-50 kW per rack typical |
| Server modifications | None; works with stock air-cooled servers |
| Plumbing | Each rack needs supply + return chilled water connections |
| Failure mode | If the coil fails, hot air dumps into the room; adjacent racks may overheat |
| Water leakage protection | Drip pans + leak detection sensors at rack level |
| Electrical impact | None directly; server power is the same as air-cooled |
| Best for | Density bumps without liquid touching the IT hardware (water stays in the rear-door coil only) |

Direct Liquid Cooling (DLC) — Cold Plates

Coolant is circulated through metal cold plates mounted directly on the hot components (CPU, GPU, memory). The coolant absorbs heat at the chip and carries it to a CDU (Coolant Distribution Unit), which exchanges it with facility chilled water.

| Aspect | DLC detail |
|---|---|
| Cooling capacity | 50-100+ kW per rack |
| Server modifications | Required; the server vendor builds with a DLC option (NVIDIA HGX H100, Intel Xeon Max, liquid-cooled AMD EPYC) |
| Coolant types | Treated water (most common), water-glycol, dielectric fluids (3M Novec 7000) |
| CDU (Coolant Distribution Unit) | Heat exchanger between the server-side coolant loop and facility chilled water; includes pumps + filtration |
| Manifolds | Plumbing inside each rack that distributes coolant to the server cold plates |
| Quick disconnects | Drip-free couplings that allow server pull/swap without draining the loop |
| Electrical impact | Removes most server fan power (~5-10% IT power reduction), which also lowers total facility power |
| Adoption (2026) | Standard for new AI deployments; retrofitting air-cooled facilities is complex |
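
The CDU's job is easiest to picture as a flow-rate problem. A minimal sketch, assuming treated water and a 10 K temperature rise across the server loop (e.g., roughly 40 °C supply, 50 °C return); the figures are illustrative, not a design:

```python
# Sketch: coolant flow a CDU must circulate for a given rack load,
# assuming treated water and an assumed 10 K rise across the cold plates.

CP_WATER = 4186.0    # J/(kg*K), specific heat of water
RHO_WATER = 1000.0   # kg/m^3
DELTA_T = 10.0       # K rise across the server loop (assumption)

def coolant_flow_lpm(rack_kw: float) -> float:
    """Litres per minute of water needed: Q = m_dot * cp * dT."""
    mass_flow = rack_kw * 1000.0 / (CP_WATER * DELTA_T)   # kg/s
    return mass_flow / RHO_WATER * 1000.0 * 60.0          # L/min

for kw in (50, 60, 100):
    lpm = coolant_flow_lpm(kw)
    print(f"{kw:>3} kW rack -> {lpm:.0f} L/min ({lpm / 3.785:.0f} GPM)")
# Water carries roughly 3,500x more heat per unit volume than air at the
# same temperature rise, so tens of L/min replace thousands of CFM.
```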

Immersion Cooling

Servers are fully submerged in dielectric fluid, which removes heat directly from every component. No fans, no dust, no humidity issues. Single-phase keeps the fluid liquid throughout; two-phase lets it boil at the chip surface for a higher heat-transfer coefficient.

| Aspect | Single-phase immersion | Two-phase immersion |
|---|---|---|
| Coolant | Mineral oil, synthetic dielectrics (Engineered Fluids ElectroSafe) | 3M Novec 7000-series (boiling points 34-61 °C) |
| Heat transfer | Convection | Phase change (boiling); higher heat-transfer coefficient |
| Density | 100-200 kW/rack | 200-400+ kW/rack |
| PUE | ~1.05-1.10 | ~1.02-1.05 |
| Server modifications | Remove fans, replace thermal paste with an immersion-rated compound, optionally remove HDDs (use SSDs only) | Same, plus heat-spreader plates on chips as a boiling surface |
| Adoption | Growing (research + early hyperscale) | Limited (cost + complexity) |
| Concerns | Fluid procurement, disposal, environmental (PFAS regulations on Novec) | Same, plus 3M discontinuing some Novec products |

Electrical Implications of Liquid Cooling

| Implication | Detail |
|---|---|
| Higher rack power: busway, not branch circuits | At 50-100 kW/rack, conventional branch circuits become impractical. Use bus duct (NEC Article 368) running down each row with plug-in tap-offs at each rack. |
| CDU electrical load | Each CDU has its own pumps (10-30 kW typical), adding to the mechanical load; fed from a PDU. |
| Reduced server fan power | ~5-10% reduction in IT power (server fans gone or minimal). Improves PUE. |
| Different IT redundancy model | DLC servers cannot tolerate even a brief power loss to the coolant pumps; flow must continue, so UPS backing is required for both the servers AND the CDUs. |
| Leak detection | Required at every CDU, manifold, and rack. Tied to the BMS for alarms; some systems close an auto-shutoff valve on a leak. |
| Plumbing-electrical separation | Water near electrical equipment is a hazard. Maintain code-compliant separation (NEC 110.26 working space), drip pans, sub-floor drainage. |
| Hot water reuse | DLC return water at 35-50 °C is hot enough for building heat reuse, improving the ERE metric. |
| Facility chilled water can run warmer | An air-cooled DC needs ~7 °C chilled water; DLC works with 30-40 °C supply, enabling free cooling year-round in moderate climates. |
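
The busway-versus-branch-circuit row comes down to simple three-phase current arithmetic at 415 V. A short sketch, assuming unity power factor (kW treated as kVA, as the worked example below does) and the NEC 125% continuous-load factor:

```python
# Sketch: tap-off ampacity per rack at 415 V three-phase, applying the
# NEC 125% continuous-load factor and assuming unity power factor.

import math

VOLTAGE_LL = 415.0   # line-to-line volts
CONT_FACTOR = 1.25   # NEC continuous-load factor

def tap_off_amps(rack_kw: float) -> float:
    """Amps a rack tap-off must carry: I = kVA * 1000 / (sqrt(3) * V_LL)."""
    kva = rack_kw * CONT_FACTOR
    return kva * 1000.0 / (math.sqrt(3) * VOLTAGE_LL)

for kw in (12, 30, 60, 100):
    print(f"{kw:>3} kW/rack -> {tap_off_amps(kw):.0f} A tap-off")
# ~21 A at 12 kW fits a 30 A branch circuit; 60-100 kW needs roughly
# 104-174 A per rack, which is busway plug-in territory, not panelboard branches.
```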

Worked Example — Atlas DC1 Future AI Hall

Example · Atlas DC1 spine: Retrofitting one of Atlas DC1's IT halls for AI workloads with DLC

Current state (one row of Atlas DC1 IT Hall A)

Existing density: 12 kW/rack × 104 racks ≈ 1.25 MW
Cooling: cold aisle containment, CRAH units (air-cooled servers)
Branch circuits: 42-circuit RPP per row, 30 A branches

AI conversion target

Target density: 60 kW/rack with DLC
Target rack count: 21 racks (instead of 104); 60 kW × 21 = 1.26 MW (same total)
Cooling: DLC, one CDU per rack pair
Power distribution: 2,000 A busway down the row instead of the 42-circuit panel

Electrical changes required

  1. Replace the RPP with busway. Remove the existing 400 A panelboard and install 2,000 A busway (e.g., Square D I-Line or Eaton Pow-R-Way) along the ceiling of the row.
  2. Plug-in switches per rack. Each rack load: 60 kW × 1.25 = 75 kVA; 75 kVA / (√3 × 415 V) ≈ 104 A, so each rack gets a plug-in fused disconnect rated for at least that current.
  3. Re-route CDU power. Each CDU draws ~20-30 kW, fed from the same busway or a separate PDU.
  4. UPS sizing. Total row load = 21 racks × 60 kW + 11 CDUs × 25 kW = 1,535 kW. The existing UPS-A1 is sized for 1,250 kVA and is undersized for the AI conversion; upsize the UPS or split the row across both UPS sides (checked in the sketch after this list).
  5. Plumbing. Run chilled water mains with 30 °C supply (warmer than the existing 7 °C loop, which reduces chiller energy).
  6. Leak detection. Add water sensors under the raised floor at each rack; auto-shutoff valves at row level.
  7. Cost estimate for one row conversion: $500K-1M (busway, plumbing, CDUs, leak detection, network upgrade, sub-flooring).
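
A quick sanity check of the numbers above. The rack count, per-CDU draw, and UPS rating are taken from the example; one CDU per rack pair (rounded up) is assumed:

```python
# Sketch: sanity-check of the Atlas DC1 row-conversion arithmetic.
# Rack count, 25 kW per CDU, and the 1,250 kVA UPS rating come from the
# example; one CDU per rack pair (rounded up) is an assumption.

import math

RACKS, RACK_KW = 21, 60.0
CDU_KW = 25.0
CDUS = math.ceil(RACKS / 2)            # one CDU per rack pair -> 11
UPS_A1_KVA = 1250.0
VOLTAGE_LL, CONT_FACTOR = 415.0, 1.25

it_load_kw = RACKS * RACK_KW                      # 1,260 kW
row_load_kw = it_load_kw + CDUS * CDU_KW          # 1,535 kW
tap_amps = RACK_KW * CONT_FACTOR * 1000 / (math.sqrt(3) * VOLTAGE_LL)

print(f"IT load:      {it_load_kw:,.0f} kW")
print(f"CDUs:         {CDUS} x {CDU_KW:.0f} kW = {CDUS * CDU_KW:,.0f} kW")
print(f"Row total:    {row_load_kw:,.0f} kW vs UPS-A1 {UPS_A1_KVA:,.0f} kVA "
      f"-> {'undersized' if row_load_kw > UPS_A1_KVA else 'OK'}")
print(f"Rack tap-off: {tap_amps:.0f} A at {VOLTAGE_LL:.0f} V three-phase")
# 1,260 kW IT + 275 kW of CDUs = 1,535 kW, which exceeds the 1,250 kVA UPS
# even before power factor, so the row must be split across both UPS sides
# or the UPS upsized.
```
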
Why retrofitting is expensive vs greenfield AI DC
A purpose-built AI data center is designed from day one for liquid cooling. Atlas DC1 was built for air. Retrofitting requires running plumbing through existing finished space, replacing distribution equipment, and disrupting operations. Modern hyperscale AI campuses are built liquid-cooled from the start.

If You See THIS, Think THAT

| If you see… | Think / use… |
|---|---|
| "DLC" (Direct Liquid Cooling) | Cold plates on chips. 50-100 kW/rack. Most common modern AI cooling. |
| "RDHx" (Rear-door heat exchanger) | Coil on the back of the rack. 30-50 kW/rack. Air still flows through the servers. |
| "Immersion cooling" | Servers submerged in dielectric fluid. 100-400 kW/rack. |
| "CDU" (Coolant Distribution Unit) | Heat exchanger + pumps between the server-side loop and facility chilled water. |
| "Quick disconnect" | Drip-free coupling allowing a server pull without draining the loop. |
| "Two-phase immersion" | Coolant boils at the chip surface (Novec). Highest density. Newest. |
| "Bus duct" / "busway" in IT halls | For 30+ kW/rack; branch circuits don't scale that high. |
| "Chilled water 30 °C return" | DLC enables this. Massive PUE improvement vs traditional 7 °C. |
| "PFAS regulations on Novec" | Two-phase immersion fluids facing regulatory pressure (3M phasing out). |
| "Heat reuse" in DC context | DLC return water hot enough to heat adjacent buildings. |