Cooling is not just an operational detail in a data centre. It’s one of the largest cost drivers and a defining factor for sustainability. Research from the Uptime Institute and DOE suggests that 30–40% of a facility’s electricity still goes into thermal management. As AI and high-density workloads reshape the industry, the pressure to make cooling both efficient and environmentally responsible has never been greater.
Why the Cooling Burden Keeps Growing
The demand curve is steep. Global data centre energy use is projected to double by 2030, fuelled by hyperscale cloud, AI training clusters, and edge computing. That expansion puts traditional cooling systems under strain.
- Ten years ago, an average rack drew 2–5 kW. Today, densities of 30–50 kW are common, and experimental designs push beyond 100 kW per rack.
- Worldwide Power Usage Effectiveness (PUE) averages around 1.57, far above the 1.2–1.3 target that efficient operators strive for (a worked example of the metric follows this list).
- Without intervention, higher rack densities mean hotter hotspots, greater risk of downtime, and escalating power bills.
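For readers less familiar with the metric, PUE is simply total facility energy divided by the energy delivered to IT equipment, so every watt spent on cooling and power conversion pushes it above the ideal value of 1.0. The snippet below is a minimal worked example with made-up numbers, not data from any particular facility:

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power."""
    return total_facility_kw / it_load_kw

# Hypothetical 1 MW IT load with typical overheads (illustrative only).
it_load = 1000.0        # kW drawn by servers, storage, and network gear
cooling = 450.0         # kW for chillers, CRAHs, and fans
power_losses = 120.0    # kW lost in UPS, distribution, and lighting

print(pue(it_load + cooling + power_losses, it_load))  # ~1.57
# An efficient operator targeting 1.2-1.3 allows only 200-300 kW of overhead per MW of IT.
```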
The industry now faces a dilemma: how do you keep equipment reliable without letting cooling costs spiral?
Where Efficiency Slips Through the Cracks
Cooling losses don’t come from one single weakness. They show up in several places:
- Airflow mismanagement. In legacy sites, cold air often bypasses servers entirely, with studies showing that over 60% of conditioned airflow is wasted. This creates uneven rack temperatures and unnecessary energy drain.
- Oversized infrastructure. Many systems were built to handle “peak load” conditions. Running at constant speed means they burn energy even when IT load is only partial.
- Blind operation. Without real-time data, fans and chillers often run at full output regardless of actual rack demand. That wastes both electricity and money (a simple control sketch follows below).
These inefficiencies add up, especially at scale.
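To make the "blind operation" point concrete, here is a minimal sketch of the kind of closed-loop logic that replaces fixed-output fans: scale speed to the measured inlet temperature instead of running flat out. The setpoint, gain, and proportional-control approach are illustrative assumptions, not a description of any vendor's controller.

```python
# Illustrative proportional control: scale fan speed to the measured thermal
# load instead of running at 100% regardless of demand. Values are hypothetical.

SETPOINT_C = 24.0   # target rack inlet temperature
MIN_SPEED = 0.30    # keep some airflow even when racks are cool
GAIN = 0.15         # fractional speed increase per degree C above setpoint

def fan_speed(inlet_temp_c: float) -> float:
    """Return a fan speed fraction (0.0-1.0) from a rack inlet temperature reading."""
    error = inlet_temp_c - SETPOINT_C
    speed = MIN_SPEED + GAIN * max(error, 0.0)
    return min(speed, 1.0)

for temp in (22.0, 25.0, 28.0, 31.0):
    print(f"{temp:.0f} C inlet -> {fan_speed(temp):.0%} fan speed")
```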
Strategies That Shift the Equation
- Containment design. Hot/cold aisle separation prevents mixing and allows supply temperatures to rise safely, cutting cooling energy by 15–20%.
- Economizers and free cooling. Facilities in cooler climates can reduce mechanical chiller use dramatically, achieving 10–25% annual savings and improving PUE by 0.1–0.2 points.
- Liquid cooling. Direct-to-chip and rear-door heat exchangers enable higher-density racks and reduce reliance on air systems, with efficiency gains of 25–40% in dense clusters.
- Variable-speed fans and smart coils. Upgrading from fixed-speed fans to EC or VFD models trims fan energy use by 20–35%, while improved coil design increases heat transfer (a rough affinity-law estimate follows this list).
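The fan savings figure follows from the fan affinity laws: shaft power scales roughly with the cube of fan speed, so even a modest average speed reduction yields an outsized energy saving. The numbers below are a back-of-the-envelope illustration under that cubic assumption, not measurements from a specific retrofit.

```python
def fan_power_fraction(speed_fraction: float) -> float:
    """Affinity-law approximation: fan power scales with the cube of fan speed."""
    return speed_fraction ** 3

# A fixed-speed fan runs at 100% all the time. A variable-speed (EC/VFD) fan
# that averages ~87% speed across the day draws far less energy.
avg_speed = 0.87
savings = 1.0 - fan_power_fraction(avg_speed)
print(f"Average speed {avg_speed:.0%} -> roughly {savings:.0%} less fan energy")  # ~34%
```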
Preparing Cooling Systems for What Comes Next
Forward-looking operators aren’t just chasing today’s savings. They’re building for resilience in the next decade:
- Transitioning to low-GWP refrigerants like R-454B to meet regulatory and sustainability expectations.
- Deploying rack-level sensors for predictive thermal control, not just reactive adjustments (a minimal trend-based sketch follows this list).
- Unifying CRAC, CRAH, and CDU platforms into integrated control systems to reduce silos.
- Using modular cooling pods to scale capacity with IT load instead of oversizing.
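As a rough illustration of "predictive rather than reactive", the sketch below extrapolates a rack's recent temperature trend a few samples ahead and ramps cooling before a limit is actually breached. The window, limit, and straight-line trend are assumptions for illustration only, not any specific platform's method.

```python
# Illustrative predictive check: estimate a straight-line trend from recent
# inlet-temperature readings and act if the extrapolated value will cross a
# limit. All values are hypothetical.

def predicted_breach(readings_c: list[float], limit_c: float,
                     horizon_steps: int) -> bool:
    """True if the trend from first to last reading crosses limit_c
    within horizon_steps future samples."""
    n = len(readings_c)
    if n < 2:
        return False
    slope = (readings_c[-1] - readings_c[0]) / (n - 1)  # degrees per sample
    projected = readings_c[-1] + slope * horizon_steps
    return projected >= limit_c

history = [24.1, 24.4, 24.9, 25.5, 26.2]  # one reading per minute
if predicted_breach(history, limit_c=28.0, horizon_steps=5):
    print("Ramp cooling now, before the rack actually overheats")
```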
This is the blueprint many global providers are following. But location plays a role too. And in Canada, that’s where the story gets interesting.
How Canadian Climate Changes the Cooling Equation
Not every region can depend on free cooling, but Toronto sits in a uniquely favourable position. Long cold seasons allow for extended periods where fresh outside air can handle most of the cooling load. This reduces reliance on mechanical chillers and helps data centres achieve both lower PUE and lower operating costs.
Nuday Networks has leaned into this natural advantage. Over five years ago, it became the first Canadian colocation provider to adopt large-scale fresh-air cooling in its Toronto facility. By aligning with the climate rather than fighting against it, Nuday demonstrates a different model for sustainable efficiency.
Nuday’s Practical Approach to Sustainability
Cooling innovation is only one piece of Nuday’s environmental philosophy. Its operations focus on practical, measurable steps rather than speculative promises:
- Lower PUE through free-air cooling. By pulling in cool ambient air, Nuday reduces the need for traditional HVAC systems and passes the energy savings to its clients in the form of more competitive colocation pricing.
- Responsible hardware lifecycle. End-of-life equipment is handled with certified third-party recyclers to ensure secure, environmentally safe disposal and recycling. That prevents e-waste from ending up in landfills.
- Balanced economics. With oil and fuel costs volatile, energy efficiency directly protects clients from unpredictable utility bills. Nuday’s approach creates long-term cost stability.
- Realistic sustainability. Instead of making net-zero declarations, Nuday focuses on what can be done today: reduce cooling loads, handle hardware responsibly, and support client sustainability goals without inflating budgets.
The philosophy is simple: efficiency that matches nature, cost control that supports business, and stewardship that fits the realities of operating a data centre.
Closing Reflection: Efficiency and Responsibility Can Coexist
The global cooling challenge isn’t going away. With AI, edge, and hyperscale expansion, demands on thermal management will only intensify. Yet examples from Toronto show how climate, engineering, and pragmatic operations can work together.
By using Canada’s natural climate for fresh-air cooling and coupling it with responsible equipment management, Nuday has built a model where efficiency and environmental responsibility converge. It’s not about claiming future milestones. It’s about measurable, credible action today — and clients are the ones who benefit from both lower costs and greener infrastructure.
(Adapted from DataCenterKnowledge research, with additional context on Nuday Networks’ operations.)
