Analyzing Data Center Flexibility to Meet Growing Power Demands


A Grid Forward Forum podcast episode with Cloverleaf Infrastructure and Iron Mountain Data Centers

There are many approaches being tried to meet the rapidly growing power demand for AI-focused data centers, from small modular reactors (SMRs) to behind-the-meter natural gas plants. One concept growing in importance is flexibility. But how can very large data centers become a flexible resource on the electric grid? On the Grid Forward Forum podcast, Bryce Yonker spoke to two experts at the forefront of this concept: Chris Pennington, Director of Energy & Sustainability for Iron Mountain Data Centers, and Brian Janous, Co-Founder of Cloverleaf Infrastructure.

Read the excerpts below to learn how they look at data center flexibility on the grid. And listen to the entire podcast on our website or your favorite podcast app, including Apple, Spotify and YouTube.

You can also hear Brian Janous speak in person on the panel “Optimally serving data centers: Timely and clean(er) energy” at GridFWD 2025, Oct 7-8 in Monterey, CA. Focusing on “AI and the Grid,” GridFWD 2025 is gathering over 100 leaders from data centers, AI innovators and utilities to dig deep into how the grid must advance to support the wave of massive data centers, and how AI can help utilities modernize their planning and operations. Download the Program here, and register soon – GridFWD events sell out.

This transcript was AI-generated and edited for clarity and conciseness.


Bryce Yonker: For the increasing load challenges, what does the solution look like? Chris, what are you guys trying to deploy?

Chris Pennington: So you’ve got this real focus around AI expansion and development, our federal government, for example, saying this is a priority. Everybody’s racing to build out capabilities now to be at the front of that and not be stuck having to buy AI services from everybody else. On the other side, you’ve got the utilities and the grid operators, who are not built for speed. So they’re taking a look at the billions of dollars of infrastructure upgrades and investments and contracting for long-term power generation and saying, “how do I protect myself from a risk standpoint?” If ten data centers come in and they all want 500MW, how do I get comfortable with the fact that they’re actually going to use that? Because the utility has to go out and buy this power and commit to it in advance in order for us to show up and use it.

One of the things that keeps emerging throughout that conversation is this word flexibility. How do we think about the load from data centers and other industrial users being part of the solution set here? There are times when the load on the grid is very low, but we have to build capabilities in the grid to where we can keep the lights on under full load. So this concept of flexibility is the emergent tool that’s coming out and saying, “okay, how do we help the load be part of the equation, not just rely on utilities to solve all of our problems for us?”

Bryce Yonker: What do we mean by flexibility for the data center space? Is this something different than demand response (DR) that’s been around for 20 or 30 years, specific to this type of a sector?

Chris Pennington: Flexibility is effectively the ability to take a data center load, which is really very steady and, for the most part, always ramping up, and to fluctuate it, modulate it, so that when the grid is under duress, you’ve got the ability to shed load.

Now, today, flexibility for a data center only exists in a couple of ways. There’s this concept that you could shift when work is done for the data center. Google has done some impressive work illustrating how that works.

There’s another way: we can just run our onsite diesel generators, which could totally carry the load of the data center. Then the third way you can create flexibility, which is one we’re exploring pretty heavily, is including more energy storage on site. The data center does what it needs to do, and you’ve got a battery in between us and the grid. So when the grid needs flexibility, the battery provides that flexibility by shedding what we’re pulling from the grid.

Bryce Yonker: Brian, is this a different way of thinking about flexibility than before, or similar just with a new audience?

Brian Janous: It’s a little different [than] a traditional DR provider. Their business was really creating cost savings opportunities for the end user… But I don’t think that they really looked at their business as being about capacity enablement and acceleration.

That’s the pivot as we talk about flexibility today. What we’re saying is we need to connect anywhere from 30 to 60GW of new data center capacity to the US grid by 2030. But where’s that capacity going to come from? It’s capacity that we’re talking about.

This is where a lot of data center operators miss what’s going on here, because they look at it as “I’m going to connect a 24/7 data center, so I need a 24/7 resource to supply it.” And hence all the interest in nuclear and behind-the-meter gas plants. But they’re not really solving the actual problem. They solve the problem, but they’re way over-engineering it… because you don’t need a baseload generator to match every baseload load we put on this grid.

As Chris alluded to earlier, there’s plenty of power most of the time. What we’re really trying to do is manage those peak periods: the hottest summer days, the coldest winter nights. That’s why I think this focus on flexibility is so important because it recognizes that what we’re trying to solve for is how do we manage those peak periods to enable these loads to connect faster.

See Brian Janous live at GridFWD 2025, Oct 7-8, Monterey, CA

Chris Pennington: That’s exactly right. It is all about speed in the data center space right now and how you connect faster. Part of the way you get there is by optimizing the capacity that’s already out there. Brian gives some great examples there. There was a really great article and work done by Tyler Norris at Duke University’s Nicholas School [Rethinking Load Growth] that said, “if you can just curtail load or create flexibility for a very small amount of time, you can unlock very significant amounts of capacity on the grid.” So that is why flexibility is so important right now: it’s the ability to unlock available capacity, utilize it, and optimize the grid.
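The arithmetic behind that point is simple enough to sketch. The numbers below are purely illustrative (a hypothetical 500MW data center and a 0.5% flexibility fraction), not figures from the Rethinking Load Growth report, but they show why curtailing for a tiny share of the year forgoes very little energy while sparing the grid a large peak commitment:

```python
# Illustrative sketch (hypothetical numbers): why brief curtailment
# can unlock interconnection headroom without building new peak capacity.

HOURS_PER_YEAR = 8760

def curtailment_hours(flex_fraction: float) -> float:
    """Hours per year a load must be willing to curtail."""
    return HOURS_PER_YEAR * flex_fraction

def max_energy_forgone_mwh(load_mw: float, flex_fraction: float) -> float:
    """Upper bound on annual energy forgone (MWh) if fully curtailed in those hours."""
    return load_mw * curtailment_hours(flex_fraction)

# A hypothetical 500 MW data center flexible for the top 0.5% of hours:
hours = curtailment_hours(0.005)                 # about 44 hours per year
energy = max_energy_forgone_mwh(500, 0.005)      # well under 1% of annual consumption

print(f"{hours:.1f} curtailment hours/yr; at most {energy:,.0f} MWh forgone")
```

In other words, the grid only has to accommodate the load during the handful of hours that actually set the system peak; the rest of the year, the headroom is already there.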

Brian Janous: If we weren’t recording this at 10:00 in the morning, we could have had a drinking game for when someone would first mention Tyler Norris’s name in this conversation.

Bryce Yonker: So the idea that demand for data centers is 24/7, is this changing at all? Where is the flexibility coming from? How do we get there?

Brian Janous: I don’t think you’re going to see a lot of flexibility coming from the workloads themselves. There are certainly scenarios and Google’s been one who’s talked a lot about it… But I think that’s still going to be very small, even for the training workloads… it’s single digit percentages of data center workloads that you actually see flexing in response. Part of the reason why is these data centers and the servers that are going into them are so incredibly valuable. The last thing you want to do is build a $30 billion data center, and then use it as a demand response machine. That’s not what it exists for. I think that people are overestimating the degree to which there’s even interest in doing that.

However, data centers are by their very nature microgrids: they have backup generators, they have batteries, and they have the ability to leverage those resources, which we did a lot at Microsoft, finding ways to provide that capacity back to the utility as flexibility or ancillary services. [Data centers] have the ability to invest in longer-duration storage on-site, so if a utility required some sort of curtailment capability, they could build batteries in the parking lot. All of that is a far easier hurdle to jump over than convincing thousands of software engineers across the company that they need to completely re-architect their software to allow you to turn the data center off because it’s hot in Texas. I just don’t think that’s going to happen at any real scale.

Chris Pennington: Completely agree, enthusiastically. I think it’s important to recognize the landscape of these flexibility options, moving work around either geographically or temporally… But the way to achieve scale on this that is acceptable to data center operators is really through on-site energy storage with batteries. Part of that is driven by recognizing what the grid needs: the grid ultimately will want to be able to throttle that themselves to a very large degree. So for that to work with the data center, as Brian said, you’re not going to say, “Hey, the utility is saying it’s hot in Texas, so everybody turn down their stuff.” That’s not going to work.

But if you have a battery, as we’re deploying at a couple of our sites, including the plants in Virginia, it’s totally disconnected from the power systems downstream. None of our critical infrastructure is impacted by how that battery operates, and that makes the battery a grid resource in a way that’s very flexible. And data centers are ideal locations for energy storage because they are large loads. So you put a big battery next to a big load, you can do big things.

They’re also highly secure. You know, you don’t have people just kind of walking around the place that aren’t supposed to be there. So this is something that I really think has the potential to solve big problems at scale.


Listen to this entire Grid Forward Forum episode on this website or your favorite podcast app.
