Electricity is an increasingly complex industry in the midst of transition to renewables and decarbonization. Using my 25 years’ experience as an engineer, policy analyst, and academic, I help my consulting clients think through their toughest technical challenges and formulate their best business strategies.
“Can resource adequacy be attained without defining what is ‘enough’?” This is the astute question posed by Beth Garza, formerly Independent Market Monitor for ERCOT and now senior fellow at the R Street Institute think tank. In this blog, I would like to engage with her question.
Customary short-hand descriptions of resource adequacy focus on installed reserve margin, which is the amount by which the total power generation capacity exceeds a forecast peak consumption. I will argue that, in a high renewable world, the focus on power capacity over a short time interval at the time of a forecast peak is not a suitable short-hand, because adequacy will become more dependent on the availability of energy over an extended time. The “what” in Beth Garza’s question will increasingly need to be thought of as energy capacity rather than power capacity, and we will need to define how much energy capacity is needed to satisfy our requirements for adequacy.
To understand this change in the needed short-hand for adequacy, let’s first think about assessing resource adequacy in systems with mostly thermal generation. Typically, load is at peak levels for just a few hours in summer or winter. In thermal-dominated systems, resource adequacy is roughly tantamount to having enough thermal generation capacity available with high enough probability to meet a particular future peak load condition. Outside of these peak load hours, there is generally sufficient capacity to meet load, even considering failures and the need for annual maintenance.
There are various considerations in an assessment of adequacy in a thermal-dominated system that hinge on uncertainties and probabilistic assessments. On the demand side, probabilistic assessments arise because we must forecast future peak load conditions, including the extremity of the associated weather conditions that drive both winter and summer peaks. In other words, the future peak load is uncertain, so the definition of resource adequacy must specify how extreme a peak to plan for.
To put it another way: in order to define whether resources are adequate, we must specify the forecast load that the resources are supplying. Implicitly, there is a non-zero probability that the actual realized peak load exceeds the forecast peak load. For example, peak loads in the February 2021 event in ERCOT exceeded the ERCOT assessment of forecast peak loads for winter 2020-2021, because the extreme weather that actually occurred in February 2021 was a once-a-decade phenomenon. The forecast peak considered only more typical winter peaks.
It is not just load that has randomness. Generators, too, have random failures. The assessment of resource adequacy must therefore also consider the probabilities of failure of thermal generation. Historical statistics are typically used to estimate generator failure rates.
Putting the demand and supply together, a specification of resource adequacy must define the minimum acceptable probability for being able to supply all load. This could equivalently be described as deciding how far out to consider on the “tail” of unlikely events of peak load variation and generator failures. Given the minimum acceptable probability of being able to supply all load all the time, we can assess whether or not the resources are adequate. A typical minimum acceptable probability of supplying all the load might be 99.97% over a year.
To summarize, the question about thermal resource adequacy typically comes down to a question about the likelihood of power production capacity being available to meet peak power consumption conditions. This assessment primarily depends on a particular, relatively small, length of time during load peaks and considers the probability distribution of power capacity in relation to the probability distribution of peak load. This is appropriate for a predominantly thermal system with “peaky” demand, where failures of thermal generation are uncorrelated from generator to generator, and where the critical demand periods are particular hours sporadically occurring over a summer or winter, with the occurrence of these peaks uncorrelated with generator outages.
So how can we evaluate whether there are enough thermal generation resources to satisfy our specification of adequacy? One approach is to define the concept of “effective load carrying capacity” (ELCC) of each generation resource. For thermal resources, if their failures are not correlated with demand conditions and the failures are uncorrelated across generators, then the ELCC can be roughly evaluated as the installed capacity of the generator derated (reduced) by its failure or forced outage rate. The derated capacity can be viewed as the “expected” availability in a probabilistic sense.
Adding up the capabilities of a large number of generators, and assuming that failures across generators are uncorrelated and that failure rates do not change over time, the law of large numbers tells us that the sum of the actual available capacities of all the generators will be roughly equal to the sum of these derated capacities. To put it another way, we can roughly think of the derated capacity as representing the capacity of an equivalent perfectly reliable generator that is always available. Adequacy is tantamount to having enough equivalent perfectly reliable generation capacity.
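As a minimal sketch of this derating logic (all figures are hypothetical), the following Python snippet compares the sum of derated capacities with a Monte Carlo simulation of independent unit outages:

```python
import random

random.seed(0)

# Hypothetical fleet: 100 identical thermal units of 50 MW each with an
# assumed 7% forced outage rate (illustrative numbers only).
n_units, unit_mw, forced_outage_rate = 100, 50.0, 0.07

# Derated ("equivalent perfectly reliable") capacity of the fleet.
derated_total = n_units * unit_mw * (1 - forced_outage_rate)

# Monte Carlo: draw independent unit outages and record available capacity.
trials = 20_000
available = [
    sum(unit_mw for _ in range(n_units) if random.random() > forced_outage_rate)
    for _ in range(trials)
]
mean_available = sum(available) / trials

print(f"sum of derated capacities:   {derated_total:.0f} MW")
print(f"mean simulated availability: {mean_available:.0f} MW")
```

With uncorrelated outages, the simulated availability clusters tightly around the derated total, which is exactly the law-of-large-numbers argument above.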
How can we interpret this in terms of installed reserve margin? Adding up the total installed capacity in a system with adequate generation, we will find that it exceeds the peak load forecast. That is, there is a reserve margin. Historically, an installed reserve margin of around 12 to 15% above the peak load forecast provided adequate capacity in a thermal-dominated system.
A more refined calculation considers the distribution of failures more carefully to evaluate the derated capacities. Stanford Professor Frank Wolak provides some examples in his paper “Long-Term Resource Adequacy in Wholesale Electricity Markets with Significant Intermittent Renewables.” If the sum of the derated capacities is sufficient, then the assessment is that supply would be able to meet load with a probability at least equal to the minimum acceptable level. If not, significant involuntary curtailment of load would be likely, pointing to the need to build new generation before the season of the forecast load peaks, with a view to increasing the reserve margin sufficiently.
A complication with this analysis relates to generator failure rates. In fact, “common mode events,” such as extreme cold or heat, can increase the failure rates of generators, as experienced in the ERCOT February 2021 event, meaning that thermal generation failures are somewhat correlated with weather and load. This issue is discussed at length in an EPRI report and in my blogpost on the ERCOT event. In principle, this effect can be included or approximated in the analysis.
How do renewables change this situation? Unlike thermal generators, the availabilities of renewable resources are correlated from one resource to another and also highly correlated with weather conditions and load. When it is windy at one wind farm in West Texas, it is likely to be windy at most West Texas wind farms, and when it is not windy at one wind farm, it is likely to be not windy at most wind farms. This correlation means that the law of large numbers cannot be used in the same way as for thermal generation. It invalidates the idea of “adding together” derated capacities of individual wind farms, since the availabilities are not independent across farms.
To consider these correlations, one approach is to consider the net load, the demand minus total renewable production. This necessitates forecasting simultaneous renewable production and demand, including the extremity of the weather conditions. Adequacy comes down to whether the thermal generation and storage can meet the forecast net load, bearing in mind that the time of the net load peak will differ from the time of the load peak.
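A toy example, with purely illustrative hourly profiles, shows how the net-load peak shifts later than the load peak once solar production is netted off:

```python
# Toy hourly profiles (MW) for one day; numbers are illustrative only.
load  = [30, 29, 28, 28, 29, 32, 36, 40, 44, 47, 49, 51,
         52, 53, 54, 55, 54, 53, 52, 50, 45, 40, 35, 32]
solar = [ 0,  0,  0,  0,  0,  0,  2,  6, 10, 14, 16, 17,
          17, 16, 14, 10,  6,  2,  0,  0,  0,  0,  0,  0]

# Net load = demand minus renewable production, hour by hour.
net_load = [l - s for l, s in zip(load, solar)]

peak_load_hour = load.index(max(load))         # mid-afternoon
peak_net_hour = net_load.index(max(net_load))  # after sunset

print(peak_load_hour, peak_net_hour, max(net_load))
```

In this sketch the load peak falls at hour 15 while the net-load peak lands at hour 18, after solar output has gone to zero.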
Modern ELCC software can evaluate these situations. However, interpretation of the results for renewables is different to the notion of capacity derating for thermal generation. For a thermal generator, we expect that its ELCC will be roughly, although not completely, independent of what other resources are built or retired and roughly independent of how other resources are operated. This is consistent with the “rule of thumb” of a 12 to 15% installed reserve margin being adequate in a thermal system.
In contrast, the ELCC for a particular wind farm calculated for, say, the case of 30GW of installed wind capacity will be significantly lower than the ELCC calculated for the case of 20GW of installed wind capacity and the ELCC can depend significantly on the other available resources such as storage and how they are operated. As renewable penetration increases, the correlation of production across renewables implies that the ELCC per MW of installed capacity will decrease. For example, in a 2019 study by Energy and Environmental Economics (E3) of deep decarbonization for California, E3 expects ELCC for solar farms to fall from about 50% of farm capacity to around 1% of farm capacity as the penetration of solar increases significantly. This means that a particular level of installed reserve margin will no longer be a suitable short-hand for adequacy in a renewable-dominated system because the reserve margin necessary to achieve an acceptable level of adequacy will be highly dependent on the assets in the system.
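A small numerical sketch (illustrative one-day profiles, not real data) shows why the marginal capacity value of solar collapses with penetration: once the net-load peak moves past sunset, an extra MW of solar shaves essentially nothing off the peak.

```python
# Illustrative one-day profiles (not real data): load in MW, and the
# per-MW solar output shape (capacity factor by hour).
load = [30, 29, 28, 28, 29, 32, 36, 40, 44, 47, 49, 51,
        52, 53, 54, 55, 54, 53, 52, 50, 45, 40, 35, 32]
solar_shape = [0, 0, 0, 0, 0, 0, .1, .3, .5, .7, .8, .85,
               .85, .8, .7, .5, .3, .1, 0, 0, 0, 0, 0, 0]

def net_peak(solar_mw):
    """Daily net-load peak given solar_mw of installed solar."""
    return max(l - solar_mw * s for l, s in zip(load, solar_shape))

# How much does one extra MW of solar shave off the net-load peak?
marginals = {}
for base_mw in (0.0, 40.0):
    marginals[base_mw] = net_peak(base_mw) - net_peak(base_mw + 1.0)
print(marginals)
```

At zero penetration the next MW of solar reduces the peak by a meaningful fraction of a MW; at 40 MW of installed solar the net-load peak has moved to an hour with no sun, and the next MW reduces it by nothing at all, mirroring the E3 finding of ELCC falling toward 1%.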
System operators recognize this issue and can consider it in their calculations. Again, Professor Wolak provides a detailed explanation of the process. ELCC assessments of resource adequacy could then, in principle, use derated capacities of individual thermal resources together with a derated total renewable capacity to assess whether there will be enough available capacity at the time of a forecasted future peak net load. Professor Wolak details some of the serious technical difficulties in trying to apply ELCC in high renewable contexts.
Professor Wolak’s critique of ELCC applied to renewables and the discussion in various reports, including the E3 California report and work by the Energy Systems Integration Group, point to why adequacy cannot be captured in high renewable systems by power capacity concepts such as static levels of installed reserve margins. Installed reserve margins implicitly reference a peak load or peak net load condition; however, under high renewable penetration, adequacy is increasingly determined by supply-demand balance during the extended periods of low renewable production that the Germans call a dunkelflaute. While sophisticated simulations can reflect these sorts of energy constraints, translating them into capacity terms such as an ELCC or a required installed reserve margin then obscures the underlying energy issues.
Correlation of renewable production over multiple hours or days brings into question the whole power capacity focus of reserve margin assessment. It is not simply that there might be low renewable production during the particular hour or few hours of peak load or peak net load in a summer or winter and that we might or might not have enough available capacity for that hour or few hours. The issue is more serious: there might be low renewable production for many hours, or even days, resulting in a significant mismatch between the desired energy consumption and the available energy production.
This was illustrated by the February 2021 event in ERCOT, which incidentally also involved correlated outages of generation and natural gas supply due to cold weather. While winterization will reduce the coincidence of future outages of thermal generation and gas supply under cold conditions, and more generally reduce the prevalence of other issues that can be mitigated by winterization, it will not change the correlation between lowered renewable production and increased consumption, since these are due to the “common mode” event of the extreme weather event itself.
Consider a future repeat of a similar weather event to February 2021 in ERCOT. Let’s assume winterization of the thermal generation, gas supply, water supply, and other components has been accomplished. And, suppose similar weather conditions occur in a future ERCOT with much higher penetration of renewables.
ELCC assessment that included this particular weather event would reveal whether or not the other generation and storage in the system could meet demand or whether significant curtailment was necessary. Although a repetition of something like the February 2021 weather might be a low probability event, its severity would lead to significant hardship. That is, the most pressing question for future adequacy will increasingly be whether there is enough energy over multiple hours to days to cover an event similar to the February 2021 ERCOT event without significant curtailment. This energy would come from a combination of renewable production, available fuel at thermal generators, and other storage such as batteries. From this perspective, the recent discussions in ERCOT around new mechanisms for ensuring adequate generation capacity seem beside the point: these discussions all build on an analysis that did not consider the 2021 weather conditions.
The ERCOT market has recently added a “Firm Fuel Supply Service.” The requirement for onsite fuel storage at thermal generators is a tacit recognition that power capability is insufficient to evaluate resource adequacy when stressed conditions may extend over multiple contiguous hours or days. The issue of supplying energy over multiple contiguous hours or days will become increasingly significant at higher levels of renewables.
So what should be used to assess resource adequacy moving forward in ERCOT? I believe that we will need to move toward the sort of evaluations that have been used in hydro-dominated systems such as Brazil, Chile, New Zealand, and Tasmania in Australia. Such studies do not focus on a particular hour or a few hours. Rather, they consider the various scenarios of renewable availability and storage over extended times. That is, they consider the probability distribution of energy capacity in fuels and storage to assess whether or not the load energy can be met at the required minimum acceptable probability.
Seasonal water inflow and water storage in Brazil, Chile, New Zealand, and Tasmania dictate that the timescales for assessment in these countries are on the order of months to years. ERCOT has very little hydro and no pumped storage hydro. Therefore, the timescales relevant to ERCOT are shorter due to the need to compensate for daily or weekly renewable fluctuations instead of seasonal water inflow. Nevertheless, ERCOT will need to consider scenarios of load and of generation that unfold over many hours to days.
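To make the energy-versus-power distinction concrete, here is a hedged sketch of an energy-adequacy check over a multi-day low-renewables scenario, with purely illustrative numbers. Every hour the available power capacity exceeds the load, yet the battery's energy runs out partway through the event:

```python
# Hedged sketch of an energy-adequacy check over a multi-day scenario,
# rather than a single peak hour. All figures are illustrative.
hours = 72                      # a three-day low-renewables scenario
load_mw = [60.0] * hours        # flat 60 MW demand for simplicity
renewables_mw = [5.0] * hours   # sustained low wind/solar output

thermal_mw = 45.0               # firm thermal power capacity
storage_energy_mwh = 400.0      # battery energy capacity
storage_power_mw = 20.0         # battery power capacity
# Note: total power capacity (45 + 20 + 5 = 70 MW) exceeds load every hour.

shortfall_mwh = 0.0
soc = storage_energy_mwh        # battery state of charge
for load, ren in zip(load_mw, renewables_mw):
    gap = load - ren - thermal_mw                         # MW still uncovered
    discharge = min(gap, storage_power_mw, soc) if gap > 0 else 0.0
    soc -= discharge
    shortfall_mwh += max(gap - discharge, 0.0)

print(f"unserved energy over the event: {shortfall_mwh:.0f} MWh")
```

A reserve-margin check would pass this system, because power capacity exceeds peak load, but the battery is drained after 40 hours and the remaining 32 hours see unserved energy. That is the adequacy question a dunkelflaute poses.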
This type of analysis is familiar to operators of hydro-dominated systems but full assessment may require further evolution of assessment tools for ERCOT, or at the very least require consideration of extreme weather events within existing ELCC tools. We need to make these informed assessments before we have another curtailment event like February 2021. The reforms of the ERCOT market that are being discussed currently should be thoroughly analyzed to determine if they result in appropriate levels of resource adequacy considering the emerging energy dimension of resource adequacy.
I am pleased to report that my last post (“Renewables: it takes a portfolio”) received the most comments ever! In that post I discussed how to think about constructing a least-cost portfolio of thermal generation, storage, and demand response to complement renewables. In this followup, I would like to respond to attorney Dan Watkiss, who commented that “your economic analysis fails to account for environmental and social externalities — you’re not alone in this very serious flaw in current energy economic analysis.” He’s right. We need to include the environmental and social costs of energy production. What I’d like to do here is to expand my previous energy economic analysis to include environmental externalities. How can we account for the environmental cost of carbon dioxide emissions in designing a least-cost portfolio? I am now expanding “cost” to mean not just capital and operating costs but also the cost to our environment.
There is a simple fix, and it’s not new. It’s the idea of a price on carbon. As many economists have noted (see, for example, the extensive discussion by Professor Robert Stavins of Harvard), the most straightforward approach to pricing carbon is either a carbon tax or a cap-and-trade mechanism. In either case, a price is charged to the carbon emitter for the negative impact on the environment. Many countries now have a price on carbon, and Australia had a price on carbon from 2012 to 2014. In the case of a carbon tax, if we can estimate the marginal environmental cost of carbon dioxide emissions, then we can set a price on carbon emissions equal to this marginal environmental cost. In the short term, this forces fossil generators to pay the costs of polluting the environment. The higher their operating cost, the less competitive they are in the electricity market, the less electricity they’ll sell, and the less profit they will earn. In the long term, generators will be incentivized to invest in carbon-reduction technology, such as carbon capture and sequestration, or exit the industry.
Let’s more carefully compare the incentives provided by a price on carbon emissions versus not pricing emissions. First, on the demand side: without a price on carbon that reflects the underlying cost to the environment, we will tend to consume too many carbon intensive resources. Professors Severin Borenstein of the University of California, Berkeley, and Jim Bushnell of the University of California, Davis, have found that volumetric charges (per kWh prices) to consumers to recover the fixed costs of various policy measures will sometimes result in retail prices that are too high in US states such as California, but which are too low in several other US states. In locations where retail prices are too low to fully reflect the social cost of emissions, consumption can be expected to exceed socially optimal levels: people are using too much electricity compared to other fuels. For example, if retail gas and electric prices do not correctly reflect emissions costs, then shifting from gas to electric heating in a coal-dominated electricity system might result in higher emissions, and higher overall costs considering capital, operating, and emissions costs than if retail prices correctly reflected the social cost of emissions.
On the supply side, the lack of a price on carbon will tend to result in an inefficient mix of generation: too much generation from resources that emit carbon dioxide, and too much capacity of such high emission resources. Again, this means that the overall costs including capital, operating, and emissions will be higher than they should be. For a concrete example of the implications for generation dispatch, see exercise 7.2 in my “Locational Marginal Pricing” course (from slide 74 onward). With a price on carbon, market forces will tend to bring the industry toward the goal of minimizing overall capital, operating, and environmental costs.
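A toy version of that dispatch effect, with round illustrative fuel costs and emission rates (assumed numbers, not real plant data), shows how a carbon price reorders the merit order:

```python
# Toy merit order with and without a carbon price; fuel costs and
# emission rates are round illustrative numbers, not real plant data.
plants = {
    # name: (operating cost $/MWh, emissions tCO2/MWh)
    "coal": (20.0, 1.0),
    "gas_ccgt": (35.0, 0.4),
    "wind": (0.0, 0.0),
}

def merit_order(carbon_price):
    """Dispatch order, cheapest first, at a given carbon price in $/tCO2."""
    cost = {name: c + carbon_price * e for name, (c, e) in plants.items()}
    return sorted(cost, key=cost.get)

print(merit_order(0))   # coal dispatches ahead of gas
print(merit_order(50))  # a $50/t price moves gas ahead of coal
```

With no carbon price, cheap coal dispatches before gas; at $50/t, coal's higher emission rate pushes its marginal cost above gas, so lower-emission plants sell more energy.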
However, the political reality is that governments are loath to impose a price on carbon, particularly where the fossil fuel industry is influential. In Australia, for example, the Federal Liberal government’s net-zero mantra is “technology not taxes.” It is betting that as-yet-unknown advances in technology will get the country to net-zero by 2050, without any need for a price on carbon. And, perhaps even more importantly, without hurting the influential coal industry. Does this “technology not taxes” strategy, repeated by representatives of the Australian government, including Prime Minister Scott Morrison, stand up to scrutiny? Will it get Australia to net-zero without a price on carbon dioxide emissions?
The short answer is no. For one thing, the technologies will not be deployed without governmental incentives to build them or disincentives against emissions. It’s common sense: why would anyone invest in a new technology that brings no added profit unless they are forced to? (For a smart analysis of all the reasons why “technology without taxes” is unreasonable, see these articles by the Sydney Morning Herald’s Economics Editor Ross Gittins: Praying, Net zero, Masterpiece.)
Are there alternatives to a price on carbon to reduce emissions? Given the political difficulties of pricing carbon dioxide emissions in both Australia and the US, could a subsidy, such as the US Production Tax Credits (PTCs), do the job instead? Although PTCs and other subsidies worldwide have helped to spur technology that has reduced the costs of building renewables, subsidies simply cannot get the same results as a price on carbon. There are at least two reasons. First, electricity prices end up being too low, thus incentivizing greater consumption. This may also result in too little generation capacity overall, leading to supply adequacy problems. Second, the price advantage of the subsidy to renewables does not differentiate between the emissions levels of the other resources, causing the wrong mix of supply-side thermal resources. For example, coal and combined cycle gas generation see the same differential price effect given a subsidy on renewables, even though coal generation emits approximately twice as much carbon dioxide as combined cycle gas generation. This gives a relative advantage to coal compared to gas when looked at from the perspective of total capital, operational, and environmental costs. With the same differential price due to the renewable subsidies, there will be too much coal generation relative to natural gas generation.
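The second point can be illustrated with round numbers (assumed, not measured): a renewable subsidy leaves the coal-gas cost gap unchanged, while a carbon price penalizes coal more heavily than gas.

```python
# Round illustrative numbers (assumed, not measured plant data).
coal_cost, gas_cost = 20.0, 35.0  # operating cost, $/MWh
coal_em, gas_em = 1.0, 0.4        # emissions, tCO2/MWh (coal ~2.5x gas)

# A per-MWh subsidy to renewables changes neither thermal plant's cost,
# so the coal-gas cost gap is untouched...
gap_with_subsidy = gas_cost - coal_cost

# ...while a carbon price raises coal's cost more than gas's.
carbon_price = 50.0  # $/tCO2
gap_with_carbon_price = (gas_cost + carbon_price * gas_em) - \
                        (coal_cost + carbon_price * coal_em)

print(gap_with_subsidy, gap_with_carbon_price)
```

Under the subsidy, gas remains $15/MWh more expensive than coal; under the carbon price, the sign flips and coal becomes the more expensive thermal resource, which is the mix-correcting signal a subsidy cannot send.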
Now let’s look at carbon sequestration, one of the technologies that will likely be needed to get us to net zero. The Liberal Australian government speaks as if the technology for carbon capture and sequestration is so cheap, or even free, that no governmental mandate or regulation is required to see it implemented. But, in fact, there are significant capital and operation costs for sequestration. Without a price on carbon, and without a regulatory mandate or subsidies, what would motivate a coal or gas fired power station to invest large sums to capture and sequester its carbon dioxide emissions? It is hard to see why any company would do so. Without a price on carbon, carbon capture and sequestration would require mandates or subsidies, or both, because its capital and operating expenses are very high.
Is there a drawback of such mandates and subsidies? Targeting mandates and subsidies to particular market segments will likely mean that cheaper options to decarbonize are overlooked. That is, when any government picks winners and losers, as it does when instituting mandates or offering subsidies, it likely makes the overall costs higher than they need to be. “Technology not taxes” will not result in minimizing overall capital, operating, and environmental costs.
Although the public discussion in Australia is not clear, subsidies to particular segments of the economy appear to be central to the Australian government’s decarbonization plans (see “Morrison’s Tricky Deal” and “Barnaby’s Billions”). Subsidies to one industry must be paid for somehow. Where does the money come from? Subsidies must be funded out of taxes on other parts of the economy. So the Australian government’s plan would be more properly described as “technology and taxes to fund subsidies.” Not only does “technology not taxes” fail to bring us toward minimizing overall costs, it actually involves increased taxes.
A significant argument against pricing carbon is its disproportionate impact on low- and middle-income earners. How much more will low- and middle-income earners pay with a price on carbon than without? Will this affect their income negatively? Greenhouse emissions vary significantly by country and by person, and the accounting is complicated by the effect of carbon dioxide compared to other greenhouse gases, but we might estimate an average on the order of about 15 tonnes of carbon dioxide equivalent per person per annum in Australia and the US. The marginal environmental cost of carbon dioxide emissions is contentious, but let’s consider an indicative value of US$50 per tonne. Charging for carbon dioxide emissions at this price would cost each person an average US$750. If this was in the form of a carbon tax, then the money would add to other taxes paid to the government.
How does that stack up compared to taxes and subsidies in typical income tax filings? As an example, the US “Earned Income Tax Credit” provides a per capita subsidy that ranges from around US$1500 to US$6700 per year per taxpayer for low- and moderate-income workers. To compensate low- and moderate-income earners for their roughly US$750 annual payments for carbon, we could increase their Earned Income Tax Credit by that amount. Analogous adjustments could be made in Australia and other countries.
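The arithmetic can be checked in a couple of lines, using the rough per-person estimates quoted above (ballpark figures, not measured data):

```python
# Rough per-person estimates from the discussion above (not measured data).
tonnes_per_person = 15   # tCO2e per person per year, AU/US ballpark
carbon_price = 50        # US$ per tonne, indicative value

annual_cost = tonnes_per_person * carbon_price
print(annual_cost)  # 750: roughly the tax-credit top-up needed per person
```

A US$750 top-up sits comfortably inside the existing US$1500 to US$6700 range of the Earned Income Tax Credit, which is why compensation through the tax system looks practical.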
I have discussed taxes, but what about technology? The “technology not taxes” mantra is half right. Technology is a necessary driver of carbon reduction, and we need to be aggressive about developing new low emissions technologies and improving energy efficiency. Moreover, some limited subsidies for early-stage technologies can be a great investment if they catalyze cost reductions for subsequent large-scale deployment. For example, the early effect of PTCs and Investment Tax Credits (ITCs) in the US, and other mechanisms such as feed-in tariffs elsewhere, have helped with research and development of renewables and with scaling up the renewables industry, contributing to the astonishing reductions in fabrication costs. This early-stage investment has helped to bring us to the point where the unsubsidized cost of new renewable electricity is now cheaper than fossil electricity.
So Dan, in summary: we can account for environmental externalities in economic analyses. To do this we need a carbon price. If we can achieve that, then we can align everyone’s incentives toward decarbonization without a priori favoring one technology or another, or one industry over another. We need technology and a price on carbon.
Renewables are often touted as being cheaper than fossil generation. That is certainly true when we have wind or sun. But when it is not sunny and not windy, we must, by definition, use a more expensive resource. So how do we make sure that the total cost of producing electricity is the lowest possible, considering both the capital (investment) cost and the operating cost, where the latter might include the “cost” of inconvenience to us as consumers of re-scheduling our consumption?
The biggest challenge, then, is to match supply and demand. We all know that renewable production depends on the ambient conditions, varies over time, and does not closely match typical patterns of consumption. This is famously reflected in the California duck curve. In a previous post, I suggested that residential pre-cooling in regions with significant air-conditioning load could match the daily variability of solar production to electricity consumption.
But the fluctuations are not just daily. Variability also presents problems at timescales from the very short-term (minute by minute) to the very long-term (seasonal and longer). Much of the variability has a random character, because of issues such as the weather’s effect on renewable production and electrical consumption. How, then, should we think about matching renewable supply and electrical demand, bearing in mind that we must balance production and consumption in the electricity system at all times?
A first observation is that a portfolio of diverse renewable resources will have lower variability than a single resource. Its mix of resources can also be chosen to best match consumption on average. But that would still leave a discrepancy between renewable supply and electrical demand.
There are many possible solutions for coping with the discrepancy between renewable supply and electrical demand. Generally, I have advocated for demand-side responses to match demand to supply. And now that chemical battery storage is getting cheaper, batteries also play an important role in aligning renewable production to consumer demand. Where available, hydroelectric resources are useful. (Bill Gross’s Idealab has developed another promising storage technology that lifts heavy weights to store energy.) And I can also see how thermal resources, used sparingly, would help in matching supply to demand, particularly to cope with solar and wind “droughts,” where there might be little wind and solar for several days.
What is the most cost-effective solution? What’s key, I think, is to understand the underlying cost structure of each proposed solution, at least in broad brush. And when considering cost structure, we need to take into account the distinction between the capital cost per kW of power capacity or per kWh of energy capacity, or both, and the operating cost per kWh.
At one extreme, we might consider a solution that is capital intensive, like batteries and some thermal generation, with a high cost per kW of power capacity or per kWh of energy capacity. Such assets are most economical when used very often. Battery storage used on a daily basis to provide ancillary services has been relatively profitable, meaning that the value it delivers can easily justify the expense of the battery. On the other hand, using a battery or a thermal generator only for a once-a-decade condition, such as the extreme weather in ERCOT in February 2021, is likely to be expensive, because the cost is expended for only one occurrence of benefit per decade. In other words, to be cost effective, we need to stack the benefits of batteries. If we can use batteries for multiple applications, overall utilization is high enough to justify the cost.
At the other extreme, we might consider solutions that are operational cost intensive, with relatively lower capital cost. They include peaking generation, consumer backup generators, and various forms of demand response that involve interrupting customer service. These solutions are only viable if used very occasionally and for short periods of time.
Some good news on the consumer backup generation front: the upcoming “vehicle-to-home” (V2H) technology, where an electric vehicle battery is harnessed in a microgrid with rooftop solar. The Nissan Leaf already offers V2H, and next year the Ford F150 Lightning will be available with V2H. V2H is best suited to only occasional use because, obviously, unlike a dedicated generator, you need your car to drive. However, in a winter storm such as Texas experienced in February 2021, I needed power at my house during the blackout and could not even drive on the snow-covered roads: an ideal application for V2H! Such solutions match the rare, but sometimes severe, occurrences of distribution failures and rolling blackouts, stacking this occasional back-up role on top of the daily benefits of having a car. In a highly renewable world, V2H is also a potential solution for renewable droughts.
In between those two extremes, we can also imagine other solutions. Battery technology is improving, pushing its applicability toward uses with lower utilization. For example, some level of battery storage may be appropriate for daily charge and discharge cycles even though it would be insufficient for an extended blackout or renewable drought. Some demand-side adaptations such as residential pre-cooling and industrial demand-response may also be viable on a regular, daily basis. They will not, however, provide for a multi-day event such as the 2021 storm.
If we design a portfolio of solutions with heterogeneous cost characteristics, some with high capital cost and low operating cost, others with low capital cost and high operating cost, and others in between, then adaptation to renewable fluctuations can be accomplished across multiple timescales. If we know the cost characteristics of each proposed item in the portfolio, a “screening curve” provides a good guide to the lowest-cost portfolio. (Tong Zhang has developed a simplified implementation of a screening curve analysis. This version does not consider storage, but it could be used to evaluate the least cost portfolio of generation resources to complement renewables.)
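The screening-curve idea can be sketched in a few lines of Python, with purely illustrative cost figures: for each technology, annualized cost per kW is capital cost plus operating cost times hours of use, and the cheapest choice depends on utilization.

```python
# Minimal screening-curve sketch; cost figures are illustrative only.
techs = {
    # name: (annualized capital cost $/kW-yr, operating cost $/kWh)
    "battery": (150.0, 0.01),
    "peaker_gas": (60.0, 0.15),
    "demand_response": (5.0, 1.00),
}

def cheapest(hours_per_year):
    """Lowest total-cost technology at a given annual utilization."""
    total = {n: cap + op * hours_per_year for n, (cap, op) in techs.items()}
    return min(total, key=total.get)

for h in (10, 500, 5000):
    print(h, "->", cheapest(h))
```

In this sketch, high-operating-cost demand response wins for rare events, a peaker wins at moderate utilization, and the capital-intensive battery wins only when used for thousands of hours a year, which is exactly the heterogeneous portfolio logic above.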
Yes, the least-cost portfolio will involve batteries, but batteries are not the full solution. And neither are demand-side adaptations. It takes a portfolio of resources to match supply and demand. This has always been the case for electricity, but the addition of renewables to supply requires that we change how we design the mix of resources.