The Wonderful Politics of Cap-and-Trade: A Closer Look at Waxman-Markey

The headline of this post is not meant to be ironic.   Despite all the hand-wringing in the press and the blogosphere about a political “give-away” of allowances for the cap-and-trade system in the Waxman-Markey bill voted out of committee last week, the politics of cap-and-trade systems are truly quite wonderful, which is why these systems have been used, and used successfully.

The Waxman-Markey allocation of allowances has its problems, which I will get to, but before noting those problems it is exceptionally important to keep in mind what is probably the key attribute of cap-and-trade systems:  the allocation of allowances – whether the allowances are auctioned or given out freely, and how they are freely allocated – has no impact on the equilibrium distribution of allowances (after trading), and therefore no impact on the allocation of emissions (or emissions abatement), the total magnitude of emissions, or the aggregate social costs.  (Well, there are some relatively minor caveats – those “problems” I mentioned – about which more below.)  By the way, this independence of a cap-and-trade system’s performance from the initial allowance allocation was established as far back as 1972 by David Montgomery in a path-breaking article in the Journal of Economic Theory (based upon his 1971 Harvard economics Ph.D. dissertation). It has been validated with empirical evidence repeatedly over the years.

Generally speaking, the choice between auctioning and freely allocating allowances does not influence firms’ production and emission reduction decisions.  Firms face the same emissions cost regardless of the allocation method.  When using an allowance, whether it was received for free or purchased, a firm loses the opportunity to sell that allowance, and thereby recognizes this “opportunity cost” in deciding whether to use the allowance.  Consequently, the allocation choice will not influence a cap’s overall costs.
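To see the opportunity-cost logic in numbers, here is a minimal sketch; the $20 allowance price and $15 abatement cost are hypothetical values chosen purely for illustration:

```python
# Hypothetical figures: allowances trade at $20/ton; abating one ton costs $15.
allowance_price = 20.0   # market price of one allowance ($/ton)
abatement_cost = 15.0    # firm's cost of reducing one ton of emissions ($/ton)

# If the firm bought the allowance, using it costs the $20 purchase price.
cost_if_purchased = allowance_price
# If the firm received the allowance for free, using it forfeits a $20 sale,
# so the opportunity cost of emitting is the same $20.
cost_if_free = allowance_price

for label, cost in [("purchased", cost_if_purchased), ("free", cost_if_free)]:
    decision = "abate" if abatement_cost < cost else "emit"
    print(f"{label} allowance: effective cost ${cost:.0f}/ton -> {decision}")
# Either way the firm abates, because $15 < $20: the allocation method
# does not change the margin on which the decision is made.
```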

Manifest political pressures lead to different initial allocations of allowances, which affect distribution, but not environmental effectiveness, and not cost-effectiveness.  This means that ordinary political pressures need not get in the way of developing and implementing a scientifically sound, economically rational, and politically pragmatic policy.  Contrast this with what would happen when political pressures are brought to bear on a carbon tax proposal, for example.  Here the result will most likely be exemptions of sectors and firms, which reduces environmental effectiveness and drives up costs (as some low-cost emission reduction opportunities are left off the table).  Furthermore, the hypothetical carbon tax example is the norm, not the exception.  Across the board, political pressures often reduce the effectiveness and increase the cost of well-intentioned public policies.  Cap-and-trade provides natural protection from this.  Distributional battles over the allowance allocation in a cap-and-trade system do not raise the overall cost of the program nor affect its environmental impacts.

In fact, the political process of states, districts, sectors, firms, and interest groups fighting for their share of the pie (free allowance allocations) serves as the mechanism whereby a political constituency in support of the system is developed, but without detrimental effects to the system’s environmental or economic performance.  That’s the good news, and it should never be forgotten.

But, depending upon the specific allocation mechanisms employed, there are several ways that the choice to freely distribute allowances can affect a system’s cost.  Here’s where the “caveats” and “problems” come in.

First, auction revenue may be used in ways that reduce the costs of the existing tax system or fund other socially beneficial policies.  Free allocations to the private sector forgo such opportunities.  Below I will estimate the actual share of allowance value that accrues to the private sector.

Second, some proposals to freely allocate allowances to electric utilities may affect electricity prices, and thereby affect the extent to which reduced electricity demand contributes to limiting emissions cost-effectively.  Waxman-Markey allocates allowances to local distribution companies, which are subject to cost-of-service regulation even in regions with restructured wholesale electricity markets.  So, electricity prices would likely be affected by these allocations under existing state regulatory regimes.  The Waxman-Markey legislation seeks to address this problem by specifying that the economic value of the allowances given to electricity and natural gas local distribution companies should be passed on to consumers through lump-sum rebates, not through a reduction in electricity rates, thereby compensating consumers for increases in electricity prices, but without reducing incentives for energy conservation.
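A stylized sketch may help here; the rates, rebate, and usage below are hypothetical, but they show why the form of the pass-through matters: a lump-sum rebate leaves the marginal price of electricity (and hence the conservation incentive) intact, while an equivalent rate cut does not.

```python
# Hypothetical illustration: allowance costs raise the electricity rate from
# 10 to 12 cents/kWh, and the LDC must pass $20/month of allowance value
# (an assumed figure) back to each customer using 1,000 kWh/month.
base_rate = 0.10     # $/kWh before the cap
carbon_adder = 0.02  # $/kWh increase from allowance costs
rebate_value = 20.0  # $ of allowance value per customer per month (assumed)
usage_kwh = 1000     # monthly consumption

# Option A: lump-sum rebate. The customer still faces 12 cents at the margin,
# so every kWh conserved saves the full carbon-inclusive price.
bill_lump_sum = (base_rate + carbon_adder) * usage_kwh - rebate_value
marginal_price_a = base_rate + carbon_adder

# Option B: rate reduction. Spreading the same $20 across 1,000 kWh cuts the
# rate back to 10 cents, muting the price signal for conservation.
bill_rate_cut = (base_rate + carbon_adder - rebate_value / usage_kwh) * usage_kwh
marginal_price_b = base_rate + carbon_adder - rebate_value / usage_kwh

print(f"Lump-sum rebate: bill ${bill_lump_sum:.2f}, marginal price {marginal_price_a*100:.0f} cents/kWh")
print(f"Rate reduction:  bill ${bill_rate_cut:.2f}, marginal price {marginal_price_b*100:.0f} cents/kWh")
# Identical bills, but only the lump-sum rebate preserves the conservation incentive.
```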

Third, and of most concern in the context of the Waxman-Markey legislation, “output-based updating allocations” provide perverse incentives and drive up the costs of achieving a cap.  This merits some explanation.  If allowances are freely allocated, the allocation should be based on historical measures, such as output or emissions in a (previous) base year, not on measures that firms can affect, such as output or emissions in the current year.  Updating allocations, by contrast, periodically adjust a firm’s allocation over time to reflect changes in its operations.

An output-based updating allocation ties the quantity of allowances that a firm receives to its output (production).  Such an allocation is essentially a production subsidy.  This distorts firms’ pricing and production decisions in ways that can introduce unintended consequences and may significantly increase the cost of meeting an emissions target.  Updating therefore has the potential to create perverse, undesirable incentives.
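A back-of-the-envelope sketch of that subsidy effect, with hypothetical allowance prices and benchmarks:

```python
# Hypothetical firm: each unit of output emits 0.8 tons of CO2;
# allowances trade at $20/ton.
allowance_price = 20.0     # $/ton
emissions_intensity = 0.8  # tons of CO2 per unit of output

# Fixed (historical) allocation: producing one more unit consumes
# allowances worth the full carbon cost of that unit.
marginal_carbon_cost_fixed = allowance_price * emissions_intensity  # $16/unit

# Output-based updating allocation: suppose each unit of current output
# earns 0.6 free allowances (an assumed benchmark). Extra production now
# also brings in allowance value -- an implicit production subsidy.
allowances_per_unit = 0.6
implicit_subsidy = allowance_price * allowances_per_unit  # $12/unit
marginal_carbon_cost_updating = marginal_carbon_cost_fixed - implicit_subsidy

print(f"Marginal carbon cost, fixed allocation:    ${marginal_carbon_cost_fixed:.2f}/unit")
print(f"Marginal carbon cost, updating allocation: ${marginal_carbon_cost_updating:.2f}/unit")
# The muted cost signal keeps output (and emissions) higher in this sector,
# so more expensive abatement must occur elsewhere to meet the same cap.
```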

In Waxman-Markey, updating allocations are used for specific sectors with high CO2 emissions intensity and unusual sensitivity to international competition, in an effort to preserve international competitiveness and reduce emissions leakage.  It’s an open question whether this approach is superior to an import allowance requirement, whereby imports of a small set of specific commodities must carry with them CO2 allowances.  The problem with import allowance requirements is that they can damage international trade relations.  The only real solution to the competitiveness issue is to bring non-participating countries within an international climate regime in meaningful ways.  (On this, please see the work of the Harvard Project on International Climate Agreements.)

Also, output-based allocations are used in Waxman-Markey for merchant coal generators, thereby discouraging reductions in coal-fired electricity generation, another significant and costly distortion.

Now, let’s go back to the hand-wringing in the press and blogosphere about the so-called massive political “give-away” of allowances.  Perhaps unintentionally, there has been some misleading press coverage, suggesting that up to 75% or 80% of the allowances are given away to private industry as a windfall over the life of the program, 2012-2050 (in contrast with the 100% auction originally favored by President Obama).

Given the nature of the allowance allocation in the Waxman-Markey legislation, the best way to assess its implications is not as “free allocation” versus “auction,” but rather in terms of who is the ultimate beneficiary of each element of the allocation and auction, that is, how the value of the allowances is allocated.  On closer inspection, it turns out that many of the elements of the apparently free allocation accrue to consumers and public purposes, not private industry.

First of all, let’s look at the elements that will accrue to consumers and public purposes.  Next to each allocation element is its respective share of allowances over the period 2012-2050 (measured as a share of the cap, after removing the allowances sold to private industry from a “strategic reserve,” which functions as a cost-containment measure):

a.  Electricity and natural gas local distribution companies (22.2%), minus the share (6%) that benefits industry as consumers of electricity (note:  there is a consequent 3% reduction in the allocation to energy-intensive, trade-exposed industries, item j below, which is then dedicated to the broad-based consumer rebates described below): 22.2% – 6% = 16.2%

b.  Home heating oil/propane, 0.9%

c.  Protection for low- and moderate-income households, 15.0%

d.  Worker assistance and job training, 0.8%

e.  States for renewable energy, efficiency, and building codes, 5.8%

f.   Clean energy innovation centers, 1.0%

g.  International deforestation, clean technology, and adaptation, 8.7%

h.  Domestic adaptation, 5.0%

The following elements will accrue to private industry, again with average (2012-2050) shares of allowances:

i.   Merchant coal generators, 3.0%

j.   Energy-intensive, trade-exposed industries (minus the 3% reduction in allocation due to the EITE benefits from the LDC allocation in item a above): 8.0% – 3.0% = 5.0%

k.  Carbon-capture and storage incentives, 4.1%

l.   Clean vehicle technology standards, 1.0%

m. Oil refiners, 1.0%

n.  Net benefits to industry as consumers of lower-priced electricity from allocation to LDCs, 6.0%

The split over the entire period from 2012 to 2050 is 53.4% for consumers and public purposes, and 20.1% for private industry.  This 20% is drastically different from the suggestions that 70%, 80%, or more of the allowances will be given freely to private industry in a “massive corporate give-away.”

All categories – (a) through (n), above – sum to 73.5% of the total quantity of allowances over the period 2012-2050.  The remaining allowances — 26.5% over 2012 to 2050 — are scheduled in Waxman-Markey to be used almost entirely for consumer rebates, with the share of available allowances for this purpose rising from approximately 10% in 2025 to more than 50% by 2050.  Thus, the totals become 79.9% for consumers and public purposes versus 20.1% for private industry, or approximately 80% versus 20% — the opposite of the “80% free allowance corporate give-away” featured in many press and blogosphere accounts.  Moreover, because some of the allocations to private industry are – for better or for worse – conditional on recipients undertaking specific costly investments, such as investments in carbon capture and storage, part of the 20% free allocation to private industry should not be viewed as a windfall.
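For readers who want to check the arithmetic, here is a quick tally of the shares listed above:

```python
# Shares of allowance value over 2012-2050, from the lists above (percent).
consumers_public = {
    "LDCs, net of industry benefit": 16.2,
    "home heating oil/propane": 0.9,
    "low/moderate-income households": 15.0,
    "worker assistance and job training": 0.8,
    "states: renewables/efficiency/codes": 5.8,
    "clean energy innovation centers": 1.0,
    "international forests/tech/adaptation": 8.7,
    "domestic adaptation": 5.0,
}
private_industry = {
    "merchant coal generators": 3.0,
    "EITE industries, net of LDC benefit": 5.0,
    "CCS incentives": 4.1,
    "clean vehicle technology": 1.0,
    "oil refiners": 1.0,
    "industry as electricity consumers": 6.0,
}

public_share = sum(consumers_public.values())     # 53.4
private_share = sum(private_industry.values())    # 20.1
remainder = 100.0 - public_share - private_share  # 26.5, mostly consumer rebates

print(f"consumers/public purposes: {public_share:.1f}%")
print(f"private industry:          {private_share:.1f}%")
print(f"remainder (mostly consumer rebates): {remainder:.1f}%")
print(f"final split: {public_share + remainder:.1f}% vs {private_share:.1f}%")
# -> roughly 80% for consumers and public purposes, 20% for private industry.
```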

Speaking of the conditional allocations, I should also note that some observers (who are skeptical about government programs) may reasonably question some of the dedicated public purposes of the allowance distribution, but such questioning is equivalent to questioning dedicated uses of auction revenues.  The fundamental reality remains:  the appropriate characterization of the Waxman-Markey allocation is that 80% of the value of allowances go to consumers and public purposes, and 20% to private industry.

Finally, it should be noted that this 80-20 split is roughly consistent with empirical economic analyses of the share that would be required – on average — to fully compensate (but no more) private industry for equity losses due to the policy’s implementation.  In a series of analyses that considered the share of allowances that would be required in perpetuity for full compensation, Bovenberg and Goulder (2003) found that 13 percent would be sufficient for compensation of the fossil fuel extraction sectors, and Smith, Ross, and Montgomery (2002) found that 21 percent would be needed to compensate primary energy producers and electricity generators.

In my work for the Hamilton Project in 2007, I recommended beginning with a 50-50 auction-free-allocation split, moving to 100% auction over 25 years, because that time-path of numerical division between the share of allowances that is freely allocated to regulated firms and the share that is auctioned is equivalent (in terms of present discounted value) to perpetual allocations of 15 percent, 19 percent, and 22 percent, at real interest rates of 3, 4, and 5 percent, respectively.  My recommended allocation was designed to be consistent with the principle of targeting free allocations to burdened sectors in proportion to their relative burdens, while being politically pragmatic with more generous allocations in the early years of the program.
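The present-value equivalence can be verified directly. The sketch below assumes, for illustration, that the free share declines linearly from 50% to zero over the 25 years, which reproduces the cited figures:

```python
# Present-value share of allowances freely allocated under a 50% free /
# 50% auction split that phases linearly to 100% auction over 25 years,
# expressed as the equivalent constant perpetual free-allocation share.
def equivalent_perpetual_share(r: float, years: int = 25,
                               initial_share: float = 0.5) -> float:
    # PV of the freely allocated fraction of a constant annual allowance stream
    pv_free = sum(initial_share * (1 - t / years) / (1 + r) ** t
                  for t in range(years))
    # PV of the whole allowance stream in perpetuity: sum of (1+r)^-t = (1+r)/r
    pv_all = (1 + r) / r
    return pv_free / pv_all

for r in (0.03, 0.04, 0.05):
    print(f"real interest rate {r:.0%}: equivalent perpetual share "
          f"{equivalent_perpetual_share(r):.0%}")
# -> roughly 15%, 19%, and 22%, as noted above.
```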

So, the Waxman-Markey 80/20 allowance split turns out to be consistent  — on average, i.e. economy-wide — with independent economic analysis of the share that would be required to fully compensate (but no more) the private sector for equity losses due to the imposition of the cap, and consistent with my Hamilton Project recommendation of a 50/50 split phased out to 100% auction over 25 years.

Going forward, many observers and participants in the policy process may continue to question the wisdom of some elements of the Waxman-Markey allowance allocation.  There’s nothing wrong with that.

But let’s be clear that, first, for the most part, the allocation of allowances affects neither the environmental performance of the cap-and-trade system nor its aggregate social cost.

Second, questioning should continue about the output-based allocation elements, because of the perverse incentives they put in place.

Third, we should be honest that the legislation, for all its flaws, is by no means the “massive corporate give-away” that it has been labeled.  On the contrary, 80% of the value of the allowances accrues to consumers and public purposes, and some 20% accrues to covered, private industry.  This split is roughly consistent with the recommendations of independent economic research.

Fourth and finally, it should not be forgotten that the much-lamented deal-making that took place in the House committee last week for shares of the allowances for various purposes was a good example of the useful, important, and fundamentally benign mechanism through which a cap-and-trade system provides the means for a political constituency of support and action to be assembled (without reducing the policy’s effectiveness or driving up its cost).

Although there has surely been some insightful press coverage and intelligent public debate (including in the blogosphere) about the pros and cons of cap-and-trade, the Waxman-Markey legislation, and many of its design elements, it is remarkable (and unfortunate) how misleading so much of the coverage has been of the issues and the numbers surrounding the proposed allowance allocation.


Does economic analysis shortchange the future?

Decisions made today usually have impacts both now and in the future. In the environmental realm, many of the future impacts are benefits, and such future benefits — as well as costs — are typically discounted by economists in their analyses.  Why do economists do this, and does it give insufficient weight to future benefits and thus to the well-being of future generations?

This is a question my colleague, Lawrence Goulder, a professor of economics at Stanford University, and I addressed in an article in Nature.  We noted that as economists, we often encounter skepticism about discounting, especially from non-economists. Some of the skepticism seems quite valid, yet some reflects misconceptions about the nature and purposes of discounting.  In this post, I hope to clarify the concept and the practice.

It helps to begin with the use of discounting in private investments, where the rationale stems from the fact that capital is productive – money earns interest.  Consider a company trying to decide whether to invest $1 million in the purchase of a copper mine, and suppose that the most profitable strategy involves extracting the available copper 3 years from now, yielding revenues (net of extraction costs) of $1,150,000. Would investing in this mine make sense?  Assume the company has the alternative of putting the $1 million in the bank at 5 per cent annual interest. Then, on a purely financial basis, the company would do better by putting the money in the bank, as it will have $1,000,000 x (1.05)³, or $1,157,625, that is, $7,625 more than it would earn from the copper mine investment.

I compared the alternatives by compounding the up-front cost of the project to the future. It is mathematically equivalent to compare the options by discounting to the present the future revenues or benefits from the copper mine. The discounted revenue is $1,150,000 divided by (1.05)³, or $993,413, which is less than the cost of the investment ($1 million).  So the project would not earn as much as the alternative of putting the money in the bank.
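The two calculations are two views of the same comparison, as a few lines of arithmetic confirm:

```python
# The copper-mine example: $1,000,000 today vs. $1,150,000 in 3 years at 5%.
cost_today = 1_000_000
revenue_in_3_years = 1_150_000
r, years = 0.05, 3

# View 1: compound today's cost forward to year 3.
future_value_of_bank_deposit = cost_today * (1 + r) ** years      # $1,157,625
# View 2: discount the future revenue back to the present.
present_value_of_revenue = revenue_in_3_years / (1 + r) ** years  # ~$993,413

print(f"Bank alternative in year 3: ${future_value_of_bank_deposit:,.0f}")
print(f"Mine revenue in today's $:  ${present_value_of_revenue:,.0f}")
# Both comparisons reach the same verdict: the bank beats the mine.
```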

Discounting translates future dollars into equivalent current dollars; it undoes the effects of compound interest. It is not aimed at accounting for inflation, as even if there were no inflation, it would still be necessary to discount future revenues to account for the fact that a dollar today translates (via compound interest) into more dollars in the future.

Can this same kind of thinking be applied to investments made by the public sector?  Since my purpose is to clarify a few key issues in the starkest terms, I will use a highly stylized example that abstracts from many of the subtleties.  Suppose that a policy, if introduced today and maintained, would avoid significant damage to the environment and human welfare 100 years from now. The ‘return on investment’ is avoided future damages to the environment and people’s well-being. Suppose that this policy costs $4 billion to implement, and that this cost is completely borne today.  It is anticipated that the benefits – avoided damages to the environment – will be worth $800 billion to people alive 100 years from now.  Should the policy be implemented?

If we adopt the economic efficiency criterion I have described in previous posts, the question becomes whether the future benefits are large enough that the winners could potentially compensate the losers and still be no worse off.  Here discounting is helpful. If, over the next 100 years, the average rate of interest on ordinary investments is 5 per cent, the gains of $800 billion to people 100 years from now are equivalent to $6.08 billion today.  Equivalently, $6.08 billion today, compounded at an annual interest rate of 5 per cent, will become $800 billion in 100 years. The project satisfies the principle of efficiency if it costs current generations less than $6.08 billion, otherwise not.
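The same arithmetic, applied to the stylized 100-year policy:

```python
# Stylized policy: $4 billion cost today, $800 billion of avoided damages
# in 100 years, discounted at a 5% average rate of return.
future_benefits = 800e9
cost_today = 4e9
r, years = 0.05, 100

pv_benefits = future_benefits / (1 + r) ** years  # ~$6.08 billion

print(f"Present value of benefits: ${pv_benefits/1e9:.2f} billion")
print(f"Up-front cost:             ${cost_today/1e9:.2f} billion")
print("Efficient" if pv_benefits > cost_today else "Not efficient")
```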

Since the $4 billion of up-front costs are less than $6.08 billion, the benefits to future generations are more than enough to offset the costs to current generations. Discounting serves the purpose of converting costs and benefits from various periods into equivalent dollars of some given period.  Applying a discount rate is not giving less weight to future generations’ welfare.  Rather, it is simply converting the (full) impacts that occur at different points of time into common units.

Much skepticism about discounting and, more broadly, the use of benefit-cost analysis, is connected to uncertainties in estimating future impacts. Consider the difficulties of ascertaining, for example, the benefits that future generations would enjoy from a regulation that protects certain endangered species. Some of the gain to future generations might come in the form of pharmaceutical products derived from the protected species. Such benefits are impossible to predict. Benefits also depend on the values future generations would attach to the protected species – the enjoyment of observing them in the wild or just knowing of their existence. But how can we predict future generations’ values?  Economists and other social scientists try to infer them through surveys and from individuals’ behavior.  But these approaches are far from perfect, and at best they indicate only the values or tastes of people alive today.

The uncertainties are substantial and unavoidable, but they do not invalidate the use of discounting (or benefit-cost analysis).  They do oblige analysts, however, to assess and acknowledge those uncertainties in their policy assessments, a topic I discussed in my last post (“What Baseball Can Teach Policymakers”), and a topic to which I will return in the future.


What Baseball Can Teach Policymakers

With the Major League Baseball season having just begun, I’m reminded of the truism that the best teams win their divisions in the regular season, but the hot teams win in the post-season playoffs.  Why the difference?  The regular season is 162 games long, but the post-season consists of just a few brief 5-game and 7-game series.  And because of the huge random element that pervades the sport, in a single game (or a short series), the best teams often lose, and the worst teams often win.

The numbers are striking, and bear repeating.  In a typical year, the best teams lose 40 percent of their games, and the worst teams win 40 percent of theirs.  In the extreme, one of the best Major League Baseball teams ever – the 1927 New York Yankees – lost 29 percent of their games; and one of the worst teams in history – the 1962 New York Mets – won 25 percent of theirs.  On any given day, anything can happen.  Uncertainty is a fundamental part of the game, and any analysis that fails to recognize this is not only incomplete, but fundamentally flawed.

The same is true of analyses of environmental policies.  Uncertainty is an absolutely fundamental aspect of environmental problems and the policies that are employed to address those problems.  Any analysis that fails to recognize this runs the risk not only of being incomplete, but misleading as well.  Judson Jaffe, formerly at Analysis Group, and I documented this in a study published in Regulation and Governance.

To estimate proposed regulations’ benefits and costs, analysts frequently rely on inputs that are uncertain —  sometimes substantially so.  Such uncertainties in underlying inputs are propagated through analyses, leading to uncertainty in ultimate benefit and cost estimates, which constitute the core of a Regulatory Impact Analysis (RIA), required by Presidential Executive Order for all “economically significant” proposed Federal regulations.

Despite this uncertainty, the most prominently displayed results in RIAs are typically single, apparently precise point estimates of benefits, costs, and net benefits (benefits minus costs), masking uncertainties inherent in their calculation and possibly obscuring tradeoffs among competing policy options.  Historically, efforts to address uncertainty in RIAs have been very limited, but guidance set forth in the U.S. Office of Management and Budget’s (OMB) Circular A‑4 on Regulatory Analysis has the potential to enhance the information provided in RIAs regarding uncertainty in benefit and cost estimates.  Circular A‑4 requires the development of a formal quantitative assessment of uncertainty regarding a regulation’s economic impact if either annual benefits or costs are expected to reach $1 billion.

Over the years, formal quantitative uncertainty assessments — known as Monte Carlo analyses — have become common in a variety of fields, including engineering, finance, and a number of scientific disciplines, as well as in “sabermetrics” (quantitative, especially statistical analysis of professional baseball), but rarely have such methods been employed in RIAs.

The first step in a Monte Carlo analysis involves the development of probability distributions of uncertain inputs to an analysis.  These probability distributions reflect the implications of uncertainty regarding an input for the range of its possible values and the likelihood that each value is the true value.  Once probability distributions of inputs to a benefit‑cost analysis are established, a Monte Carlo analysis is used to simulate the probability distribution of the regulation’s net benefits by carrying out the calculation of benefits and costs thousands, or even millions, of times.  With each iteration of the calculations, new values are randomly drawn from each input’s probability distribution and used in the benefit and/or cost calculations.  Over the course of these iterations, the frequency with which any given value is drawn for a particular input is governed by that input’s probability distribution.  Importantly, any correlations among individual items in the benefit and cost calculations are taken into account.  The resulting set of net benefit estimates characterizes the complete probability distribution of net benefits.
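Here is a minimal sketch of that procedure; the input distributions and parameters are purely illustrative and not drawn from any actual RIA:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000  # number of Monte Carlo iterations

# Illustrative uncertain inputs (all distributions assumed for this sketch):
# avoided emissions (tons), avoided damages per ton ($), and compliance cost ($).
tons_reduced = rng.normal(loc=1.0e6, scale=0.2e6, size=n)
benefit_per_ton = rng.lognormal(mean=np.log(40), sigma=0.5, size=n)
compliance_cost = rng.normal(loc=30e6, scale=8e6, size=n)

# Each iteration draws one value from every input distribution and computes
# net benefits; over many iterations the draws trace out the full distribution.
net_benefits = tons_reduced * benefit_per_ton - compliance_cost

p5, p50, p95 = np.percentile(net_benefits, [5, 50, 95])
print(f"median net benefits: ${p50/1e6:,.0f} million")
print(f"90% interval: ${p5/1e6:,.0f}M to ${p95/1e6:,.0f}M")
print(f"probability net benefits are positive: {(net_benefits > 0).mean():.0%}")
```

In a real RIA application, correlated inputs would be drawn jointly (for example, from a multivariate distribution) rather than independently, as noted above.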

Uncertainty is inevitable in estimates of environmental regulations’ economic impacts, and assessments of the extent and nature of such uncertainty provide important information for policymakers evaluating proposed regulations.  Such information offers a context for interpreting benefit and cost estimates, and can lead to point estimates of regulations’ benefits and costs that differ from those produced by purely deterministic analyses (which ignore uncertainty).  In addition, these assessments can help establish priorities for research.

Due to the complexity of interactions among uncertainties in inputs to RIAs, an accurate assessment of uncertainty can be gained only through the use of formal quantitative methods, such as Monte Carlo analysis.  Although these methods can offer significant insights, they require only limited additional effort relative to that already expended on RIAs.  Much of the data required for these analyses is already obtained by EPA in its preparation of RIAs; and widely available software allows the execution of Monte Carlo analysis in common spreadsheet programs on a desktop computer.  In a specific application in the Regulation and Governance study, Jaffe and I demonstrate the use and advantages of employing formal quantitative analysis of uncertainty in a review of EPA’s 2004 RIA for its Nonroad Diesel Rule.

Formal quantitative assessments of uncertainty can mark a truly significant step forward in enhancing regulatory analysis under Presidential Executive Orders.  They have the potential to improve substantially our understanding of the impact of environmental regulations, and thereby to lead to more informed policymaking.
