Is Benefit-Cost Analysis Helpful for Environmental Regulation?

With the locus of action on Federal climate policy moving this week from the House of Representatives to the Senate, this is a convenient moment to step back from the political fray and reflect on some fundamental questions about U.S. environmental policy.

One such question is whether economic analysis – in particular, the comparison of the benefits and costs of proposed policies – plays a truly useful role in Washington; whether it is little more than a distraction from more important perspectives on public policy; or, worst of all, whether it is counter-productive, even antithetical, to the development, assessment, and implementation of sound policy in the environmental, resource, and energy realms.  With an exceptionally talented group of thinkers – including scientists, lawyers, and economists – now in key environmental and energy policy positions at the White House, the Environmental Protection Agency, the Department of Energy, and the Department of the Treasury, this question about the usefulness of benefit-cost analysis is of particular importance.

For many years, there have been calls from some quarters for greater reliance on the use of economic analysis in the development and evaluation of environmental regulations.  As I have noted in previous posts on this blog, most economists would argue that economic efficiency — measured as the difference between benefits and costs — ought to be one of the key criteria for evaluating proposed regulations.  (See:  “The Myths of Market Prices and Efficiency,” March 3, 2009; “What Baseball Can Teach Policymakers,” April 20, 2009; “Does Economic Analysis Shortchange the Future?” April 27, 2009)  Because society has limited resources to spend on regulation, such analysis can help illuminate the trade-offs involved in making different kinds of social investments.  In this sense, it would seem irresponsible not to conduct such analyses, since they can inform decisions about how scarce resources can be put to the greatest social good.

In principle, benefit-cost analysis can also help answer questions of how much regulation is enough.  From an efficiency standpoint, the answer to this question is simple — regulate until the incremental benefits from regulation are just offset by the incremental costs.  In practice, however, the problem is much more difficult, in large part because of inherent problems in measuring marginal benefits and costs.  In addition, concerns about fairness and process may be very important economic and non-economic factors.  Regulatory policies inevitably involve winners and losers, even when aggregate benefits exceed aggregate costs.
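The efficiency condition – regulate until incremental benefits just offset incremental costs – can be illustrated with a small numerical sketch.  The linear marginal benefit and cost curves below are purely hypothetical, chosen only to make the logic concrete:

```python
# Hypothetical curves: the marginal benefit of regulation (abatement) falls
# as stringency q rises, while the marginal cost rises.
# Efficiency says: regulate until MB(q) = MC(q).

def marginal_benefit(q):
    return 100 - 2 * q      # MB(q) = 100 - 2q  (hypothetical)

def marginal_cost(q):
    return 3 * q            # MC(q) = 3q        (hypothetical)

def net_benefit(q):
    # Integral of (MB - MC) from 0 to q: 100q - q^2 - 1.5q^2
    return 100 * q - q**2 - 1.5 * q**2

# Scan candidate stringency levels and pick the net-benefit maximizer
best_q = max(range(0, 51), key=net_benefit)

print(best_q)                      # 20 -- exactly where MB(q) = MC(q)
print(marginal_benefit(best_q))    # 60
print(marginal_cost(best_q))       # 60
```

Pushing stringency past the crossing point (here, q = 20) adds more cost than benefit; stopping short leaves net benefits on the table.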

Over the years, policy makers have sent mixed signals regarding the use of benefit-cost analysis in policy evaluation.  Congress has passed several statutes to protect health, safety, and the environment that effectively preclude the consideration of benefits and costs in the development of certain regulations, even though other statutes actually require the use of benefit-cost analysis.  At the same time, Presidents Carter, Reagan, George H. W. Bush, Clinton, and George W. Bush all put in place formal processes for reviewing the economic implications of major environmental, health, and safety regulations.  Apparently the Executive Branch, charged with designing and implementing regulations, has seen a greater need than the Congress to develop a yardstick against which regulatory proposals can be assessed.  Benefit-cost analysis has been the yardstick of choice.

It was in this context that ten years ago a group of economists from across the political spectrum jointly authored an article in Science magazine, asking whether there is a role for benefit-cost analysis in environmental, health, and safety regulation.  That diverse group consisted of Kenneth Arrow, Maureen Cropper, George Eads, Robert Hahn, Lester Lave, Roger Noll, Paul Portney, Milton Russell, Richard Schmalensee, Kerry Smith, and myself.  That article and its findings are particularly timely, with President Obama considering putting in place a new Executive Order on Regulatory Review.

In the article, we suggested that benefit-cost analysis has a potentially important role to play in helping inform regulatory decision making, though it should not be the sole basis for such decision making.  We offered eight principles.

First, benefit-cost analysis can be useful for comparing the favorable and unfavorable effects of policies, because it can help decision makers better understand the implications of decisions by identifying and, where appropriate, quantifying the favorable and unfavorable consequences of a proposed policy change.  But, in some cases, there is too much uncertainty to use benefit-cost analysis to conclude that the benefits of a decision will exceed or fall short of its costs.

Second, decision makers should not be precluded from considering the economic costs and benefits of different policies in the development of regulations.  Removing statutory prohibitions on the balancing of benefits and costs can help promote more efficient and effective regulation.

Third, benefit-cost analysis should be required for all major regulatory decisions. The scale of a benefit-cost analysis should depend on both the stakes involved and the likelihood that the resulting information will affect the ultimate decision.

Fourth, although agencies should be required to conduct benefit-cost analyses for major decisions, and to explain why they have selected actions for which reliable evidence indicates that expected benefits are significantly less than expected costs, those agencies should not be bound by strict benefit-cost tests.  Factors other than aggregate economic benefits and costs may be important.

Fifth, benefits and costs of proposed policies should be quantified wherever possible.  But not all impacts can be quantified, let alone monetized.  Therefore, care should be taken to assure that quantitative factors do not dominate important qualitative factors in decision making.  If an agency wishes to introduce a “margin of safety” into a decision, it should do so explicitly.

Sixth, the more external review that regulatory analyses receive, the better they are likely to be.  Retrospective assessments should be carried out periodically.

Seventh, a consistent set of economic assumptions should be used in calculating benefits and costs.  Key variables include the social discount rate, the value of reducing risks of premature death and accidents, and the values associated with other improvements in health.

Eighth, while benefit-cost analysis focuses primarily on the overall relationship between benefits and costs, a good analysis will also identify important distributional consequences for important subgroups of the population.

From these eight principles, we concluded that benefit-cost analysis can play an important role in legislative and regulatory policy debates on protecting and improving the natural environment, health, and safety.  Although formal benefit-cost analysis should not be viewed as either necessary or sufficient for designing sensible public policy, it can provide an exceptionally useful framework for consistently organizing disparate information, and in this way, it can greatly improve the process and hence the outcome of policy analysis.

If properly done, benefit-cost analysis can be of great help to agencies participating in the development of environmental regulations, and it can likewise be useful in evaluating agency decision making and in shaping new laws (which brings us full-circle to the climate legislation that will be developed in the U.S. Senate over the weeks and months ahead, and which I hope to discuss in future posts).


The Wonderful Politics of Cap-and-Trade: A Closer Look at Waxman-Markey

The headline of this post is not meant to be ironic.   Despite all the hand-wringing in the press and the blogosphere about a political “give-away” of allowances for the cap-and-trade system in the Waxman-Markey bill voted out of committee last week, the politics of cap-and-trade systems are truly quite wonderful, which is why these systems have been used, and used successfully.

The Waxman-Markey allocation of allowances has its problems, which I will get to, but before noting those problems it is exceptionally important to keep in mind what is probably the key attribute of cap-and-trade systems:  the allocation of allowances – whether the allowances are auctioned or given out freely, and how they are freely allocated – has no impact on the equilibrium distribution of allowances (after trading), and therefore no impact on the allocation of emissions (or emissions abatement), the total magnitude of emissions, or the aggregate social costs.  (Well, there are some relatively minor but not insignificant caveats – those “problems” I mentioned – about which more below.)  By the way, this independence of a cap-and-trade system’s performance from the initial allowance allocation was established as far back as 1972 by David Montgomery in a path-breaking article in the Journal of Economic Theory (based upon his 1971 Harvard economics Ph.D. dissertation).  It has been validated with empirical evidence repeatedly over the years.

Generally speaking, the choice between auctioning and freely allocating allowances does not influence firms’ production and emission reduction decisions.  Firms face the same emissions cost regardless of the allocation method.  When using an allowance, whether it was received for free or purchased, a firm loses the opportunity to sell that allowance, and thereby recognizes this “opportunity cost” in deciding whether to use the allowance.  Consequently, the allocation choice will not influence a cap’s overall costs.
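Montgomery's independence result can be seen in a stylized two-firm sketch (all numbers below are hypothetical, chosen only for illustration).  Whatever the initial allocation, trading equalizes marginal abatement costs, so final emissions and total abatement cost come out the same; only the money transfers between firms differ:

```python
# Two firms with (hypothetical) linear marginal abatement costs MAC_i = c_i * q_i,
# identical baseline emissions of 100 each, and an aggregate cap of 120.
c = {"firm1": 1.0, "firm2": 3.0}
baseline = {"firm1": 100.0, "firm2": 100.0}
cap = 120.0
required_abatement = sum(baseline.values()) - cap            # 80

# In equilibrium each firm abates until its MAC equals the allowance price p:
# q_i = p / c_i, and total abatement must meet the requirement.
p = required_abatement / sum(1.0 / ci for ci in c.values())  # 60.0

def outcome(allocation):
    """Final emissions, total abatement cost, and each firm's net cost
    under a given initial allowance allocation."""
    result, total_cost = {}, 0.0
    for f in c:
        q = p / c[f]                              # cost-minimizing abatement
        emissions = baseline[f] - q
        abate_cost = 0.5 * c[f] * q**2            # area under the MAC curve
        purchases = emissions - allocation[f]     # negative = allowance sales
        result[f] = {"emissions": emissions,
                     "net_cost": abate_cost + p * purchases}
        total_cost += abate_cost
    result["total_abatement_cost"] = total_cost
    return result

free_split = outcome({"firm1": 60.0, "firm2": 60.0})   # allowances handed out evenly
full_auction = outcome({"firm1": 0.0, "firm2": 0.0})   # all allowances auctioned

# Same emissions, same aggregate cost -- only the distribution differs.
print(free_split["total_abatement_cost"], full_auction["total_abatement_cost"])  # 2400.0 2400.0
```

Each firm's net cost changes dramatically between the two allocations (that is the distributional fight), but the environmental outcome and the aggregate social cost do not.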

Manifest political pressures lead to different initial allocations of allowances, which affect distribution, but not environmental effectiveness, and not cost-effectiveness.  This means that ordinary political pressures need not get in the way of developing and implementing a scientifically sound, economically rational, and politically pragmatic policy.  Contrast this with what would happen when political pressures are brought to bear on a carbon tax proposal, for example.  Here the result will most likely be exemptions of sectors and firms, which reduces environmental effectiveness and drives up costs (as some low-cost emission reduction opportunities are left off the table).  Furthermore, the hypothetical carbon tax example is the norm, not the exception.  Across the board, political pressures often reduce the effectiveness and increase the cost of well-intentioned public policies.  Cap-and-trade provides natural protection from this.  Distributional battles over the allowance allocation in a cap-and-trade system do not raise the overall cost of the program nor affect its environmental impacts.

In fact, the political process of states, districts, sectors, firms, and interest groups fighting for their share of the pie (free allowance allocations) serves as the mechanism whereby a political constituency in support of the system is developed, but without detrimental effects to the system’s environmental or economic performance.  That’s the good news, and it should never be forgotten.

But, depending upon the specific allocation mechanisms employed, there are several ways that the choice to freely distribute allowances can affect a system’s cost.  Here’s where the “caveats” and “problems” come in.

First, auction revenue may be used in ways that reduce the costs of the existing tax system or fund other socially beneficial policies.  Free allocations to the private sector forego such opportunities.  Below I will estimate the actual share of allowance value that accrues to the private sector.

Second, some proposals to freely allocate allowances to electric utilities may affect electricity prices, and thereby affect the extent to which reduced electricity demand contributes to limiting emissions cost-effectively.  Waxman-Markey allocates allowances to local distribution companies, which are subject to cost-of-service regulation even in regions with restructured wholesale electricity markets.  So, electricity prices would likely be affected by these allocations under existing state regulatory regimes.  The Waxman-Markey legislation seeks to address this problem by specifying that the economic value of the allowances given to electricity and natural gas local distribution companies should be passed on to consumers through lump-sum rebates, not through a reduction in electricity rates, thereby compensating consumers for increases in electricity prices, but without reducing incentives for energy conservation.

Third, and of most concern in the context of the Waxman-Markey legislation, “output-based updating allocations” provide perverse incentives and drive up costs of achieving a cap.  This merits some explanation.  If allowances are freely allocated, the allocation should be on the basis of some historical measures, such as output or emissions in a (previous) base year, not on the basis of measures which firms can affect, such as output or emissions in the current year.  Updating allocations, which involve periodically adjusting allocations over time to reflect changes in firms’ operations, contrast with this.

An output-based updating allocation ties the quantity of allowances that a firm receives to its output (production).  Such an allocation is essentially a production subsidy.  This distorts firms’ pricing and production decisions in ways that can introduce unintended consequences and may significantly increase the cost of meeting an emissions target.  Updating therefore has the potential to create perverse, undesirable incentives.
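The production-subsidy effect can be sketched with a hypothetical price-taking firm (all parameters below are invented for illustration):

```python
# A price-taking firm chooses output q where output price = marginal cost.
# Marginal production cost is c*q; each unit emits e tons; allowances cost p.
P, c, p, e = 50.0, 1.0, 20.0, 1.0   # hypothetical price, cost slope, allowance price, emissions rate

# With a fixed (historical) allocation, the firm bears the full allowance
# cost p*e on each unit produced:
q_fixed_allocation = (P - p * e) / c          # 30.0

# With an output-based updating allocation of h allowances per unit of output,
# each extra unit produced also earns h*p in allowance value -- an implicit
# production subsidy that partially offsets the carbon cost:
h = 0.5
q_updating = (P - p * e + p * h) / c          # 40.0

print(q_fixed_allocation, q_updating)   # 30.0 40.0 -- updating inflates output
```

The firm produces more under updating than the emissions price alone would warrant, which is exactly the distortion that raises the cost of meeting the cap.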

In Waxman-Markey, updating allocations are used for specific sectors with high CO2 emissions intensity and unusual sensitivity to international competition, in an effort to preserve international competitiveness and reduce emissions leakage.  It’s an open question whether this approach is superior to an import allowance requirement, whereby imports of a small set of specific commodities must carry with them CO2 allowances.  The problem with import allowance requirements is that they can damage international trade relations.  The only real solution to the competitiveness issue is to bring non-participating countries within an international climate regime in meaningful ways.  (On this, please see the work of the Harvard Project on International Climate Agreements.)

Also, output-based allocations are used in Waxman-Markey for merchant coal generators, thereby discouraging reductions in coal-fired electricity generation, another significant and costly distortion.

Now, let’s go back to the hand-wringing in the press and blogosphere about the so-called massive political “give-away” of allowances.  Perhaps unintentionally, there has been some misleading press coverage, suggesting that up to 75% or 80% of the allowances are given away to private industry as a windfall over the life of the program, 2012-2050 (in contrast with the 100% auction originally favored by President Obama).

Given the nature of the allowance allocation in the Waxman-Markey legislation, the best way to assess its implications is not as “free allocation” versus “auction,” but rather in terms of who is the ultimate beneficiary of each element of the allocation and auction, that is, how the value of the allowances is allocated.  On closer inspection, it turns out that many of the elements of the apparently free allocation accrue to consumers and public purposes, not private industry.

First of all, let’s look at the elements that will accrue to consumers and public purposes.  Next to each allocation element is the respective share of allowances over the period 2012-2050 (measured as a share of the cap, after the removal – that is, sale – of allowances to private industry from a “strategic reserve,” which functions as a cost-containment measure):

a.  Electricity and natural gas local distribution companies (22.2%), minus the share (6%) that benefits industry as consumers of electricity (note:  there is a consequent 3% reduction in the allocation to energy-intensive, trade-exposed industries, below, which is then dedicated to broad-based consumer rebates): 22.2% – 6% = 16.2%

b.  Home heating oil/propane, 0.9%

c.  Protection for low- and moderate-income households, 15.0%

d.  Worker assistance and job training, 0.8%

e.  States for renewable energy, efficiency, and building codes, 5.8%

f.   Clean energy innovation centers, 1.0%

g.  International deforestation, clean technology, and adaptation, 8.7%

h.  Domestic adaptation, 5.0%

The following elements will accrue to private industry, again with average (2012-2050) shares of allowances:

i.   Merchant coal generators, 3.0%

j.   Energy-intensive, trade-exposed industries (minus reduction in allocation due to EITE benefits from LDC allocation above) 8.0% – 3% = 5%

k.  Carbon-capture and storage incentives, 4.1%

l.   Clean vehicle technology standards, 1.0%

m. Oil refiners, 1.0%

n.  Net benefits to industry as consumers of lower-priced electricity from allocation to LDCs, 6.0%

The split over the entire period from 2012 to 2050 is 53.4% for consumers and public purposes, and 20.1% for private industry.  This 20% is drastically different from the suggestions that 70%, 80%, or more of the allowances will be given freely to private industry in a “massive corporate give-away.”

All categories – (a) through (n), above – sum to 73.5% of the total quantity of allowances over the period 2012-2050.  The remaining allowances — 26.5% over 2012 to 2050 — are scheduled in Waxman-Markey to be used almost entirely for consumer rebates, with the share of available allowances for this purpose rising from approximately 10% in 2025 to more than 50% by 2050.  Thus, the totals become 79.9% for consumers and public purposes versus 20.1% for private industry, or approximately 80% versus 20% — the opposite of the “80% free allowance corporate give-away” featured in many press and blogosphere accounts.  Moreover, because some of the allocations to private industry are – for better or for worse – conditional on recipients undertaking specific costly investments, such as investments in carbon capture and storage, part of the 20% free allocation to private industry should not be viewed as a windfall.
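The arithmetic behind these totals can be checked directly from the shares listed in items (a) through (n) above:

```python
# Average 2012-2050 allowance shares (percent of cap), from the lists above
consumer_public = [16.2,   # (a) electricity/gas LDCs, net of industry benefit
                   0.9,    # (b) home heating oil/propane
                   15.0,   # (c) low- and moderate-income households
                   0.8,    # (d) worker assistance and job training
                   5.8,    # (e) state renewables, efficiency, building codes
                   1.0,    # (f) clean energy innovation centers
                   8.7,    # (g) international deforestation, technology, adaptation
                   5.0]    # (h) domestic adaptation
private_industry = [3.0,   # (i) merchant coal generators
                    5.0,   # (j) energy-intensive, trade-exposed industries (net)
                    4.1,   # (k) carbon-capture and storage incentives
                    1.0,   # (l) clean vehicle technology standards
                    1.0,   # (m) oil refiners
                    6.0]   # (n) industry benefit from the LDC allocation

consumers = round(sum(consumer_public), 1)    # 53.4
industry = round(sum(private_industry), 1)    # 20.1
allocated = round(consumers + industry, 1)    # 73.5
remainder = round(100 - allocated, 1)         # 26.5, almost entirely consumer rebates

print(round(consumers + remainder, 1), industry)   # 79.9 20.1 -- the "80/20" split
```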

Speaking of the conditional allocations, I should also note that some observers (who are skeptical about government programs) may reasonably question some of the dedicated public purposes of the allowance distribution, but such questioning is equivalent to questioning dedicated uses of auction revenues.  The fundamental reality remains:  the appropriate characterization of the Waxman-Markey allocation is that 80% of the value of allowances go to consumers and public purposes, and 20% to private industry.

Finally, it should be noted that this 80-20 split is roughly consistent with empirical economic analyses of the share that would be required – on average — to fully compensate (but no more) private industry for equity losses due to the policy’s implementation.  In a series of analyses that considered the share of allowances that would be required in perpetuity for full compensation, Bovenberg and Goulder (2003) found that 13 percent would be sufficient for compensation of the fossil fuel extraction sectors, and Smith, Ross, and Montgomery (2002) found that 21 percent would be needed to compensate primary energy producers and electricity generators.

In my work for the Hamilton Project in 2007, I recommended beginning with a 50-50 auction-free-allocation split, moving to 100% auction over 25 years, because that time-path of numerical division between the share of allowances that is freely allocated to regulated firms and the share that is auctioned is equivalent (in terms of present discounted value) to perpetual allocations of 15 percent, 19 percent, and 22 percent, at real interest rates of 3, 4, and 5 percent, respectively.  My recommended allocation was designed to be consistent with the principle of targeting free allocations to burdened sectors in proportion to their relative burdens, while being politically pragmatic with more generous allocations in the early years of the program.
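That present-value equivalence is easy to verify.  The sketch below assumes the free share declines linearly from 50% to zero over 25 years (my simplifying assumption for illustration; the actual proposal's time-path may differ in detail), and it reproduces the 15/19/22 percent figures:

```python
# Present value of a free-allocation share that starts at 50% and declines
# linearly to 0% over 25 years, expressed as an equivalent constant
# perpetual share at real interest rate r.
def equivalent_perpetual_share(r, start=0.5, years=25):
    # PV of the declining stream (payments in years 0 through years-1)
    pv_declining = sum(start * (1 - t / years) / (1 + r) ** t
                       for t in range(years))
    # PV of 1 unit per year in perpetuity, starting in year 0: (1+r)/r
    pv_perpetuity = (1 + r) / r
    return pv_declining / pv_perpetuity

for r in (0.03, 0.04, 0.05):
    print(round(100 * equivalent_perpetual_share(r)))   # 15, 19, 22
```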

So, the Waxman-Markey 80/20 allowance split turns out to be consistent  — on average, i.e. economy-wide — with independent economic analysis of the share that would be required to fully compensate (but no more) the private sector for equity losses due to the imposition of the cap, and consistent with my Hamilton Project recommendation of a 50/50 split phased out to 100% auction over 25 years.

Going forward, many observers and participants in the policy process may continue to question the wisdom of some elements of the Waxman-Markey allowance allocation.  There’s nothing wrong with that.

But let’s be clear that, first, for the most part, the allocation of allowances affects neither the environmental performance of the cap-and-trade system nor its aggregate social cost.

Second, questioning should continue about the output-based allocation elements, because of the perverse incentives they put in place.

Third, we should be honest that the legislation, for all its flaws, is by no means the “massive corporate give-away” that it has been labeled.  On the contrary, 80% of the value of allowances accrue to consumers and public purposes, and some 20% accrue to covered, private industry.  This split is roughly consistent with the recommendations of independent economic research.

Fourth and finally, it should not be forgotten that the much-lamented deal-making that took place in the House committee last week for shares of the allowances for various purposes was a good example of the useful, important, and fundamentally benign mechanism through which a cap-and-trade system provides the means for a political constituency of support and action to be assembled (without reducing the policy’s effectiveness or driving up its cost).

Although there has surely been some insightful press coverage and intelligent public debate (including in the blogosphere) about the pros and cons of cap-and-trade, the Waxman-Markey legislation, and many of its design elements, it is remarkable (and unfortunate) how misleading so much of the coverage has been of the issues and the numbers surrounding the proposed allowance allocation.


Does economic analysis shortchange the future?

Decisions made today usually have impacts both now and in the future. In the environmental realm, many of the future impacts are benefits, and such future benefits — as well as costs — are typically discounted by economists in their analyses.  Why do economists do this, and does it give insufficient weight to future benefits and thus to the well-being of future generations?

This is a question my colleague, Lawrence Goulder, a professor of economics at Stanford University, and I addressed in an article in Nature.  We noted that as economists, we often encounter skepticism about discounting, especially from non-economists. Some of the skepticism seems quite valid, yet some reflects misconceptions about the nature and purposes of discounting.  In this post, I hope to clarify the concept and the practice.

It helps to begin with the use of discounting in private investments, where the rationale stems from the fact that capital is productive – money earns interest.  Consider a company trying to decide whether to invest $1 million in the purchase of a copper mine, and suppose that the most profitable strategy involves extracting the available copper 3 years from now, yielding revenues (net of extraction costs) of $1,150,000.  Would investing in this mine make sense?  Assume the company has the alternative of putting the $1 million in the bank at 5 per cent annual interest.  Then, on a purely financial basis, the company would do better by putting the money in the bank, as it will have $1,000,000 x (1.05)^3, or $1,157,625, that is, $7,625 more than it would earn from the copper mine investment.

I compared the alternatives by compounding the up-front cost of the project forward to the future.  It is mathematically equivalent to compare the options by discounting the future revenues or benefits from the copper mine back to the present.  The discounted revenue is $1,150,000 divided by (1.05)^3, or $993,413, which is less than the cost of the investment ($1 million).  So the project would not earn as much as the alternative of putting the money in the bank.
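The two equivalent comparisons in the copper-mine example can be checked in a few lines:

```python
# Compounding the up-front cost forward vs. discounting the future revenue back
principal = 1_000_000           # cost of the mine today
revenue_in_3_years = 1_150_000  # net extraction revenue, 3 years out
r = 0.05                        # annual interest rate

bank_value = principal * (1 + r) ** 3
print(round(bank_value))             # 1157625 -- beats the mine by 7,625

discounted_revenue = revenue_in_3_years / (1 + r) ** 3
print(round(discounted_revenue))     # 993413 -- less than the $1,000,000 cost
```

Either way you frame it (compound the cost forward or discount the revenue back), the bank wins by the same margin.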

Discounting translates future dollars into equivalent current dollars; it undoes the effects of compound interest. It is not aimed at accounting for inflation, as even if there were no inflation, it would still be necessary to discount future revenues to account for the fact that a dollar today translates (via compound interest) into more dollars in the future.

Can this same kind of thinking be applied to investments made by the public sector?  Since my purpose is to clarify a few key issues in the starkest terms, I will use a highly stylized example that abstracts from many of the subtleties.  Suppose that a policy, if introduced today and maintained, would avoid significant damage to the environment and human welfare 100 years from now. The ‘return on investment’ is avoided future damages to the environment and people’s well-being. Suppose that this policy costs $4 billion to implement, and that this cost is completely borne today.  It is anticipated that the benefits – avoided damages to the environment – will be worth $800 billion to people alive 100 years from now.  Should the policy be implemented?

If we adopt the economic efficiency criterion I have described in previous posts, the question becomes whether the future benefits are large enough that the winners could potentially compensate the losers and still be no worse off.  Here discounting is helpful. If, over the next 100 years, the average rate of interest on ordinary investments is 5 per cent, the gains of $800 billion to people 100 years from now are equivalent to $6.08 billion today.  Equivalently, $6.08 billion today, compounded at an annual interest rate of 5 per cent, will become $800 billion in 100 years. The project satisfies the principle of efficiency if it costs current generations less than $6.08 billion, otherwise not.

Since the $4 billion of up-front costs are less than $6.08 billion, the benefits to future generations are more than enough to offset the costs to current generations. Discounting serves the purpose of converting costs and benefits from various periods into equivalent dollars of some given period.  Applying a discount rate is not giving less weight to future generations’ welfare.  Rather, it is simply converting the (full) impacts that occur at different points of time into common units.
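The same mechanics, applied to the stylized 100-year policy:

```python
# Stylized policy from the text: $4 billion of costs borne today,
# $800 billion of avoided damages 100 years from now, 5 per cent interest.
cost_today = 4e9
future_benefit = 800e9
r, years = 0.05, 100

present_value_of_benefit = future_benefit / (1 + r) ** years
print(round(present_value_of_benefit / 1e9, 2))   # 6.08 (billion dollars)

# The efficiency test: present value of benefits vs. current cost
print(present_value_of_benefit > cost_today)      # True -- the policy passes
```

Note how powerful a century of compounding is: $800 billion a hundred years out shrinks to roughly $6 billion today, yet the policy still clears the $4 billion hurdle.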

Much skepticism about discounting and, more broadly, the use of benefit-cost analysis, is connected to uncertainties in estimating future impacts. Consider the difficulties of ascertaining, for example, the benefits that future generations would enjoy from a regulation that protects certain endangered species. Some of the gain to future generations might come in the form of pharmaceutical products derived from the protected species. Such benefits are impossible to predict. Benefits also depend on the values future generations would attach to the protected species – the enjoyment of observing them in the wild or just knowing of their existence. But how can we predict future generations’ values?  Economists and other social scientists try to gauge them through surveys and by inferring preferences from individuals’ behavior.  But these approaches are far from perfect, and at best they indicate only the values or tastes of people alive today.

The uncertainties are substantial and unavoidable, but they do not invalidate the use of discounting (or benefit-cost analysis).  They do oblige analysts, however, to assess and acknowledge those uncertainties in their policy assessments, a topic I discussed in my last post (“What Baseball Can Teach Policymakers”), and a topic to which I will return in the future.
