Is Benefit-Cost Analysis Helpful for Environmental Regulation?

With the locus of action on Federal climate policy moving this week from the House of Representatives to the Senate, this is a convenient moment to step back from the political fray and reflect on some fundamental questions about U.S. environmental policy.

One such question is whether economic analysis – in particular, the comparison of the benefits and costs of proposed policies – plays a truly useful role in Washington, or is little more than a distraction from more important perspectives on public policy, or – worst of all – is counter-productive, even antithetical, to the development, assessment, and implementation of sound policy in the environmental, resource, and energy realms.  With an exceptionally talented group of thinkers – including scientists, lawyers, and economists – now in key environmental and energy policy positions at the White House, the Environmental Protection Agency, the Department of Energy, and the Department of the Treasury, this question about the usefulness of benefit-cost analysis is of particular importance.

For many years, there have been calls from some quarters for greater reliance on the use of economic analysis in the development and evaluation of environmental regulations.  As I have noted in previous posts on this blog, most economists would argue that economic efficiency — measured as the difference between benefits and costs — ought to be one of the key criteria for evaluating proposed regulations.  (See:  “The Myths of Market Prices and Efficiency,” March 3, 2009; “What Baseball Can Teach Policymakers,” April 20, 2009; “Does Economic Analysis Shortchange the Future?” April 27, 2009)  Because society has limited resources to spend on regulation, such analysis can help illuminate the trade-offs involved in making different kinds of social investments.  In this sense, it would seem irresponsible not to conduct such analyses, since they can inform decisions about how scarce resources can be put to the greatest social good.

In principle, benefit-cost analysis can also help answer questions of how much regulation is enough.  From an efficiency standpoint, the answer to this question is simple — regulate until the incremental benefits from regulation are just offset by the incremental costs.  In practice, however, the problem is much more difficult, in large part because of inherent problems in measuring marginal benefits and costs.  In addition, concerns about fairness and process may be very important economic and non-economic factors.  Regulatory policies inevitably involve winners and losers, even when aggregate benefits exceed aggregate costs.
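The efficiency condition described above – regulate until incremental benefits just offset incremental costs – can be sketched with a toy numerical example.  The linear marginal-benefit and marginal-cost curves below are entirely hypothetical; in practice, as the paragraph notes, such curves are very hard to measure.

```python
# Hypothetical linear curves for pollution abatement, in arbitrary units.
def marginal_benefit(q):
    return 100.0 - 2.0 * q   # benefits of abatement decline at the margin

def marginal_cost(q):
    return 3.0 * q           # costs of abatement rise at the margin

# Efficiency requires marginal benefit = marginal cost:
#   100 - 2q = 3q  =>  q = 20
efficient_q = 100.0 / 5.0

# Below q = 20, another unit of abatement adds more in benefits than costs;
# above it, the reverse holds.
assert abs(marginal_benefit(efficient_q) - marginal_cost(efficient_q)) < 1e-9
assert marginal_benefit(10.0) > marginal_cost(10.0)
assert marginal_benefit(30.0) < marginal_cost(30.0)
print(efficient_q)  # 20.0
```

The point of the sketch is only the logic of the stopping rule; the real-world difficulty lies in estimating the curves, not in solving for their intersection.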

Over the years, policy makers have sent mixed signals regarding the use of benefit-cost analysis in policy evaluation.  Congress has passed several statutes to protect health, safety, and the environment that effectively preclude the consideration of benefits and costs in the development of certain regulations, even though other statutes actually require the use of benefit-cost analysis.  At the same time, Presidents Carter, Reagan, George H. W. Bush, Clinton, and George W. Bush all put in place formal processes for reviewing economic implications of major environmental, health, and safety regulations.  Apparently the Executive Branch, charged with designing and implementing regulations, has seen a greater need than the Congress to develop a yardstick against which regulatory proposals can be assessed.  Benefit-cost analysis has been the yardstick of choice.

It was in this context that ten years ago a group of economists from across the political spectrum jointly authored an article in Science magazine, asking whether there is a role for benefit-cost analysis in environmental, health, and safety regulation.  That diverse group consisted of Kenneth Arrow, Maureen Cropper, George Eads, Robert Hahn, Lester Lave, Roger Noll, Paul Portney, Milton Russell, Richard Schmalensee, Kerry Smith, and myself.  That article and its findings are particularly timely, with President Obama considering putting in place a new Executive Order on Regulatory Review.

In the article, we suggested that benefit-cost analysis has a potentially important role to play in helping inform regulatory decision making, though it should not be the sole basis for such decision making.  We offered eight principles.

First, benefit-cost analysis can be useful for comparing the favorable and unfavorable effects of policies, because it can help decision makers better understand the implications of decisions by identifying and, where appropriate, quantifying the favorable and unfavorable consequences of a proposed policy change.  But, in some cases, there is too much uncertainty to use benefit-cost analysis to conclude that the benefits of a decision will exceed or fall short of its costs.

Second, decision makers should not be precluded from considering the economic costs and benefits of different policies in the development of regulations.  Removing statutory prohibitions on the balancing of benefits and costs can help promote more efficient and effective regulation.

Third, benefit-cost analysis should be required for all major regulatory decisions. The scale of a benefit-cost analysis should depend on both the stakes involved and the likelihood that the resulting information will affect the ultimate decision.

Fourth, although agencies should be required to conduct benefit-cost analyses for major decisions, and to explain why they have selected actions for which reliable evidence indicates that expected benefits are significantly less than expected costs, those agencies should not be bound by strict benefit-cost tests.  Factors other than aggregate economic benefits and costs may be important.

Fifth, benefits and costs of proposed policies should be quantified wherever possible.  But not all impacts can be quantified, let alone monetized.  Therefore, care should be taken to assure that quantitative factors do not dominate important qualitative factors in decision making.  If an agency wishes to introduce a “margin of safety” into a decision, it should do so explicitly.

Sixth, the more external review that regulatory analyses receive, the better they are likely to be.  Retrospective assessments should be carried out periodically.

Seventh, a consistent set of economic assumptions should be used in calculating benefits and costs.  Key variables include the social discount rate, the value of reducing risks of premature death and accidents, and the values associated with other improvements in health.
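The sensitivity behind this seventh principle is easy to demonstrate for the social discount rate in particular.  The figures below are purely illustrative (a made-up $100 billion of damages occurring 100 years from now), but they show why inconsistent discounting assumptions across analyses can produce wildly different present values from the same underlying facts.

```python
# Present value of a hypothetical future damage under different
# social discount rates.  All numbers are illustrative.
def present_value(future_value, rate, years):
    return future_value / (1.0 + rate) ** years

damage = 100e9   # assumed: $100 billion of damages
horizon = 100    # assumed: occurring 100 years from now

for rate in (0.01, 0.03, 0.05, 0.07):
    pv = present_value(damage, rate, horizon)
    print(f"discount rate {rate:.0%}: present value = ${pv / 1e9:,.2f} billion")
```

Moving the discount rate from 1% to 7% shrinks the present value of the same century-distant damage by roughly two orders of magnitude, which is why consistency on this single assumption matters so much across regulatory analyses.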

Eighth, while benefit-cost analysis focuses primarily on the overall relationship between benefits and costs, a good analysis will also identify important distributional consequences for important subgroups of the population.

From these eight principles, we concluded that benefit-cost analysis can play an important role in legislative and regulatory policy debates on protecting and improving the natural environment, health, and safety.  Although formal benefit-cost analysis should not be viewed as either necessary or sufficient for designing sensible public policy, it can provide an exceptionally useful framework for consistently organizing disparate information, and in this way, it can greatly improve the process and hence the outcome of policy analysis.

If properly done, benefit-cost analysis can be of great help to agencies participating in the development of environmental regulations, and it can likewise be useful in evaluating agency decision making and in shaping new laws (which brings us full-circle to the climate legislation that will be developed in the U.S. Senate over the weeks and months ahead, and which I hope to discuss in future posts).


Author: Robert Stavins

Robert N. Stavins is the A.J. Meyer Professor of Energy & Economic Development, John F. Kennedy School of Government, Harvard University, Director of the Harvard Environmental Economics Program, Director of Graduate Studies for the Doctoral Program in Public Policy and the Doctoral Program in Political Economy and Government, Co-Chair of the Harvard Business School-Kennedy School Joint Degree Programs, and Director of the Harvard Project on Climate Agreements.

2 thoughts on “Is Benefit-Cost Analysis Helpful for Environmental Regulation?”

  1. This posting highlights in italics that there are limits to the applicability of cost-benefit analysis, especially in circumstances where uncertainty is enormous. This is particularly true in the case of climate change. I post, here, some paragraphs of a paper that I prepared for the Pew Center on Global Climate Change. There, I try to deconstruct the fundamental conclusion of the Fourth Assessment Report of the Intergovernmental Panel on Climate Change:

    Responding to climate change involves an iterative risk management process that includes both adaptation and mitigation, and takes into account climate change damages, co-benefits, sustainability, equity and attitudes to risk. [IPCC (2007), SPM of the Synthesis Report, p. 22]

    The entire paper will be released soon, but the following thoughts are particularly germane to this posting:

    Beginning perhaps with Nordhaus (1991), the inter-temporal version of the standard benefit-cost paradigm has been the mainstay of economic analyses of climate policy (particularly on the mitigation side). In applying this approach, researchers track economic damages that would be associated with climate change and costs that would be associated with climate policy over time scales that extend many decades if not centuries into the future. They calibrate these damages and costs along scenarios of economic development and resource availability that display ever-growing ranges of possible futures. Both metrics are disaggregated to the extent possible across countries and regions, and both are discounted back to present values. In this final step, estimates of the present value of benefits and costs are highly sensitive to natural parameters (like climate sensitivity) and policy parameters (like the assumed discount rate which is, in turn, extremely sensitive to attitudes toward risk, attitudes toward inequity, and inter-temporal impatience).

    Many authors and commentators have become increasingly critical of this approach even as practitioners, instructed in the United States for example by Circular A-4, have come to recognize that some benefits (and even some costs) cannot be monetized. Practitioners have thereby opened the door for tracking benefits and costs in terms of alternative, non-economic metrics. Practitioners have also recognized problems with specifying appropriate discount rates, coping with enormous, pervasive and persistent uncertainty, and accommodating the profound distributional consequences of climate change.

    At the same time, pressure to assign numerical values to benefits of reducing greenhouse gas emissions has grown. The 9th Circuit ruling about proposed CAFE standards in early 2008 asked, for example, that benefit-cost analyses of stricter standards include estimates of associated climate change benefits. The ruling left open the question of how to include benefits that could not be monetized in the policy deliberations. Moreover, it was unclear about whether benefits drawn from reduced climate impacts that would be felt outside the boundaries of the United States could be considered. Executive Order 13497, signed by President Obama on January 30, 2009, opened the door for a review of these issues by directing the Director of the Office of Management and Budget to provide input, with advice from regulatory agencies, for a new Executive Order on Federal regulatory review. The issues are so complicated that the 100 day deadline for recommendations has proven to be completely infeasible in terms of providing support for such an Order; but it is clear the president has set in motion a process through which non-quantified costs and/or benefits calibrated in multiple metrics including reduced risk could become important for regulatory design and perhaps serve as the determining criteria as the future unfolds.

    As suggested by the IPCC (2007c), the risk-management approach to confronting climate change has emerged as an important analytic tool designed explicitly to ameliorate or at least account for many (but by no means all) of these thorny issues. Its most straightforward applications begin with the statistical definition of risk – the probability of an event multiplied by its consequence. In benefit-cost approaches, all consequences are calibrated directly as economic outcomes that are expressed in units of currency. In these applications, any dollar lost or gained in one possible outcome is worth the same as any other dollar lost or gained in any other outcome. It follows that decision-makers need only worry about expected outcomes regardless of how good or how bad any specific “not-implausible” extreme might be. Risk management approaches expand the range of analytic applicability by allowing consequences to be calibrated in terms of more general welfare metrics. These metrics may depend on the same outcomes as before, but they make it clear that one dollar in one possible outcome is not necessarily worth the same as a dollar in another possible outcome. Metrics that reflect aversion to risk, for example, hold that an extra dollar gained in a good outcome is worth less, in terms of welfare, than an extra dollar lost in a bad outcome. It follows that the extremes of possible outcomes matter in these cases; and it is in these contexts that people buy insurance and/or adopt hedging strategies. They do so because either approach increases expected welfare (computed over the welfare implications of the full range of possible outcomes) even though it reduces the expected value of the associated outcomes.

    The distinction between expected outcome and expected welfare is a fine and reasonable conclusion in the abstract, but what do we really know about how to apply all of this knowledge in the climate arena? According to IPCC (2007), we know “unequivocally” that the planet is warming. We are now “virtually certain”, to use IPCC parlance, that the climate is changing at accelerating rates. We also know with “very high confidence” that anthropogenic emissions are the principal cause. We even have evidence from Stott et al. (2004) and the IPCC that anthropogenic climate change is the strongest contributor to the conditions that created the 2003 heat-wave across central Europe that caused tens of thousands of premature deaths. This knowledge alone is sufficient to establish the reality and seriousness of the issue. Even though substantial uncertainties persist about specific sources of risk, this knowledge is sufficient to establish the need to respond in the near-term in ways that will reduce future emissions and thereby ameliorate the pace of future change. Indeed, looking at uncertainty through a risk-management lens makes the case for near-term action through hedging against all sorts of climate risks – risks that can be denominated in terms of economic damages, of course, but also in terms of other indicators like billions of additional people facing hunger, water stress, or hazard from coastal storms. It then follows from simple economics that this near-term action should begin immediately if we are to minimize the expected cost of meeting any long-term objective.

    We even have an example of hedging against catastrophic risk for which a likelihood cannot be assigned (except that it is non-zero) in the behaviour of the Greenspan Fed, but that is another story.

  2. It’s not all about money, is it? The health and wealth of sovereign citizens should ideally be the yardstick by which things are measured, yet sadly it rarely is. Here in the UK the government seem intent on making short-term savings oblivious to the greater expenses their policies will create later on. It’s a bit like they don’t care or can’t understand. Perhaps the cost-benefits of central government itself should be weighed up; can we afford them at all?
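The distinction drawn in the first comment – that insurance or hedging can raise expected welfare even while lowering the expected value of outcomes – can be illustrated with a simple numerical sketch.  All of the figures (initial wealth, loss probability, premium) and the logarithmic welfare metric below are hypothetical.

```python
import math

# Hypothetical setup: wealth of 100, a 10% chance of losing 80, and an
# insurance premium of 10 (more than the actuarially fair premium of 8).
p_loss, wealth, loss, premium = 0.10, 100.0, 80.0, 10.0

def expected(values, probs):
    return sum(v * q for v, q in zip(values, probs))

u = math.log  # a concave (risk-averse) welfare metric

# Uninsured: expected wealth is 92, but welfare is dragged down by the bad state.
ev_uninsured = expected([wealth, wealth - loss], [1 - p_loss, p_loss])
eu_uninsured = expected([u(wealth), u(wealth - loss)], [1 - p_loss, p_loss])

# Insured: a certain 90 in every state of the world.
ev_insured = wealth - premium
eu_insured = u(wealth - premium)

print(ev_insured < ev_uninsured)   # True: insurance lowers the expected outcome...
print(eu_insured > eu_uninsured)   # True: ...yet raises expected welfare
```

Under a risk-neutral (dollar-for-dollar) metric the insurance looks like a losing bet; under the concave metric the certainty it buys is worth more than the premium costs, which is exactly why the extremes of possible outcomes matter.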
