Is Benefit-Cost Analysis Helpful for Environmental Regulation?

With the locus of action on Federal climate policy moving this week from the House of Representatives to the Senate, this is a convenient moment to step back from the political fray and reflect on some fundamental questions about U.S. environmental policy.

One such question is whether economic analysis – in particular, the comparison of the benefits and costs of proposed policies – plays a truly useful role in Washington; whether it is little more than a distraction from more important perspectives on public policy; or – worst of all – whether it is counter-productive, even antithetical, to the development, assessment, and implementation of sound policy in the environmental, resource, and energy realms.  With an exceptionally talented group of thinkers – including scientists, lawyers, and economists – now in key environmental and energy policy positions at the White House, the Environmental Protection Agency, the Department of Energy, and the Department of the Treasury, this question about the usefulness of benefit-cost analysis is of particular importance.

For many years, there have been calls from some quarters for greater reliance on the use of economic analysis in the development and evaluation of environmental regulations.  As I have noted in previous posts on this blog, most economists would argue that economic efficiency — measured as the difference between benefits and costs — ought to be one of the key criteria for evaluating proposed regulations.  (See:  “The Myths of Market Prices and Efficiency,” March 3, 2009; “What Baseball Can Teach Policymakers,” April 20, 2009; “Does Economic Analysis Shortchange the Future?” April 27, 2009)  Because society has limited resources to spend on regulation, such analysis can help illuminate the trade-offs involved in making different kinds of social investments.  In this sense, it would seem irresponsible not to conduct such analyses, since they can inform decisions about how scarce resources can be put to the greatest social good.

In principle, benefit-cost analysis can also help answer questions of how much regulation is enough.  From an efficiency standpoint, the answer to this question is simple — regulate until the incremental benefits from regulation are just offset by the incremental costs.  In practice, however, the problem is much more difficult, in large part because of inherent problems in measuring marginal benefits and costs.  In addition, concerns about fairness and process may be important non-economic factors that merit consideration.  Regulatory policies inevitably involve winners and losers, even when aggregate benefits exceed aggregate costs.

Over the years, policy makers have sent mixed signals regarding the use of benefit-cost analysis in policy evaluation.  Congress has passed several statutes to protect health, safety, and the environment that effectively preclude the consideration of benefits and costs in the development of certain regulations, even though other statutes actually require the use of benefit-cost analysis.  At the same time, Presidents Carter, Reagan, George H. W. Bush, Clinton, and George W. Bush all put in place formal processes for reviewing the economic implications of major environmental, health, and safety regulations.  Apparently the Executive Branch, charged with designing and implementing regulations, has seen a greater need than the Congress to develop a yardstick against which regulatory proposals can be assessed.  Benefit-cost analysis has been the yardstick of choice.

It was in this context that ten years ago a group of economists from across the political spectrum jointly authored an article in Science magazine, asking whether there is a role for benefit-cost analysis in environmental, health, and safety regulation.  That diverse group consisted of Kenneth Arrow, Maureen Cropper, George Eads, Robert Hahn, Lester Lave, Roger Noll, Paul Portney, Milton Russell, Richard Schmalensee, Kerry Smith, and myself.  That article and its findings are particularly timely, with President Obama considering putting in place a new Executive Order on Regulatory Review.

In the article, we suggested that benefit-cost analysis has a potentially important role to play in helping inform regulatory decision making, though it should not be the sole basis for such decision making.  We offered eight principles.

First, benefit-cost analysis can be useful for comparing the favorable and unfavorable effects of policies, because it can help decision makers better understand the implications of decisions by identifying and, where appropriate, quantifying the favorable and unfavorable consequences of a proposed policy change.  But, in some cases, there is too much uncertainty to use benefit-cost analysis to conclude that the benefits of a decision will exceed or fall short of its costs.

Second, decision makers should not be precluded from considering the economic costs and benefits of different policies in the development of regulations.  Removing statutory prohibitions on the balancing of benefits and costs can help promote more efficient and effective regulation.

Third, benefit-cost analysis should be required for all major regulatory decisions. The scale of a benefit-cost analysis should depend on both the stakes involved and the likelihood that the resulting information will affect the ultimate decision.

Fourth, although agencies should be required to conduct benefit-cost analyses for major decisions, and to explain why they have selected actions for which reliable evidence indicates that expected benefits are significantly less than expected costs, those agencies should not be bound by strict benefit-cost tests.  Factors other than aggregate economic benefits and costs may be important.

Fifth, benefits and costs of proposed policies should be quantified wherever possible.  But not all impacts can be quantified, let alone monetized.  Therefore, care should be taken to assure that quantitative factors do not dominate important qualitative factors in decision making.  If an agency wishes to introduce a “margin of safety” into a decision, it should do so explicitly.

Sixth, the more external review that regulatory analyses receive, the better they are likely to be.  Retrospective assessments should be carried out periodically.

Seventh, a consistent set of economic assumptions should be used in calculating benefits and costs.  Key variables include the social discount rate, the value of reducing risks of premature death and accidents, and the values associated with other improvements in health.

Eighth, while benefit-cost analysis focuses primarily on the overall relationship between benefits and costs, a good analysis will also identify important distributional consequences for important subgroups of the population.

From these eight principles, we concluded that benefit-cost analysis can play an important role in legislative and regulatory policy debates on protecting and improving the natural environment, health, and safety.  Although formal benefit-cost analysis should not be viewed as either necessary or sufficient for designing sensible public policy, it can provide an exceptionally useful framework for consistently organizing disparate information, and in this way, it can greatly improve the process and hence the outcome of policy analysis.

If properly done, benefit-cost analysis can be of great help to agencies participating in the development of environmental regulations, and it can likewise be useful in evaluating agency decision making and in shaping new laws (which brings us full-circle to the climate legislation that will be developed in the U.S. Senate over the weeks and months ahead, and which I hope to discuss in future posts).

Does economic analysis shortchange the future?

Decisions made today usually have impacts both now and in the future. In the environmental realm, many of the future impacts are benefits, and such future benefits — as well as costs — are typically discounted by economists in their analyses.  Why do economists do this, and does it give insufficient weight to future benefits and thus to the well-being of future generations?

This is a question my colleague, Lawrence Goulder, a professor of economics at Stanford University, and I addressed in an article in Nature.  We noted that as economists, we often encounter skepticism about discounting, especially from non-economists. Some of the skepticism seems quite valid, yet some reflects misconceptions about the nature and purposes of discounting.  In this post, I hope to clarify the concept and the practice.

It helps to begin with the use of discounting in private investments, where the rationale stems from the fact that capital is productive – money earns interest.  Consider a company trying to decide whether to invest $1 million in the purchase of a copper mine, and suppose that the most profitable strategy involves extracting the available copper 3 years from now, yielding revenues (net of extraction costs) of $1,150,000.  Would investing in this mine make sense?  Assume the company has the alternative of putting the $1 million in the bank at 5 per cent annual interest.  Then, on a purely financial basis, the company would do better by putting the money in the bank, as it will have $1,000,000 x (1.05)³, or $1,157,625 – that is, $7,625 more than it would earn from the copper mine investment.

In this example, I compared the alternatives by compounding the up-front cost of the project forward to the future.  It is mathematically equivalent to compare the options by discounting to the present the future revenues or benefits from the copper mine.  The discounted revenue is $1,150,000 divided by (1.05)³, or $993,413, which is less than the cost of the investment ($1 million).  So the project would not earn as much as the alternative of putting the money in the bank.
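The copper-mine arithmetic can be checked in a few lines of Python – a minimal sketch using only the figures from the example above:

```python
# Copper-mine example: compound the up-front cost forward, or
# equivalently discount the future revenue back to the present.
principal = 1_000_000   # up-front investment ($)
revenue = 1_150_000     # net revenue received in 3 years ($)
rate = 0.05             # annual interest rate
years = 3

# Option 1: put the money in the bank and compound it forward.
future_value_of_bank = principal * (1 + rate) ** years   # $1,157,625

# Option 2: discount the mine's future revenue to today's dollars.
present_value_of_mine = revenue / (1 + rate) ** years    # about $993,413

print(f"Bank alternative after {years} years: ${future_value_of_bank:,.0f}")
print(f"Discounted mine revenue today:       ${present_value_of_mine:,.0f}")
```

Both comparisons point the same way: the bank yields $7,625 more than the mine, and, equivalently, the mine's discounted revenue falls $6,587 short of the $1 million cost.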

Discounting translates future dollars into equivalent current dollars; it undoes the effects of compound interest. It is not aimed at accounting for inflation, as even if there were no inflation, it would still be necessary to discount future revenues to account for the fact that a dollar today translates (via compound interest) into more dollars in the future.

Can this same kind of thinking be applied to investments made by the public sector?  Since my purpose is to clarify a few key issues in the starkest terms, I will use a highly stylized example that abstracts from many of the subtleties.  Suppose that a policy, if introduced today and maintained, would avoid significant damage to the environment and human welfare 100 years from now. The ‘return on investment’ is avoided future damages to the environment and people’s well-being. Suppose that this policy costs $4 billion to implement, and that this cost is completely borne today.  It is anticipated that the benefits – avoided damages to the environment – will be worth $800 billion to people alive 100 years from now.  Should the policy be implemented?

If we adopt the economic efficiency criterion I have described in previous posts, the question becomes whether the future benefits are large enough that the winners could potentially compensate the losers and still be no worse off.  Here discounting is helpful.  If, over the next 100 years, the average rate of interest on ordinary investments is 5 per cent, the gains of $800 billion to people 100 years from now are equivalent to $6.08 billion today.  Equivalently, $6.08 billion today, compounded at an annual interest rate of 5 per cent, will become $800 billion in 100 years.  The project satisfies the efficiency principle if it costs current generations less than $6.08 billion; otherwise it does not.

Since the $4 billion of up-front costs are less than $6.08 billion, the benefits to future generations are more than enough to offset the costs to current generations. Discounting serves the purpose of converting costs and benefits from various periods into equivalent dollars of some given period.  Applying a discount rate is not giving less weight to future generations’ welfare.  Rather, it is simply converting the (full) impacts that occur at different points of time into common units.
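The same present-value calculation can be applied to this stylized policy – again a sketch using only the numbers from the example, not an evaluation of any real policy:

```python
# Stylized long-horizon policy: $800 billion of avoided damages in
# 100 years, $4 billion of up-front costs, 5 percent discount rate.
benefits_future = 800e9   # avoided damages 100 years from now ($)
cost_today = 4e9          # up-front cost borne today ($)
rate = 0.05
years = 100

present_value = benefits_future / (1 + rate) ** years   # about $6.08 billion
net_present_value = present_value - cost_today

print(f"Present value of future benefits: ${present_value / 1e9:.2f} billion")
print(f"Net present value:                ${net_present_value / 1e9:.2f} billion")
```

A useful extension is to vary `rate`: at 7 per cent, for example, the present value of the same $800 billion falls to roughly $0.9 billion – well below the $4 billion cost – which is one reason the choice of social discount rate is so consequential over long horizons.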

Much skepticism about discounting and, more broadly, the use of benefit-cost analysis, is connected to uncertainties in estimating future impacts. Consider the difficulties of ascertaining, for example, the benefits that future generations would enjoy from a regulation that protects certain endangered species. Some of the gain to future generations might come in the form of pharmaceutical products derived from the protected species. Such benefits are impossible to predict. Benefits also depend on the values future generations would attach to the protected species – the enjoyment of observing them in the wild or just knowing of their existence. But how can we predict future generations’ values?  Economists and other social scientists try to infer them through surveys and by inferring preferences from individuals’ behavior.  But these approaches are far from perfect, and at best they indicate only the values or tastes of people alive today.

The uncertainties are substantial and unavoidable, but they do not invalidate the use of discounting (or benefit-cost analysis).  They do oblige analysts, however, to assess and acknowledge those uncertainties in their policy assessments, a topic I discussed in my last post (“What Baseball Can Teach Policymakers”), and a topic to which I will return in the future.

What Baseball Can Teach Policymakers

With the Major League Baseball season having just begun, I’m reminded of the truism that the best teams win their divisions in the regular season, but the hot teams win in the post-season playoffs.  Why the difference?  The regular season is 162 games long, but the post-season consists of just a few brief 5-game and 7-game series.  And because of the huge random element that pervades the sport, in a single game (or a short series), the best teams often lose, and the worst teams often win.

The numbers are striking, and bear repeating.  In a typical year, the best teams lose 40 percent of their games, and the worst teams win 40 percent of theirs.  In the extreme, one of the best Major League Baseball teams ever – the 1927 New York Yankees – lost 29 percent of their games; and one of the worst teams in history – the 1962 New York Mets – won 25 percent of theirs.  On any given day, anything can happen.  Uncertainty is a fundamental part of the game, and any analysis that fails to recognize this is not only incomplete, but fundamentally flawed.

The same is true of analyses of environmental policies.  Uncertainty is an absolutely fundamental aspect of environmental problems and the policies that are employed to address those problems.  Any analysis that fails to recognize this runs the risk not only of being incomplete, but misleading as well.  Judson Jaffe, formerly at Analysis Group, and I documented this in a study published in Regulation and Governance.

To estimate proposed regulations’ benefits and costs, analysts frequently rely on inputs that are uncertain —  sometimes substantially so.  Such uncertainties in underlying inputs are propagated through analyses, leading to uncertainty in ultimate benefit and cost estimates, which constitute the core of a Regulatory Impact Analysis (RIA), required by Presidential Executive Order for all “economically significant” proposed Federal regulations.

Despite this uncertainty, the most prominently displayed results in RIAs are typically single, apparently precise point estimates of benefits, costs, and net benefits (benefits minus costs), masking uncertainties inherent in their calculation and possibly obscuring tradeoffs among competing policy options.  Historically, efforts to address uncertainty in RIAs have been very limited, but guidance set forth in the U.S. Office of Management and Budget’s (OMB) Circular A‑4 on Regulatory Analysis has the potential to enhance the information provided in RIAs regarding uncertainty in benefit and cost estimates.  Circular A‑4 requires the development of a formal quantitative assessment of uncertainty regarding a regulation’s economic impact if either annual benefits or costs are expected to reach $1 billion.

Over the years, formal quantitative uncertainty assessments — known as Monte Carlo analyses — have become common in a variety of fields, including engineering, finance, and a number of scientific disciplines, as well as in “sabermetrics” (quantitative, especially statistical analysis of professional baseball), but rarely have such methods been employed in RIAs.

The first step in a Monte Carlo analysis involves the development of probability distributions of uncertain inputs to an analysis.  These probability distributions reflect the implications of uncertainty regarding an input for the range of its possible values and the likelihood that each value is the true value.  Once probability distributions of inputs to a benefit‑cost analysis are established, a Monte Carlo analysis is used to simulate the probability distribution of the regulation’s net benefits by carrying out the calculation of benefits and costs thousands, or even millions, of times.  With each iteration of the calculations, new values are randomly drawn from each input’s probability distribution and used in the benefit and/or cost calculations.  Over the course of these iterations, the frequency with which any given value is drawn for a particular input is governed by that input’s probability distribution.  Importantly, any correlations among individual items in the benefit and cost calculations are taken into account.  The resulting set of net benefit estimates characterizes the complete probability distribution of net benefits.
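The procedure described above can be illustrated with a minimal Monte Carlo sketch in Python.  The input distributions here – benefits and costs in billions of dollars – are purely illustrative assumptions, not figures from any actual RIA, and for simplicity the inputs are drawn independently (a full analysis would model correlations among them):

```python
import random
import statistics

# Fix the seed so the sketch is reproducible.
random.seed(42)

ITERATIONS = 100_000

def simulate_net_benefits(n):
    """Draw uncertain benefits and costs n times; return net benefits."""
    draws = []
    for _ in range(n):
        # Benefits: right-skewed uncertainty, median about $1.2 billion.
        benefits = random.lognormvariate(0.18, 0.25)
        # Costs: symmetric uncertainty around $1.0 billion.
        costs = random.gauss(1.0, 0.15)
        draws.append(benefits - costs)
    return draws

net = simulate_net_benefits(ITERATIONS)
print(f"Mean net benefits: ${statistics.mean(net):.2f} billion")
print(f"P(net benefits > 0): {sum(x > 0 for x in net) / ITERATIONS:.1%}")
```

The set of draws in `net` characterizes the full probability distribution of net benefits, from which a decision maker can read off not just a point estimate but also the probability that the regulation passes a benefit-cost test.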

Uncertainty is inevitable in estimates of environmental regulations' economic impacts, and assessments of the extent and nature of such uncertainty provide important information for policymakers evaluating proposed regulations.  Such information offers a context for interpreting benefit and cost estimates, and can lead to point estimates of regulations' benefits and costs that differ from those produced by purely deterministic analyses (which ignore uncertainty).  In addition, these assessments can help establish priorities for research.

Due to the complexity of interactions among uncertainties in inputs to RIAs, an accurate assessment of uncertainty can be gained only through the use of formal quantitative methods, such as Monte Carlo analysis.  While these methods offer significant insights, they require only limited additional effort relative to that already expended on RIAs.  Much of the data required for these analyses is already obtained by EPA in its preparation of RIAs, and widely available software allows the execution of Monte Carlo analysis in common spreadsheet programs on a desktop computer.  In a specific application in the Regulation and Governance study, Jaffe and I demonstrate the use and advantages of formal quantitative uncertainty analysis in a review of EPA's 2004 RIA for its Nonroad Diesel Rule.

Formal quantitative assessments of uncertainty can mark a truly significant step forward in enhancing regulatory analysis under Presidential Executive Orders.  They have the potential to improve substantially our understanding of the impact of environmental regulations, and thereby to lead to more informed policymaking.
