Is Benefit-Cost Analysis Helpful for Environmental Regulation?

With the locus of action on Federal climate policy moving this week from the House of Representatives to the Senate, this is a convenient moment to step back from the political fray and reflect on some fundamental questions about U.S. environmental policy.

One such question is whether economic analysis – in particular, the comparison of the benefits and costs of proposed policies – plays a truly useful role in Washington, is little more than a distraction from more important perspectives on public policy, or – worst of all – is counter-productive, even antithetical, to the development, assessment, and implementation of sound policy in the environmental, resource, and energy realms.  With an exceptionally talented group of thinkers – including scientists, lawyers, and economists – now in key environmental and energy policy positions at the White House, the Environmental Protection Agency, the Department of Energy, and the Department of the Treasury, this question about the usefulness of benefit-cost analysis is of particular importance.

For many years, there have been calls from some quarters for greater reliance on the use of economic analysis in the development and evaluation of environmental regulations.  As I have noted in previous posts on this blog, most economists would argue that economic efficiency — measured as the difference between benefits and costs — ought to be one of the key criteria for evaluating proposed regulations.  (See:  “The Myths of Market Prices and Efficiency,” March 3, 2009; “What Baseball Can Teach Policymakers,” April 20, 2009; “Does Economic Analysis Shortchange the Future?” April 27, 2009)  Because society has limited resources to spend on regulation, such analysis can help illuminate the trade-offs involved in making different kinds of social investments.  In this sense, it would seem irresponsible not to conduct such analyses, since they can inform decisions about how scarce resources can be put to the greatest social good.

In principle, benefit-cost analysis can also help answer the question of how much regulation is enough.  From an efficiency standpoint, the answer is simple — regulate until the incremental benefits from regulation are just offset by the incremental costs.  In practice, however, the problem is much more difficult, in large part because of inherent difficulties in measuring marginal benefits and costs.  In addition, concerns about fairness and process may be important non-economic factors.  Regulatory policies inevitably involve winners and losers, even when aggregate benefits exceed aggregate costs.
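To make that efficiency rule concrete, here is a minimal numerical sketch.  The marginal benefit and marginal cost schedules below are purely hypothetical assumptions of my own, not estimates from any actual regulatory analysis; the point is simply that net benefits are maximized at the level of stringency where the two schedules cross.

```python
# Illustrative only: hypothetical marginal benefit and marginal cost schedules
# for an abstract level of regulatory stringency q (say, tons of pollution abated).

def marginal_benefit(q):
    return 100 - 2 * q   # benefits of additional abatement decline as q rises

def marginal_cost(q):
    return 10 + q        # costs of additional abatement rise as q rises

# Net benefits are maximized at the largest q for which one more unit of
# abatement still yields at least as much in benefits as it costs.
efficient_q = max(q for q in range(101) if marginal_benefit(q) >= marginal_cost(q))

print(f"efficient stringency: {efficient_q} "
      f"(MB = {marginal_benefit(efficient_q)}, MC = {marginal_cost(efficient_q)})")
```

With these made-up schedules, the efficient level of stringency is 30 units, where marginal benefit and marginal cost are both 40; pushing beyond that point would add more to costs than to benefits.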

Over the years, policy makers have sent mixed signals regarding the use of benefit-cost analysis in policy evaluation.  Congress has passed several statutes to protect health, safety, and the environment that effectively preclude the consideration of benefits and costs in the development of certain regulations, even though other statutes actually require the use of benefit-cost analysis.  At the same time, Presidents Carter, Reagan, Bush, Clinton, and Bush all put in place formal processes for reviewing the economic implications of major environmental, health, and safety regulations.  Apparently the Executive Branch, charged with designing and implementing regulations, has seen a greater need than the Congress to develop a yardstick against which regulatory proposals can be assessed.  Benefit-cost analysis has been the yardstick of choice.

It was in this context that, ten years ago, a group of economists from across the political spectrum jointly authored an article in Science magazine, asking whether there is a role for benefit-cost analysis in environmental, health, and safety regulation.  That diverse group consisted of Kenneth Arrow, Maureen Cropper, George Eads, Robert Hahn, Lester Lave, Roger Noll, Paul Portney, Milton Russell, Richard Schmalensee, Kerry Smith, and myself.  That article and its findings are particularly timely, with President Obama considering putting in place a new Executive Order on Regulatory Review.

In the article, we suggested that benefit-cost analysis has a potentially important role to play in helping inform regulatory decision making, though it should not be the sole basis for such decision making.  We offered eight principles.

First, benefit-cost analysis can be useful for comparing the favorable and unfavorable effects of policies, because it can help decision makers better understand the implications of decisions by identifying and, where appropriate, quantifying the favorable and unfavorable consequences of a proposed policy change.  But, in some cases, there is too much uncertainty to use benefit-cost analysis to conclude that the benefits of a decision will exceed or fall short of its costs.

Second, decision makers should not be precluded from considering the economic costs and benefits of different policies in the development of regulations.  Removing statutory prohibitions on the balancing of benefits and costs can help promote more efficient and effective regulation.

Third, benefit-cost analysis should be required for all major regulatory decisions. The scale of a benefit-cost analysis should depend on both the stakes involved and the likelihood that the resulting information will affect the ultimate decision.

Fourth, although agencies should be required to conduct benefit-cost analyses for major decisions, and to explain why they have selected actions for which reliable evidence indicates that expected benefits are significantly less than expected costs, those agencies should not be bound by strict benefit-cost tests.  Factors other than aggregate economic benefits and costs may be important.

Fifth, benefits and costs of proposed policies should be quantified wherever possible.  But not all impacts can be quantified, let alone monetized.  Therefore, care should be taken to assure that quantitative factors do not dominate important qualitative factors in decision making.  If an agency wishes to introduce a “margin of safety” into a decision, it should do so explicitly.

Sixth, the more external review that regulatory analyses receive, the better they are likely to be.  Retrospective assessments should be carried out periodically.

Seventh, a consistent set of economic assumptions should be used in calculating benefits and costs.  Key variables include the social discount rate, the value of reducing risks of premature death and accidents, and the values associated with other improvements in health.  (A brief sketch of how much the discount rate alone can matter appears after this list of principles.)

Eighth, while benefit-cost analysis focuses primarily on the overall relationship between benefits and costs, a good analysis will also identify important distributional consequences for particular subgroups of the population.
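Before turning to what we concluded from these principles, a small numerical aside on that seventh point.  The figures below are purely illustrative assumptions (a hypothetical benefit and two commonly discussed real discount rates), but they show why consistency in the choice of discount rate matters so much when benefits arrive far in the future.

```python
# Illustrative only: how the choice of social discount rate changes the
# present value of a benefit realized far in the future.

def present_value(benefit, rate, years):
    return benefit / (1 + rate) ** years

future_benefit = 1_000_000   # hypothetical benefit realized 50 years from now
for rate in (0.03, 0.07):    # two commonly discussed real discount rates
    pv = present_value(future_benefit, rate, 50)
    print(f"discount rate {rate:.0%}: present value = ${pv:,.0f}")
```

At 3 percent, the hypothetical million-dollar benefit is worth roughly $228,000 today; at 7 percent, roughly $34,000.  Two analyses that differ only in this assumption can reach very different conclusions about the same policy.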

From these eight principles, we concluded that benefit-cost analysis can play an important role in legislative and regulatory policy debates on protecting and improving the natural environment, health, and safety.  Although formal benefit-cost analysis should not be viewed as either necessary or sufficient for designing sensible public policy, it can provide an exceptionally useful framework for consistently organizing disparate information, and in this way, it can greatly improve the process and hence the outcome of policy analysis.

If properly done, benefit-cost analysis can be of great help to agencies participating in the development of environmental regulations, and it can likewise be useful in evaluating agency decision making and in shaping new laws (which brings us full-circle to the climate legislation that will be developed in the U.S. Senate over the weeks and months ahead, and which I hope to discuss in future posts).

The Myth of the Universal Market

Communication among economists, other social scientists, natural scientists, and lawyers is far from perfect. When the topic is the environment, discourse across disciplines is both important and difficult. Economists themselves have likely contributed to some misunderstandings about how they think about the environment, perhaps through enthusiasm for market solutions, perhaps by neglecting to make explicit all of the necessary qualifications, and perhaps simply by the use of technical jargon.

So it shouldn’t come as a surprise that there are several prevalent and very striking myths about how economists think about the environment. Because of this, my colleague Don Fullerton, a professor of economics at the University of Illinois, and I posed the following question in an article in Nature:  how do economists really think about the environment? In this and several succeeding postings, I’m going to answer this question by examining — in turn — several of the most prevalent myths.

One myth is that economists believe that the market solves all problems. Indeed, the “first theorem of welfare economics” states that private markets are perfectly efficient on their own, with no interference from government, so long as certain conditions are met. This theorem, easily proven, is exceptionally powerful, because it means that no one needs to tell producers of goods and services what to sell to which consumers. Instead, self-interested producers and self-interested consumers meet in the market place, engage in trade, and thereby achieve the greatest good for the greatest number, as if “guided by an invisible hand,” as Adam Smith wrote in 1776 in The Wealth of Nations. This notion of maximum general welfare is what economists mean by the “efficiency” of competitive markets.

Economists in business schools may be particularly fond of identifying markets where the necessary conditions are met, where many buyers and many sellers operate with very good information and very low transactions costs to trade well-defined commodities with enforced rights of ownership. These economists regularly produce studies demonstrating the efficiency of such markets (although even in this sphere, problems can obviously arise).

For other economists, especially those in public policy schools, the whole point of the first welfare theorem is very different. By clarifying the conditions under which markets are efficient, the theorem also identifies the conditions under which they are not. Private markets are perfectly efficient only if there are no public goods, no externalities, no monopoly buyers or sellers, no increasing returns to scale, no information problems, no transactions costs, no taxes, no common property, and no other distortions that come between the costs paid by buyers and the benefits received by sellers.

Those conditions are obviously very restrictive, and they are usually not all satisfied simultaneously. When a market thus “fails,” this same theorem offers us guidance on how to “round up the usual suspects.” For any particular market, the interesting questions are whether the number of sellers is sufficiently small to warrant antitrust action, whether the returns to scale are great enough to justify tolerating a single producer in a regulated market, or whether the benefits from the good are “public” in a way that might justify outright government provision of it. A public good, like the light from a lighthouse, is one that can benefit additional users at no cost to society, or that benefits those who “free ride” without paying for it.

Environmental economists, of course, are interested in pollution and other externalities, where some consequences of producing or consuming a good or service are external to the market, that is, not considered by producers or consumers. With a negative externality, such as environmental pollution, the total social cost of production may thus exceed the value to consumers. If the market is left to itself, too many pollution-generating products get produced. There’s too much pollution, and not enough clean air, for example, to provide maximum general welfare. In this case, laissez-faire markets — because of the market failure, the externalities — are not efficient.
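Here is a minimal sketch of that logic.  The demand curve, private cost curve, and per-unit damage figure are all hypothetical numbers chosen only for illustration; the point is that the unregulated market, which ignores the external damage, produces more than the efficient quantity.

```python
# Illustrative only: a hypothetical market with a pollution externality.
# Inverse demand (marginal benefit to consumers): P(q) = 100 - q
# Private marginal cost to producers:             20 + q
# External damage per unit of output:             30

def demand(q):
    return 100 - q

def mc_private(q):
    return 20 + q

def mc_social(q):
    return mc_private(q) + 30   # private cost plus external damage

# An unregulated market expands output until demand meets *private* marginal
# cost; efficiency requires stopping where demand meets *social* marginal cost.
market_q = next(q for q in range(101) if demand(q) <= mc_private(q))
efficient_q = next(q for q in range(101) if demand(q) <= mc_social(q))

print(f"market output: {market_q}, efficient output: {efficient_q}")
```

With these made-up numbers, the market produces 40 units while only 25 are efficient; the gap is precisely the over-production that the externality causes.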

Similarly, natural resource economists are particularly interested in common property, or open-access resources, where anyone can extract or harvest the resource freely. In this case, no one recognizes the full cost of using the resource; extractors consider only their own direct and immediate costs, not the costs to others of increased scarcity (called “user cost” or “scarcity rent” by economists). The result, of course, is that the resource is depleted too quickly. These markets are also inefficient.
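A small two-period sketch, again with entirely hypothetical numbers, illustrates the role of user cost: a sole owner who accounts for scarcity rent spreads extraction over time, while open-access harvesters, who compare only price to their direct cost, exhaust the stock immediately.

```python
# Illustrative only: a hypothetical two-period extraction problem.
# Inverse demand in each period: P(q) = 100 - q; direct extraction cost: 10
# per unit, so the marginal net benefit of extracting the q-th unit is 90 - q.

STOCK, COST, DISCOUNT = 60.0, 10.0, 0.10

def net_benefit(q):
    # area under the marginal-net-benefit curve (90 - x) from 0 to q
    return (100 - COST) * q - 0.5 * q ** 2

# Efficient (sole-owner) plan: choose first-period extraction to maximize the
# discounted sum of net benefits, which implicitly charges each unit its user
# cost (the forgone value of leaving it for the second period).
grid = [x / 10 for x in range(int(STOCK * 10) + 1)]
efficient_q1 = max(grid, key=lambda q1: net_benefit(q1)
                   + net_benefit(STOCK - q1) / (1 + DISCOUNT))

# Open access: each harvester compares only price to direct extraction cost,
# ignores scarcity rent, and the whole stock is taken in the first period.
open_access_q1 = min(STOCK, 100 - COST)

print(f"sole owner, period-1 extraction:  {efficient_q1:.1f} of {STOCK:.0f}")
print(f"open access, period-1 extraction: {open_access_q1:.1f} of {STOCK:.0f}")
```

In this sketch the sole owner leaves nearly half the stock for the second period, while open access takes all of it at once.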

So, the market by itself demonstrably does not solve all problems. Indeed, in the environmental domain, perfectly functioning markets are the exception, rather than the rule. Governments can try to correct these market failures, for example by restricting pollutant emissions or limiting access to open-access resources. Such government interventions will not necessarily make the world better off; that is, not all public policies will pass an efficiency test. But if undertaken wisely, government interventions can improve welfare, that is, lead to greater efficiency. I will turn to such interventions in a subsequent posting.
