Policies Can Work in Strange Ways

Whether the policy domain is global climate change or local hazardous waste, it’s exceptionally important to understand the interaction between public policies and technological change in order to assess the effects of laws and regulations on environmental performance.  Several years ago, my colleagues, Professor Lori Bennear of Duke University and Professor Nolan Miller of the University of Illinois, joined me in examining the effects of regulation on technological change in chlorine manufacturing, focusing on the diffusion of membrane-cell technology, which is widely viewed as environmentally superior to both mercury-cell and diaphragm-cell technologies.  Our results were both interesting and surprising, and they merit thinking about in the context of current policy discussions and debates in Washington.

The chlorine manufacturing industry had experienced a substantial shift over time toward the membrane technology. Two different processes drove this shift:  the switch to the cleaner technology at existing plants (that is, adoption), and the closing of facilities using diaphragm and mercury cells (in other words, exit).  In our study, we considered the effects of both direct regulation of chlorine manufacturing and regulation of downstream uses of chlorine.  (By the way, you can read a more detailed version of this story in our article in the American Economic Review Papers and Proceedings, volume 93, 2003, pp. 431-435.)

In 1972, a widely publicized incident of mercury poisoning in Minamata Bay, Japan, led the Japanese government to prohibit the use of mercury cells for chlorine production. The United States did not follow suit, but it did impose more stringent constraints on mercury-cell units during the early 1970’s. Subsequently, chlorine manufacturing became subject to increased regulation under the Clean Air Act, the Clean Water Act, the Resource Conservation and Recovery Act, and the Comprehensive Environmental Response, Compensation, and Liability Act.  In addition, chlorine manufacturing became subject to public-disclosure requirements under the Toxics Release Inventory (TRI).

In addition to regulation of the chlorine manufacturing process, there was also increased environmental pressure on industries that used chlorine as an input. This indirect regulation was potentially important for choices of chlorine manufacturing technology because a large share of chlorine was and is manufactured for onsite use in the production of other products. Changes in regulations in downstream industries can have substantial impacts on the demand for chlorine and thereby affect the rate of entry and exit of chlorine production plants.

Two major indirect regulations altered the demand for chlorine. One was the Montreal Protocol, which regulated the production of ozone-depleting chemicals, such as chlorofluorocarbons (CFCs), for which chlorine is a key ingredient. The other important indirect regulation was the “Cluster Rule,” which tightened restrictions on the release of chlorinated compounds from pulp and paper mills to both water and air. This led to increased interest by the industry in non-chlorine bleaching agents, which in turn affected the economic viability of some chlorine plants.

In our econometric (statistical) analysis, we examined the effects of economic and regulatory factors on adoption and exit decisions by chlorine manufacturing plants from 1976 to 2001.  For our analysis of adoption, we employed data on 51 facilities, eight of which had adopted the membrane technology during the period we investigated.
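
For readers who want to see the mechanics, here is a highly simplified sketch of the kind of discrete-choice estimation involved.  It is illustrative only:  the data file and variable names are hypothetical, and the simple logit specification shown is a stand-in for, not a reproduction of, the model we actually estimated.

```python
# Illustrative sketch only -- not the study's actual specification.
# Assumes a hypothetical plant-year data set with stylized covariates.
import pandas as pd
import statsmodels.api as sm

# Hypothetical file: one row per plant-year, with columns such as
#   adopt            1 if the plant switched to membrane cells that year, else 0
#   mercury_process  1 if the plant currently uses mercury cells (vs. diaphragm)
#   tri_reporting    1 in years when TRI disclosure applied to the plant
#   capacity, age    economic controls for plant characteristics
df = pd.read_csv("chlorine_plant_years.csv")

X = sm.add_constant(df[["mercury_process", "tri_reporting", "capacity", "age"]])
adoption_model = sm.Logit(df["adopt"], X).fit()
print(adoption_model.summary())  # signs and significance of the regulatory terms
```

In a specification of this sort, the question is simply whether the coefficients on the regulatory variables are positive and statistically significant.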

We found that the effects of the regulations on the likelihood of adopting membrane technology were not statistically significant.  Mercury plants, which were subject to stringent water, air, and hazardous-waste-removal regulation, were no more likely to switch to the membrane technology than were diaphragm plants. Similarly, TRI reporting appeared to have had no significant effect on adoption decisions.

We also examined what caused plants to exit the industry, with data on 55 facilities, 21 of which ceased operations between 1976 and 2001. Some interesting and quite striking patterns emerged. Regulations clearly explained some of the exit behavior.  In particular, indirect regulations of the end-uses of chlorine accelerated shutdowns in some industries. Facilities affected by the pulp and paper cluster rule and the Montreal Protocol were substantially more likely to shut down than were other facilities.
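
The exit analysis can be sketched in the same illustrative spirit, here as a proportional-hazards survival model of the time until a plant shuts down.  Again, the data file, variable names, and specification are hypothetical stand-ins rather than our actual model.

```python
# Illustrative sketch only: time until plant shutdown as a function of
# exposure to indirect regulation. Hypothetical data and variable names.
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical file: one row per plant, with columns such as
#   years_observed  years the plant is observed during 1976-2001
#   exited          1 if the plant ceased operations during the period, else 0
#   cluster_rule    1 if the plant mainly served pulp and paper customers
#   montreal        1 if the plant mainly served producers of CFCs
#   capacity        economic control for plant size
plants = pd.read_csv("chlorine_plants.csv")

cph = CoxPHFitter()
cph.fit(plants[["years_observed", "exited", "cluster_rule", "montreal", "capacity"]],
        duration_col="years_observed", event_col="exited")
cph.print_summary()  # positive coefficients imply a higher hazard of exit
```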

It is good to remember that the diffusion of new technology is the result of a combination of adoption at existing facilities and entry and exit of facilities with various technologies in place. In the case of chlorine manufacturing, our results indicated that regulatory factors did not have a significant effect on the decision to adopt the greener technology at existing plants. On the other hand, indirect regulation of the end-uses of chlorine accelerated facility closures significantly, and thereby increased the share of plants using the cleaner, membrane technology for chlorine production.

Environmental regulation did affect technological change, but not in the way many people assume it does. It did so not by encouraging existing facilities to adopt a cleaner technology, but by reducing the demand for a product and hence encouraging the shutdown of facilities using environmentally inferior options.  This is a legitimate way for policies to operate, although it’s one most politicians would probably prefer not to recognize.


What Role for U.S. Carbon Sequestration?

With the development of climate legislation proceeding in the U.S. Senate, a key question is whether the United States can cost-effectively reduce a significant share of its contributions to increased atmospheric CO2 concentrations through forest-based carbon sequestration.  Should biological carbon sequestration be part of the domestic portfolio of compliance activities?

The potential costs of carbon sequestration policies should be one major criterion, and so it can be helpful to assess the cost of supplying forest-based carbon sequestration.  This is a topic that I’ve investigated in a series of papers with various co-authors over the past ten years (“Land-Use Change and Carbon Sinks: Econometric Estimation of the Carbon Sequestration Supply Function,” Journal of Environmental Economics and Management 51 (2006): 135-152, with Ruben Lubowski and Andrew Plantinga; “Climate Change and Forest Sinks: Factors Affecting the Costs of Carbon Sequestration,” Journal of Environmental Economics and Management 40 (2000): 211-235, with Richard Newell; and “The Costs of Carbon Sequestration: A Revealed-Preference Approach,” American Economic Review 89, no. 4 (1999): 994-1009).  Most useful for policy purposes is probably the 2005 report Kenneth Richards and I wrote for the Pew Center on Global Climate Change (“The Cost of U.S. Forest-Based Carbon Sequestration”).  In that report, we surveyed and synthesized the best cost estimates from all available sources.

Human activities — particularly the extraction and burning of fossil fuels and the depletion of forests — are causing the level of CO2 in the atmosphere to rise.  It may be possible to increase the rate at which ecosystems remove CO2 from the atmosphere and store the carbon in plant material, decomposing detritus, and organic soil.  In essence, forests and other highly productive ecosystems can become biological scrubbers by removing (sequestering) CO2 from the atmosphere.  Much of the current interest in carbon sequestration has been prompted by suggestions that sufficient lands are available to use sequestration for mitigating significant shares of annual CO2 emissions, and related claims that this approach provides a relatively inexpensive means of addressing climate change.  In other words, the fact that policy makers are giving serious attention to carbon sequestration can partly be explained by (implicit) assertions about its marginal cost, or (in economists’ parlance) its supply function, relative to other mitigation options.

Among the key factors that affect estimates of the cost of forest carbon sequestration are: (1) the tree species involved, the forestry practices utilized, and the related rates of carbon uptake over time; (2) the opportunity cost of the land, that is, the value of the affected land for alternative uses; (3) the disposition of biomass through burning, harvesting, and forest-product sinks; (4) anticipated changes in forest and agricultural product prices; (5) the analytical methods used to account for carbon flows over time; (6) the discount rate employed in the analysis; and (7) the policy instruments used to achieve a given carbon sequestration target.
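
To see how a few of these factors interact, consider a deliberately simplified back-of-the-envelope calculation.  Every parameter value below is hypothetical, chosen only to illustrate how the opportunity cost of land, the time profile of carbon uptake, and the discount rate combine to produce a cost per ton; it is not drawn from our report.

```python
# Hypothetical back-of-the-envelope: levelized cost per ton of carbon
# sequestered by afforesting one acre. All parameter values are illustrative.
import math

rent_per_acre = 60.0      # annual opportunity cost of the land ($/acre/year)
max_uptake = 2.0          # mature-forest carbon uptake (tons of carbon/acre/year)
discount_rate = 0.05      # social discount rate
years = 80                # program horizon

def uptake(t):
    # Young trees sequester little; uptake approaches the mature rate over decades.
    return max_uptake * (1 - math.exp(-t / 30))

pv_costs = sum(rent_per_acre / (1 + discount_rate) ** t for t in range(1, years + 1))
pv_carbon = sum(uptake(t) / (1 + discount_rate) ** t for t in range(1, years + 1))

# Discounting both dollars and carbon is one common levelization convention;
# other carbon-accounting conventions (factor 5 above) give different answers.
print(f"levelized cost: ${pv_costs / pv_carbon:.0f} per ton of carbon")
```

Because the costs (forgone land rent) arrive immediately while the carbon arrives slowly, a higher discount rate raises the cost per ton; that is one reason factors (5) and (6) matter so much.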

Given the diverse set of factors that affect the cost and quantity of potential forest carbon sequestration in the United States, it should not be surprising that cost studies have produced a broad range of estimates.  Ken Richards and I identified eleven previous analyses that were good candidates for comparison and synthesis, and we made their results mutually consistent by adjusting them to constant-year dollars, equivalent annual costs as the outcome measure, identical discount rates, and identical geographic scope.  We also employed econometric methods to estimate the central tendency (or “best-fit”) of the normalized marginal cost functions from the eleven studies, as a rough guide for policy makers to the projected availability of carbon sequestration at various costs.
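
A stylized version of that normalization-and-synthesis step might look like the sketch below.  The data points are hypothetical stand-ins for the digitized marginal-cost schedules from the underlying studies, and the simple polynomial fit is illustrative rather than the econometric specification we actually used.

```python
# Illustrative sketch: pool (quantity, marginal cost) points taken from several
# studies -- after normalizing to constant dollars, a common discount rate, and
# a common geographic scope -- and fit a central-tendency supply curve.
import numpy as np

# Hypothetical normalized points: annual sequestration (million tons of carbon)
# and marginal cost ($/ton), pooled across studies.
quantity = np.array([50, 100, 150, 200, 250, 300, 350, 400, 450, 500], dtype=float)
marginal_cost = np.array([8, 15, 22, 31, 38, 45, 52, 58, 64, 71], dtype=float)

coeffs = np.polyfit(quantity, marginal_cost, deg=2)   # low-order polynomial fit
supply = np.poly1d(coeffs)

print("fitted marginal cost at 300 Mt/yr:", round(supply(300), 1), "$/ton")
print("fitted marginal cost at 500 Mt/yr:", round(supply(500), 1), "$/ton")
```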

Three major conclusions emerged from our survey and synthesis.  First, there is a broad range of possible forest-based carbon sequestration opportunities available at various magnitudes and associated costs.  The range depends upon the underlying biological and economic assumptions, as well as the analytical cost-estimation methods employed.

Second, a systematic comparison of sequestration supply estimates from national studies produces a range of $25 to $75 per ton for a program size of 300 million tons of annual carbon sequestration. The range increases somewhat, to $30 to $90 per ton of carbon, for programs sequestering 500 million tons annually.

Third, when a transparent and accessible econometric technique is used to estimate the central tendency (or “best-fit”) of the costs estimated in the studies, the resulting supply function for forest-based carbon sequestration in the United States is approximately linear up to 500 million tons of carbon per year, at which point marginal costs reach approximately $70 per ton.
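
One convenient implication of an approximately linear supply function can be seen with simple arithmetic.  If, purely for illustration, marginal cost is assumed to rise linearly from zero to about $70 per ton at 500 million tons per year, then the total annual cost of a program of that size is the area under that line:

```python
# Back-of-the-envelope implication of a linear marginal-cost schedule.
# Assumes, for illustration only, that marginal cost rises linearly from $0
# at zero sequestration to $70 per ton at 500 million tons per year.
mc_at_cap = 70.0        # marginal cost ($/ton) at the 500 Mt/yr program size
program_size = 500e6    # tons of carbon sequestered per year

total_annual_cost = 0.5 * mc_at_cap * program_size   # area of the triangle under the line
average_cost = total_annual_cost / program_size      # half of the marginal cost at the cap

print(f"total annual cost: ${total_annual_cost / 1e9:.1f} billion per year")
print(f"average cost:      ${average_cost:.0f} per ton")
```

Because the synthesized supply function need not pass exactly through the origin, this is a rough bound rather than an estimate, but it conveys the order of magnitude involved.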

A 500 million ton per year sequestration program would be very significant, offsetting approximately one-third of annual U.S. carbon emissions.  At this level, the estimated costs of carbon sequestration are comparable to typical estimates of the costs of emissions abatement through fuel switching and energy-efficiency improvements.  This result indicates that sequestration opportunities ought to be included in the economic modeling of climate policies.  And it further suggests that if it is possible to design and implement a domestic carbon sequestration program, then such a program ought to be included in a cost-effective portfolio of compliance strategies when the United States enacts a mandatory domestic greenhouse-gas reduction program.  Large-scale forest-based carbon sequestration can be a cost-effective tool that should be considered seriously by policy makers.

Of course, this raises the question of whether a policy can be designed that would bring about such biological carbon sequestration cost-effectively, whether as part of a cap-and-trade system, through a related offset scheme, or through some other policy mechanism.  That is a question without easy answers (as I’ve noted in a previous post on the Waxman-Markey legislation), but the cost analyses I’ve reviewed in this post suggest that it is important to explore possible ways of incorporating biological carbon sequestration in future U.S. climate policy.


Is Benefit-Cost Analysis Helpful for Environmental Regulation?

With the locus of action on Federal climate policy moving this week from the House of Representatives to the Senate, this is a convenient moment to step back from the political fray and reflect on some fundamental questions about U.S. environmental policy.

One such question is whether economic analysis, in particular the comparison of the benefits and costs of proposed policies, plays a truly useful role in Washington, or whether it is little more than a distraction from more important perspectives on public policy, or, worst of all, whether it is counter-productive, even antithetical, to the development, assessment, and implementation of sound policy in the environmental, resource, and energy realms.  With an exceptionally talented group of thinkers, including scientists, lawyers, and economists, now in key environmental and energy policy positions at the White House, the Environmental Protection Agency, the Department of Energy, and the Department of the Treasury, this question about the usefulness of benefit-cost analysis is of particular importance.

For many years, there have been calls from some quarters for greater reliance on the use of economic analysis in the development and evaluation of environmental regulations.  As I have noted in previous posts on this blog, most economists would argue that economic efficiency — measured as the difference between benefits and costs — ought to be one of the key criteria for evaluating proposed regulations.  (See:  “The Myths of Market Prices and Efficiency,” March 3, 2009; “What Baseball Can Teach Policymakers,” April 20, 2009; “Does Economic Analysis Shortchange the Future?” April 27, 2009)  Because society has limited resources to spend on regulation, such analysis can help illuminate the trade-offs involved in making different kinds of social investments.  In this sense, it would seem irresponsible not to conduct such analyses, since they can inform decisions about how scarce resources can be put to the greatest social good.

In principle, benefit-cost analysis can also help answer questions of how much regulation is enough.  From an efficiency standpoint, the answer to this question is simple — regulate until the incremental benefits from regulation are just offset by the incremental costs.  In practice, however, the problem is much more difficult, in large part because of inherent problems in measuring marginal benefits and costs.  In addition, concerns about fairness and process may be very important economic and non-economic factors.  Regulatory policies inevitably involve winners and losers, even when aggregate benefits exceed aggregate costs.
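
The efficiency logic in the previous paragraph can be made concrete with a toy example.  The functional forms and numbers below are purely illustrative; the point is simply that choosing the level of regulatory stringency q to maximize net benefits, B(q) - C(q), leads to the condition that marginal benefits equal marginal costs.

```python
# Toy example of the efficiency condition: regulate until marginal benefits
# equal marginal costs. Functional forms and numbers are purely illustrative.
import sympy as sp

q = sp.symbols("q", nonnegative=True)
benefits = 100 * q - 2 * q**2   # total benefits of stringency q (diminishing returns)
costs = 10 * q + q**2           # total costs of stringency q (rising marginal cost)

net_benefits = benefits - costs
q_star = sp.solve(sp.Eq(sp.diff(benefits, q), sp.diff(costs, q)), q)[0]

print("efficient stringency q* =", q_star)                      # where B'(q) = C'(q)
print("net benefits at q*      =", net_benefits.subs(q, q_star))
```

The hard part in practice, of course, is not the optimization but the measurement of the marginal benefit and marginal cost schedules themselves.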

Over the years, policy makers have sent mixed signals regarding the use of benefit-cost analysis in policy evaluation.  Congress has passed several statutes to protect health, safety, and the environment that effectively preclude the consideration of benefits and costs in the development of certain regulations, even though other statutes actually require the use of benefit-cost analysis.  At the same time, Presidents Carter, Reagan, George H. W. Bush, Clinton, and George W. Bush all put in place formal processes for reviewing the economic implications of major environmental, health, and safety regulations. Apparently the Executive Branch, charged with designing and implementing regulations, has seen a greater need than the Congress to develop a yardstick against which regulatory proposals can be assessed.  Benefit-cost analysis has been the yardstick of choice.

It was in this context that, in 1996, a group of economists from across the political spectrum jointly authored an article in Science magazine, asking whether there is a role for benefit-cost analysis in environmental, health, and safety regulation.  That diverse group consisted of Kenneth Arrow, Maureen Cropper, George Eads, Robert Hahn, Lester Lave, Roger Noll, Paul Portney, Milton Russell, Richard Schmalensee, Kerry Smith, and myself.  That article and its findings are particularly timely, with President Obama considering putting in place a new Executive Order on Regulatory Review.

In the article, we suggested that benefit-cost analysis has a potentially important role to play in helping inform regulatory decision making, though it should not be the sole basis for such decision making.  We offered eight principles.

First, benefit-cost analysis can be useful for comparing the favorable and unfavorable effects of policies, because it can help decision makers better understand the implications of decisions by identifying and, where appropriate, quantifying the favorable and unfavorable consequences of a proposed policy change.  But, in some cases, there is too much uncertainty to use benefit-cost analysis to conclude that the benefits of a decision will exceed or fall short of its costs.

Second, decision makers should not be precluded from considering the economic costs and benefits of different policies in the development of regulations.  Removing statutory prohibitions on the balancing of benefits and costs can help promote more efficient and effective regulation.

Third, benefit-cost analysis should be required for all major regulatory decisions. The scale of a benefit-cost analysis should depend on both the stakes involved and the likelihood that the resulting information will affect the ultimate decision.

Fourth, although agencies should be required to conduct benefit-cost analyses for major decisions, and to explain why they have selected actions for which reliable evidence indicates that expected benefits are significantly less than expected costs, those agencies should not be bound by strict benefit-cost tests.  Factors other than aggregate economic benefits and costs may be important.

Fifth, benefits and costs of proposed policies should be quantified wherever possible.  But not all impacts can be quantified, let alone monetized.  Therefore, care should be taken to assure that quantitative factors do not dominate important qualitative factors in decision making.  If an agency wishes to introduce a “margin of safety” into a decision, it should do so explicitly.

Sixth, the more external review that regulatory analyses receive, the better they are likely to be.  Retrospective assessments should be carried out periodically.

Seventh, a consistent set of economic assumptions should be used in calculating benefits and costs.  Key variables include the social discount rate, the value of reducing risks of premature death and accidents, and the values associated with other improvements in health.

Eighth, while benefit-cost analysis focuses primarily on the overall relationship between benefits and costs, a good analysis will also identify important distributional consequences for subgroups of the population.

From these eight principles, we concluded that benefit-cost analysis can play an important role in legislative and regulatory policy debates on protecting and improving the natural environment, health, and safety.  Although formal benefit-cost analysis should not be viewed as either necessary or sufficient for designing sensible public policy, it can provide an exceptionally useful framework for consistently organizing disparate information, and in this way, it can greatly improve the process and hence the outcome of policy analysis.

If properly done, benefit-cost analysis can be of great help to agencies participating in the development of environmental regulations, and it can likewise be useful in evaluating agency decision making and in shaping new laws (which brings us full-circle to the climate legislation that will be developed in the U.S. Senate over the weeks and months ahead, and which I hope to discuss in future posts).
