Policies Can Work in Strange Ways

Whether the policy domain is global climate change or local hazardous waste, it’s exceptionally important to understand the interaction between public policies and technological change in order to assess the effects of laws and regulations on environmental performance.  Several years ago, my colleagues – Professor Lori Bennear of Duke University and Professor Nolan Miller of the University of Illinois – joined me in examining the effects of regulation on technological change in chlorine manufacturing by focusing on the diffusion of membrane-cell technology, widely viewed as environmentally superior to both mercury-cell and diaphragm-cell technologies.  Our results were both interesting and surprising, and merit thinking about in the context of current policy discussions and debates in Washington.

The chlorine manufacturing industry had experienced a substantial shift over time toward the membrane technology.  Two different processes drove this shift:  the switch to membrane cells at existing plants (that is, adoption), and the closing of facilities using diaphragm and mercury cells (in other words, exit).  In our study, we considered the effects of both direct regulation of chlorine manufacturing and regulation of downstream uses of chlorine.  (By the way, you can read a more detailed version of this story in our article in the American Economic Review Papers and Proceedings, volume 93, 2003, pp. 431-435.)

In 1972, a widely publicized incident of mercury poisoning in Minamata Bay, Japan, led the Japanese government to prohibit the use of mercury cells for chlorine production.  The United States did not follow suit, but it did impose more stringent constraints on mercury-cell units during the early 1970s.  Subsequently, chlorine manufacturing became subject to increased regulation under the Clean Air Act, the Clean Water Act, the Resource Conservation and Recovery Act, and the Comprehensive Environmental Response, Compensation, and Liability Act.  In addition, chlorine manufacturing became subject to public-disclosure requirements under the Toxics Release Inventory.

In addition to regulation of the chlorine manufacturing process, there was also increased environmental pressure on industries that used chlorine as an input. This indirect regulation was potentially important for choices of chlorine manufacturing technology because a large share of chlorine was and is manufactured for onsite use in the production of other products. Changes in regulations in downstream industries can have substantial impacts on the demand for chlorine and thereby affect the rate of entry and exit of chlorine production plants.

Two major indirect regulations altered the demand for chlorine. One was the Montreal Protocol, which regulated the production of ozone-depleting chemicals, such as chlorofluorocarbons (CFCs), for which chlorine is a key ingredient. The other important indirect regulation was the “Cluster Rule,” which tightened restrictions on the release of chlorinated compounds from pulp and paper mills to both water and air. This led to increased interest by the industry in non-chlorine bleaching agents, which in turn affected the economic viability of some chlorine plants.

In our econometric analysis, we examined the effects of economic and regulatory factors on adoption and exit decisions by chlorine manufacturing plants from 1976 to 2001.  For our analysis of adoption, we employed data on 51 facilities, eight of which had adopted the membrane technology during the period we investigated.

We found that the effects of the regulations on the likelihood of adopting membrane technology were not statistically significant.  Mercury plants, which were subject to stringent regulation for water, air, and hazardous-waste removal, were no more likely to switch to the membrane technology than diaphragm plants. Similarly, TRI reporting appeared to have had no significant effect on adoption decisions.
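
To give a feel for the kind of comparison involved – with entirely hypothetical plant counts, since our actual analysis used richer plant-level data and regression methods – one can contrast adoption rates across technology types with a simple two-proportion test:

```python
import math

# Hypothetical counts, for illustration only: membrane-cell adopters
# among mercury-cell plants and among diaphragm-cell plants.
mercury_adopters, mercury_plants = 4, 25
diaphragm_adopters, diaphragm_plants = 4, 26

p1 = mercury_adopters / mercury_plants
p2 = diaphragm_adopters / diaphragm_plants
p_pooled = (mercury_adopters + diaphragm_adopters) / (mercury_plants + diaphragm_plants)

# Standard error of the difference in proportions under the null
# hypothesis that the two adoption rates are equal.
se = math.sqrt(p_pooled * (1 - p_pooled) * (1 / mercury_plants + 1 / diaphragm_plants))
z = (p1 - p2) / se  # |z| < 1.96 => no significant difference at the 5% level
```

With counts like these, the z-statistic is far below conventional critical values, which is the flavor of the "no significant effect" finding described above.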

We also examined what caused plants to exit the industry, with data on 55 facilities, 21 of which ceased operations between 1976 and 2001. Some interesting and quite striking patterns emerged. Regulations clearly explained some of the exit behavior.  In particular, indirect regulations of the end-uses of chlorine accelerated shutdowns in some industries. Facilities affected by the pulp and paper cluster rule and the Montreal Protocol were substantially more likely to shut down than were other facilities.

It is good to remember that the diffusion of new technology is the result of a combination of adoption at existing facilities and entry and exit of facilities with various technologies in place. In the case of chlorine manufacturing, our results indicated that regulatory factors did not have a significant effect on the decision to adopt the greener technology at existing plants. On the other hand, indirect regulation of the end-uses of chlorine accelerated facility closures significantly, and thereby increased the share of plants using the cleaner, membrane technology for chlorine production.

Environmental regulation did affect technological change, but not in the way many people assume it does. It did so not by encouraging the adoption of some technology by existing facilities, but by reducing the demand for a product and hence encouraging the shutdown of facilities using environmentally inferior options.  This is a legitimate way for policies to operate, although it’s one most politicians would probably prefer not to recognize.


Is Benefit-Cost Analysis Helpful for Environmental Regulation?

With the locus of action on Federal climate policy moving this week from the House of Representatives to the Senate, this is a convenient moment to step back from the political fray and reflect on some fundamental questions about U.S. environmental policy.

One such question is whether economic analysis – in particular, the comparison of the benefits and costs of proposed policies – plays a truly useful role in Washington, is little more than a distraction from more important perspectives on public policy, or – worst of all – is counter-productive, even antithetical, to the development, assessment, and implementation of sound policy in the environmental, resource, and energy realms.  With an exceptionally talented group of thinkers – including scientists, lawyers, and economists – now in key environmental and energy policy positions at the White House, the Environmental Protection Agency, the Department of Energy, and the Department of the Treasury, this question about the usefulness of benefit-cost analysis is of particular importance.

For many years, there have been calls from some quarters for greater reliance on the use of economic analysis in the development and evaluation of environmental regulations.  As I have noted in previous posts on this blog, most economists would argue that economic efficiency — measured as the difference between benefits and costs — ought to be one of the key criteria for evaluating proposed regulations.  (See:  “The Myths of Market Prices and Efficiency,” March 3, 2009; “What Baseball Can Teach Policymakers,” April 20, 2009; “Does Economic Analysis Shortchange the Future?” April 27, 2009)  Because society has limited resources to spend on regulation, such analysis can help illuminate the trade-offs involved in making different kinds of social investments.  In this sense, it would seem irresponsible not to conduct such analyses, since they can inform decisions about how scarce resources can be put to the greatest social good.

In principle, benefit-cost analysis can also help answer the question of how much regulation is enough.  From an efficiency standpoint, the answer to this question is simple – regulate until the incremental benefits from regulation are just offset by the incremental costs.  In practice, however, the problem is much more difficult, in large part because of inherent difficulties in measuring marginal benefits and costs.  In addition, non-economic factors, such as concerns about fairness and process, may be very important.  Regulatory policies inevitably involve winners and losers, even when aggregate benefits exceed aggregate costs.
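
The efficiency rule can be made concrete with a stylized example (the numbers are purely illustrative): with declining marginal benefits and rising marginal costs of abatement, net benefits peak exactly where the two curves cross.

```python
# Hypothetical linear curves over abatement level q:
#   MB(q) = 100 - 2q  (benefits of abatement decline at the margin)
#   MC(q) = 3q        (costs of abatement rise at the margin)

def marginal_benefit(q):
    return 100 - 2 * q

def marginal_cost(q):
    return 3 * q

# Efficient level: set MB = MC, i.e. 100 - 2q = 3q  =>  q* = 20
q_star = 100 / (2 + 3)

# Net benefit is the accumulated gap between the curves:
#   integral of (100 - 5q) dq = 100q - 2.5q^2
def net_benefit(q):
    return 100 * q - 2.5 * q ** 2

# A coarse grid search confirms that net benefits peak at q* = 20.
best = max(range(0, 51), key=net_benefit)
```

Regulating beyond q* = 20 here still yields positive gross benefits, but each additional unit of abatement costs more than it is worth, which is the trade-off the text describes.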

Over the years, policy makers have sent mixed signals regarding the use of benefit-cost analysis in policy evaluation.  Congress has passed several statutes to protect health, safety, and the environment that effectively preclude the consideration of benefits and costs in the development of certain regulations, even though other statutes actually require the use of benefit-cost analysis.  At the same time, Presidents Carter, Reagan, George H. W. Bush, Clinton, and George W. Bush all put in place formal processes for reviewing the economic implications of major environmental, health, and safety regulations.  Apparently the Executive Branch, charged with designing and implementing regulations, has seen a greater need than the Congress to develop a yardstick against which regulatory proposals can be assessed.  Benefit-cost analysis has been the yardstick of choice.

It was in this context that ten years ago a group of economists from across the political spectrum jointly authored an article in Science magazine, asking whether there is a role for benefit-cost analysis in environmental, health, and safety regulation.  That diverse group consisted of Kenneth Arrow, Maureen Cropper, George Eads, Robert Hahn, Lester Lave, Roger Noll, Paul Portney, Milton Russell, Richard Schmalensee, Kerry Smith, and myself.  That article and its findings are particularly timely, with President Obama considering putting in place a new Executive Order on Regulatory Review.

In the article, we suggested that benefit-cost analysis has a potentially important role to play in helping inform regulatory decision making, though it should not be the sole basis for such decision making.  We offered eight principles.

First, benefit-cost analysis can be useful for comparing the favorable and unfavorable effects of policies, because it can help decision makers better understand the implications of decisions by identifying and, where appropriate, quantifying the favorable and unfavorable consequences of a proposed policy change.  But, in some cases, there is too much uncertainty to use benefit-cost analysis to conclude that the benefits of a decision will exceed or fall short of its costs.

Second, decision makers should not be precluded from considering the economic costs and benefits of different policies in the development of regulations.  Removing statutory prohibitions on the balancing of benefits and costs can help promote more efficient and effective regulation.

Third, benefit-cost analysis should be required for all major regulatory decisions. The scale of a benefit-cost analysis should depend on both the stakes involved and the likelihood that the resulting information will affect the ultimate decision.

Fourth, although agencies should be required to conduct benefit-cost analyses for major decisions, and to explain why they have selected actions for which reliable evidence indicates that expected benefits are significantly less than expected costs, those agencies should not be bound by strict benefit-cost tests.  Factors other than aggregate economic benefits and costs may be important.

Fifth, benefits and costs of proposed policies should be quantified wherever possible.  But not all impacts can be quantified, let alone monetized.  Therefore, care should be taken to assure that quantitative factors do not dominate important qualitative factors in decision making.  If an agency wishes to introduce a “margin of safety” into a decision, it should do so explicitly.

Sixth, the more external review that regulatory analyses receive, the better they are likely to be.  Retrospective assessments should be carried out periodically.

Seventh, a consistent set of economic assumptions should be used in calculating benefits and costs.  Key variables include the social discount rate, the value of reducing risks of premature death and accidents, and the values associated with other improvements in health.

Eighth, while benefit-cost analysis focuses primarily on the overall relationship between benefits and costs, a good analysis will also identify important distributional consequences for important subgroups of the population.

From these eight principles, we concluded that benefit-cost analysis can play an important role in legislative and regulatory policy debates on protecting and improving the natural environment, health, and safety.  Although formal benefit-cost analysis should not be viewed as either necessary or sufficient for designing sensible public policy, it can provide an exceptionally useful framework for consistently organizing disparate information, and in this way, it can greatly improve the process and hence the outcome of policy analysis.

If properly done, benefit-cost analysis can be of great help to agencies participating in the development of environmental regulations, and it can likewise be useful in evaluating agency decision making and in shaping new laws (which brings us full-circle to the climate legislation that will be developed in the U.S. Senate over the weeks and months ahead, and which I hope to discuss in future posts).


The Wonderful Politics of Cap-and-Trade: A Closer Look at Waxman-Markey

The headline of this post is not meant to be ironic.   Despite all the hand-wringing in the press and the blogosphere about a political “give-away” of allowances for the cap-and-trade system in the Waxman-Markey bill voted out of committee last week, the politics of cap-and-trade systems are truly quite wonderful, which is why these systems have been used, and used successfully.

The Waxman-Markey allocation of allowances has its problems, which I will get to, but before noting those problems it is exceptionally important to keep in mind what is probably the key attribute of cap-and-trade systems:  the allocation of allowances – whether the allowances are auctioned or given out freely, and how they are freely allocated – has no impact on the equilibrium distribution of allowances (after trading), and therefore no impact on the allocation of emissions (or emissions abatement), the total magnitude of emissions, or the aggregate social costs.  (Well, there are some relatively minor but noteworthy caveats – those “problems” I mentioned – about which more below.)  By the way, this independence of a cap-and-trade system’s performance from the initial allowance allocation was established as far back as 1972 by David Montgomery in a path-breaking article in the Journal of Economic Theory (based upon his 1971 Harvard economics Ph.D. dissertation).  It has been validated with empirical evidence repeatedly over the years.

Generally speaking, the choice between auctioning and freely allocating allowances does not influence firms’ production and emission reduction decisions.  Firms face the same emissions cost regardless of the allocation method.  When using an allowance, whether it was received for free or purchased, a firm loses the opportunity to sell that allowance, and thereby recognizes this “opportunity cost” in deciding whether to use the allowance.  Consequently, the allocation choice will not influence a cap’s overall costs.
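
A simple numeric sketch (with hypothetical numbers) makes the opportunity-cost logic concrete: the firm's abatement decision turns on the allowance price alone, whether the allowance was received for free or purchased.

```python
def abates(abatement_cost, allowance_price, allowance_is_free):
    """Return True if the firm abates one ton rather than emitting it.

    Free allowance:  emitting means using the allowance (payoff 0);
                     abating means selling it (price - cost).
    Purchased:       emitting requires buying an allowance (-price);
                     abating simply costs -cost.
    Either way, the firm abates if and only if cost < price.
    """
    if allowance_is_free:
        payoff_emit = 0.0                                 # use the free allowance
        payoff_abate = allowance_price - abatement_cost   # sell it instead
    else:
        payoff_emit = -allowance_price                    # must buy an allowance
        payoff_abate = -abatement_cost
    return payoff_abate > payoff_emit

price = 20.0  # hypothetical allowance price, $/ton
# The decision is identical under free allocation and auctioning:
for cost in [5.0, 15.0, 25.0, 35.0]:
    assert abates(cost, price, True) == abates(cost, price, False)
```

The distributional outcomes differ enormously (a free allowance is a lump-sum transfer), but the marginal abatement decision, and hence aggregate cost, does not.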

Manifest political pressures lead to different initial allocations of allowances, which affect distribution, but not environmental effectiveness, and not cost-effectiveness.  This means that ordinary political pressures need not get in the way of developing and implementing a scientifically sound, economically rational, and politically pragmatic policy.  Contrast this with what would happen if political pressures were brought to bear on a carbon tax proposal, for example.  Here the result would most likely be exemptions of sectors and firms, which reduce environmental effectiveness and drive up costs (as some low-cost emission reduction opportunities are left off the table).  Furthermore, the carbon tax example is the norm, not the exception:  across the board, political pressures often reduce the effectiveness and increase the cost of well-intentioned public policies.  Cap-and-trade provides natural protection from this.  Distributional battles over the allowance allocation in a cap-and-trade system neither raise the overall cost of the program nor affect its environmental impacts.

In fact, the political process of states, districts, sectors, firms, and interest groups fighting for their share of the pie (free allowance allocations) serves as the mechanism whereby a political constituency in support of the system is developed, but without detrimental effects to the system’s environmental or economic performance.  That’s the good news, and it should never be forgotten.

But, depending upon the specific allocation mechanisms employed, there are several ways that the choice to freely distribute allowances can affect a system’s cost.  Here’s where the “caveats” and “problems” come in.

First, auction revenue may be used in ways that reduce the costs of the existing tax system or fund other socially beneficial policies.  Free allocations to the private sector forego such opportunities.  Below I will estimate the actual share of allowance value that accrues to the private sector.

Second, some proposals to freely allocate allowances to electric utilities may affect electricity prices, and thereby affect the extent to which reduced electricity demand contributes to limiting emissions cost-effectively.  Waxman-Markey allocates allowances to local distribution companies, which are subject to cost-of-service regulation even in regions with restructured wholesale electricity markets.  So, electricity prices would likely be affected by these allocations under existing state regulatory regimes.  The Waxman-Markey legislation seeks to address this problem by specifying that the economic value of the allowances given to electricity and natural gas local distribution companies should be passed on to consumers through lump-sum rebates, not through a reduction in electricity rates, thereby compensating consumers for increases in electricity prices, but without reducing incentives for energy conservation.

Third, and of most concern in the context of the Waxman-Markey legislation, “output-based updating allocations” provide perverse incentives and drive up costs of achieving a cap.  This merits some explanation.  If allowances are freely allocated, the allocation should be on the basis of some historical measures, such as output or emissions in a (previous) base year, not on the basis of measures which firms can affect, such as output or emissions in the current year.  Updating allocations, which involve periodically adjusting allocations over time to reflect changes in firms’ operations, contrast with this.

An output-based updating allocation ties the quantity of allowances that a firm receives to its output (production).  Such an allocation is essentially a production subsidy.  This distorts firms’ pricing and production decisions in ways that can introduce unintended consequences and may significantly increase the cost of meeting an emissions target.  Updating therefore has the potential to create perverse, undesirable incentives.
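
A stylized example (all numbers hypothetical) shows the production-subsidy effect: tying free allowances to current output lowers the firm's effective marginal cost of production, pushing output and emissions above what a lump-sum or auctioned allocation would yield.

```python
# Hypothetical price-taking firm with rising marginal cost MC(q) = c*q
# and one ton of CO2 per unit of output.
P = 50.0      # output price, $/unit
p = 20.0      # allowance price, $/ton
c = 1.0       # slope of marginal cost
emis = 1.0    # tons CO2 per unit of output

# Lump-sum (or auctioned) allocation: the firm equates price with
# marginal production cost plus the allowance cost of its emissions.
#   P = c*q + p*emis   =>   q = (P - p*emis) / c
q_lumpsum = (P - p * emis) / c            # 30.0

# Output-based updating allocation: each unit of output earns `a` free
# allowances, acting like a production subsidy of p*a per unit.
#   P + p*a = c*q + p*emis
a = 0.8
q_updating = (P - p * emis + p * a) / c   # 46.0 -- output is distorted upward
```

The cap still binds in aggregate, but because the subsidized sector produces (and emits) more than is efficient, other sectors must abate more, raising the total cost of meeting the target.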

In Waxman-Markey, updating allocations are used for specific sectors with high CO2 emissions intensity and unusual sensitivity to international competition, in an effort to preserve international competitiveness and reduce emissions leakage.  It’s an open question whether this approach is superior to an import allowance requirement, whereby imports of a small set of specific commodities must carry with them CO2 allowances.  The problem with import allowance requirements is that they can damage international trade relations.  The only real solution to the competitiveness issue is to bring non-participating countries within an international climate regime in meaningful ways.  (On this, please see the work of the Harvard Project on International Climate Agreements.)

Also, output-based allocations are used in Waxman-Markey for merchant coal generators, thereby discouraging reductions in coal-fired electricity generation, another significant and costly distortion.

Now, let’s go back to the hand-wringing in the press and blogosphere about the so-called massive political “give-away” of allowances.  Perhaps unintentionally, there has been some misleading press coverage, suggesting that up to 75% or 80% of the allowances are given away to private industry as a windfall over the life of the program, 2012-2050 (in contrast with the 100% auction originally favored by President Obama).

Given the nature of the allowance allocation in the Waxman-Markey legislation, the best way to assess its implications is not as “free allocation” versus “auction,” but rather in terms of who is the ultimate beneficiary of each element of the allocation and auction, that is, how the value of the allowances is allocated.  On closer inspection, it turns out that many of the elements of the apparently free allocation accrue to consumers and public purposes, not private industry.

First of all, let’s look at the elements that will accrue to consumers and public purposes.  Next to each allocation element is its respective share of allowances over the period 2012-2050 (measured as a share of the cap, after the removal – that is, the sale – of allowances to private industry from a “strategic reserve,” which functions as a cost-containment measure):

a.  Electricity and natural gas local distribution companies (22.2%), minus the share (6%) that benefits industry as consumers of electricity (note:  there is a consequent 3% reduction in the allocation to energy-intensive, trade-exposed industries, below, which is then dedicated to broad-based consumer rebates), 22.2 – 6 = 16.2%

b.  Home heating oil/propane, 0.9%

c.  Protection for low- and moderate-income households, 15.0%

d.  Worker assistance and job training, 0.8%

e.  States for renewable energy, efficiency, and building codes, 5.8%

f.   Clean energy innovation centers, 1.0%

g.  International deforestation, clean technology, and adaptation, 8.7%

h.  Domestic adaptation, 5.0%

The following elements will accrue to private industry, again with average (2012-2050) shares of allowances:

i.   Merchant coal generators, 3.0%

j.   Energy-intensive, trade-exposed industries (minus the reduction in allocation due to EITE benefits from the LDC allocation above), 8.0% – 3% = 5%

k.  Carbon-capture and storage incentives, 4.1%

l.   Clean vehicle technology standards, 1.0%

m. Oil refiners, 1.0%

n.  Net benefits to industry as consumers of lower-priced electricity from allocation to LDCs, 6.0%

The split over the entire period from 2012 to 2050 is 53.4% for consumers and public purposes, and 20.1% for private industry.  This 20% is drastically different from the suggestions that 70%, 80%, or more of the allowances will be given freely to private industry in a “massive corporate give-away.”

All categories – (a) through (n), above – sum to 73.5% of the total quantity of allowances over the period 2012-2050.  The remaining allowances — 26.5% over 2012 to 2050 — are scheduled in Waxman-Markey to be used almost entirely for consumer rebates, with the share of available allowances for this purpose rising from approximately 10% in 2025 to more than 50% by 2050.  Thus, the totals become 79.9% for consumers and public purposes versus 20.1% for private industry, or approximately 80% versus 20% — the opposite of the “80% free allowance corporate give-away” featured in many press and blogosphere accounts.  Moreover, because some of the allocations to private industry are – for better or for worse – conditional on recipients undertaking specific costly investments, such as investments in carbon capture and storage, part of the 20% free allocation to private industry should not be viewed as a windfall.
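
For readers who want to check the arithmetic, the shares listed above can be tallied directly:

```python
# Average 2012-2050 shares of allowance value (%), items (a)-(h) and (i)-(n).
consumers_public = {
    "electricity/gas LDCs, net of industry benefit": 16.2,
    "home heating oil/propane": 0.9,
    "low- and moderate-income households": 15.0,
    "worker assistance and job training": 0.8,
    "states: renewables, efficiency, building codes": 5.8,
    "clean energy innovation centers": 1.0,
    "international deforestation, technology, adaptation": 8.7,
    "domestic adaptation": 5.0,
}
private_industry = {
    "merchant coal generators": 3.0,
    "energy-intensive, trade-exposed industries, net": 5.0,
    "carbon capture and storage incentives": 4.1,
    "clean vehicle technology standards": 1.0,
    "oil refiners": 1.0,
    "industry as consumers of lower-priced electricity": 6.0,
}

consumer_share = sum(consumers_public.values())   # 53.4
industry_share = sum(private_industry.values())   # 20.1
allocated = consumer_share + industry_share       # 73.5
remaining = 100.0 - allocated                     # 26.5, almost all consumer rebates
consumer_total = consumer_share + remaining       # 79.9 -- the "80" in 80/20
```

Assigning the unlisted 26.5% remainder to consumer rebates reproduces the roughly 80/20 split discussed in the text.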

Speaking of the conditional allocations, I should also note that some observers (who are skeptical about government programs) may reasonably question some of the dedicated public purposes of the allowance distribution, but such questioning is equivalent to questioning dedicated uses of auction revenues.  The fundamental reality remains:  the appropriate characterization of the Waxman-Markey allocation is that 80% of the value of allowances go to consumers and public purposes, and 20% to private industry.

Finally, it should be noted that this 80-20 split is roughly consistent with empirical economic analyses of the share that would be required – on average — to fully compensate (but no more) private industry for equity losses due to the policy’s implementation.  In a series of analyses that considered the share of allowances that would be required in perpetuity for full compensation, Bovenberg and Goulder (2003) found that 13 percent would be sufficient for compensation of the fossil fuel extraction sectors, and Smith, Ross, and Montgomery (2002) found that 21 percent would be needed to compensate primary energy producers and electricity generators.

In my work for the Hamilton Project in 2007, I recommended beginning with a 50-50 auction-free-allocation split, moving to 100% auction over 25 years, because that time-path of numerical division between the share of allowances that is freely allocated to regulated firms and the share that is auctioned is equivalent (in terms of present discounted value) to perpetual allocations of 15 percent, 19 percent, and 22 percent, at real interest rates of 3, 4, and 5 percent, respectively.  My recommended allocation was designed to be consistent with the principle of targeting free allocations to burdened sectors in proportion to their relative burdens, while being politically pragmatic with more generous allocations in the early years of the program.
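
That present-value equivalence is easy to verify.  Assuming, as a stylization of the phase-out path, that the free-allocation share starts at 50 percent and declines linearly to zero over 25 years, the discounted value of that path matches a perpetual allocation of roughly 15, 19, or 22 percent at real interest rates of 3, 4, or 5 percent:

```python
def pv_declining(start=0.5, years=25, r=0.05):
    """Present value of a free-allocation share that starts at `start`
    and falls linearly to zero over `years` (share in year t = start*(1 - t/years))."""
    return sum(start * (1 - t / years) / (1 + r) ** t for t in range(years))

def equivalent_perpetual_share(r):
    """Perpetual share x with the same present value: x*(1 + r)/r = PV."""
    return pv_declining(r=r) * r / (1 + r)

# Approximately 0.15, 0.19, and 0.22 at r = 3%, 4%, and 5%.
shares = {r: equivalent_perpetual_share(r) for r in (0.03, 0.04, 0.05)}
```

The exact phase-out schedule in the original proposal may differ in detail, but this linear stylization reproduces the 15/19/22 percent figures to within a fraction of a percentage point.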

So, the Waxman-Markey 80/20 allowance split turns out to be consistent – on average, i.e., economy-wide – with independent economic analysis of the share that would be required to fully compensate (but no more) the private sector for equity losses due to the imposition of the cap, and consistent with my Hamilton Project recommendation of a 50/50 split phased out to 100% auction over 25 years.

Going forward, many observers and participants in the policy process may continue to question the wisdom of some elements of the Waxman-Markey allowance allocation.  There’s nothing wrong with that.

But let’s be clear that, first, for the most part, the allocation of allowances affects neither the environmental performance of the cap-and-trade system nor its aggregate social cost.

Second, questioning should continue about the output-based allocation elements, because of the perverse incentives they put in place.

Third, we should be honest that the legislation, for all its flaws, is by no means the “massive corporate give-away” that it has been labeled.  On the contrary, 80% of the value of allowances accrue to consumers and public purposes, and some 20% accrue to covered, private industry.  This split is roughly consistent with the recommendations of independent economic research.

Fourth and finally, it should not be forgotten that the much-lamented deal-making that took place in the House committee last week for shares of the allowances for various purposes was a good example of the useful, important, and fundamentally benign mechanism through which a cap-and-trade system provides the means for a political constituency of support and action to be assembled (without reducing the policy’s effectiveness or driving up its cost).

Although there has surely been some insightful press coverage and intelligent public debate (including in the blogosphere) about the pros and cons of cap-and-trade, the Waxman-Markey legislation, and many of its design elements, it is remarkable (and unfortunate) how misleading so much of the coverage has been of the issues and the numbers surrounding the proposed allowance allocation.
