Three Pillars of a New Climate Pact

The United Nations climate change summit on Tuesday, September 22nd, aims to build momentum for the 15th Conference of the Parties to the UN Framework Convention on Climate Change in Copenhagen in December, where nations will continue negotiating a successor to the 1997 Kyoto Protocol, whose first commitment period expires in 2012.  Later this week, G20 leaders will meet in Pittsburgh, Pennsylvania, where international climate policy will be high on the agenda.

In the midst of this, Professor Sheila Olmstead of Yale University and I wrote an op-ed that appeared in The Boston Globe on Sunday, September 20th.  (See the original here, with the artwork; for a more detailed description of our proposal, see our discussion paper for the Harvard Project on International Climate Agreements.)

In the op-ed, we argued that any successful successor agreement must contain three essential elements: meaningful involvement by a broad set of key industrialized and developing nations; an emphasis on an extended time path of emissions targets; and inclusion of policy approaches that work through the market, rather than against it.

Consider the need for broad participation. Industrialized countries have emitted most of the stock of man-made carbon dioxide in our atmosphere, so shouldn’t they reduce emissions before developing countries are asked to contribute? While this seems to make sense, here are four reasons why the new climate agreement must engage all major emitting countries – both industrialized and developing.

First, emissions from developing countries are significant and growing rapidly. China surpassed the United States as the world’s largest CO2 emitter in 2006, and developing countries may account for more than half of global emissions within the next decade. Second, developing countries provide the best opportunities for low-cost emissions reduction; their participation could dramatically reduce total costs. Third, the United States and several other industrialized countries may not commit to significant emissions reductions without developing country participation. Fourth, if developing countries are excluded, up to one-third of carbon emissions reductions by participating countries may migrate to non-participating economies through international trade, reducing environmental gains and pushing developing nations onto more carbon-intensive growth paths (so-called “carbon leakage”).

How can developing countries participate in an international effort to reduce emissions without incurring costs that derail their economic development? Their emissions targets could start at business-as-usual levels, becoming more stringent over time as countries become wealthier. If such “growth targets” were combined with an international emission trading program, developing countries could fully participate without incurring prohibitive costs (or even any costs in the short term).  (For a very insightful analysis of such growth targets, please see Harvard Professor Jeffrey Frankel’s discussion paper for the Harvard Project on International Climate Agreements.)
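To make the arithmetic of growth targets concrete, here is a minimal numerical sketch (my illustration, not from the op-ed or the discussion papers), with entirely hypothetical marginal abatement costs and a hypothetical world permit price. It shows why a cap set at business-as-usual levels, combined with international trading, can leave a developing country better off rather than worse.

```python
# Illustrative sketch: why a "growth target" set at business-as-usual (BAU)
# leaves a developing country no worse off under international emissions
# trading. All numbers are hypothetical.

def abatement_cost(q, slope):
    """Total cost of abating q tons when marginal cost rises as slope * q."""
    return 0.5 * slope * q**2

# Hypothetical parameters
bau_emissions = 100.0   # country's BAU emissions (tons)
target = bau_emissions  # growth target: cap set at BAU, so no forced cuts
mac_slope = 0.5         # low marginal abatement cost (developing country)
world_price = 20.0      # world permit price ($/ton)

# With trading, the country abates until marginal cost equals the permit
# price: slope * q = price  =>  q = price / slope.
q_abated = world_price / mac_slope
revenue = world_price * q_abated            # permits freed up and sold
cost = abatement_cost(q_abated, mac_slope)  # cost of the abatement itself

print(f"Abatement: {q_abated:.0f} tons")
print(f"Permit revenue: ${revenue:.0f}, abatement cost: ${cost:.0f}")
print(f"Net gain from participating: ${revenue - cost:.0f}")  # >= 0 by construction
```

Because the cap binds only at BAU, the country sells reductions it makes voluntarily, so its net position from participating is non-negative by construction.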

The second pillar of a successful post-2012 climate policy is an emphasis on the long run. Greenhouse gases remain in the atmosphere for decades to centuries, and major technological change is needed to bring down the costs of reducing CO2 emissions. The economically efficient solution will involve firm but moderate short-term targets to avoid rendering large parts of the capital stock prematurely obsolete, and flexible but more stringent long-term targets.

Third, a post-2012 global climate policy must work through the market rather than against it. To keep costs down in the short term and drive them even lower in the long term through technological change, market-based policy instruments must be embraced as the chief means of reducing emissions. One such instrument, cap-and-trade, is emerging as the preferred approach among industrialized countries for reducing carbon emissions.

Under cap-and-trade, sources with low control costs can take on added reductions, allowing them to sell their excess permits to sources with high control costs. The European Union’s Emission Trading Scheme, established to help meet the EU’s commitments under the Kyoto Protocol, is the world’s largest cap-and-trade system. In June, the US federal government took a significant step toward establishing a national cap-and-trade policy to reduce CO2 emissions, with the passage in the House of Representatives of the American Clean Energy and Security Act (about which I have written in many previous posts at this blog). Other industrialized countries, including Australia, Canada, Japan, and New Zealand, are instituting or planning national CO2 cap-and-trade systems.
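The cost-saving logic is easy to see in a stylized two-source example. The sketch below uses hypothetical linear marginal abatement costs (my illustration, not drawn from any study cited here) to compare a uniform mandated cut with the trading outcome.

```python
# Illustrative sketch of the gains-from-trade logic behind cap-and-trade.
# Two sources must jointly cut 60 tons; marginal abatement costs (MACs)
# rise linearly with the slopes below. All numbers are hypothetical.

low_slope, high_slope = 1.0, 4.0   # $/ton per ton abated
required_total = 60.0              # tons of abatement the cap requires

def cost(q, slope):
    # Total abatement cost when MAC = slope * q
    return 0.5 * slope * q**2

# Uniform (command-and-control) split: each source abates 30 tons.
uniform = cost(30, low_slope) + cost(30, high_slope)

# Trading equalizes MACs: low_slope * q_low = high_slope * q_high,
# with q_low + q_high = 60  =>  q_low = 48, q_high = 12 here.
q_low = required_total * high_slope / (low_slope + high_slope)
q_high = required_total - q_low
trading = cost(q_low, low_slope) + cost(q_high, high_slope)

print(f"Uniform split cost: ${uniform:.0f}")   # $2250
print(f"With trading:       ${trading:.0f}")   # $1440
print(f"Savings from trade: ${uniform - trading:.0f}")
```

The same total abatement is achieved either way; trading simply reallocates it toward the cheaper source until marginal costs are equal.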

Linking such cap-and-trade systems under a new international climate treaty would bring cost savings from increasing the market’s scope, greater liquidity, reduced price volatility, lessened market power, and reduced carbon leakage. Cap-and-trade systems can be linked directly, which requires harmonization, or indirectly by linking with a common emissions-reduction credit system; indeed, this is what appears to be emerging even before a new agreement is forged. Kyoto’s Clean Development Mechanism allows parties in wealthy countries to purchase emissions-reduction credits in developing countries by investing in emissions-reduction projects. These credits can be used to meet emissions commitments within the EU-ETS, and other systems are likely to accept them as well.
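A similarly stylized sketch shows what direct linkage does to prices: two systems with different pre-link permit prices converge to a single common price once permits can flow between them. Again, the parameters are hypothetical, chosen only to make the convergence visible.

```python
# Illustrative sketch of direct linkage between two cap-and-trade systems.
# Each system has a linear aggregate MAC; linking yields one common permit
# price between the two pre-link (autarky) prices. Numbers are hypothetical.

# (abatement required by the cap, aggregate MAC slope) for systems A and B
abate_a, slope_a = 50.0, 2.0   # autarky price = slope_a * abate_a = $100
abate_b, slope_b = 50.0, 0.5   # autarky price = slope_b * abate_b = $25

price_a, price_b = slope_a * abate_a, slope_b * abate_b

# Linked: total abatement is unchanged, but it is reallocated so MACs equalize.
total = abate_a + abate_b
q_a = total * slope_b / (slope_a + slope_b)  # A abates less, buys permits
q_b = total - q_a                            # B abates more, sells permits
linked_price = slope_a * q_a                 # = slope_b * q_b

print(f"Autarky prices: A=${price_a:.0f}, B=${price_b:.0f}")
print(f"Linked common price: ${linked_price:.0f}")  # falls between the two
```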

Countries meeting in New York and Pittsburgh this week, and in Copenhagen in December, should consider these three essential elements as they negotiate a new climate agreement. A new international climate agreement missing any of these three pillars may be too costly, and provide too little benefit, to represent a meaningful attempt to address the threat of global climate change.


Too Good to be True?

Global climate change is a serious environmental threat, and sound public policies are needed to address it effectively and sensibly.

There is now significant interest and activity within both the U.S. Administration and the U.S. Congress to develop a meaningful national climate policy.  (If you’re interested, please see some of my previous posts:  “Opportunity for a Defining Moment” (February 6, 2009); “The Wonderful Politics of Cap-and-Trade:  A Closer Look at Waxman-Markey” (May 27, 2009); “Worried About International Competitiveness?  Another Look at the Waxman-Markey Cap-and-Trade Proposal” (June 18, 2009); and “National Climate Change Policy:  A Quick Look Back at Waxman-Markey and the Road Ahead” (June 29, 2009).  For a more detailed account, see my Hamilton Project paper, A U.S. Cap-and-Trade System to Address Global Climate Change.)

And as we move toward the international negotiations to take place in December of this year in Copenhagen, it is important to keep in mind the global commons nature of the problem, and hence the necessity of designing and implementing an international policy architecture that is scientifically sound, economically rational, and politically pragmatic.

Back in the United States, with domestic action delayed in the Senate, several states and regions have moved ahead with their own policies and plans.  Key among these is California’s Global Warming Solutions Act of 2006, intended to return the state’s greenhouse gas (GHG) emissions in 2020 to their 1990 level.  In 2006, three studies were released indicating that California could meet its 2020 target at no net economic cost.  That is not a typographical error.  The studies found not simply that the costs would be low, but that the costs would be zero, or even negative!  That is, the studies found that California’s ambitious target can be achieved through measures whose direct costs would be outweighed by the offsetting savings they create, making them economically beneficial even without considering the emission reductions they may achieve.  Not just a free lunch, but a lunch we are paid to eat!

Given the substantial emission reductions that will be required to meet California’s 2020 target, these findings are – to put it mildly – surprising, and they differ dramatically from the vast majority of economic analyses of the cost of reducing GHG emissions.  As a result, I was asked by the Electric Power Research Institute – along with my colleagues, Judson Jaffe and Todd Schatzki of Analysis Group – to evaluate the three California studies.

In a report titled “Too Good To Be True?  An Examination of Three Economic Assessments of California Climate Change Policy,” we found that although some limited opportunities may exist for no-cost emission reductions, the studies substantially underestimated the cost of meeting California’s 2020 target – by omitting important components of the costs of emission reduction efforts, and by overestimating the offsetting savings that some of those efforts yield through improved energy efficiency.  In some cases, the studies focused on the costs of particular actions to reduce emissions, but failed to consider the effectiveness and costs of the policies that would be necessary to bring about those actions.  Just a few of the flaws we identified lead to underestimation of annual costs on the order of billions of dollars.  Sadly, the studies therefore did not and do not offer reliable estimates of the cost of meeting California’s 2020 target.
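The mechanics of such underestimation can be shown in back-of-the-envelope form. The numbers below are hypothetical placeholders (the report itself contains the actual figures); the point is only that omitting policy and implementation costs while overstating energy savings can turn a positive net cost into an apparent free lunch.

```python
# Illustrative arithmetic (hypothetical numbers, not from the report) of how
# a "no net cost" finding can arise: count direct technology costs against
# energy savings, but omit implementation costs and overstate the savings.

direct_cost = 6.0        # $B/yr: equipment and capital for the measures
claimed_savings = 8.0    # $B/yr: energy savings as optimistically estimated
realistic_savings = 5.0  # $B/yr: savings after correcting those assumptions
omitted_costs = 3.0      # $B/yr: program, transaction, and policy costs left out

study_net = direct_cost - claimed_savings
corrected_net = direct_cost + omitted_costs - realistic_savings

print(f"Study's net cost:   {study_net:+.1f} $B/yr")     # negative: a 'free lunch'
print(f"Corrected net cost: {corrected_net:+.1f} $B/yr")  # positive: a real cost
```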

This episode is a reminder of a period when similar studies were performed by the U.S. Department of Energy at the time of the Kyoto Protocol negotiations.  Like the California studies, the DOE (Interlaboratory Working Group) studies of the late 1990s suggested that substantial emission reductions could be achieved at no cost.  Those studies were deeply flawed, which is what led to their faulty conclusions.  I had thought that such arguments about massive “free lunches” in the energy efficiency and climate domain had long since been laid to rest.  The debates in California (and some of the rhetoric in Washington) prove otherwise.

While the Global Warming Solutions Act of 2006 sets an emissions target, critical policy design decisions remain to be made that will fundamentally affect the cost of the policy.  For example, policymakers must determine the emission sources that will be regulated to meet those targets, and the policy instruments that will be employed.  The California studies do not directly address the cost implications of these and other policy design decisions, and their overly optimistic findings may leave policymakers with an inadequate appreciation of the stakes associated with the decisions that lie ahead.

On the positive side, a careful evaluation of the California studies highlights some important policy design lessons that apply regardless of the extent to which no-cost emission reduction opportunities really exist.  Policies should be designed to account for uncertainty regarding emission reduction costs, much of which will not be resolved before policies must be enacted.  Also, consideration of the market failures that lead to excessive GHG emissions makes clear that to reduce emissions cost-effectively, policymakers should employ a market-based policy (such as cap-and-trade) as the core policy instrument.

The fact that the three California studies so egregiously underestimated the costs of achieving the goals of the Global Warming Solutions Act should not be taken as indicating that the Act itself is necessarily without merit.  As I have discussed in previous posts, that judgment must rest – from an economic perspective – on an honest and rigorous comparison of the Act’s real benefits and real costs.


Policies Can Work in Strange Ways

Whether the policy domain is global climate change or local hazardous waste, it’s exceptionally important to understand the interaction between public policies and technological change in order to assess the effects of laws and regulations on environmental performance.  Several years ago, my colleagues – Professor Lori Bennear of Duke University and Professor Nolan Miller of the University of Illinois – joined me in examining the effects of regulation on technological change in chlorine manufacturing, focusing on the diffusion of membrane-cell technology, widely viewed as environmentally superior to both mercury-cell and diaphragm-cell technologies.  Our results were both interesting and surprising, and they merit consideration in the context of current policy discussions and debates in Washington.

The chlorine manufacturing industry had experienced a substantial shift over time toward the membrane technology. Two different processes drove this shift: the adoption of the cleaner technology at existing plants (adoption), and the closing of facilities using diaphragm and mercury cells (exit).  In our study, we considered the effects of both direct regulation of chlorine manufacturing and regulation of downstream uses of chlorine.  (By the way, you can read a more detailed version of this story in our article in the American Economic Review Papers and Proceedings, volume 93, 2003, pp. 431-435.)

In 1972, a widely publicized incident of mercury poisoning in Minamata Bay, Japan, led the Japanese government to prohibit the use of mercury cells for chlorine production. The United States did not follow suit, but it did impose more stringent constraints on mercury-cell units during the early 1970s. Subsequently, chlorine manufacturing became subject to increased regulation under the Clean Air Act, the Clean Water Act, the Resource Conservation and Recovery Act, and the Comprehensive Environmental Response, Compensation, and Liability Act.  In addition, chlorine manufacturing became subject to public-disclosure requirements under the Toxics Release Inventory (TRI).

In addition to regulation of the chlorine manufacturing process, there was also increased environmental pressure on industries that used chlorine as an input. This indirect regulation was potentially important for choices of chlorine manufacturing technology because a large share of chlorine was and is manufactured for onsite use in the production of other products. Changes in regulations in downstream industries can have substantial impacts on the demand for chlorine and thereby affect the rate of entry and exit of chlorine production plants.

Two major indirect regulations altered the demand for chlorine. One was the Montreal Protocol, which regulated the production of ozone-depleting chemicals, such as chlorofluorocarbons (CFCs), for which chlorine is a key ingredient. The other important indirect regulation was the “Cluster Rule,” which tightened restrictions on the release of chlorinated compounds from pulp and paper mills to both water and air. This led to increased interest by the industry in non-chlorine bleaching agents, which in turn affected the economic viability of some chlorine plants.

In our econometric (statistical) analysis, we analyzed the effects of economic and regulatory factors on adoption and exit decisions by chlorine manufacturing plants from 1976 to 2001.  For our analysis of adoption, we employed data on 51 facilities, eight of which had adopted the membrane technology during the period we investigated.
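For readers curious about what such an analysis looks like in practice, here is a generic sketch of a pooled logit of plant-level adoption decisions on regulatory and economic covariates, estimated on synthetic data. Our actual specification, variables, and estimates are in the AER Papers and Proceedings article cited above; nothing below reproduces them.

```python
# Generic sketch of a discrete-choice adoption analysis: a pooled logit of
# plant-year adoption indicators on regulatory and economic covariates.
# The data are synthetic; the paper's real specification differs.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 600  # synthetic plant-year observations

# Synthetic covariates: regulatory-stringency dummy, plant size, energy price
reg = rng.integers(0, 2, n)       # 1 if subject to stringent regulation
size = rng.normal(0.0, 1.0, n)    # standardized plant capacity
energy = rng.normal(0.0, 1.0, n)  # standardized energy price

# Synthetic adoption outcome: regulation is given a true effect of zero,
# mirroring the kind of null result described below.
logit_index = -2.5 + 0.0 * reg + 0.6 * size + 0.4 * energy
prob = 1.0 / (1.0 + np.exp(-logit_index))
adopt = rng.binomial(1, prob)

X = sm.add_constant(np.column_stack([reg, size, energy]))
result = sm.Logit(adopt, X).fit(disp=False)
print(result.summary(xname=["const", "regulated", "size", "energy_price"]))
```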

We found that the effects of the regulations on the likelihood of adopting the membrane technology were not statistically significant.  Mercury plants, which were subject to stringent water, air, and hazardous-waste regulation, were no more likely to switch to the membrane technology than diaphragm plants. Similarly, TRI reporting appeared to have had no significant effect on adoption decisions.

We also examined what caused plants to exit the industry, using data on 55 facilities, 21 of which ceased operations between 1976 and 2001. Some interesting and quite striking patterns emerged. Regulations clearly explained some of the exit behavior.  In particular, indirect regulation of the end-uses of chlorine accelerated shutdowns in some industries. Facilities affected by the pulp and paper Cluster Rule and the Montreal Protocol were substantially more likely to shut down than were other facilities.

It is good to remember that the diffusion of new technology is the result of a combination of adoption at existing facilities and entry and exit of facilities with various technologies in place. In the case of chlorine manufacturing, our results indicated that regulatory factors did not have a significant effect on the decision to adopt the greener technology at existing plants. On the other hand, indirect regulation of the end-uses of chlorine accelerated facility closures significantly, and thereby increased the share of plants using the cleaner, membrane technology for chlorine production.

Environmental regulation did affect technological change, but not in the way many people assume. It did so not by encouraging the adoption of cleaner technology at existing facilities, but by reducing the demand for a product and thereby encouraging the shutdown of facilities using environmentally inferior options.  This is a legitimate way for policies to operate, although it’s one most politicians would probably prefer not to recognize.
