Unintended Consequences of Government Policies: The Depletion of America’s Wetlands

Private land-use decisions can be affected dramatically by public investments in highways, waterways, flood control, and other infrastructure.  The large movement of jobs from central cities to suburbs in the postwar United States and the ongoing destruction of Amazon rain forests both occurred alongside major public investments in supporting infrastructure.  As these examples suggest, private land-use decisions can generate major environmental and social externalities – or, in common language, unintended consequences.

In an analysis that appeared in 1990 in the American Economic Review, Adam Jaffe of Brandeis University and I demonstrated that the depletion of forested wetlands in the Mississippi Valley – an important environmental problem and a North American precursor to the loss of South American rain forests – was exacerbated by Federal water-project investments, despite explicit Federal policy to protect wetlands.

Wetland Losses

Forested wetlands are among the world’s most productive ecosystems, providing improved water quality, erosion control, floodwater storage, timber, wildlife habitat, and recreational opportunities.  Their depletion worldwide is a serious problem, and the preservation and protection of wetlands have been major Federal environmental policy goals for forty years.

From the 1950s through the mid-1970s, more than half a million acres of U.S. wetlands were lost each year.  This rate slowed greatly in subsequent years, averaging approximately 60 thousand acres lost per year in the lower 48 states from 1986 through 1997.  And by 2006, the Bush administration’s Secretary of the Interior, Gale Norton, was able to announce a net gain in wetland acreage in the United States, because restoration and creation activities surpassed wetland losses.

What Caused the Observed Losses?

What caused the huge annual losses of wetlands in those earlier years?  That question and our analysis are as germane today as in 1990, because of the lessons that have emerged about the unintended consequences of public investments.

The largest remaining wetland habitat in the continental United States is the bottomland hardwood forest of the Lower Mississippi Alluvial Plain.  Originally covering 26 million acres in seven states, this resource was reduced to about 12 million acres by 1937.  By 1990, another 7 million acres had been cleared, primarily for conversion to cropland.

The owner of a wetland parcel faces an economic decision involving revenues from the parcel in its natural state (primarily from timber), costs of conversion (the cost of clearing the land minus the resulting forestry windfall), and expected revenues from agriculture.  Agricultural revenues depend on prices, yields, and, significantly, the drainage and flooding frequency of the land.  Needless to say, landowners typically do not consider the positive environmental externalities generated by wetlands; thus conversion may occur more often than is socially optimal.
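
The landowner’s calculus described above can be sketched as a simple present-value comparison.  The function below is only an illustration of the logic; the numbers, discount rate, and functional forms are hypothetical assumptions, not figures from our study:

```python
# Illustrative sketch of a landowner's conversion decision: keep a wetland
# parcel forested, or clear it for agriculture?  All numbers are assumptions.

def present_value(annual_revenue, discount_rate, years):
    """Present value of a constant annual revenue stream."""
    return sum(annual_revenue / (1 + discount_rate) ** t
               for t in range(1, years + 1))

def convert_is_profitable(timber_revenue, clearing_cost, timber_windfall,
                          crop_revenue, flood_frequency,
                          discount_rate=0.05, years=30):
    # Expected crop revenue falls with flooding frequency
    # (a flooded year is assumed to yield nothing).
    expected_crop_revenue = crop_revenue * (1 - flood_frequency)
    pv_forest = present_value(timber_revenue, discount_rate, years)
    pv_agriculture = (present_value(expected_crop_revenue, discount_rate, years)
                      - (clearing_cost - timber_windfall))
    return pv_agriculture > pv_forest

# A flood-control project that cuts flooding frequency can flip the decision:
before_project = convert_is_profitable(timber_revenue=20, clearing_cost=400,
                                       timber_windfall=100, crop_revenue=60,
                                       flood_frequency=0.5)
after_project = convert_is_profitable(timber_revenue=20, clearing_cost=400,
                                      timber_windfall=100, crop_revenue=60,
                                      flood_frequency=0.1)
print(before_project, after_project)  # conversion unprofitable, then profitable
```

In this stylized example, the only thing the project changes is flooding frequency, yet that alone moves conversion from unprofitable to profitable – the mechanism behind the Federal projects’ effect discussed below.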

Such externalities are the motivation for Federal policy aimed at protecting wetlands, as embodied in the Clean Water Act.  Nevertheless, the Federal government engaged in major public investment activities – U.S. Army Corps of Engineers and U.S. Soil Conservation Service flood-control and drainage projects – which appeared to make agriculture more attractive and thereby encourage wetland depletion.  The significance of this effect had long been disputed by the agencies that construct and maintain these projects, which attributed the extensive conversion exclusively to rising agricultural prices.

In an econometric (statistical) analysis of data from Arkansas, Mississippi, and Louisiana, from 1935 to 1984, Jaffe and I sought to sort out the effects of Federal projects and other economic forces.  We discovered that these public investments were a very substantial factor causing conversion of wetlands to agriculture, with between 30 and 50 percent of the total wetland depletion over those five decades due to the Federal projects.

More broadly, four conclusions emerged from our analysis.  First, landowners had responded to economic incentives in their land-use decisions.  Second, construction of Federal flood-control and drainage projects caused a higher rate of conversion of forested wetlands to croplands than would have occurred in the absence of projects, leading to the depletion of an additional 1.25 million acres of wetlands.  Third, Federal projects had this impact because they made agriculture feasible on land where it had previously been infeasible, and because, on average, they improved the quality of feasible land.  Fourth, adjustment of land use to economic conditions was gradual.

Government Working at Cross-Purposes

The analysis highlighted a striking inconsistency in the Federal government’s approach to wetlands.  In articulated policies, laws, and regulations, the government recognized the positive externalities associated with some wetlands, with the George H.W. Bush administration first enunciating a “no net loss of wetlands” policy.  But public investments in wetlands – in the form of flood-control and drainage projects – had created major incentives to convert these areas to alternative uses.  The government had been working at cross-purposes.

The conclusion that major public infrastructure investments affect private land-use decisions (thereby often generating negative externalities) may not surprise some readers, but the 1990 analysis described here was the first to provide rigorous evidence, and that evidence contrasted sharply with the accepted wisdom among policy makers.

The Ongoing Importance of Induced Land-Use Changes

As wetlands, tropical rain forests, barrier islands, and other sensitive environmental areas become scarcer, their marginal social value rises.  In general, if induced land-use changes are not considered, the country will undertake more public investment programs whose true net social benefits are negative.


Three Pillars of a New Climate Pact

THE climate change summit at the United Nations on Tuesday, September 22nd, aims to build momentum for the 15th Conference of the Parties to the UN Framework Convention on Climate Change in Copenhagen in December, where nations will continue negotiations on a successor to the 1997 Kyoto Protocol, which expires in 2012.  Later this week, the G20 finance ministers will meet in Pittsburgh, Pennsylvania, where international climate policy will be high on the agenda.

In the midst of this, Professor Sheila Olmstead of Yale University and I wrote an opinion piece which appeared as an op-ed in The Boston Globe on Sunday, September 20th.  (See the original here, with the artwork; and/or for a detailed description of our proposal, see our discussion paper for the Harvard Project on International Climate Agreements.)

In the op-ed, we argued that to be successful, any feasible successor agreement must contain three essential elements: meaningful involvement by a broad set of key industrialized and developing nations; an emphasis on an extended time path of emissions targets; and inclusion of policy approaches that work through the market, rather than against it.

Consider the need for broad participation. Industrialized countries have emitted most of the stock of man-made carbon dioxide in our atmosphere, so shouldn’t they reduce emissions before developing countries are asked to contribute? While this seems to make sense, here are four reasons why the new climate agreement must engage all major emitting countries – both industrialized and developing.

First, emissions from developing countries are significant and growing rapidly. China surpassed the United States as the world’s largest CO2 emitter in 2006, and developing countries may account for more than half of global emissions within the next decade. Second, developing countries provide the best opportunities for low-cost emissions reduction; their participation could dramatically reduce total costs. Third, the United States and several other industrialized countries may not commit to significant emissions reductions without developing country participation. Fourth, if developing countries are excluded, up to one-third of carbon emissions reductions by participating countries may migrate to non-participating economies through international trade, reducing environmental gains and pushing developing nations onto more carbon-intensive growth paths (so-called “carbon leakage”).

How can developing countries participate in an international effort to reduce emissions without incurring costs that derail their economic development? Their emissions targets could start at business-as-usual levels, becoming more stringent over time as countries become wealthier. If such “growth targets” were combined with an international emission trading program, developing countries could fully participate without incurring prohibitive costs (or even any costs in the short term).  (For a very insightful analysis of such growth targets, please see Harvard Professor Jeffrey Frankel’s discussion paper for the Harvard Project on International Climate Agreements.)
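
The mechanics of a growth target can be sketched in a few lines.  The functional form, income threshold, and tightening rate below are purely illustrative assumptions – the actual proposal does not specify these parameters:

```python
# Illustrative sketch of a "growth target": a cap that equals business-as-usual
# (BAU) emissions below an income threshold and tightens as per-capita income
# rises above it.  All parameters are hypothetical assumptions.

def growth_target(bau_emissions, income_per_capita,
                  threshold=10_000.0, tightening_rate=0.00002):
    """Emissions cap under a growth target, tightening with income."""
    if income_per_capita <= threshold:
        # Below the threshold, the cap equals BAU: participation is costless.
        return bau_emissions
    # Above the threshold, the required reduction grows with income, capped at 80%.
    reduction_share = min(tightening_rate * (income_per_capita - threshold), 0.8)
    return bau_emissions * (1 - reduction_share)

# A poor country faces no binding cap; a wealthier one faces a tighter cap.
print(growth_target(100.0, 5_000.0))    # cap equals BAU
print(growth_target(100.0, 30_000.0))   # cap falls below BAU
```

The point of the design is visible in the two calls: the same formula imposes no short-term burden on a low-income country while committing it to stringency later, as its income grows.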

The second pillar of a successful post-2012 climate policy is an emphasis on the long run. Greenhouse gases remain in the atmosphere for decades to centuries, and major technological change is needed to bring down the costs of reducing CO2 emissions. The economically efficient solution will involve firm but moderate short-term targets to avoid rendering large parts of the capital stock prematurely obsolete, and flexible but more stringent long-term targets.

Third, a post-2012 global climate policy must work through the market rather than against it. To keep costs down in the short term and bring them down even lower in the long term through technological change, market-based policy instruments must be embraced as the chief means of reducing emissions. One market-based approach, known as cap-and-trade, is emerging as the preferred approach for reducing carbon emissions among industrialized countries.

Under cap-and-trade, sources with low control costs may take on added reductions, allowing them to sell excess permits to sources with high control costs. The European Union’s Emission Trading Scheme, established to help meet the EU’s commitments under the Kyoto Protocol, is the world’s largest cap-and-trade system. In June, the US federal government took a significant step toward establishing a national cap-and-trade policy to reduce CO2 emissions, with the passage in the House of Representatives of the American Clean Energy and Security Act (about which I have written in many previous posts at this blog). Other industrialized countries are instituting or planning national CO2 cap-and-trade systems, including Australia, Canada, Japan, and New Zealand.
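
The cost savings from trading can be seen in a toy two-firm example.  The firms, their marginal abatement cost curves, and the required reduction below are hypothetical assumptions chosen only to make the arithmetic transparent:

```python
# Toy illustration of gains from permit trading: abatement shifts toward the
# low-cost firm until marginal abatement costs are equalized.

def abatement_cost(mc_slope, q):
    """Total cost of abating q tons when marginal cost rises linearly:
    MC = mc_slope * q, so total cost is the area under MC = 0.5 * slope * q^2."""
    return 0.5 * mc_slope * q ** 2

# Two firms must jointly cut 100 tons; firm A abates cheaply, firm B does not.
slope_a, slope_b = 1.0, 4.0
total_reduction = 100.0

# Uniform mandate: each firm abates 50 tons regardless of cost.
cost_uniform = abatement_cost(slope_a, 50.0) + abatement_cost(slope_b, 50.0)

# Trading: marginal costs equalize (slope_a * qa == slope_b * qb) subject to
# qa + qb == total_reduction.
qa = total_reduction * slope_b / (slope_a + slope_b)  # low-cost firm abates more
qb = total_reduction - qa
cost_trading = abatement_cost(slope_a, qa) + abatement_cost(slope_b, qb)

print(cost_uniform, cost_trading)  # trading achieves the same cap more cheaply
```

The same 100-ton cap is met in both cases; trading simply reallocates who does the abating, which is why it lowers total cost without weakening the environmental outcome.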

Linking such cap-and-trade systems under a new international climate treaty would bring cost savings from increasing the market’s scope, greater liquidity, reduced price volatility, lessened market power, and reduced carbon leakage. Cap-and-trade systems can be linked directly, which requires harmonization, or indirectly by linking with a common emissions-reduction credit system; indeed, this is what appears to be emerging even before a new agreement is forged. Kyoto’s Clean Development Mechanism allows parties in wealthy countries to purchase emissions-reduction credits in developing countries by investing in emissions-reduction projects. These credits can be used to meet emissions commitments within the EU-ETS, and other systems are likely to accept them as well.

Countries meeting in New York and Pittsburgh this week, and in Copenhagen in December, should consider these three essential elements as they negotiate a new climate agreement. A new international climate agreement missing any of these three pillars may be too costly, and provide too little benefit, to represent a meaningful attempt to address the threat of global climate change.


Too Good to be True?

Global climate change is a serious environmental threat, and sound public policies are needed to address it effectively and sensibly.

There is now significant interest and activity within both the U.S. Administration and the U.S. Congress to develop a meaningful national climate policy in this country.  (If you’re interested, please see some of my previous posts:  “Opportunity for a Defining Moment” (February 6, 2009); “The Wonderful Politics of Cap-and-Trade:  A Closer Look at Waxman-Markey” (May 27, 2009); “Worried About International Competitiveness?  Another Look at the Waxman-Markey Cap-and-Trade Proposal” (June 18, 2009); “National Climate Change Policy:  A Quick Look Back at Waxman-Markey and the Road Ahead” (June 29, 2009).  For a more detailed account, see my Hamilton Project paper, A U.S. Cap-and-Trade System to Address Global Climate Change.)

And as we move toward the international negotiations to take place in December of this year in Copenhagen, it is important to keep in mind the global commons nature of the problem, and hence the necessity of designing and implementing an international policy architecture that is scientifically sound, economically rational, and politically pragmatic.

Back in the U.S., with domestic action delayed in the Senate, several states and regions in the United States have moved ahead with their own policies and plans.  Key among these is California’s Global Warming Solutions Act of 2006, intended to return the state’s greenhouse gas (GHG) emissions in 2020 to their 1990 level.  In 2006, three studies were released indicating that California can meet its 2020 target at no net economic cost.  That is not a typographical error.  The studies found not simply that the costs will be low, but that the costs will be zero, or even negative!  That is, the studies found that California’s ambitious target can be achieved through measures whose direct costs would be outweighed by offsetting savings they create, making them economically beneficial even without considering the emission reductions they may achieve.  Not just a free lunch, but a lunch we are paid to eat!

Given the substantial emission reductions that will be required to meet California’s 2020 target, these findings are – to put it mildly – surprising, and they differ dramatically from the vast majority of economic analyses of the cost of reducing GHG emissions.  As a result, I was asked by the Electric Power Research Institute – along with my colleagues, Judson Jaffe and Todd Schatzki of Analysis Group – to evaluate the three California studies.

In a report titled, “Too Good To Be True?  An Examination of Three Economic Assessments of California Climate Change Policy,” we found that although some limited opportunities may exist for no-cost emission reductions, the studies substantially underestimated the cost of meeting California’s 2020 target — by omitting important components of the costs of emission reduction efforts, and by overestimating offsetting savings some of those efforts yield through improved energy efficiency.  In some cases, the studies focused on the costs of particular actions to reduce emissions, but failed to consider the effectiveness and costs of policies that would be necessary to bring about those actions.  Just a few of the flaws we identified lead to underestimation of annual costs on the order of billions of dollars.  Sadly, the studies therefore did not and do not offer reliable estimates of the cost of meeting California’s 2020 target.

This episode is a reminder of the period when similar studies were performed by the U.S. Department of Energy at the time of the Kyoto Protocol negotiations.  Like the California studies, the DOE (Interlaboratory Work Group) studies in the late 1990s suggested that substantial emission reductions could be achieved at no cost.  Those studies were deeply flawed, which led to their faulty conclusions.  I had thought that such arguments about massive “free lunches” in the energy-efficiency and climate domain had long since been laid to rest.  The debates in California (and some of the rhetoric in Washington) prove otherwise.

While the Global Warming Solutions Act of 2006 sets an emissions target, critical policy design decisions remain to be made that will fundamentally affect the cost of the policy.  For example, policymakers must determine the emission sources that will be regulated to meet those targets, and the policy instruments that will be employed.  The California studies do not directly address the cost implications of these and other policy design decisions, and their overly optimistic findings may leave policymakers with an inadequate appreciation of the stakes associated with the decisions that lie ahead.

On the positive side, a careful evaluation of the California studies highlights some important policy design lessons that apply regardless of the extent to which no-cost emission reduction opportunities really exist.  Policies should be designed to account for uncertainty regarding emission reduction costs, much of which will not be resolved before policies must be enacted.  Also, consideration of the market failures that lead to excessive GHG emissions makes clear that to reduce emissions cost-effectively, policymakers should employ a market-based policy (such as cap-and-trade) as the core policy instrument.

The fact that the three California studies so egregiously underestimated the costs of achieving the goals of the Global Warming Solutions Act should not be taken as indicating that the Act itself is necessarily without merit.  As I have discussed in previous posts, that judgment must rest – from an economic perspective – on an honest and rigorous comparison of the Act’s real benefits and real costs.
