What Baseball Can Teach Policymakers

With the Major League Baseball season having just begun, I’m reminded of the truism that the best teams win their divisions in the regular season, but the hot teams win in the post-season playoffs.  Why the difference?  The regular season is 162 games long, but the post-season consists of just a few brief 5-game and 7-game series.  And because of the huge random element that pervades the sport, in a single game (or a short series), the best teams often lose, and the worst teams often win.

The numbers are striking, and bear repeating.  In a typical year, the best teams lose 40 percent of their games, and the worst teams win 40 percent of theirs.  In the extreme, one of the best Major League Baseball teams ever – the 1927 New York Yankees – lost 29 percent of their games; and one of the worst teams in history – the 1962 New York Mets – won 25 percent of theirs.  On any given day, anything can happen.  Uncertainty is a fundamental part of the game, and any analysis that fails to recognize this is not only incomplete, but fundamentally flawed.
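To see how much room that randomness leaves for upsets, consider a minimal sketch in Python.  It assumes, purely for illustration, that the stronger team wins each individual game with probability 0.6 – roughly a 97-win division winner facing a weak opponent:

```python
from math import comb

def series_win_prob(p: float, wins_needed: int) -> float:
    """Probability that a team winning each game with probability p
    takes a best-of series requiring `wins_needed` victories.

    Sums over the number of games the opponent wins before the
    clinching game (the negative binomial form).
    """
    return sum(
        comb(wins_needed - 1 + losses, losses) * p**wins_needed * (1 - p)**losses
        for losses in range(wins_needed)
    )

# Illustrative assumption: the stronger team wins each game with
# probability 0.6.
for games, needed in [(5, 3), (7, 4)]:
    p_upset = 1 - series_win_prob(0.6, needed)
    print(f"Best-of-{games}: weaker team wins the series {p_upset:.0%} of the time")
```

Even with that large a talent gap, the weaker team takes a best-of-five series roughly a third of the time, and a best-of-seven series nearly 30 percent of the time.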

The same is true of analyses of environmental policies.  Uncertainty is an absolutely fundamental aspect of environmental problems and the policies that are employed to address those problems.  Any analysis that fails to recognize this runs the risk not only of being incomplete, but misleading as well.  Judson Jaffe, formerly at Analysis Group, and I documented this in a study published in Regulation and Governance.

To estimate proposed regulations’ benefits and costs, analysts frequently rely on inputs that are uncertain – sometimes substantially so.  Such uncertainties in underlying inputs are propagated through analyses, leading to uncertainty in ultimate benefit and cost estimates, which constitute the core of a Regulatory Impact Analysis (RIA), required by Presidential Executive Order for all “economically significant” proposed Federal regulations.

Despite this uncertainty, the most prominently displayed results in RIAs are typically single, apparently precise point estimates of benefits, costs, and net benefits (benefits minus costs), masking uncertainties inherent in their calculation and possibly obscuring tradeoffs among competing policy options.  Historically, efforts to address uncertainty in RIAs have been very limited, but guidance set forth in the U.S. Office of Management and Budget’s (OMB) Circular A‑4 on Regulatory Analysis has the potential to enhance the information provided in RIAs regarding uncertainty in benefit and cost estimates.  Circular A‑4 requires the development of a formal quantitative assessment of uncertainty regarding a regulation’s economic impact if either annual benefits or costs are expected to reach $1 billion.

Over the years, formal quantitative uncertainty assessments – known as Monte Carlo analyses – have become common in a variety of fields, including engineering, finance, and a number of scientific disciplines, as well as in “sabermetrics” (the quantitative, especially statistical, analysis of professional baseball), but such methods have rarely been employed in RIAs.

The first step in a Monte Carlo analysis involves the development of probability distributions of uncertain inputs to an analysis.  These probability distributions reflect the implications of uncertainty regarding an input for the range of its possible values and the likelihood that each value is the true value.  Once probability distributions of inputs to a benefit‑cost analysis are established, a Monte Carlo analysis is used to simulate the probability distribution of the regulation’s net benefits by carrying out the calculation of benefits and costs thousands, or even millions, of times.  With each iteration of the calculations, new values are randomly drawn from each input’s probability distribution and used in the benefit and/or cost calculations.  Over the course of these iterations, the frequency with which any given value is drawn for a particular input is governed by that input’s probability distribution.  Importantly, any correlations among individual items in the benefit and cost calculations are taken into account.  The resulting set of net benefit estimates characterizes the complete probability distribution of net benefits.
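As a concrete illustration, here is a minimal Monte Carlo sketch in Python.  The three input distributions are entirely hypothetical, and for simplicity the inputs are drawn independently; an actual RIA application would fit the distributions to the underlying evidence and, as noted above, capture correlations among inputs:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000  # number of Monte Carlo iterations

# Hypothetical input distributions for a stylized benefit-cost analysis:
# emissions reduced (tons/yr), benefit per ton ($), and total cost ($/yr).
tons_reduced = rng.normal(loc=1e6, scale=1.5e5, size=N)                # uncertain effectiveness
benefit_per_ton = rng.lognormal(mean=np.log(4000), sigma=0.5, size=N)  # skewed damage estimates
costs = rng.triangular(left=2.0e9, mode=2.5e9, right=4.0e9, size=N)    # engineering cost range

# One draw per iteration from each input's distribution, so that values
# appear with the frequency that the distribution implies.
net_benefits = tons_reduced * benefit_per_ton - costs

# The full distribution, not just a point estimate:
print(f"mean net benefits: ${net_benefits.mean()/1e9:.2f} billion")
print(f"5th-95th percentile: ${np.percentile(net_benefits, 5)/1e9:.2f}B "
      f"to ${np.percentile(net_benefits, 95)/1e9:.2f}B")
print(f"probability net benefits < 0: {(net_benefits < 0).mean():.1%}")
```

The output is a full probability distribution of net benefits, from which one can report not only a central estimate but also percentile ranges and the probability that net benefits are negative.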

Uncertainty is inevitable in estimates of environmental regulations’ economic impacts, and assessments of the extent and nature of such uncertainty provide important information for policymakers evaluating proposed regulations.  Such information offers a context for interpreting benefit and cost estimates, and can lead to point estimates of regulations’ benefits and costs that differ from what would be produced by purely deterministic analyses (which ignore uncertainty).  In addition, these assessments can help establish priorities for research.

Due to the complexity of interactions among uncertainties in inputs to RIAs, an accurate assessment of uncertainty can be gained only through the use of formal quantitative methods, such as Monte Carlo analysis.  Although these methods can offer significant insights, they require only limited additional effort relative to that already expended on RIAs.  Much of the data required for these analyses are already obtained by EPA in preparing RIAs, and widely available software allows Monte Carlo analysis to be executed in common spreadsheet programs on a desktop computer.  In a specific application in the Regulation and Governance study, Jaffe and I demonstrate the use and advantages of formal quantitative uncertainty analysis in a review of EPA’s 2004 RIA for its Nonroad Diesel Rule.

Formal quantitative assessments of uncertainty can mark a truly significant step forward in enhancing regulatory analysis under Presidential Executive Orders.  They have the potential to improve substantially our understanding of the impact of environmental regulations, and thereby to lead to more informed policymaking.


The Making of a Conventional Wisdom

Despite the potential cost-effectiveness of market-based policy instruments, such as pollution taxes and tradable permits, conventional approaches – including design standards and uniform performance standards – have been the mainstay of U.S. environmental policy since before the first Earth Day in 1970.  Gradually, however, the political process has become more receptive to innovative, market-based strategies.  In the 1980s, tradable-permit systems were used to accomplish the phasedown of lead in gasoline (at a savings of about $250 million per year) and to facilitate the phaseout of ozone-depleting chlorofluorocarbons (CFCs); and in the 1990s, tradable permits were used to implement stricter air pollution controls in the Los Angeles metropolitan region, and – most important of all – a cap-and-trade system was adopted to reduce sulfur dioxide (SO2) emissions and consequent acid rain by 50 percent under the Clean Air Act amendments of 1990 (saving about $1 billion per year in abatement costs).  Most recently, cap-and-trade systems have emerged as the preferred national and regional policy instrument to address carbon dioxide (CO2) emissions linked with global climate change (see my previous posts of February 6th on an “Opportunity for a Defining Moment” and March 7th on “Green Jobs”).

Why has there been a relatively recent rise in the use of market-based approaches?  For academics like me, it would be gratifying to believe that increased understanding of market-based instruments has played a large part in fostering their increased political acceptance, but how important has this really been?  In 1981, my Harvard colleague, political scientist Steven Kelman, surveyed Congressional staff members and found that support for and opposition to market-based environmental policy instruments were based largely on ideological grounds: Republicans who supported the concept of economic-incentive approaches offered as a reason the assertion that “the free market works” or that “less government intervention” is desirable, without any real awareness or understanding of the economic arguments for market-based programs.  Likewise, Democratic opposition was based largely upon ideological factors, with little or no apparent understanding of the real advantages or disadvantages of the various instruments.  What would happen if we were to replicate Kelman’s survey today?  My refutable hypothesis is that we would find increased support from Republicans, greatly increased support from Democrats, but insufficient improvements in understanding to explain these changes.  So what else has mattered?

First, one factor has surely been increased pollution control costs, which have led to greater demand for cost-effective instruments.  By the late 1980s, even political liberals and environmentalists were beginning to question whether conventional regulations could produce further gains in environmental quality.  During the previous twenty years, pollution abatement costs had continually increased, as stricter standards moved the private sector up the marginal abatement-cost curve.  By 1990, U.S. pollution control costs had reached $125 billion annually, nearly a 300 percent increase in real terms from 1972 levels.

Second, a factor that became important in the late 1980s was strong and vocal support from some segments of the environmental community.  By supporting tradable permits for acid rain control, the Environmental Defense Fund seized a market niche in the environmental movement, and successfully distinguished itself from other groups.  Related to this, a third factor was that the SO2 allowance trading program, the leaded gasoline phasedown, and the CFC phaseout were all designed to reduce emissions, not simply to reallocate them cost-effectively among sources.  Market-based instruments are most likely to be politically acceptable when proposed to achieve environmental improvements that would not otherwise be achieved.

Fourth, deliberations regarding the SO2 allowance system, the lead system, and CFC trading differed from previous attempts by economists to influence environmental policy in an important way:  the separation of ends from means, that is, the separation of consideration of goals and targets from the policy instruments used to achieve those targets.  By accepting – implicitly or otherwise – the politically identified (and potentially inefficient) goal, such as the ten-million-ton reduction of SO2 emissions, economists were able to focus successfully on the importance of adopting a cost-effective means of achieving that goal.

Fifth, acid rain was an unregulated problem until the SO2 allowance trading program of 1990; and the same can be said for leaded gasoline and CFCs.  Hence, there were no existing constituencies – in the private sector, the environmental advocacy community, or government – for the status quo approach, because there was no status quo approach.  We should be more optimistic about introducing market-based instruments for “new” problems, such as global climate change, than for existing, highly regulated problems, such as abandoned hazardous waste sites.

Sixth, by the late 1980s, there had already been a perceptible shift of the political center toward a more favorable view of using markets to solve social problems.  The George H. W. Bush Administration, which proposed the SO2 allowance trading program and then championed it through an initially resistant Democratic Congress, was (at least in its first two years) “moderate Republican”; and phrases such as “fiscally responsible environmental protection” and “harnessing market forces to protect the environment” do have the sound of quintessential moderate Republican issues.  But, beyond this, support for market-oriented solutions to various social problems had been increasing across the political spectrum for the previous fifteen years, as evidenced by deliberations on deregulation of the airline, telecommunications, trucking, railroad, and banking industries.  Indeed, by the mid-1990s, the concept (or at least the phrase) “market-based environmental policy” had evolved from being politically problematic to politically attractive.

Seventh and finally, the adoption of the SO2 allowance trading program for acid rain control – like any major innovation in public policy – can partly be attributed to a healthy dose of chance that placed specific persons in key positions, in this case at the White House, EPA, the Congress, and environmental organizations.  The result was what remains the golden era in the United States for market-based environmental strategies.

_____________________________________________________________________________________

If you would like to read more about the factors that have changed the political reception given to market-based environmental policy instruments over the past two decades, here are some references:

Stavins, Robert N.  “What Can We Learn from the Grand Policy Experiment? Positive and Normative Lessons from SO2 Allowance Trading.”  Journal of Economic Perspectives, volume 12, number 3, Summer 1998, pp. 69-88.

Keohane, Nathaniel O., Richard L. Revesz, and Robert N. Stavins.  “The Choice of Regulatory Instruments in Environmental Policy.”  Harvard Environmental Law Review, volume 22, number 2, 1998, pp. 313-367.

Hahn, Robert W.  “The Impact of Economics on Environmental Policy.”  Journal of Environmental Economics and Management, volume 39, 2000, pp. 375-399.

Hahn, Robert W., Sheila M. Olmstead, and Robert N. Stavins.  “Environmental Regulation During the 1990s: A Retrospective Analysis.”  Harvard Environmental Law Review, volume 27, number 2, 2003, pp. 377-415.


Moving Beyond Vintage-Differentiated Regulation

A common feature of many environmental policies in the United States is vintage-differentiated regulation (VDR), under which standards for regulated units are fixed in terms of the units’ respective dates of entry, with later vintages facing more stringent regulation.  In the most common application, often referred to as “grandfathering,” units produced prior to a specific date are exempted from a new regulation or face less stringent requirements.

As I explain in this post, an economic perspective suggests that VDRs are likely to retard turnover in the capital stock, and thereby to reduce the cost-effectiveness of regulation in the long term, compared with equivalent undifferentiated regulations.  Further, under some conditions the result can be higher levels of pollutant emissions than would occur in the absence of regulation.  Thus, economists have long argued that age-discriminatory environmental regulations retard investment, drive up the cost of environmental protection, and may even retard pollution abatement.

Why have VDRs been such a common feature of U.S. regulatory policy, despite these problems?  Among the reasons frequently given are claims that VDRs are efficient and equitable.  These are not unreasonable claims.  In the short term, it is frequently cheaper to control a given amount of pollution by adopting some technology at a new plant than by retrofitting that same or some other technology at an older, existing plant.  Hence, VDRs appear to be cost-effective, at least in the short term.  But this short-term view ignores the perverse incentive structure that such a time-differentiated standard puts in place.  By driving up the cost of abatement for new vintages of plant and technology relative to older vintages, such standards discourage investment in new plants and technologies.
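A stylized comparison in Python (with entirely hypothetical cost figures) makes the perverse incentive concrete: once the control burden imposed on new sources is large enough, the cost-minimizing choice flips from building a cleaner new plant to keeping the dirtier old one running:

```python
# Stylized plant-replacement decision under a vintage-differentiated rule.
# All figures are hypothetical annualized costs, in $ millions per year.

old_plant_cost = 60      # operating cost of the existing plant, exempt from the new standard
new_plant_cost = 50      # operating cost of a new, cleaner, more efficient plant
new_source_control = 15  # abatement cost the standard imposes on new sources only

# Without the VDR, the firm compares the plants on their own merits.
without_vdr = min(old_plant_cost, new_plant_cost)                     # 50: build the new plant

# With the VDR, only the new plant bears the control cost.
with_vdr = min(old_plant_cost, new_plant_cost + new_source_control)   # 60: keep the old plant

print(f"Cost-minimizing choice without VDR: ${without_vdr}M/yr (new, cleaner plant)")
print(f"Cost-minimizing choice with VDR:    ${with_vdr}M/yr (old, dirtier plant kept running)")
```

Here the new plant would be the cheaper choice on its own merits, but the vintage-differentiated requirement tips the decision toward continued operation of the exempt, higher-emitting plant.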

In terms of equity, it may indeed appear to be fair or equitable to avoid changing the rules for facilities that have already been built or products that have already been manufactured, and to focus instead only on new facilities and products.  But, on the other hand, the distinct “lack of a level playing field” – an essential feature of any VDR – hardly appears equitable from the perspective of those facing the more stringent component of an age-differentiated regulation.

An additional and considerably broader explanation for the prevalence of VDRs is fundamentally political.  Existing firms seek to erect entry barriers to restrict competition, and VDRs drive up the costs for firms to construct new facilities.  And environmentalists may support strict standards for new sources because they represent environmental progress, at least symbolically.  Most important, more stringent standards for new sources allow legislators to protect existing constituents and interests by placing the bulk of the pollution control burden on unbuilt factories.

Surely the most prominent example of VDRs in the environmental realm is New Source Review (NSR), a set of requirements under the Clean Air Act that date back to the 1970s.  The lawyers and engineers who wrote the law thought they could secure faster environmental progress by imposing tougher emissions standards on new power plants (and certain other emission sources) than on existing ones.  The theory was that emissions would fall as old plants were retired and replaced by new ones.  But experience over the past 25 years has shown that this approach has been both excessively costly and environmentally counterproductive.

The reason is that it has motivated companies to keep old (and dirty) plants operating, and to hold back investments in new (and cleaner) power generation technologies.  Not only has New Source Review deterred investment in newer, cleaner technologies; it has also discouraged companies from keeping power plants maintained.  Plant owners contemplating maintenance activities have had to weigh the possible loss of considerable regulatory advantage if the work crosses a murky line between upkeep and new investment.  Protracted legal wrangling has been inevitable over whether maintenance activities have crossed a threshold sufficient to justify forcing an old plant to meet new plant standards.  Such deferral of maintenance has compromised the reliability of electricity generation plants, and thereby increased the risk of outages.

Research has demonstrated that the New Source Review process has driven up costs tremendously (not just for the electricity companies, but for their customers and shareholders, that is, for all of us) and has resulted in worse environmental quality than would have occurred if firms had not faced this disincentive to invest in new, cleaner technologies.  In an article that appeared in 2006 in the Stanford Environmental Law Journal, I summarized and sought to synthesize much of the existing, relevant economic research.

The solution is a level playing field, where all electricity generators would have the same environmental requirements, whether plants are old or new.  A sound and simple approach would be to cap total pollution, and use an emissions trading system to assure that any emissions increases at one plant are balanced by offsetting reductions at another.  No matter how emissions were initially allocated across plants, the owners of existing plants and those who wished to build new ones would then face the correct incentives with respect to retirement decisions, investment decisions, and decisions regarding the use of alternative fuels and technologies to reduce pollution.
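A stylized two-plant example in Python (again with hypothetical figures) illustrates why such a trading system is cost-effective: allowances flow from the high-cost abater to the low-cost abater until marginal abatement costs are equal, achieving the same total abatement at lower total cost than a uniform standard:

```python
# Two plants with linear marginal abatement costs MC_i(a) = c_i * a
# (hypothetical slopes, in $ per ton, per ton abated).
c1, c2 = 50.0, 200.0   # plant 1 abates cheaply; plant 2 abates at high cost
A = 100.0              # total abatement the cap requires (tons)

def total_cost(a1, a2):
    # With MC = c*a, the cost of abating a tons is the area under the
    # marginal cost curve: c * a**2 / 2.
    return c1 * a1**2 / 2 + c2 * a2**2 / 2

# Uniform performance standard: each plant abates half the total.
uniform_cost = total_cost(A / 2, A / 2)

# Trading: allowances change hands until marginal costs equalize,
# c1*a1 = c2*a2, subject to a1 + a2 = A.
a1 = A * c2 / (c1 + c2)
a2 = A - a1
trading_cost = total_cost(a1, a2)

print(f"marginal costs under trading: {c1*a1:.0f} = {c2*a2:.0f} ($/ton)")
print(f"uniform standard: ${uniform_cost:,.0f}; trading: ${trading_cost:,.0f}")
```

In this example, trading meets the same 100-ton cap at roughly two-thirds the cost of requiring equal abatement from each plant, and the cost-effective outcome holds regardless of how the allowances are initially allocated.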

In this way, statutory environmental targets can be met in a truly cost-effective manner, that is, without introducing perverse incentives that discourage investment, drive up costs in the long run, and have counter-productive effects on environmental protection.

It is not only possible, but eminently reasonable, to be both a strong advocate for environmental protection and an advocate for the elimination of vintage-differentiated regulations, such as New Source Review.  That is where an economic perspective and the available evidence lead.
