Does economic analysis shortchange the future?

Decisions made today usually have impacts both now and in the future. In the environmental realm, many of the future impacts are benefits, and such future benefits — as well as costs — are typically discounted by economists in their analyses.  Why do economists do this, and does it give insufficient weight to future benefits and thus to the well-being of future generations?

This is a question my colleague, Lawrence Goulder, a professor of economics at Stanford University, and I addressed in an article in Nature.  We noted that as economists, we often encounter skepticism about discounting, especially from non-economists. Some of the skepticism seems quite valid, yet some reflects misconceptions about the nature and purposes of discounting.  In this post, I hope to clarify the concept and the practice.

It helps to begin with the use of discounting in private investments, where the rationale stems from the fact that capital is productive – money earns interest.  Consider a company trying to decide whether to invest $1 million in the purchase of a copper mine, and suppose that the most profitable strategy involves extracting the available copper 3 years from now, yielding revenues (net of extraction costs) of $1,150,000. Would investing in this mine make sense?  Assume the company has the alternative of putting the $1 million in the bank at 5 per cent annual interest. Then, on a purely financial basis, the company would do better by putting the money in the bank, as it will have $1,000,000 x (1.05)³, or $1,157,625, that is, $7,625 more than it would earn from the copper mine investment.

In the comparison above, I compounded the project's up-front cost forward to the future. It is mathematically equivalent to compare the options by discounting to the present the future revenues or benefits from the copper mine. The discounted revenue is $1,150,000 divided by (1.05)³, or $993,413, which is less than the cost of the investment ($1 million).  So the project would not earn as much as the alternative of putting the money in the bank.
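For readers who prefer to see the arithmetic spelled out, here is a minimal sketch in Python of the two equivalent comparisons, using only the figures from the example above (the helper functions are my own illustrative shorthand, not drawn from any particular library):

```python
# A minimal sketch of the copper mine example, using the figures from the text.

def future_value(amount, rate, years):
    """Compound a present amount forward at a constant annual interest rate."""
    return amount * (1 + rate) ** years

def present_value(amount, rate, years):
    """Discount a future amount back to the present at the same rate."""
    return amount / (1 + rate) ** years

cost_today = 1_000_000              # up-front cost of the mine
net_revenue_in_3_years = 1_150_000  # net revenue from extracting the copper
rate = 0.05                         # the bank's annual interest rate

# Compounding the cost forward: $1,157,625, i.e. $7,625 more than the mine yields.
print(round(future_value(cost_today, rate, 3)))

# Discounting the revenue back: about $993,413, less than the $1 million cost.
print(round(present_value(net_revenue_in_3_years, rate, 3)))
```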

Discounting translates future dollars into equivalent current dollars; it undoes the effects of compound interest. It is not aimed at accounting for inflation, as even if there were no inflation, it would still be necessary to discount future revenues to account for the fact that a dollar today translates (via compound interest) into more dollars in the future.

Can this same kind of thinking be applied to investments made by the public sector?  Since my purpose is to clarify a few key issues in the starkest terms, I will use a highly stylized example that abstracts from many of the subtleties.  Suppose that a policy, if introduced today and maintained, would avoid significant damage to the environment and human welfare 100 years from now. The ‘return on investment’ is avoided future damages to the environment and people’s well-being. Suppose that this policy costs $4 billion to implement, and that this cost is completely borne today.  It is anticipated that the benefits – avoided damages to the environment – will be worth $800 billion to people alive 100 years from now.  Should the policy be implemented?

If we adopt the economic efficiency criterion I have described in previous posts, the question becomes whether the future benefits are large enough that the winners could potentially compensate the losers and still be no worse off.  Here discounting is helpful. If, over the next 100 years, the average rate of interest on ordinary investments is 5 per cent, the gains of $800 billion to people 100 years from now are equivalent to $6.08 billion today.  Equivalently, $6.08 billion today, compounded at an annual interest rate of 5 per cent, will become $800 billion in 100 years. The project satisfies the efficiency criterion if it costs current generations less than $6.08 billion; otherwise it does not.

Since the $4 billion of up-front costs are less than $6.08 billion, the benefits to future generations are more than enough to offset the costs to current generations. Discounting serves the purpose of converting costs and benefits from various periods into equivalent dollars of some given period.  Applying a discount rate is not giving less weight to future generations’ welfare.  Rather, it is simply converting the (full) impacts that occur at different points of time into common units.
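For readers who want to check the arithmetic, here is a minimal sketch of the same efficiency test, assuming, as in the stylized example above, a constant 5 per cent interest rate over the century:

```python
# A minimal sketch of the efficiency test in the stylized policy example.
rate = 0.05
years = 100
future_benefits = 800e9    # avoided damages as valued by people 100 years from now
cost_today = 4e9           # up-front cost, borne entirely by the current generation

present_value_of_benefits = future_benefits / (1 + rate) ** years
print(present_value_of_benefits / 1e9)        # roughly 6.08 (billion dollars)

# The policy satisfies the efficiency criterion if its discounted benefits
# exceed its up-front cost.
print(present_value_of_benefits > cost_today)  # True for these figures
```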

Much skepticism about discounting and, more broadly, the use of benefit-cost analysis, is connected to uncertainties in estimating future impacts. Consider the difficulties of ascertaining, for example, the benefits that future generations would enjoy from a regulation that protects certain endangered species. Some of the gain to future generations might come in the form of pharmaceutical products derived from the protected species. Such benefits are impossible to predict. Benefits also depend on the values future generations would attach to the protected species – the enjoyment of observing them in the wild or just knowing of their existence. But how can we predict future generations’ values?  Economists and other social scientists try to gauge them through surveys and by inferring preferences from individuals’ behavior.  But these approaches are far from perfect, and at best they indicate only the values or tastes of people alive today.

The uncertainties are substantial and unavoidable, but they do not invalidate the use of discounting (or benefit-cost analysis).  They do oblige analysts, however, to assess and acknowledge those uncertainties in their policy assessments, a topic I discussed in my last post (“What Baseball Can Teach Policymakers”), and a topic to which I will return in the future.

Moving Beyond Vintage-Differentiated Regulation

A common feature of many environmental policies in the United States is vintage-differentiated regulation (VDR), under which standards for regulated units are fixed in terms of the units’ respective dates of entry, with later vintages facing more stringent regulation.  In the most common application, often referred to as “grandfathering,” units produced prior to a specific date are exempted from a new regulation or face less stringent requirements.

As I explain in this post, an economic perspective suggests that VDRs are likely to retard turnover in the capital stock, and thereby to reduce the cost-effectiveness of regulation in the long term, compared with equivalent undifferentiated regulations.  Further, under some conditions the result can be higher levels of pollutant emissions than would occur in the absence of regulation.  Thus, economists have long argued that age-discriminatory environmental regulations retard investment, drive up the cost of environmental protection, and may even retard pollution abatement.

Why have VDRs been such a common feature of U.S. regulatory policy, despite these problems?  Among the reasons frequently given are claims that VDRs are efficient and equitable.  These are not unreasonable claims.  In the short term, it is frequently cheaper to control a given amount of pollution by adopting some technology at a new plant than by retrofitting that same or some other technology at an older, existing plant.  Hence, VDRs appear to be cost-effective, at least in the short term.  But this short-term view ignores the perverse incentive structure that such a time-differentiated standard puts in place.  By driving up the cost of abatement associated with new vintages of plants and technologies relative to older vintages, VDRs discourage investment in new plants and technologies.

In terms of equity, it may indeed appear to be fair or equitable to avoid changing the rules for facilities that have already been built or products that have already been manufactured, and to focus instead only on new facilities and products.  But, on the other hand, the distinct “lack of a level playing field” – an essential feature of any VDR – hardly appears equitable from the perspective of those facing the more stringent component of an age-differentiated regulation.

An additional and considerably broader explanation for the prevalence of VDRs is fundamentally political.  Existing firms seek to erect entry barriers to restrict competition, and VDRs drive up the costs for firms to construct new facilities.  And environmentalists may support strict standards for new sources because they represent environmental progress, at least symbolically.  Most important, more stringent standards for new sources allow legislators to protect existing constituents and interests by placing the bulk of the pollution control burden on unbuilt factories.

Surely the most prominent example of VDRs in the environmental realm is New Source Review (NSR), a set of requirements under the Clean Air Act that date back to the 1970s.  The lawyers and engineers who wrote the law thought they could secure faster environmental progress by imposing tougher emissions standards on new power plants (and certain other emission sources) than on existing ones.  The theory was that emissions would fall as old plants were retired and replaced by new ones.  But experience over the past 25 years has shown that this approach has been both excessively costly and environmentally counterproductive.

The reason is that it has motivated companies to keep old (and dirty) plants operating, and to hold back investments in new (and cleaner) power generation technologies.  Not only has New Source Review deterred investment in newer, cleaner technologies; it has also discouraged companies from properly maintaining their existing plants.  Plant owners contemplating maintenance activities have had to weigh the possible loss of considerable regulatory advantage if the work crosses a murky line between upkeep and new investment.  Protracted legal wrangling has been inevitable over whether maintenance activities have crossed a threshold sufficient to justify forcing an old plant to meet new-plant standards.  Such deferral of maintenance has compromised the reliability of electricity generation plants, and thereby increased the risk of outages.

Research has demonstrated that the New Source Review process has driven up costs tremendously (not just for the electricity companies, but for their customers and shareholders, that is, for all of us) and has resulted in worse environmental quality than would have occurred if firms had not faced this disincentive to invest in new, cleaner technologies.  In an article that appeared in 2006 in the Stanford Environmental Law Journal, I summarized and sought to synthesize much of the existing, relevant economic research.

The solution is a level playing field, where all electricity generators would have the same environmental requirements, whether plants are old or new.  A sound and simple approach would be to cap total pollution, and use an emissions trading system to assure that any emissions increases at one plant are balanced by offsetting reductions at another.  No matter how emissions were initially allocated across plants, the owners of existing plants and those who wished to build new ones would then face the correct incentives with respect to retirement decisions, investment decisions, and decisions regarding the use of alternative fuels and technologies to reduce pollution.

In this way, statutory environmental targets can be met in a truly cost-effective manner, that is, without introducing perverse incentives that discourage investment, drive up costs in the long run, and have counter-productive effects on environmental protection.
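To see why a cap with trading is cost-effective regardless of how allowances are initially allocated, consider the following deliberately simplified sketch; the plants, emissions levels, and abatement costs are invented purely for illustration, and each plant is assumed to face a constant cost per ton of abatement:

```python
# A hypothetical illustration of least-cost abatement under an emissions cap.
# Whatever the initial allocation of allowances, trading shifts abatement to
# the plants that can reduce emissions most cheaply.

plants = [
    {"name": "old coal unit", "emissions": 100, "cost_per_ton": 60},
    {"name": "new gas unit",  "emissions": 80,  "cost_per_ton": 25},
    {"name": "retrofit unit", "emissions": 70,  "cost_per_ton": 40},
]

cap = 140                                                        # total allowed emissions
required_abatement = sum(p["emissions"] for p in plants) - cap   # 110 tons to be cut

total_cost = 0
for p in sorted(plants, key=lambda p: p["cost_per_ton"]):  # cheapest reductions first
    cut = min(p["emissions"], required_abatement)
    total_cost += cut * p["cost_per_ton"]
    required_abatement -= cut
    if required_abatement == 0:
        break

print(total_cost)  # the least-cost way of meeting the cap: $3,200 in this example
```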

It is not only possible, but eminently reasonable, to be both a strong advocate for environmental protection and an advocate for the elimination of vintage-differentiated regulations, such as New Source Review.  That is where an economic perspective and the available evidence lead.

The Myths of Market Prices and Efficiency

In my two previous posts I described a pair of prevalent myths regarding how economists think about the environment: “the myth of the universal market” – the notion that economists believe that the market solves all problems; and “the myth of simple market solutions” – the notion that economists always recommend simple market solutions for social problems. In response to those two myths, I noted that in the environmental domain, perfectly functioning markets are the exception, not the rule; and that no particular form of government intervention is appropriate for all environmental problems.

A third myth is that when non-market solutions are considered, economists use only market prices to evaluate them. No matter what policy instrument is chosen, the environmental goal must be identified. Should vehicle emissions be reduced by 10, 20, or 50 percent? Economists frequently try to identify the most efficient degree of control — that which provides the greatest net benefits. This means that both benefits and costs need to be evaluated. True enough, economists typically favor using market prices whenever possible to carry out such evaluations, because these prices reveal how people actually value scarce amenities and resources. Economists are wary of asking people how much they value something, because respondents may not provide honest assessments of their own valuations. Instead, economists prefer to “watch what they do, not what they say,” as when individuals reveal their preferences by paying more for a house in a neighborhood with cleaner air, all else equal.
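As a concrete illustration of that revealed-preference logic, here is a minimal sketch of a hedonic price regression on synthetic data; the housing characteristics and dollar figures below are invented purely for illustration:

```python
# A minimal, hypothetical sketch of a hedonic regression: inferring the value of
# cleaner air from what people pay for houses, holding other attributes constant.
import numpy as np

rng = np.random.default_rng(0)
n = 500
square_feet = rng.uniform(800, 3000, n)
air_quality = rng.uniform(0, 10, n)   # an index of local air quality (invented)

# Suppose each index point of cleaner air adds about $5,000 to a house's price.
price = 50_000 + 120 * square_feet + 5_000 * air_quality + rng.normal(0, 20_000, n)

# Ordinary least squares: regress price on the observable characteristics.
X = np.column_stack([np.ones(n), square_feet, air_quality])
coefficients, *_ = np.linalg.lstsq(X, price, rcond=None)

# The coefficient on air_quality is the implicit price of cleaner air that
# buyers reveal through their purchases: roughly 5,000 in this synthetic sample.
print(coefficients[2])
```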

But economists are not concerned only with the financial value of things. Far from it. The financial flows that make up the gross national product represent only a fraction of all economic flows. The scope of economics encompasses the allocation and use of all scarce resources. For example, the economic value of the human-health damages of environmental pollution is greater than the sum of health-care costs and lost wages (or lost productivity), as it includes what lawyers call “pain and suffering.” Economists might use a market price indirectly to measure revealed rather than stated preferences, but the goal is to measure the total value of the loss that individuals incur.

For another example, the economic value of some parcel of the Amazon rain forest is not limited to its financial value as a repository of future pharmaceutical products or as a location for ecotourism. Such “use value” may only be a small part of the properly defined economic valuation. For decades, economists have recognized the importance of “non-use value” of environmental amenities such as wilderness areas or endangered species. The public nature of these goods makes it particularly difficult to quantify the values empirically, as we cannot use market prices. Benefit-cost analysis of environmental policies, almost by definition, cannot rely exclusively on market prices.

Economists try to convert all of these disparate values into monetary terms because a common unit of measure is needed in order to add them up. How else can we combine the benefits of ten extra miles of visibility plus some amount of reduced morbidity, and then compare these total benefits with the total cost of installing scrubbers to clean stack gases at coal-fired power plants? Money, after all, is simply a medium of exchange, a convenient way to compare disparate goods and services. The dollar in a benefit-cost analysis is nothing more than a yardstick for measurement and comparison.

A fourth and final myth is that economic analyses are concerned only with efficiency rather than distribution. Many economists do give more attention to aggregate social welfare than to the distribution of the benefits and costs of policies among members of society. The reason is that an improvement in economic efficiency can be determined by a simple and unambiguous criterion – an increase in total net benefits. What constitutes an improvement in distributional equity, on the other hand, is inevitably the subject of much dispute. Nevertheless, many economists do analyze distributional issues thoroughly. Although benefit-cost analyses often emphasize the overall relation between benefits and costs, many analyses also identify important distributional consequences. Indeed, within the realm of global climate change policy, much of the economic analysis is dedicated to assessing the distributional implications of alternative policy measures.

So where does this leave us? First, economists do not believe that the market solves all problems. Indeed, many economists make a living out of analyzing “market failures” such as environmental pollution in which laissez faire policy leads not to social efficiency, but to inefficiency. Second, when economists identify market problems, their tendency is to consider the feasibility of market solutions because of their potential cost-effectiveness, but market-based approaches to environmental protection are no panacea. Third, when market or non-market solutions to environmental problems are assessed, economists do not limit their analysis to financial considerations, but use monetary equivalents in benefit-cost calculations in the absence of a more convenient unit. Fourth and finally, although the efficiency criterion is by definition aggregate in nature, economic analysis can reveal much about the distribution of the benefits and the costs of environmental policies.

Having identified and sought to dispel four prevalent myths about how economists think about the natural environment, I want to acknowledge that my profession bears some responsibility for the existence of such misunderstandings about economics. Like our colleagues in the other social and natural sciences, academic economists focus their greatest energies on communicating to their peers within their own discipline. Greater effort can certainly be given by economists to improving communication across disciplinary boundaries. And that is one of my key goals in this blog in the weeks and months ahead.
