Is Benefit-Cost Analysis Helpful for Environmental Regulation?

With the locus of action on Federal climate policy moving this week from the House of Representatives to the Senate, this is a convenient moment to step back from the political fray and reflect on some fundamental questions about U.S. environmental policy.

One such question is whether economic analysis – in particular, the comparison of the benefits and costs of proposed policies – plays a truly useful role in Washington; whether it is little more than a distraction from more important perspectives on public policy; or – worst of all – whether it is counter-productive, even antithetical, to the development, assessment, and implementation of sound policy in the environmental, resource, and energy realms.  With an exceptionally talented group of thinkers – including scientists, lawyers, and economists – now in key environmental and energy policy positions at the White House, the Environmental Protection Agency, the Department of Energy, and the Department of the Treasury, this question about the usefulness of benefit-cost analysis is of particular importance.

For many years, there have been calls from some quarters for greater reliance on the use of economic analysis in the development and evaluation of environmental regulations.  As I have noted in previous posts on this blog, most economists would argue that economic efficiency — measured as the difference between benefits and costs — ought to be one of the key criteria for evaluating proposed regulations.  (See:  “The Myths of Market Prices and Efficiency,” March 3, 2009; “What Baseball Can Teach Policymakers,” April 20, 2009; “Does Economic Analysis Shortchange the Future?” April 27, 2009)  Because society has limited resources to spend on regulation, such analysis can help illuminate the trade-offs involved in making different kinds of social investments.  In this sense, it would seem irresponsible not to conduct such analyses, since they can inform decisions about how scarce resources can be put to the greatest social good.

In principle, benefit-cost analysis can also help answer the question of how much regulation is enough.  From an efficiency standpoint, the answer is simple — regulate until the incremental benefits from regulation are just offset by the incremental costs.  In practice, however, the problem is much more difficult, in large part because of inherent problems in measuring marginal benefits and costs.  In addition, concerns about fairness and process – both economic and non-economic – may be very important.  Regulatory policies inevitably involve winners and losers, even when aggregate benefits exceed aggregate costs.
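In standard textbook notation (my own shorthand, not language from any statute or regulatory guidance), if B(q) and C(q) denote the total benefits and total costs of regulating at stringency q, the efficient level of stringency q* is the one that maximizes net benefits, which requires that marginal benefit just equal marginal cost:

\[ \max_{q}\;\bigl[B(q) - C(q)\bigr] \quad\Longrightarrow\quad B'(q^{*}) = C'(q^{*}). \]

The practical difficulty noted above is that the marginal quantities B′(q) and C′(q) are rarely observed directly and must be estimated, often under considerable uncertainty.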

Over the years, policy makers have sent mixed signals regarding the use of benefit-cost analysis in policy evaluation.  Congress has passed several statutes to protect health, safety, and the environment that effectively preclude the consideration of benefits and costs in the development of certain regulations, even though other statutes actually require the use of benefit-cost analysis.  At the same time, Presidents Carter, Reagan, Bush, Clinton, and Bush all put in place formal processes for reviewing the economic implications of major environmental, health, and safety regulations.  Apparently the Executive Branch, charged with designing and implementing regulations, has seen a greater need than the Congress to develop a yardstick against which regulatory proposals can be assessed.  Benefit-cost analysis has been the yardstick of choice.

It was in this context that, ten years ago, a group of economists from across the political spectrum jointly authored an article in Science magazine, asking whether there is a role for benefit-cost analysis in environmental, health, and safety regulation.  That diverse group consisted of Kenneth Arrow, Maureen Cropper, George Eads, Robert Hahn, Lester Lave, Roger Noll, Paul Portney, Milton Russell, Richard Schmalensee, Kerry Smith, and myself.  That article and its findings are particularly timely, with President Obama considering putting in place a new Executive Order on Regulatory Review.

In the article, we suggested that benefit-cost analysis has a potentially important role to play in helping inform regulatory decision making, though it should not be the sole basis for such decision making.  We offered eight principles.

First, benefit-cost analysis can be useful for comparing the favorable and unfavorable effects of policies, because it can help decision makers better understand the implications of decisions by identifying and, where appropriate, quantifying the favorable and unfavorable consequences of a proposed policy change.  But, in some cases, there is too much uncertainty to use benefit-cost analysis to conclude that the benefits of a decision will exceed or fall short of its costs.

Second, decision makers should not be precluded from considering the economic costs and benefits of different policies in the development of regulations.  Removing statutory prohibitions on the balancing of benefits and costs can help promote more efficient and effective regulation.

Third, benefit-cost analysis should be required for all major regulatory decisions. The scale of a benefit-cost analysis should depend on both the stakes involved and the likelihood that the resulting information will affect the ultimate decision.

Fourth, although agencies should be required to conduct benefit-cost analyses for major decisions, and to explain why they have selected actions for which reliable evidence indicates that expected benefits are significantly less than expected costs, those agencies should not be bound by strict benefit-cost tests.  Factors other than aggregate economic benefits and costs may be important.

Fifth, benefits and costs of proposed policies should be quantified wherever possible.  But not all impacts can be quantified, let alone monetized.  Therefore, care should be taken to assure that quantitative factors do not dominate important qualitative factors in decision making.  If an agency wishes to introduce a “margin of safety” into a decision, it should do so explicitly.

Sixth, the more external review that regulatory analyses receive, the better they are likely to be.  Retrospective assessments should be carried out periodically.

Seventh, a consistent set of economic assumptions should be used in calculating benefits and costs.  Key variables include the social discount rate (see the illustrative formula below), the value of reducing risks of premature death and accidents, and the values associated with other improvements in health.

Eighth, while benefit-cost analysis focuses primarily on the overall relationship between benefits and costs, a good analysis will also identify important distributional consequences for particular subgroups of the population.
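To see why the seventh principle – consistency of economic assumptions, and of the social discount rate in particular – matters so much, consider a purely illustrative formula (my own gloss, not language from the Science article).  The present value of a stream of annual net benefits NB_t accruing over a horizon of T years, discounted at rate r, is

\[ PV \;=\; \sum_{t=0}^{T} \frac{NB_{t}}{(1+r)^{t}}. \]

For long-lived environmental policies – climate change being the obvious example – two analyses of the same proposal that use different values of r can reach opposite conclusions about whether benefits exceed costs.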

From these eight principles, we concluded that benefit-cost analysis can play an important role in legislative and regulatory policy debates on protecting and improving the natural environment, health, and safety.  Although formal benefit-cost analysis should not be viewed as either necessary or sufficient for designing sensible public policy, it can provide an exceptionally useful framework for consistently organizing disparate information, and in this way, it can greatly improve the process and hence the outcome of policy analysis.

If properly done, benefit-cost analysis can be of great help to agencies participating in the development of environmental regulations, and it can likewise be useful in evaluating agency decision making and in shaping new laws (which brings us full-circle to the climate legislation that will be developed in the U.S. Senate over the weeks and months ahead, and which I hope to discuss in future posts).


Straight Talk about Corporate Social Responsibility

Critical thinking about “corporate social responsibility” (CSR) is needed, because there are few topics where discussions feature greater ratios of heat to light.  With this in mind, two of my Harvard colleagues – law professor Bruce Hay and business school professor Richard Vietor – and I co-edited a book, Environmental Protection and the Social Responsibility of Firms: Perspectives from Law, Economics, and Business.

At issue is the appropriate role of business with regard to environmental protection.  Everyone agrees that firms should obey the law. But beyond the law – beyond compliance with regulations – do firms have additional responsibilities to commit resources to environmental protection?  How should we think about the notion of firms sacrificing profits in the social interest?

Much of what has been written on this question has been both confused and confusing.  Advocates, as well as academics, have entangled what ought to be four distinct questions about corporate social responsibility:  may they, can they, should they, and do they.

First, may firms sacrifice profits in the social interest – given their fiduciary responsibilities to shareholders?  Does management have a fiduciary duty to maximize corporate profits in the interest of shareholders, or can it sacrifice profits by voluntarily exceeding the requirements of environmental law?  Einer Elhauge, a professor at Harvard Law School, challenges the conventional wisdom that managers have a simple legal duty to maximize corporate profits.  He argues that managers have freedom to diverge from the goal of profit maximization, partly because their legal duties to shareholders are governed by the “business judgment rule,” which gives them broad discretion to use corporate resources as they see fit.

If a company’s managers decide, for example, to use “green” inputs, devise cleaner production technologies, or dispose of their waste more safely, courts will not stop them from doing so, no matter how disgruntled shareholders may be at such acts of public charity.  The reason is that, for all a judge knows, such measures – particularly when they are well publicized – will add to the firm’s bottom line in the long run by increasing public goodwill.  But this line of argument undercuts the premise of the question, since it rests on the notion that such actions do not sacrifice profits but contribute to them.

This leads directly to the second question.  Can firms sacrifice profits in the social interest on a sustainable basis, or will the forces of a competitive market render such efforts transient at best?  Paul Portney, Dean of the Eller College of Management at the University of Arizona, notes that for firms that enjoy monopoly positions or produce products for well-defined niche markets, such extra costs can well be passed on to customers.  But for the majority of firms in competitive industries – particularly firms that produce commodities – it is difficult or impossible to pass on such voluntarily incurred costs to customers.  Such firms have to absorb those extra costs in the form of reduced profits, reduced shareholder dividends, and/or reduced compensation, suggesting that, in the face of competition, such behavior is not sustainable.

This leads to the third question of CSR:  even if firms may carry out such profit-sacrificing activities, and can do so, should they – from society’s perspective?  Is this likely to lead to an efficient use of social resources?  To be more specific, under what conditions are firms’ CSR activities likely to be welfare-enhancing?  Portney finds that this is most likely to be the case if firms pursuing CSR strategies are doing so because it is good business – that is, profitable.  Once again, a positive response violates the premise of the question.  But for more costly CSR investments, there is concern about the opportunity costs involved for firms.  Further, when companies use CSR strategically to anticipate and shape future regulations, welfare may be reduced if the result is standards that are less stringent than would otherwise have been justified.

Finally, do firms behave this way?  Do some firms reduce their earnings by voluntarily engaging in environmental stewardship?  Forest Reinhardt of the Harvard Business School addresses this question by surveying the performance of a broad cross-section of firms, and finds that it only rarely pays to be green.  That said, situations do exist in which it does pay:  where a firm can increase customers’ willingness to pay, reduce its costs, manage future risk, or anticipate and defer costly governmental regulation, it may indeed pay to be green.  Overall, Reinhardt acknowledges the existence of these opportunities for some firms – examples such as Patagonia and DuPont stand out – but the empirical evidence does not support broad claims of pervasive opportunities.

So, where does this leave us?  May firms engage in CSR, beyond the law? An affirmative though conditional answer seems appropriate.  Can firms do so on a sustainable basis?  Outside of monopolies and limited niche markets, the answer is probably negative.  Should they carry out such beyond-compliance efforts, even when doing so is not profitable?  Here – if the alternative is sound and effective government policy – the answer may not be encouraging.  And the last question – do firms generally carry out such activities – seems to lead to a negative assessment, at least if we restrict our attention to real cases of “sacrificing profits in the social interest.”

But definitive answers to these questions await the results of rigorous, empirical research.  In the meantime, we ought to prevent muddled thinking by keeping these four questions of corporate social responsibility separate.


A Tale of Two Taxes

Whether they are called “revenue enhancements” or “user charges,” fear of the political consequences of taxes restricts debate on energy and environmental policy options in Washington. In a March 7th post on “Green Jobs,” in which I argued that it is not always best to try to address two challenges with a single policy instrument, I also noted that in some cases such dual-purpose policy instruments can be a good idea, and I gave gasoline taxes as an example.

Although a serious recession is clearly not the time to expect political receptivity to such a proposal, the time will come — we all hope very soon — when the economy turns around, employment rises, and a sustained period of economic growth ensues. When that happens, serious consideration should be given to increases in the Federal tax on gasoline.

A gas tax increase — coupled with an offsetting reduction in other taxes, such as the Social Security tax on wages — could make most American households better off, while reducing oil imports, local pollution, urban congestion, road accidents, and global climate change. This revenue-neutral tax reform would exemplify the market-based approaches to environmental protection and resource management I examined in previous posts.

Such a change need not constitute a new tax, but rather a reform of existing ones. It is well known – both from economic theory and numerous empirical studies – that taxes tend to reduce the extent to which people undertake the taxed activity. In the United States, most tax revenues are raised by levies on labor and investment; the resulting reduction in these fundamentally desirable activities is viewed as an unfortunate but unavoidable side-effect of the need to raise revenue for government operations. Would it not make more sense to raise the revenue we need by taxing undesirable activities, instead of desirable ones?

Combustion of gasoline in motor vehicles produces local air pollution as well as carbon dioxide that contributes to global climate change, increases imports of oil, and exacerbates urban highway congestion. Can anyone really claim that — given a choice between discouraging work and discouraging gasoline consumption — it is better to discourage work?

According to the U.S. Department of Energy, a 50-cent gas tax increase could eventually reduce gasoline consumption by 10 to 15%, reduce oil imports by perhaps 500,000 barrels per day, and generate about $40 billion per year in revenue.

Furthermore, this approach would be far more effective than ongoing proposals to increase the Corporate Average Fuel Economy (CAFE) standards, which affect only new vehicles and lead to serious safety problems by encouraging auto makers to produce lighter vehicles. Also, remember that a major effect of CAFE standards has been to accelerate the shift from cars to SUVs and light trucks (so that overall fuel efficiency of new vehicles sold is no better than it was a decade ago, despite the great strides that have taken place in fuel efficiency technologies). As my Harvard colleague Martin Feldstein pointed out in The Wall Street Journal in 2006, the conventional approach “does nothing to encourage individuals to drive less, to use their cars more efficiently, or to shift sooner to new and more fuel efficient [and cleaner] vehicles.” A more enlightened approach — a market-based approach — would reward consumers who economize on gasoline use. And that is what a revenue-neutral gas tax is all about.

The revenue from the gas tax could be transferred to the Social Security Trust Fund and credited to current workers. If $40 billion per year from new gas tax revenues were transferred to Social Security, the payroll tax — the employee contribution to Social Security — could be cut by perhaps a third: a worker with annual wages of $30,000 would take home an additional $750 per year! The extra income would more than offset the cost of the gas tax, unless the worker drove over 35,000 miles per year in a car getting 25 miles or less per gallon. Rebating the gas tax in this way addresses the greatest concern about higher gas taxes — that they can hit hardest those workers who drive to their jobs. Further, a tax of this magnitude could be phased in gradually, perhaps no more than 10 cents per year over 5 years, allowing individuals and firms to adjust their consumption and production behavior.
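As a rough back-of-the-envelope check – my own illustrative arithmetic, assuming the full 50-cent-per-gallon increase and the $750 annual rebate cited above – the break-even point can be computed directly:

```python
# Illustrative break-even arithmetic for a revenue-neutral gas tax
# (assumed figures: a fully phased-in 50-cent-per-gallon increase,
#  a $750 annual payroll-tax rebate, and a vehicle getting 25 mpg).

gas_tax_increase = 0.50    # dollars per gallon
annual_rebate = 750.00     # dollars per year, returned via lower payroll taxes
miles_per_gallon = 25.0    # assumed fuel economy

# Gallons at which the extra tax paid just equals the rebate received
breakeven_gallons = annual_rebate / gas_tax_increase      # 1,500 gallons
breakeven_miles = breakeven_gallons * miles_per_gallon    # 37,500 miles

print(f"Break-even: {breakeven_gallons:,.0f} gallons, "
      f"or about {breakeven_miles:,.0f} miles per year at 25 mpg")
```

That puts the break-even in the neighborhood of the 35,000-mile figure cited above (and lower still for vehicles getting fewer miles per gallon); a worker driving less than that comes out ahead under the reform.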

Proposals for gasoline tax increases in recent sessions of Congress would have dedicated the revenue to public spending (for transportation and other programs). A key difference is that the proposal I have outlined here is for a revenue-neutral change in which the gas tax revenue would be returned to Americans through reduced payroll taxes. To adopt some of the language I developed in my previous posts, such a change can be both efficient and equitable, and — for those reasons — perhaps even politically feasible.

Of course, such a scheme is not a panacea for U.S. energy and environmental problems. But it would make a significant contribution if enacted. On the other hand, political fear of the T-word in Washington may mean that it is never discussed seriously in public, let alone adopted. Most fear of taxes is due to politicians’ anxieties about asking their constituents to pay more. But an increase in the Federal gas tax, rebated through reduced payroll taxes, would not cost most Americans any more and would have significant long-term benefits for the country. Still, fear of the T-word looms large; maybe it should be called an “All-American Ecologically Sound, Fully Recyclable, Anti-Terror, Energy-Independence Assessment.”
