Tuesday, January 08, 1991

Victoria snowed by Christmas cheer

SANTA came early to Victoria, in the form of Dr Peter Sheehan, former head of the Victorian Treasury.  His article (AFR, December 21) had a cheerful Christmas message:  that all Victorians need is a little confidence and their economic woes will disappear.

Santa Sheehan reasoned that (a) over the past six or seven years the Victorian public sector has been made ruthlessly efficient by the present government;  (b) a few isolated incidents (Trico, Pyramid) were blown out of all proportion by proponents of "current orthodoxies" to undermine what was otherwise a success story in financial management;  and (c) this caused a pervasive pessimism in the State.

Unfortunately, like Santa, Dr Sheehan's arguments belong to the world of make-believe.

Consider the following myths put forward in his article:

1. The 1980s was a very strong period for Victoria.

Sheehan repeats the oft-quoted boast of the present Victorian Government that the economy has out-performed other States during its period in office.  Wrong.  While Victoria was the lead State in terms of output per head when the Government was elected, growth below the all-State average meant that by 1989-90 it had fallen into second place behind Western Australia, with NSW catching up quickly.

Sheehan pointed to the strong growth in private business investment as evidence of underlying strength.  But while total private investment has been strong over the period, this has been concentrated in the construction component, which grew at an average annual rate of 21 per cent.  In other words, Victoria has been a leader in the massive speculation boom in commercial property, the relics of which will be dotting Melbourne's skyline for years to come.  A claim to leadership indeed;  but hardly a proud one.

Meanwhile, the critical area of plant and equipment investment, so important for the health of Victoria's large manufacturing sector, has grown at an average rate of less than 10 per cent since 1982-83, and has fallen by 8 per cent in 1989-90.  The outlook for total investment is bleak:  once current construction projects are completed there will be a dramatic fall in investment on buildings and structures, with no sign of any upsurge in plant and equipment investment to take its place.

2. This Government transformed the public sector into a lean, efficient organisation by ruthlessly imposing spending restraint.

This claim is based on selective use of statistics.  Dr Sheehan supports it by pointing out that growth in total public sector outlays was the lowest of all the States over the period 1983-84 to 1989-90.  Given that the Labor Government was elected in March 1982, and introduced its first Budget five months later, it is strange that he uses 1983-84 as a base period for comparison: surely a more apt starting point would be 1981-82, the last year of the previous government.

To return to my earlier point, if we look at expenditure trends in 1982-83 and 1983-84, it is little wonder Dr Sheehan avoided including them in his analysis.  In 1982-83, total outlays increased by a massive 17.5 per cent, and by a further 10.6 per cent the following year.

1983-84 thus has the advantage of being an inflated starting point from which to measure spending levels of the Victorian Government.

This statistical sleight-of-hand changes the picture considerably:  while real Victorian public sector outlays grew at 2.5 per cent per annum over the six years chosen by Dr Sheehan, they grew by 3.4 per cent per annum over the eight years from 1981-82.

Even ignoring this "massaging" of the data, the claimed "restraint" from 1983-84 onwards is entirely a reflection of the fact that Victoria was the only State in which public sector capital outlays actually fell in nominal terms over the period 1983-84 to 1989-90.  Current outlays -- the real test of expenditure restraint -- increased at an average rate of around 10 per cent per annum, above the all-State average.

It is relevant that from July 1983 to June 1990, State public sector employment in Victoria grew by 12.8 per cent, higher than for any other State.  This is hardly consistent with the notion of a public sector being "... squeezed harder by the Victorian Government than by any other Australian Government since the war".  But, even if it were true, the fact remains that the Victorian public sector is operating from a higher spending base than other States.

3. Victoria is a low tax State.

The other side of the coin to expenditure is revenue raising.  Sheehan firmly rejects the notion that high taxation is another dimension of the Victorian malaise.

However, on the measure of taxation defined by the Commonwealth Grants Commission for framing its recommendations on the allocation of general revenue grants to the States -- by far the most objective test -- Victoria is clearly the highest taxing State.  The commission's estimates show that in 1988-89, before the big tax rise of this financial year, Victoria raised 11 per cent more per head than it would have if taxes were imposed at the average rates of the six States.

4. Victoria does not have a debt problem.

This is the most worrying aspect of the Sheehan analysis.  He argues that the public debt level is not a problem because "... as a proportion of Gross State Product [it] is almost as low as it has ever been this century".  Whatever the truth of this statement, it does not resolve the debt question which, as Moody's pointed out in down-rating Victoria's $A denominated debt, revolves around the burden of servicing debt.

At the start of the 1980s real interest rates turned sharply positive and have remained high ever since.  Moreover, an increasing proportion of Victorian borrowing has been used for non-income earning capital expenditure.  Accordingly, we now have the situation in Victoria where 20 cents in every dollar raised in revenue goes towards paying interest on borrowings, let alone actually repaying the principal.  This is double the average for all other States.  In fact, public sector interest payments have nearly doubled on a per capita basis over the period 1982-83 to 1989-90, from $334 to $640. The average for all other States in 1989-90 was $358.

Dr Sheehan is correct in identifying a confidence problem in Victoria.  But the correction of that problem requires more than a simple injection of Christmas spirit to convince Victorians that after all their State has been well managed.

The Government needs to demonstrate that it has the political will to overcome Victoria's fiscal problems by bringing recurrent spending under control, by further lifting returns on public capital and by privatising public enterprises as quickly as the capital market will absorb them.

It also needs to create an environment more conducive to productive business investment by reducing both the role of unions and the regulation of business activity by government.

A report commissioned by the Government itself in 1989, but suppressed until recently obtained by the Opposition, says that laws covering labour conditions and unions are the greatest hindrance to the development of industry "with union representatives making unrealistic demands".

Until the Government redresses the imbalance against the private sector, Santa's bundle is likely to remain at the bottom of the chimney and the Victorian economy will continue to slip relative to other States.



Wednesday, January 02, 1991

The enhanced greenhouse effect

EXECUTIVE SUMMARY

Theories that increased emissions of gases attributable to mankind's activities are causing global warming have rapidly assumed major prominence.  As the upper atmosphere cannot be owned there are no mechanisms stemming from individual property rights which would allow its use to be properly valued.  There are, therefore, no automatic mechanisms to allow careful stewardship of the resource.  Hence there are concerns that any action to combat global warming may be paralysed by an inability to exercise controls.

What is certain is that there has been a build-up of carbon dioxide and the CFC family of gases since industrialisation and the growth in human populations began to accelerate.  Nonetheless, many of the facts remain obscure.  Not the least of the uncertainties is whether or not, despite the increase in gases with a capacity to retain terrestrial heat, significant warming will actually take place.  There are offsetting phenomena like ocean absorption and cloud formation which can easily counteract the estimated effects.  Further, if global warming were to take place, it is uncertain whether or not it would bring net harmful effects.  However, many countries, including Australia and New Zealand, have already adopted some limited commitment to reducing their emissions of greenhouse gases and there is pressure to take further steps.

This chapter examines some of the evidence concerning the issue.  It both draws attention to deficiencies in the theory which seeks to explain the phenomenon and analyses the evidence that warming might be taking place.

The chapter goes on to outline procedures which might be used to reduce greenhouse effects, should this prove to be a wise course of action.  It draws attention to market-based instruments, like tradeable rights to emit greenhouse gases and taxation measures, as being considerably more efficient than governmental restrictions and requirements to use particular technological solutions.  Market-based approaches enlist greater knowledge about potentially cheaper means of achieving reduced emissions and incorporate a flexibility which is absent in heavy handed regulatory measures.


INTRODUCTION

Global warming has taken centre stage among environmental issues of the 1990s.  Average global temperatures are thought to be rising because of increased concentrations of greenhouse gases.  These gases act as a blanket around the earth modifying its energy balance.  Over the last 200 years, man's activities have raised concentrations of the naturally occurring gases, CO2, methane, and nitrous oxide, bringing about higher levels of retained heat.  New man-made gases, with a similar potential effect -- such as chlorofluorocarbons (CFCs) and the halon class of gases -- have also been released into the atmosphere.  CFCs and halons are also thought to be responsible for depletion of upper atmospheric ozone levels.

If it is occurring, global warming bears all the marks of a classic environmental problem.  No-one owns the upper atmosphere, and the individuals, industries, flora and fauna discharging gases into the atmosphere pay no price for the privilege.  With open access to all, Hardin's "tragedy of the commons" seems inevitable (Hardin, 1968).  Each individual source of greenhouse gas generates a benefit larger than its private costs.  In contrast, the costs each source imposes but avoids incurring directly are thought to be large and widely spread.  Premised on these assumptions, there are calls for an internationally coordinated programme of greenhouse gas reductions.

In 1988, a conference in Toronto, with delegates from 46 countries, suggested setting a target of a 20% reduction in emissions of the major greenhouse gas, carbon dioxide, by the year 2005.  There is now a growing group of nations who have responded to this suggestion and moved towards setting greenhouse targets.  However, only Sweden has given legislative commitment to a target.  West Germany is aiming at a 25% cut.  Denmark, East Germany, New Zealand, Austria and Italy are moving towards 20% targets.  At least nine other countries, including Australia, Canada, the UK and Japan, have indicated some commitment to stopping the growth of greenhouse gases.

There are three dimensions to the greenhouse gas and global warming debate:

  • determining the facts;
  • measuring the effect;
  • setting out policy options.

DETERMINING THE FACTS

While large increases in greenhouse gas emissions have indisputably occurred since the industrial revolution, the evidence for any resulting global warming is at best scanty and at worst contradictory.  Theoretically, there is a case for expecting global warming but conclusive empirical evidence to support the hypothesis has not been found.  Without such evidence the phenomenon remains a theory, no more plausible than the impending ice age which was being predicted by some climatologists in the 1970s.


MEASURING THE EFFECTS

Assuming significant global warming has occurred or will occur, it could have both positive and negative effects.  Emphasis has been placed on the negative effects -- higher average global temperatures and rising sea levels;  but there are also positive effects, including longer growing seasons and increased precipitation.  In addition, increased concentrations of carbon dioxide can be beneficial through accelerating plant growth and lowering water requirements for crops.  If the greenhouse phenomenon proves well founded, both positive and negative implications must be assessed before deciding that it poses a potential problem.


POLICY OPTIONS

The difficulties associated with defining rights to the global atmosphere suggest some form of government intervention could be required to overcome any global warming problem.  These policy prescriptions could be either adaptive or pre-emptive.  If the problem is expected to be large and the effects certain, the preferred approach is likely to be some form of pre-emptive action to reduce emissions of the greenhouse gases.  The instruments to achieve this could include carbon taxes or tradeable emission quotas.  However, if the effects of the problem are relatively minor and the costs of taking early action to mitigate these are high, a more adaptive approach would be preferable, with governments and markets responding to the effects as they become apparent.

Policy decisions should be strongly influenced by two guidelines.  First, decisions should be based on measurements of the costs and benefits of different levels of increased greenhouse gas (GHG) emissions.  GHGs are the result of productive processes which yield a stream of income, thereby enhancing the quality of life.  Consideration of a freeze or cuts in GHG emissions must take these benefits into account.  Logically, the preferred response should be to allow additional increases in GHGs up to the level at which the potential costs begin to outweigh any gains.  Policy should not generally be polarised into all or nothing choices.  A full range of incremental options should be considered.

A second guideline for policy is that, where possible, market mechanisms should be used in preference to legislating specific technologies and productive processes.  If a pre-emptive policy stance is to be taken, establishing tradeable quotas or, failing that, carbon taxes is preferable to requiring specific energy-efficient or carbon-dioxide-minimising technologies.  Allowing the market to determine the means of achieving some national standard gives industry the flexibility to choose the most cost-effective methods.  Assigning tradeable quotas to set levels of emissions would allow firms which are able to reduce their emissions at least cost to over-achieve and be compensated by firms which find reductions more expensive.  A carbon tax, set at a rate to achieve the same level of emissions, would also allow flexibility in technology choice (although not the additional cost saving to be found when firms may trade emissions).
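The cost advantage of tradeable quotas over uniform mandates comes entirely from reallocating abatement toward whoever can do it most cheaply.  The short Python sketch below makes that arithmetic concrete with two hypothetical firms and invented cost figures;  it illustrates the principle only and does not model any actual scheme.

```python
# Illustrative sketch only: two hypothetical firms with different abatement
# costs, a 20-unit reduction target, and (a) a uniform mandate versus
# (b) tradeable quotas.  All numbers are invented for illustration.

def abatement_cost(units, cost_per_unit):
    """Total cost of abating the given number of emission units."""
    return units * cost_per_unit

# Hypothetical marginal abatement costs ($ per unit abated)
low_cost_firm = 10   # can reduce emissions cheaply
high_cost_firm = 40  # finds reductions expensive

target = 20  # total units of abatement required across both firms

# (a) Uniform regulation: each firm must abate half the target itself.
uniform_total = (abatement_cost(target / 2, low_cost_firm)
                 + abatement_cost(target / 2, high_cost_firm))

# (b) Tradeable quotas: the low-cost firm over-achieves and sells its
# surplus abatement to the high-cost firm, so the cheap abatement is
# done first.  In this two-firm example the low-cost firm does it all.
trading_total = abatement_cost(target, low_cost_firm)

print(f"Uniform mandate cost:  ${uniform_total:.0f}")   # $500
print(f"Tradeable quota cost:  ${trading_total:.0f}")   # $200
# The same 20 units of abatement are achieved either way; trading simply
# reallocates who does the abating, which is the source of the saving.
```

A carbon tax set at the right rate would, in this stylised example, produce the same allocation, since only the low-cost firm would find abatement cheaper than paying the tax.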

A number of more market oriented policies, which are sensible in their own right, could also reduce GHG emissions.  For example, the implementation of efficient pricing mechanisms by state owned electricity corporations may, by itself, also go some way towards achieving lower GHG emissions.


IMPLICATIONS OF A GLOBAL ENVIRONMENTAL PROBLEM

The global nature of the greenhouse phenomenon distinguishes it from other environmental and resource-related problems discussed in this book.  If global warming is a problem, its solution would seem to require very wide cooperation.  Large and certain costs are a high price to pay for benefits which, for any single emitting nation, would be shared among all and would almost certainly be small and highly uncertain.  Hence, the incentives for an individual nation to renege on internationally agreed targets are high.  As a result, wide and costly actions by sovereign states in pursuit of goals with joint pay-offs have in the past proven difficult to achieve.  However, confidence in the possibilities of global cooperation and multilateral treaties to counteract global climate change received considerable impetus with the success of the 1987 Montreal Protocol on Substances that Deplete the Ozone Layer.  The Montreal Protocol has been ratified by many countries, including Australia (in 1989).  It requires the phasing out of CFCs -- used in a wide range of applications including refrigeration and aerosol propellants.

The success of the Montreal Protocol has encouraged the call for an international treaty imposing global limits on GHGs.  However, any such treaty will have far larger ramifications than the phasing out of CFCs.  Global limits on GHGs would have major and direct implications for the energy and transport sectors of national economies.  Both these sectors are major sources of carbon dioxide emissions, which is the GHG causing most concern.  All sectors would be indirectly affected because of the widespread linkages the energy and transport sectors have to the rest of the economy.  Although pricing action since 1973 has considerably slowed the growth of energy demands within developed economies, that demand has not contracted.  Moreover, rapid growth in energy demand continues to be experienced in the more successful developing countries.  The fastest growing energy sectors in the world are in the Asian region with a growth rate of 6% in primary energy consumption in 1989 (Foster 1990).  Most of this growth has and will continue to come from coal fired electricity generation.

If global limits on carbon dioxide emissions are to be contemplated, conflicting views are likely to be held on the appropriate basis of these limits.  If based on existing usage levels, global limits on carbon dioxide emissions would have serious implications for the economic growth of the world's poorer nations.  If quotas were to be allocated on, say, a per capita basis, this would result in income redistribution to those nations.  By tentatively agreeing to reductions in emissions, the developed nations are registering their opposition to the issue becoming instrumental in income redistribution.  These wide reaching implications warrant careful consideration of proposals for limiting GHGs.

In the rest of this chapter, we first discuss evidence for the enhanced greenhouse, global warming phenomenon.  Secondly, the likely effects of this hypothetical scenario are considered.  We then present some market based policy prescriptions for counteracting potential problems.  Finally, we examine the likelihood and desirability of achieving international agreement on GHG targets.


THE EVIDENCE FOR GREENHOUSE GASES AND GLOBAL WARMING

THE MECHANISMS INVOLVED

Since the 1890s, scientists have expressed concern that increases in greenhouse gases related to human activities will enhance the greenhouse effect, raising global temperatures further.  Strictly speaking, it is this enhanced effect which is again being debated.

A knowledge of how the earth absorbs and radiates energy is essential for understanding the greenhouse effect.  Briefly, the earth receives radiated energy from the sun mainly in the visible band of the electromagnetic wave spectrum.  In turn, this energy is re-radiated out into space by the earth, mostly at night.  Because the earth is much cooler than the sun, the re-radiation occurs at much longer wavelengths in the infra-red range of the spectrum.  Water vapour and greenhouse gases, such as carbon dioxide, act as a "blanket" around the earth, reflecting some of this infra-red radiation back to the earth's surface.  Consequently, temperatures at the earth's surface must rise until the amount of infra-red radiation escaping into space again balances the energy absorbed from the sun.

This temperature-enhancing effect can easily be observed by comparing temperatures on a cloudy and a clear night at the same time of the year.  The phenomenon is also analogous to the warmer temperatures experienced in a greenhouse, and hence the descriptive name, greenhouse effect.

The major greenhouse gases released by human activities are CO2, CFCs, methane and nitrous oxide.  (Water vapour from natural sources is, in fact, more potent than any of these gases in maintaining warmth on the earth's surface.)  Different gases absorb infra-red radiation at different wavelengths and vary in their potential effectiveness in creating a greenhouse effect.  Though carbon dioxide concentrations are many times higher than those of methane and CFCs, both the latter gases contribute far more to the greenhouse effect per unit of concentration.  Compared with CO2, methane is 36 times more effective;  CFCs are 14500 to 15000 times more effective, though concentrations are extremely low.  Effectiveness is also related to the length of time a gas remains in the atmosphere.  Table 10.1 shows the relative contributions various Australian sources of greenhouse gases could make to the greenhouse effect after allowing for differences in warming capabilities and length of time remaining in the atmosphere.  These differences in effectiveness have important implications when considering policy alternatives for counteracting any potential increase in the greenhouse effect.

Table 10.1:  Australian sources of greenhouse gases and their contribution to the greenhouse effect

Greenhouse gas (a)    Source                                Emission           Relative contribution
                                                            (million tonnes    to greenhouse effect
                                                            per year)          (%)

Carbon dioxide        Coal                                  158                27
                      Oil                                   72                 12
                      Gas                                   25                 5
                      Other (Cement, natural gas flares)    12                 2
                      Total                                 267                46

Methane               Cattle, sheep etc.                    2.2                6
                      Bushfires                             2.0                6
                      Garbage tips                          1.5                5
                      Coal mining/handling                  0.3                1
                      Natural gas leaks                     0.2                1
                      Rice                                  0.2                1
                      Other animals                         0.2
                      Total                                 6.6                19

Nitrous oxide         Agriculture                           0.3                17

Chlorofluorocarbons   Refrigeration, aerosols, etc.         0.012              18

(a) Ozone, although a greenhouse gas, is not included as it is only emitted in insignificant amounts in the Southern Hemisphere.

Source:  Landsberg (1989)
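One way to see why gases emitted in tiny tonnages account for so much of Table 10.1 is to divide each gas's relative contribution by its annual emissions.  The Python sketch below does exactly that arithmetic on the table's own figures;  the implied per-tonne ratios are rough illustrations only, since the table's contribution percentages already embody assumptions about warming capability and atmospheric lifetime.

```python
# Back-of-envelope ratios derived from Table 10.1 (Landsberg, 1989):
# relative contribution to the greenhouse effect (%) divided by annual
# emissions (million tonnes).  Purely illustrative.

sources = {
    # gas: (emissions in Mt/year, contribution in % of total effect)
    "Carbon dioxide (total)": (267.0, 46.0),
    "Methane (total)":        (6.6,  19.0),
    "Nitrous oxide":          (0.3,  17.0),
    "CFCs":                   (0.012, 18.0),
}

co2_emissions, co2_share = sources["Carbon dioxide (total)"]
co2_share_per_mt = co2_share / co2_emissions

for gas, (emissions, share) in sources.items():
    share_per_mt = share / emissions
    relative_to_co2 = share_per_mt / co2_share_per_mt
    print(f"{gas:25s} {share_per_mt:10.2f} %/Mt "
          f"(~{relative_to_co2:,.0f} x CO2 per tonne)")

# The output shows why very small tonnages of methane, nitrous oxide and
# especially CFCs account for large slices of the total effect.
```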


Greenhouse gases are an essential ingredient for maintaining life on earth.  They raise global temperatures by about 33 degrees Celsius (Gribbin, 1988;  Landsberg, 1989) -- just enough to make life comfortable.  Without the blanket created by greenhouse gases, average temperatures on the earth's surface would be -18 degrees Celsius.  Just above the surface of the earth, average temperatures are 15 degrees Celsius.  There is solid evidence to show that greenhouse gases have increased since the time of the industrial revolution, 200 years ago.  But, though there is a theoretical basis for expecting a corresponding increase in global temperatures, it is questionable how large and how important this increase in temperatures may be, and the empirical evidence for global warming is inconclusive.
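The -18 degrees Celsius figure can be checked from first principles by equating absorbed sunlight with emitted infra-red radiation.  The Python sketch below does this using commonly quoted values for the solar constant and the earth's albedo, which are assumptions not given in the text;  the 33 degree difference then falls out as the gap between this effective temperature and the quoted 15 degree surface average.

```python
# Radiative-balance check of the figures quoted above, using commonly
# cited values (assumed here, not taken from the chapter):
#   solar constant S ~ 1368 W/m^2, planetary albedo ~ 0.3,
#   Stefan-Boltzmann constant sigma = 5.67e-8 W/m^2/K^4.
S = 1368.0        # incoming solar radiation at the top of the atmosphere
albedo = 0.30     # fraction of sunlight reflected straight back to space
sigma = 5.67e-8   # Stefan-Boltzmann constant

# Absorbed sunlight, averaged over the whole (spherical) earth surface.
absorbed = S * (1.0 - albedo) / 4.0

# Temperature at which the earth would radiate that much energy back
# to space if there were no greenhouse "blanket".
effective_T = (absorbed / sigma) ** 0.25          # in kelvin
effective_C = effective_T - 273.15                # about -18 C

observed_surface_C = 15.0                         # quoted average
greenhouse_warming = observed_surface_C - effective_C

print(f"Effective temperature without greenhouse gases: {effective_C:.0f} C")
print(f"Implied greenhouse warming:                     {greenhouse_warming:.0f} C")
# Prints roughly -18 C and 33 C, matching the figures in the text.
```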


INCREASED ATMOSPHERIC CONCENTRATIONS OF GREENHOUSE GASES

Chart 10.1 shows the increases in concentrations of the major GHGs since 1750.  Since pre-industrial times, atmospheric CO2 has increased by 26% to a level comparable with that of 150,000 years ago;  atmospheric methane and CFCs have increased markedly from negligible levels.  Much of the increase in carbon dioxide concentrations can be attributed to the burning of coal and other fossil fuels.  At one time the carbon in fossil fuels was itself a part of biological systems, and would have passed from the atmosphere to the biosphere and back again through the processes of photosynthesis and respiration.  Fuel combustion completes another link in this vast carbon cycle -- reconstituting carbon as atmospheric carbon dioxide and making it available to plants once again.  Deforestation also affects the cycle by reducing another "sink" in which carbon is stored and decreasing the area of forest available for absorbing -- or more correctly "fixing" -- atmospheric carbon.  Consequently, reductions in forest area have also contributed to higher carbon dioxide concentrations in the atmosphere.

Chart 10.1:  Atmospheric concentrations of the major greenhouse gases since 1750

Concentrations of carbon dioxide and methane, after remaining relatively constant up to the 18th century, have risen sharply since then due to man's activities.  Concentrations of nitrous oxide have increased since the mid-18th century, especially in the last few decades.  CFCs were not present in the atmosphere before the 1930s.

Source:  I.P.C.C. (1990)


Whereas carbon dioxide is a naturally occurring gas which is constantly reconstituted in the biosphere, atmospheric CFCs are synthesised compounds directly attributable to man's activities.  For the most part, these synthesised compounds remain resident in the atmosphere much longer than naturally occurring gases.

The causes of increased concentrations of methane and nitrous oxide are less certain than for CO2 and CFCs.  Increases in concentration of methane could be related to increased livestock populations, and emissions from garbage tips and rice paddies.  Nitrous oxide increases are also probably related, in the main, to agricultural activity.

A number of empirical sources provide consistent and well documented evidence of increases in atmospheric gas concentrations.  Carbon dioxide concentrations measured in air bubbles trapped in glacial and Antarctic ice show the concentrations were stable at around 280 parts per million (ppm) for thousands of years prior to the industrial revolution.  During the eighteenth century, concentrations began to increase and are now around 350 ppm (Landsberg, 1989).  Similar measurements of methane concentrations in ice air bubbles have shown an increase, over the last 300 years, almost exactly corresponding to growth in human population (Pearce, 1989).

The first accurate atmospheric measurements of carbon dioxide were taken at Mauna Loa (Hawaii) and the South Pole in 1957-58.  As these locations are far away from industrial pollution sources, they provide a good measure of the "mixed" state of the air (Gribbin, 1988).  Similar measurements have also been made in Australia since the late 1970s for carbon dioxide, methane, chlorofluorocarbons and nitrous oxide.  Consistent increases in CO2 have been measured over south eastern Australia.  Even more pronounced increases in methane, CFCs and nitrous oxide have been measured at the Cape Grim Baseline Observatory in north-west Tasmania (Landsberg, 1989).


THE EVIDENCE FOR INCREASES IN TEMPERATURE

Evidence of temperature changes over the past 140 years does not confirm the relationship predicted between GHG concentrations and global temperatures.  Chart 10.2 reproduces the most commonly cited historical time series data for temperature change.  While this data shows an upward trend since the 1860s, it is an inadequate reflection of the predicted relationship and the increase is far less rapid than would be expected.  Most of the increase in temperature occurred between 1900 and 1940, when GHG concentrations were increasing more slowly than at present.  Moreover, between 1940 and the early 1970s average temperatures fell.  Comparing the data from Charts 10.1 and 10.2, it can be seen that gas concentrations increased sharply during that period.  The lack of a direct correspondence between temperature and gas concentrations has led scientists to suspect that other factors have generated the changes in average global temperatures (Mason, 1989).

Chart 10.2:  Global mean combined land-air and sea-surface
temperatures, 1861-1989, relative to the average for 1951-80
Source:  I.P.C.C. (1990)


These doubts are strengthened by the sparsity and non-random nature of the sample data locations.  The whole of the Atlantic ocean is represented by four island stations, and the majority of measurements have been taken in cities -- which are heat islands.  In addition, the series begins in 1880, the year when records were standardised.  However, 1880 is also thought to have been a year in an unusually cold period.  Lindzen argues that if years prior to 1880 were included the overall change would be "0.1 degree plus or minus 0.3 degrees Celsius" (Beckman, 1989, citing a lecture by Richard Lindzen, Professor of Meteorology at MIT).  The variability of the data series is also greater than any long term trend, which raises further questions as to the statistical validity of assuming such a trend exists.

All these factors explain the conflicting reports about global warming.  Basing their statements on time series similar to the one shown in Chart 10.2, some scientists have labelled the 1980s "the hottest decade since records began".  These claims were widely reported in the newspapers in early 1990, and credence was lent to them by the United Nations Environment Program's Intergovernmental Panel on Climate Change (IPCC, 1990).  However, in February 1990 it was claimed that, after correcting for heat island effects, US scientists had discovered "no statistically significant evidence of an overall increase in annual temperature or change in rainfall in continental USA from 1895 to 1987" (Gosling, 1990, citing Newell, 1989).  Similarly, Karl, of the Massachusetts Institute of Technology, was reported to have found that there has been "little or no change in world sea surface temperatures since 1850" (Gosling, 1990, citing Karl, 1988).

The picture will remain confusing until it is resolved by better data, especially that based on measurements from satellites.  Precise measurements of atmospheric temperature have only been possible since the launching of a new generation of satellites in 1979.  Microwave radiometers attached to these satellites can measure world-wide temperatures within a day, and cover remote land areas and oceans as well as more populated areas.

Spencer and Christy (1990) have reported on the satellites programme conducted by the National Oceanic and Atmospheric Administration in the US.  They show how the radiometers used enabled a precision of 0.01 degrees Celsius.  Such precision is well within the plus or minus 0.1 degrees Celsius necessary for accurate climate monitoring.  For the decade to 1988, during which the satellite programme had been operating, they found no obvious trend in temperatures.  Over a short period of ten years, cyclical climatic phenomena, such as the 11 year cycle of solar activity, can hide longer term trends.  A longer series of satellite data (at a minimum another ten years) is needed before firm conclusions can be drawn about likely greenhouse trends.

Forecast increases in global temperatures due to increased GHG concentrations rely on computer models which simulate physical climate relationships.  The absence of consistent long-term empirical data to support model forecasts seriously undermines their validity.  Nevertheless, such forecasts form the basis of most calls for international action to counteract the greenhouse effect.  For example, the United Nations Environment Program's Intergovernmental Panel on Climate Change (IPCC, 1990) estimates that emissions of greenhouse gases will lead to a global warming of 0.3 degrees Celsius per decade up to the year 2100 under its "business as usual" scenario.  These estimates are based on complex General Circulation Models (GCMs) and simpler box diffusion models.

GCMs are three dimensional mathematical models which simulate physical processes in the atmosphere or the oceans.  Ocean and atmospheric models can be coupled together to simulate climatic processes more fully, but require highly sophisticated computer technology to be run effectively.  Box diffusion models can be used as a simpler approximation.  However, the CSIRO (Angus McEwan, Chief, Division of Oceanography, pers. comm., 1990) has criticised the theoretical constructs of the box diffusion models used by the IPCC and reiterated the importance of unknown variables in the IPCC report.  These unknowns include:  cloud behaviour;  the behaviour of the oceans;  the sources of the emissions and importance of sinks;  and effects on polar ice sheets.

The behaviour of these unknowns has major implications for all climate modelling.  Of great significance is the effect of differing assumptions about increased cloud cover through higher levels of water vapour in the atmosphere.  Water vapour is by far the most important greenhouse gas.  Increases in global temperatures would lead to increased evapotranspiration, and much of the increase in global temperatures forecast by climate models results from the positive feedback of higher levels of water vapour in the atmosphere.  However, once cloud formation is allowed for, the effect is not nearly as large.  It could, in fact, be negative, depending on whether increases in water vapour result in high or low level cloud formation.  High level clouds bring a warmer atmosphere because they reflect less sunlight and emit less infra-red radiation out to space.  Low level clouds have the opposite effect, cooling the atmosphere.  The UK Meteorological Office found that introducing cloud formation into its model reduced estimates of the rise in global temperatures from 5.2 degrees to 1.9 degrees Celsius.  Consequently, its model estimates moved from being the highest estimates of temperature increase made by large scale models to the lowest (Mason, 1989).

Scientists are by no means agreed on the size of global warming, or even its existence.  Unfortunately, bureaucratic processes have translated strongly qualified estimates into what purport to be facts.  In the words of the IPCC report, "Although scientists are reluctant to give a single best estimate ... it is necessary for the presentation of climate predictions for a choice of climate predictions to be made" (IPCC, 1990: 15).

The 0.3 degree Celsius increase per decade chosen by the IPCC is not a scientific estimate but a consensus opinion decided by committee.  Ellsaesser (1989) discusses how decisions made by committee transform preliminary guesstimates into "a consensus supported by a constituency with a vested interest in confirming and perpetuating the problem rather than solving it" (Ellsaesser, 1989: 67).  Scientific observation and modelling have not proved the existence of the enhanced greenhouse effect.  Perhaps over-cynically, some have suggested that there is a highly articulated demand for global warming by committees and researchers but no evidence of an actual physical supply.


THE EFFECTS OF INCREASED GREENHOUSE GAS EMISSIONS

It is important to emphasise the weak foundations upon which the global warming theory is postulated.  Such caveats are required to avoid giving tacit support to an unproven theory.  However, given the public concerns greenhouse matters have engendered, it is necessary not only to assess the theory, but also to begin asking what the likely effects would be (should it be true) and what policies (if any) should be pursued.


TEMPERATURE

The immediate effect on personal comfort of a three degree increase in global temperatures over the next 100 years would not be significant.  As Ellsaesser (1989) has pointed out, few farmers, businessmen, or investors would alter their behaviour after being told that the mean temperature may possibly rise by that amount over the next 50 to 100 years.  Daily temperature fluctuations are well in excess of three degrees Celsius, as is the level of error in daily weather forecasts.  In a life-time, many individuals will migrate between areas with at least that difference in average temperatures.  Admittedly, the change is an average, hiding potentially larger regional and daily variations.  However, a brief consideration of the likely pattern of variation helps answer these concerns.

Most of the warming would take place during the night when greenhouse gases prevent infra-red radiation from escaping.  Increasing night temperatures will increase the number of frost-free days and give longer growing seasons.  In the middle latitudes, a one degree rise in average summer temperatures increases frost-free growing seasons by approximately 10 days (Kellogg, 1989).

Temperatures at the Equator would vary little, but far larger increases would be experienced nearer the poles.  Colder climates could become warmer, making them more comfortable to live in;  but the hotter equatorial climates would be no more uncomfortable than they are now.


SEA LEVEL CHANGES

Accompanying their forecast temperature increases, the IPCC estimated increases in the sea level of six centimetres per decade.  Many uncertainties surround these forecasts.  Beckerman (1990) points out that estimates of possible sea level rises have fallen dramatically over the 1980s.  In 1980, figures as high as eight metres by the end of the next century were being cited.  By 1989 one metre was the accepted wisdom.  The IPCC estimate is lower still;  and, the rise in actual ocean levels (less than two centimetres per decade) has only been one third of the IPCC model predictions.  Beckerman also calculates that the cost of building sea walls to counteract a one metre rise is trivial when placed in a hundred year time horizon.

Sea level changes would mostly result from water in the oceans expanding with heating.  Earlier fears of a collapse of the polar ice caps are now thought to be unfounded (Mason, 1989).  Paradoxically, increases in precipitation could lead to increased ice and snow cover.


PRECIPITATION

Should global warming be occurring, there would also be increased global precipitation.  In Kellogg's (1989) scenario of changes in precipitation, countries with drier climates would be the main beneficiaries.  These would include Australia, India and North Africa.  Central parts of North America may experience lower precipitation levels.  In addition, Kellogg's scenario suggests that polar regions will be drier and increases in polar ice and snow may not occur.  All aspects of the scenario are, however, subject to the limitations in the modelling procedure which are discussed by Ellsaesser (1989).


CO2 PRODUCTIVITY

Carbon dioxide and water are the main material inputs to photosynthetic processes.  Singer (1989) has likened CO2 to "plant food".  The IPCC acknowledges this effect, stating that "enhanced levels of carbon dioxide may increase productivity and efficiency of water use of vegetation" (IPCC, 1990: 2).

Beckerman (1990) discusses how US Water Conservation Laboratory experiments have shown large increases in plant growth with increased CO2 and the same amount of water.  Alternatively, the same amount of plant growth could be achieved with less water.  He also cites studies by the Environmental Protection Agency (EPA) in the US, which estimate how the positive effects of increased carbon dioxide concentration will compensate for the negative effects of a drier North American interior.  The net effect on US agriculture could range between a gain and a loss of $10 billion.  The worst case amounts to a loss of only 0.2% of total national product.

On a global scale, Kellogg (1989) takes the view that plant growth will increase 5% on average by the year 2000.  He claims the estimated gain so far this century is 10% or more.


GREENHOUSE POLICIES

SHOULD ANY POLICIES BE INTRODUCED AT THIS STAGE?

Aside from the continuing uncertainty about the existence of the greenhouse effect, a major issue is whether it would be appropriate to attempt its arrest.  It is, of course, unfashionable to view man's effects on the environment as being anything other than harmful.  Yet the preponderance of the changes we have discussed are likely to be beneficial -- at least to ourselves.  And whilst it is true that net benefit has rarely occurred where the changes have been unintentional, such outcomes would not be totally unknown.  To what degree, for example, has the world's fish stock increased as a result of the expansion of ocean nutrients which man's presence has brought about?  The much beloved countryside in Western Europe was largely created by man cutting down forests for agriculture and, until the eighteenth century, to manufacture charcoal.

If projections of a greenhouse effect are well founded, the greater precipitation and higher levels of carbon dioxide would, in net terms, allow higher levels of food production.  Almost certainly, these benefits would be sufficient to offset the deleterious effects of climate change in some areas and the inundation of certain islands.  Combating global warming would require quite substantial shifts in the arrangement of production and, on present estimates, significant reductions in living standards.  Dismissing the beneficial implications, one body of opinion calls for urgent pre-emptive action -- even in the absence of conclusive evidence -- since awaiting such evidence may entail higher or even prohibitive costs in the future.  Whilst this line of reasoning is plausible, the long list of previous forebodings which have proven unfounded tells us that, had we taken the advice of those offering it, the world today would be much the poorer for it.  Some caution against precipitate and costly action is therefore warranted.

The alternative to the pre-emptive, anticipatory approach is the adaptive, reactive approach.  Here, countervailing strategies would only be implemented when, and if, the greenhouse effect becomes a problem.  They could involve such action as building sea walls and providing support for relocating population, agriculture or industry.  The old adage is that prevention is better than cure.  But, while the risk of disease is uncertain, the diagnosis hypothetical, and the cure potentially inexpensive, adaptive cures could be preferable.  To extend the analogy, the occasional band-aid and runny nose may be preferable to perpetual hospitalisation of the healthy.

An approach which incorporates elements of both pre-emptive and adaptive policies is the "no-regrets" policy suggested by White (1990), Schneider (1989), the Australian Treasury (The Treasury 1989) and others.  In the absence of conclusive evidence for a greenhouse effect, a "no regrets" policy would first implement environmental and economic policies which are sensible in their own right.  When more conclusive information on the effects of GHGs becomes available the appropriate action, be it pre-emptive or adaptive, could be taken.

The Australian Treasury (1989) has identified four areas in the Australian economy where sound economic policies could have a beneficial impact on GHGs.  The first relates to efficient management of forest resources, which are net absorbers of CO2.  However, as we noted in Chapter 5, the best forest policy option for absorbing carbon dioxide is not forest preservation.  Instead, trees should be harvested on a regular cycle, encouraging higher growth rates and younger forests which have higher rates of carbon fixation.  Such a management strategy would reduce atmospheric carbon, as long as the wood harvested from forests is used as building materials rather than left to rot.  Of course, in the end, wood products will decay, but holding carbon in wood products for long periods of time reduces the rate of net emissions.

The second "no regrets" policy area is correct market based pricing and investment criteria for electricity utilities.  Fossil fuel based electricity production contributes some 44% of anthropogenic CO2 emissions in Australia (Marks, et al. 1989).  The Treasury (1989) suggests that subsidised electricity generation has resulted in massive investments in large power stations through the acceptance of lower than market rates of return.  Similarly, the Industries Commission (IC, 1989) has found that electricity tariff structures do not encourage efficient consumption.  It is possible that corporatisation or privatisation of state electricity commissions could alter investments in power generation capacity and consumption patterns in favour of reduced CO2 emissions.  However, it could also result in lower electricity costs and prices and increased consumption.  Conclusions about the net effect of market oriented energy policies on GHGs should be treated with caution.

The third "no regrets" policy area could be in transport regulation and pricing.  The transport sector is the second largest contributor to anthropogenic CO2 emissions, being responsible for 21% of total emissions (Marks et al. 1989).  The Treasury suggests that removing inefficiencies in coastal shipping and the waterfront would encourage a move from land to water based transport of freight.  Removing inefficiencies in railways and a better system for charging heavy vehicle road usage may effect a shift from road to diesel-based rail transport, with consequently lower CO2 emissions.  However, this would be partly offset by the suggested removal of rail subsidies and removal of restrictions on the use of road transport.

Finally, lowering effective rates of assistance in the agricultural sector could also impact on GHG emissions.  The overall rate of assistance to agriculture is 9%, which is lower than in many of Australia's trading partners.  But, based on IC estimates, the Treasury identifies very high rates of assistance in two sub-sectors which generate high levels of methane emissions.  Dairying has a 55% effective rate of assistance, and rice a 50% effective rate.  From a global perspective, if other countries also reduced rates of protection, Australia could end up producing more of these products.  However, world prices would rise and, with higher prices, demand would be lower.  But, again, it is difficult to draw firm conclusions that such measures would have a positive impact in reducing GHG emissions because the displaced demand might even re-emerge in areas of consumption and production which result in equivalent effects.


EXPLORATION OF POSSIBLE POLICY APPROACHES INVOLVING COSTS

Though the impact of "no regrets" policies on GHGs is not easily determined, there are other compelling reasons for pursuing them.  Deciding on pre-emptive or adaptive policies over and above this response is far more difficult.  However, if we assume that global warming will occur, two guidelines for directing policy decisions can be identified.  First, the costs and benefits of alternative policies need to be carefully evaluated.  Ideally, evaluations should be made for different levels of greenhouse gas reductions rather than being constrained to a given target.  Secondly, where possible, market mechanisms should be used to enable any policy to be implemented at minimum cost.  Market mechanisms allow flexibility in technology choice and create incentives to choose the most efficient means of achieving reductions in GHGs, if this is desired.


EVALUATING COSTS AGAINST A RANGE OF NEGATIVE GREENHOUSE BENEFITS

The costs of pre-emptive greenhouse policies are not small.  Marks et al. (1989) estimate that for Australia to reduce carbon dioxide emissions by 20% by 2005, in line with the Toronto target, would mean real increases in electricity tariffs of at least 41% and in fuel prices of somewhere between 84% and 169%.  Real wage levels, which in the base case are estimated to grow by 0.29% per annum (the low rate reflecting the need to stabilise overseas debt), would decline by 0.18% per annum.  Thus, an aggregate increase in real wages over the seventeen years of 5% would become, instead, a decrease of 3%.  Many of the premises Marks et al. use are designed to present conservative cost figures.  For example, technological improvements in fuel efficiency are assumed.  Nonetheless, the total cost, expressed as the present value of GDP foregone, would be $31.7 billion in 1990 dollars at a five per cent discount rate, or 8.4% of GDP.
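The real wage arithmetic quoted from Marks et al. can be reproduced by simple compounding over the seventeen years from 1988 to 2005.  The Python lines below are just that check, using the per-annum rates quoted above;  the published figures may differ slightly through rounding.

```python
# Compound the quoted per-annum real wage changes over 17 years (1988-2005)
# to check the aggregate figures cited from Marks et al. (1989).
years = 17

base_case_growth = 0.0029     # +0.29% per annum in the base case
policy_case_growth = -0.0018  # -0.18% per annum under the 20% CO2 target

base_case_total = (1 + base_case_growth) ** years - 1
policy_case_total = (1 + policy_case_growth) ** years - 1

print(f"Base case:   {base_case_total:+.1%} over {years} years")    # about +5%
print(f"Policy case: {policy_case_total:+.1%} over {years} years")  # about -3%
```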

Some studies have contested these findings and claimed positive economic gains from implementing the 20% target (for example, see the report by Deni Greene Consulting Services, 1990).  Such gains are purported to come from using less costly forms of energy.  Before their claims can be credible, those attempting to justify GHG targets as a "costless" -- or, indeed, cost saving -- measure must demonstrate compelling reasons why individuals have not already chosen such technologies.  Government manipulation of people's behaviour forces them to make choices they would not otherwise make, and usually leads to losses in individual welfare.  Apart from removing existing government distortions to market signals, it is doubtful that stringent greenhouse targets can be achieved without significant economic costs.  Many overseas studies confirm this view (see Nordhaus, 1990, for a review).

From a New Zealand perspective, large increases in electricity tariffs would not be necessary because of the use of hydro-electric power rather than coal for electricity production.  However, fuel prices would need to rise significantly if carbon emission reductions are to be achieved.

The measures by Marks et al. focused on the cost of achieving a given level of CO2 reduction.  A more comprehensive approach would be to calculate the costs and benefits of achieving different levels of reduction.  Small reductions in GHGs can quite likely be achieved at minimal cost.  Additional reductions will impose an increasing burden on society.  If a target is to be imposed, it should be set at the point where the cost of an extra unit of abatement just equals the benefit that unit yields.
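The "extra unit" rule can be illustrated with a toy numerical schedule: rising marginal abatement costs set against a flat, assumed marginal benefit per unit abated.  The Python sketch below simply walks up the schedule and stops where the next unit costs more than it is worth;  every number in it is invented for illustration.

```python
# Toy illustration of setting abatement where marginal cost meets
# marginal benefit.  All figures are invented for illustration only.

def marginal_cost(unit):
    """Cost of the unit-th unit of abatement ($/unit).
    Cheap at first, increasingly expensive thereafter."""
    return 2 * unit  # 1st unit costs $2, 2nd $4, 3rd $6, ...

MARGINAL_BENEFIT = 12  # assumed damage avoided per unit abated ($/unit)

abatement = 0
while marginal_cost(abatement + 1) <= MARGINAL_BENEFIT:
    abatement += 1

print(f"Efficient abatement level: {abatement} units")
# Stops at 6 units: the 6th unit costs $12 (just worth it), while the
# 7th would cost $14 against a benefit of only $12.  A target pushed
# beyond this point destroys more income than the damage it avoids.
```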

Nordhaus (1990) has attempted to place current estimates of the costs and benefits of different levels of GHG reduction within a standard conceptual framework.  This allows each action's response to be calibrated so that a menu of different measures can be selected from, and costs and benefits equilibrated.  He estimates that the benefits from GHG reductions only justify a 10% reduction in current GHG emissions, measured in carbon dioxide equivalents.  The 10% reduction allows for all economic costs identified in the study and uses a discount rate 1% above the rate of growth in output.  Phasing out CFCs together with a very small reduction in CO2 emissions would be sufficient to achieve the reduction.  Even if a higher level of damage is assumed (Nordhaus's "medium" damage function), the efficient level of reductions in current CO2 emissions would only be 6%.  This is much lower than the Toronto target of 20% below 1988 levels.  The implication is that the costs of major reductions in CO2 emissions far outweigh any hypothetical benefits from reducing global warming.

The costs of an adaptive approach are potentially far smaller than those of pre-emptive policy targets.  In addition, higher economic growth from postponing precipitate actions provides an expanded income base out of which adaptive costs can be paid.  Although we have seen no estimates of adaptive costs for Australia and New Zealand, some overseas calculations have been made.  As noted earlier, total losses in the US agricultural sector from doing nothing at all about GHGs are only 0.2% of national product, under a worst case scenario (Beckerman, 1990).  In Australia, where rainfall is liable to increase, there is more likely to be a net benefit to the agricultural sector.  Based on EPA data, Beckerman estimates that the full once-only capital cost of building sea walls to protect US cities from a one metre rise in sea levels by 2090 would be only 0.27% of one year's GDP.  Beckerman also cites Cline (1989) as estimating that the total costs of building sea walls world wide, plus the value of land lost in Bangladesh, amount to only 1.06% of estimated global GDP in 2090.  As he points out, over a 100 year period, such costs are clearly affordable.  Problems of compensating losers do, however, arise but, assuming pre-emptive abatement costs are much higher, it would benefit all countries concerned to negotiate a compensation agreement.


USING MARKET MECHANISMS TO MINIMISE COSTS

Despite the evidence that setting greenhouse targets is at best premature, many countries including Australia and New Zealand have begun to move in that direction.  If such a pre-emptive policy is to be adopted, market mechanisms must be employed to minimise its undesirable economic impact.  The ability of a market for greenhouse emissions to develop automatically seems improbable, perhaps as unlikely as in the classic case of defence.  Accordingly, governments must determine acceptable levels of outputs for these by-products of production.

Ideal approaches for greenhouse gases have been previously alluded to.  They would be to establish a base of globally permitted outputs and then determine the levels of allowable emissions either:

  • by reference to existing usages, say, in each country;  or
  • by reference to population levels.

The former has the advantage of minimising the impact of collective decision making on the income generation process.  The latter, however, starts from the basis that all people have an equal right to live.

The application of market processes would require that emission trading be permitted within nations, between nations (or preferably between firms and individuals of different nations) and between gases -- with quantities weighted in accordance with their greenhouse enhancing characteristics.  If buyers are free to search out supplies, and their owners free to dispose of them at the best price, we have the ideal circumstances for efficient trade to take place.

A market in greenhouse gases, tapered towards reducing permissible levels of outputs, would limit the disbenefits they generate, but at a lower cost.  Such a market would remove the necessity for arbitrary bans on certain sources of GHGs, and the price mechanism would squeeze out those offering more damage per dollar.

The "command and control" solution would assign permits to each supplier and each user's needs would be carefully vetted.  The market approach simply allocates the total quantity of acceptable incremental carbon and leaves the price system to determine which users and uses will be accommodated.  Incentives are generated both for users to economise on the substances and for suppliers to provide those which generate least relative harm.

Unfortunately, it is unlikely that such a pure system will be allowed to operate.  In the case of CFCs, "political realities" have forced governments -- as diverse in ideology as those of the UK and the States of Victoria and Tasmania -- to insist upon certain overarching regulations.  In the main these regulations seek accelerated phase-out of uses like aerosols, for which substitutes are most readily available.  Though such requirements accord broadly with economic substitutability, political requirements will be much less flexible than those of the marketplace;  exceptions within the targeted categories will be necessary -- for example, in the case of aerosols and for certain specialised medical usages.  Avoidable administrative and lobbying costs will be incurred.

The experience with Australia's Ozone Protection Act does, however, provide interesting corroboration of the power of market systems to allocate goods in a cost effective manner.  The Act requires a 50% reduction in usage, with tradeable permits having been given to those supplying CFCs and halons in proportion to their 1986 production/import levels.  An accelerated phase-out of aerosol usage (about 30% of 1986 demand) was the main mandatory feature, and a tax of $1.06 per kilogram was imposed to cover administration costs.  Aerosol usage in 1990 was down to 500 tonnes, or 6%.  The price of the gases rose from about $2.50 per kilogram to $6 by the third quarter of 1990, and some 40% of the original quota allocation has been traded.  Although there are obligations on firms to recycle, these are not policed.  Firms have, instead, taken active steps to conserve and re-use the chemicals internally;  and devices are being marketed to reduce waste.  Technological adaptation has been encouraged by the price rise;  and Dupont has, in fact, developed a new product in Australia to replace Freon 12 -- the normal gas used in automobile air conditioning.


PROSPECTS FOR INTER-GOVERNMENT AGREEMENTS

A system of tradeable rights to GHGs, similar to that of CFCs, could be adopted to combat global warming should a scientific consensus emerge that such action is necessary.

This is not to deny that it would be considerably more difficult to arrange equitable transfers to prevent global warming.  Even if the overall effect is proven -- and proven to be negative -- countries like Australia and New Zealand, along with Canada and most of the Soviet Union, could gain on balance.  A crucial question is then:  what is the likelihood of achieving global agreement on reducing GHG emissions, should it be desirable?

Devising acceptable payments for the global externalities generated presents some of the most difficult compensation arrangements imaginable.  Ownership of the upper atmosphere at the present stage is impossible to envisage.  Drawing upon Demsetz (1976) and Hardin (1968), the likely conclusion is that agreement to internalise global property externalities will be impossible;  whilst proposals for countries to act in concert will fail as a result of free-rider problems.  Each country would do better for itself if it refrained from joining (or joined and subsequently reneged on) any agreements, providing others incurred the costs of keeping them.  Sanctions on agreement breakers would appear unlikely to work.  Furthermore, developing countries would see much less gain in cooperating if the system took as given existing usage levels of, say, ozone depleting substances and carbon emissions.  Their objections would be all the more trenchant because their own usage of these substances would not have contributed to any of the problems generated.

Barrett (1990) notes that some international agreements do seem to be operating satisfactorily without any real sanctions.  He regards the 1946 Convention for the Regulation of Whaling as having been woefully ineffective;  even so, whaling activities have been sharply curtailed and previously endangered genotypes of whale are increasing in number.  He considers the more successful agreements to have been:  the 1987 Montreal Protocol on Substances that Deplete the Ozone Layer;  the World Heritage Convention;  and the European "Thirty Per Cent Club" to reduce trans-border sulphur emissions.  To these could be added limited but significant areas where the world community has adopted sanctions -- for example, financial and other sanctions against South Africa, banning arms supplies to certain "renegade" states, and preventing trade in narcotic substances and (probably misguidedly) in certain endangered species.

Barrett attempts to model how far countries would find it in their interest to undertake abatement activities.  In Chart 10.3, each country is assumed to be identical and able to choose a level of abatement whereby its marginal costs equal its marginal benefits.  If Q1 is the optimal level of abatement for each country (chosen on the basis that all others would co-operate in a similar fashion), these choices would sum to Q2, the optimal global level of abatement.

Chart 10.3:  Potential gains to co-operation


As Barrett notes, these gains from cooperation would be frustrated by free-riding -- each country's share of the benefit accrues whether or not it incurs the necessary costs itself.  Indeed, the outcome could be inferior to that preceding any failed agreement.  The prospects of achieving cooperation are stronger where each country's costs of undertaking abatement are low and the benefits from such action are high.
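
The incentive structure can be made concrete with a stylised numerical sketch (the parameter values below are hypothetical and are not taken from Barrett's paper).  With identical countries, each equating its own marginal abatement cost to its own marginal benefit abates far less than is globally optimal;  and a country that reneges while the others comply does better still.

    # Stylised version of a symmetric-country abatement game (illustrative only).
    N = 10          # identical countries (hypothetical)
    b = 1.0         # each country's marginal benefit per unit of global abatement
    # Marginal abatement cost for a country abating q units: MC(q) = q,
    # so its total abatement cost is q**2 / 2.

    def country_payoff(own_q, others_q):
        """Benefit from global abatement minus own abatement cost."""
        global_abatement = own_q + others_q
        return b * global_abatement - own_q ** 2 / 2

    q_nash = b          # each country equates own MC with its own MB only
    q_coop = N * b      # full cooperation equates MC with the sum of all countries' benefits

    payoff_nash = country_payoff(q_nash, (N - 1) * q_nash)
    payoff_coop = country_payoff(q_coop, (N - 1) * q_coop)
    payoff_freeride = country_payoff(0.0, (N - 1) * q_coop)  # renege while others comply

    print(payoff_nash, payoff_coop, payoff_freeride)  # 9.5, 50.0, 90.0

Cooperation raises every country's payoff well above the non-cooperative outcome, yet free-riding on everyone else's compliance pays best of all -- which is precisely why such agreements are fragile without sanctions.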

Thus, drawing from US estimates, the global costs of winding down usage of CFCs are of the order of $200 billion, whilst the benefits from preserving the ozone layer are arguably many times this.  The costs of dispensing with ozone depleting substances, or substituting less destructive forms, are relatively low.  Given all these factors, workable levels of global cooperation appear likely to be achieved;  but this is much more difficult to envisage with carbon emissions.  Aside from the greater uncertainty concerning the deleterious effects of burning fossil fuels (of which coal is regarded as the most injurious), it is considerably more expensive to take evasive action by shifting out of coal and lowering production generally.  Agreement is made all the more difficult by the less certain and highly variable inter-country consequences of the greenhouse effect.

For the generality of unownable environmental goods, scepticism is warranted about the prospects for policies which are desirable in an aggregate sense but inimical to each country's interests when examined in isolation.  If a country like Australia can make only a 1% contribution to defraying the costs of a common problem, but is required to incur significant costs, then lasting agreement appears unlikely.  With New Zealand's contribution to GHG emissions being even smaller than Australia's by global standards, the costs of action create even stronger incentives to renege on emissions reduction.  Unilateral reduction by such a small nation would be absurd.

Sticking to an agreement, where sanctions may not be imposed and where any benefits can be reaped without adherence, appears to make little sense if nations' overwhelming motivation is self-interest.  And yet, powerful though the pursuit of individual self-interest is, it is not people's exclusive motivating force.  It is possible to observe a considerable number of actions, ranging from voting in elections, through voting for political parties against the voter's self-interest, to voluntary and anonymous charitable donations -- all of which are difficult to place within the economist's rubric.  In addition to such ostensibly altruistic actions (perhaps motivated by self-satisfaction) are those where esteem is obtained from conduct seen by others as making worthy sacrifices.  On an individual level, we have already seen actions with regard to CFCs where people have voluntarily chosen less efficient aerosols out of concern for the environment.  Commercial firms have also voluntarily undertaken expenditures of an environment-preserving nature.  These actions may in part be more conventionally motivated:  Mitsubishi was clearly alarmed by proposed boycotts of its product range, advocated by green activists because of the involvement of one of its subsidiaries in Indonesian logging.

Most calls by interested parties, which appeal to some greater good and appear to be made notwithstanding vested interest, contain at least an element of hypocrisy.  This is clearly the case in such matters as tariffs and occupational licensing.  It is equally the case where neighbours seek suppression of emissions at a factory which was in place before they arrived.  In order to gain support, advocates appeal to a wider audience;  such support will be forthcoming where the target group loses little from the intervention and may, indeed, see it as creating a precedent for analogous action which it may itself wish to obtain in the future.

Whatever the motivations, willing sacrifices are made by individuals.  It is equally clear that nations too will adopt policies which are seemingly irrational, given known free-rider opportunities.  In some cases self-interest may play a part.  It could be argued that World Heritage Listing adds to a country's tourist attractions -- though it is doubtful that it would do so to the extent of Australia's substantial listings.  A nation might, like a private individual, forgo obvious income earning capacities in one direction for fear of retaliation against the generality of its products.  But even this requires an explanation of why the previously willing buyers of those other products would forgo opportunities for satisfying their needs at the least cost.  South Africa has suffered from lower prices for most of its exports because potential customer nations were willing to deny themselves the most advantageous supply;  and financial sanctions on South Africa, while causing that country to incur additional costs, have only been possible because international savers and bankers have willingly accepted somewhat lower profits than they might otherwise have obtained.

In many cases, there can be no other interpretation of a nation's actions in denying itself income than that it is motivated by altruism, in spite of this being combined with the kudos of approval from other nations.  Even if apparent altruism on the part of a particular party is mixed with a narrower form of self interest, it is nonetheless present.  Clearly, narrow self interest (which is the engine driving most economists' paradigms) is not as robust as the profession sometimes imagines it to be.

The mixture of incentives other than narrow self-interest which motivates people in modern societies may well be less potent than that found in the tradition-directed societies which preceded the modern era.  These other considerations may also be less universal within the present company of nations than they were in the close-knit societies that preceded them.  They are, however, clearly potent and sufficiently well observed in certain applications;  and may become even more so as the world is no longer divided into Socialist and Capitalist camps.

The foregoing is not intended to deny the potential for strategic behaviour by countries attempting to free-ride on another nation's altruism.  It simply indicates that where global problems are clearly manifest, agreement can be reached.  The agreement by signatories to the Montreal Protocol to reduce ozone depleting emissions is a clear example.  It remains to be seen whether or not this may set a pattern for GHG reduction, should such measures be deemed warranted.


SOME SUGGESTIONS ON THE NATURE OF A GLOBAL TREATY

Quesada (1989) advocates a global treaty which assigns a (negative) economic value to carbon dioxide and other greenhouse gases and allows a mix of actions to counter the threat.  Those actions could comprise regulation to reduce outputs, taxation of emissions and subsidies for global sinks to absorb the carbon build-up.  In this he recognises the value of assigning prices and allowing trading, with the prices reflecting the discounted costs of flooding and other deleterious consequences.

Tolba, the Executive Director of UNEP, also sees an opportunity to make use of market mechanisms by having developed nations compensate poorer countries for not expanding their use of ozone depleting substances (Tolba, 1990).  He estimates the worth of such measures (which he says should be additional to on-going aid disbursements) at $2 billion to $5 billion over the next 10 years.  To raise these sums, he advocates a form of user fee to be paid into, and disbursed by, an international agency.  In allocating the funds, he envisages that particular priority would go to preservation of natural forest and other environmentally valued goods.

Somewhat incongruously, Tolba adds that the fees "cannot, of course, be used as a licence to pollute".  That aside, his proposal has merit if it could be administered and if fees could be collected without falling foul of the political manipulation and overstaffing which seems to be a feature of UN bodies.

Where particular countries have especial scope to engage in abatement strategies which benefit all countries, the opportunity exists for side payments to be made.  Tropical forests are the most potent converters of carbon dioxide into oxygen and are largely located within poorer countries.  If they must be preserved for the benefit of mankind as a whole, it is unconscionable for the rich nations (who became so partly because they cultivated or built upon their own virgin lands) simply to demand their preservation.  Perhaps half of the world's rainforest is located in Brazil.  If Brazil were to cease clearing this forest at its present rate, its future income would be reduced.  As all countries consider they would benefit from retaining the Brazilian rainforest, it might be possible to arrange payment.  Voluntary payment in the form of private endowments buying tropical forests has already taken place (see Chapter 5), though it is unlikely that such endowments will be adequately funded to undertake activity of this nature on the scale seen as necessary by many authorities.


CONCLUSION

Typically the greenhouse effect has been painted as a "disaster" scenario.  The often implicit assumption has been made that the risk of environmental damage is large but the economic costs of avoiding it are only moderate.  Current scientific and economic evidence does not support this contention, and there is much uncertainty about the enhanced greenhouse effect.  Because of these questions, rational policy responses are difficult if not impossible to make.

Prematurely opting for pre-emptive policies and setting stringent emissions targets will have large and certain costs for New Zealand and Australia, but will have little impact on global GHG emissions.  Australia accounts for only 1.2% of total world carbon dioxide emissions, New Zealand for significantly less than that.  Reducing GHGs requires a global response.  While the benefits from GHG reductions are uncertain (and potentially small), the costs are large.  Consequently, at this juncture, the achievement of global agreement will be very difficult, in contrast to ozone depleting gases, where costs and benefits are more certain.  In addition, limiting economic growth in a time of economic recession, mounting foreign debt and growing unemployment will limit the ability to research and discover answers to fundamental questions about global climate change.

Within another 10 years satellite data will have begun either to corroborate or to refute the global warming hypothesis.  Within the broad sweep of the hundreds of years required for extensive global climate change, ten years is insignificant.  Waiting that period of time will provide the knowledge and resources necessary for implementing countervailing policies.  Delayed action, while pursuing active research into the problem, is the only sensible response.



REFERENCES

Barrett, S. (1990) "The Problem of Global Environmental Protection", Oxford Review of Economic Policy, 6(1) Spring: 68-79.

Beckerman, W. (1990) Global Warming:  An Economic Perspective;  or Global Warming:  A Sceptical Economic Assessment, Oxford: Balliol College, provisional draft of September.

Beckman, P. (1989) "What Warming", Access to Energy, 17(3).

Cline, W. (1989) Political Economy of the Greenhouse Effect, Washington D.C., Institute for International Economics, preliminary draft of August.

Demsetz, H. (1976) "Toward a Theory of Property Rights", American Economic Review, 57: 347-359.

Deni Greene Consulting Services, (1990) "A Greenhouse Energy Strategy: Sustainable Energy Development for Australia", A Report Prepared for the Department of Arts, Sport, the Environment, Tourism and Territories, Canberra (February).

Ellsaesser, H.W. (1989) Response to Kellogg's Paper in S. Fred Singer (ed.) Global Climate Change:  Human and Natural Influences, New York: ICUS, pp. 67-89.

Gosling, T. (1990) "Are We Getting Warmer?", The Herald, 20th Feb.

Gribbin, J. (1988) "The Greenhouse Effect -- Inside Science No 13", New Scientist, October 22nd.

Hardin, G. (1968) "The Tragedy of the Commons", Science, 162, pp. 1243-1248.

(IPCC) Intergovernmental Panel on Climate Change (1990) Policymakers Summary of the Scientific Assessment of Climate Change Report Prepared for IPCC by Working Group I, Bracknell, United Kingdom: United Nations Environment Programme and the World Meteorological Organisation (June).

Karl, T., Baldwin, R., and Burgin, N. (1988) Time Series of Regional Seasonal Averages of Maximum and Minimum Average Temperatures Across the USA, Ashville, North Carolina: National Oceanographic and Atmospheric Agency (March).

Kellogg, W.W. (1989) "Carbon Dioxide and Climate Changes:  Implications for Mankind's Future", in S. Fred Singer (ed.) Global Climate Change:  Human and Natural Influences, New York: ICUS, pp. 37-65.

Landsberg, J.J. (1989) "The Greenhouse Effect:  Issues and Directions -- An Assessment and Policy Position Statement by CSIRO", Occasional Paper No. 4, Melbourne: Commonwealth Scientific and Industrial Research Organisation.

Marks, E.M., Swan, P.L., McLennan, P., Schodde, R., Dixon, P.B., and Johnson, D.T., (1989) "The Feasibility and Implications for Australia of the Adoption of the Toronto Proposal for Carbon Dioxide Emissions", Report to CRA Limited (September).

Mason, B.J. (1989) "The Greenhouse Effect", Contemporary Physics, 30(6): 417-432.

Newell, R., Hsiang, J. and Zhongxiang, W. (1989) "Where's the Warming", MIT Technology Review, (Nov. and Dec.).

Nordhaus, W.D. (1990) To Slow or Not to Slow:  The Economics of the Greenhouse Effect, New Haven, CT: Yale University.

Pearce, F. (1989) "Methane:  the Hidden Greenhouse Gas", New Scientist, May 6th.

Quesada, A.U. (1989) "Greenhouse Economics, Global Resources and The Political Economy of Global Change", Environmental Policy and Law 19(5): 154-161.

Schneider, S.H. (1989) "The Changing Climate", Scientific American, September.

Singer, S.F. (ed.) (1989) Global Climate Change:  Human and Natural Influences, New York: ICUS.

Spencer, R.W. and Christy, J.R. (1990) "Precise Monitoring of Global Temperature Trends from Satellites", Science, 247: 1558-1562.

The Treasury (1989) "Developing Government Policy Responses to the Threat of the Greenhouse Effect", Economic Round-Up -- November 1989, Canberra: Australian Government Publishing Service, pp. 3-20.

Tolba, M.K. (1990) "Financing Global Environment Problems", Address to Commission on Environment, Document 210/333, United Nations, Paris 1990.

White, R.M. (1990) "The Great Climate Debate", Scientific American, 263(1): 18-25.

Air pollution

EXECUTIVE SUMMARY

Air pollution is an inevitable consequence of living.  It can be eradicated only at costs which would be totally unacceptable.  Air pollution comprises many different elements.  As a phenomenon, it is associated with large population concentrations and industrial emissions.  It has, however, much diminished as a cause of harm and irritation to those potentially suffering its consequences most severely -- people located within conurbations.  This is notwithstanding continued population growth.

More benign levels of air pollution over recent years can be traced to a number of features:

  • industrial change, whereby a lower proportion of needs in progressively richer societies come from the outputs of "smokestack" industries;
  • the dispersion of "smokestack" industrial facilities away from central city areas to the peripheries of conurbations;
  • regulatory controls on emissions from individuals (cars and household heat generation) and industrial facilities.

Countering air pollution presents an unusual challenge to those designing taxes, charges and remedial expenditures.  Those who are affected by pollution cannot easily come together so that they may bargain with polluters over a mutually agreeable level of emissions.  Moreover, the great bulk of pollution is emitted by numerous sources -- and further, the "polluter", and its individual "victims", are by and large the same.

Markets, incorporating vested ownership and tradeable rights, are generally the most efficient means of bringing about optimal abatement expenditures.  But, given the major difficulty of monitoring vast numbers of minor sources of air pollution with present technology, there are at present some apparently insuperable problems involved in developing and policing the contractual approaches which would allow market solutions to operate.  "Command and control" solutions remain the best option for these sources.

In the case of major sources of air pollution, the prospects of enlisting market based mechanisms offer much more promise.  In particular, tradeable rights in pollution have been demonstrated to save considerable costs where some sources can achieve abatement levels more cheaply than others.  For this reason, tradeable rights offer greater flexibility and cost savings than the alternative market based instruments, taxes or charges.


INTRODUCTION

Air pollution agents are manifold.  Those specifically targeted for control normally include particulate matter (smoke), ozone, sulphur dioxide, carbon and lead.

Over the years the problem of air pollution in western countries has been successfully addressed.  In London, in 1952, some 4000 deaths resulted from an extended period of air pollution.  Today the air is much cleaner, notwithstanding much increased traffic and higher energy generation.  Diseases associated with pollution, like influenza, pneumonia and tuberculosis, were responsible for about one quarter of deaths at the turn of the century and now account for less than 5% in a population where life expectancy has increased by over a half.  It might be said that the market for death is declining and the market share of pollution related causes falling! To be sure, much of the improvement stems from factors like improved medical treatment, but to a major extent it is due to a cleaner urban environment.

Curiously, one of the patron saints of environmentalism, Paul Ehrlich, also takes the view that air pollution is a readily resolvable problem.  Ehrlich's view might be conditioned by his ideological battles within the environmentalist movement, in the course of which he has sought to elevate population growth to a pre-eminence which others have rejected.  In his interview in The Ecologist (1973) he said:

... from the point of view of an ecologist ... (air pollution is) one of the relatively trivial problems.  It is amenable to rather rapid technological cure and is just a symptom of some of the things we're doing, rather than something ecologically serious.

Of course, it could be argued that Ehrlich, who was forecasting widespread famine by the early 1980s as a result of Malthusian analyses of population growth, has been discredited (apart from within a particularly bizarre wing of the environmental movement).


AIR POLLUTION IN AUSTRALIA

Air pollution levels in major Australian cities have generally shown an improvement over recent years.  In Melbourne, sulphur dioxide levels have trended downwards and in 1988 were less than one third of the maximum acceptable peak levels.  Chart 9.1 illustrates peak one hour and 24 hour sulphur dioxide (SO2) and airborne particle trends for the industrial suburb of Footscray.  SO2 levels in Australia are low by world standards because of the low sulphur fuel used.

Chart 9.1:  Peak one hour and 24 hour sulphur dioxide (SO2)
and airborne particle trends -- Melbourne (Footscray)
Source:  Vic. EPA


Carbon monoxide (CO) levels have also trended down to magnitudes well within the maximum acceptable, although nitrogen dioxide (NO2) levels have remained relatively close to their "maximum acceptable level" and ozone (O3) levels are above those defined as acceptable.

Broadly comparable findings were reported for Brisbane by Verrall and Simpson (1988).  At the time of Queensland's Clean Air Act of 1963, and the coming into force of regulations giving effect to it (in 1968), major polluting industries in Brisbane included four coal fired electricity generating stations, brickworks and a host of other coal and wood burning facilities.  Since then, although there has been a six-fold increase in the number of industrial premises, other factors have acted to diminish pollution levels, including:

  • coal powered electricity generation has ceased within the metropolitan area;
  • railways have been electrified;
  • domestic burning has been banned.

From the late 1970s, ozone levels have remained similar to those observed earlier;  NO2, lead and SO2 show slight declines;  smoke has declined markedly (and visibility has improved);  and CO has shown a major decline (see Charts 9.2 to 9.4).

Chart 9.2:  Peak eight hour and one hour averages
for Carbon Monoxide (CO) levels -- Melbourne region
Source:  Vic. EPA


Chart 9.3:  Peak 24 hour average nitrogen dioxide (NO2) levels -- Melbourne region
Source:  Vic. EPA


Chart 9.4:  Peak one hour average ozone (O3) levels -- Melbourne region
Source:  Vic. EPA


The abatement of urban air pollution has been achieved by "command and control" regulation.  Where markets do not automatically equilibrate supply and demand because of monitoring difficulties, total permitted supply must be specified by a government authority.  Such quasi-market approaches will pay dividends when applied to some sources;  however, continuation of more directive "command and control" approaches seems to be inevitable in the case of domestic and, perhaps, automotive emissions.  In a strict sense, therefore, the achievement of efficiency largely turns on the nature of the regulation.  If market mechanisms are employed to allow polluters flexibility in meeting the levels desired, we can obtain the same outcome at a reduced cost.


ADDRESSING THE EXTERNALITY OF AIR POLLUTION

THE NOTION OF SOCIAL COSTS

Air pollution was the example used by Pigou (the economist responsible for pioneering the study of welfare economics).  In developing the notion of externalities, Pigou sought to illustrate the difference between private and social costs by posing the issue of a factory making use of inputs for which it paid, and inputs (say, the atmosphere) for which it did not pay but soiled (to the detriment of its neighbours).  If it faced diminishing returns and if each private input was valued equally, the factory's production could be represented by Table 9.1.

Table 9.1:  Private marginal gain from a factory operation

Output   Value of     Marginal value   Marginal input   Marginal economic gain
  (1)    output (2)   of output (3)    cost (4)         to the owner (5)
           ($)            ($)              ($)                ($)
   0         0             0                0                  0
   1        26            26               12                 14
   2        50            24               12                 12
   3        72            22               12                 10
   4        92            20               12                  8
   5       110            18               12                  6
   6       126            16               12                  4
   7       140            14               12                  2
   8       152            12               12                  0
   9       162            10               12                 -2

Under these circumstances, the owner would produce up to the level (8 units) at which the marginal value of his output equalled his marginal input cost.

If uncontracted costs, which cannot be charged for, are added to this example, and these costs rise at a constant rate with each additional unit of output, then the marginal economic gain to society (column 7 of Table 9.2) differs from the owner's private gain (column 5).

Table 9.2:  Social marginal gain from a factory operation

Output   Marginal private     Value of uncontracted   Marginal social
  (1)    economic gain (5)    costs (6)               gain (7)
              ($)                  ($)                    ($)
   0           0                    0                      0
   1          14                    2                     12
   2          12                    4                      8
   3          10                    6                      4
   4           8                    8                      0
   5           6                   10                     -4
   6           4                   12                     -8
   7           2                   14                    -12
   8           0                   16                    -16
   9          -2                   18                    -20

From his analysis, Pigou concluded that a tax equal to the uncontracted costs should be imposed -- a tax which would bring production back to four units.
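
The arithmetic of Tables 9.1 and 9.2 can be reproduced mechanically.  The short Python sketch below (an illustration, not Pigou's own formulation) recomputes the owner's and society's optimal outputs from the figures above, and shows that on one reading of the prescription -- a per-unit tax equal to the marginal uncontracted cost at the social optimum, here $8 -- the owner is induced to stop at four units.

    # Recomputing Pigou's example from Tables 9.1 and 9.2.
    value_of_output = [0, 26, 50, 72, 92, 110, 126, 140, 152, 162]  # column (2)
    input_cost_per_unit = 12                                        # column (4)
    uncontracted_cost_per_unit = 2  # external cost rises by $2 with each extra unit

    def marginal(series):
        return [series[i] - series[i - 1] for i in range(1, len(series))]

    mv = marginal(value_of_output)                        # column (3): 26, 24, ..., 10
    private_gain = [v - input_cost_per_unit for v in mv]  # column (5): 14, 12, ..., -2
    external = [uncontracted_cost_per_unit * q for q in range(1, 10)]  # column (6)
    social_gain = [p - e for p, e in zip(private_gain, external)]      # column (7)

    private_optimum = max(q for q in range(1, 10) if private_gain[q - 1] >= 0)  # 8 units
    social_optimum = max(q for q in range(1, 10) if social_gain[q - 1] >= 0)    # 4 units

    tax = external[social_optimum - 1]  # $8 per unit: marginal external cost at 4 units
    taxed_gain = [p - tax for p in private_gain]
    taxed_optimum = max(q for q in range(1, 10) if taxed_gain[q - 1] >= 0)      # 4 units

    print(private_optimum, social_optimum, taxed_optimum)  # 8 4 4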

Freeman, Haveman and Kneese (1973), in Chart 9.5, offer a diagrammatic version of the externalities which Pigou was describing.  They do so by examining both negative and positive cases.

Chart 9.5


Coase (1960) showed that there is no more certainty that the pollutees have a right to clean air than the factory owner has the right to use the air.  Clean air is a common, open access resource owned by neither party.  There is a mutuality of interest rather than an automatic onus upon the polluter.  Coase's analysis demonstrated that efficiency will be arrived at where exclusive rights are given either to the neighbours or to the factory owner provided that there are no costs in arriving at the transaction.  There are, of course, as many complications thrown up by this solution as there are insights provided by it.  Importantly, it is uncertain how the neighbours, in particular, could agree on an optimum level of pollution at specific compensation levels, and how they might ensure the factory's pollution is monitored effectively.

Others have pointed out that mankind begins modifying his environment as soon as his existence is significant.  Moreover, rights once seized or acquiesced in assume a value.  The factory owner's costs are capitalised as rents within his production capabilities, and should he on-sell the original factory, the new owner would have bought it in the expectation that any free inputs would continue as such.

The real problem with externalities is their pervasive nature.  Most attention is focussed on the adverse externalities -- the "bads" like pollution.  However, actions of others also result in unmerited increases in wealth.  Such positive externalities occur where, for example, a neighbour maintains a highly attractive garden which raises others' enjoyment of their own property and may even increase its value.  Other forms of unmerited increases in property values occur where "gentrification" of hitherto blighted inner city properties takes place.  Similarly, certain skills like those of business economists become more highly prized when a more liberal banking regime allows increased competition in this sector.  Much the same may be true of journalists, telephone technicians and airline pilots following relaxations of regulatory arrangements in areas where they are qualified.  A road development will change the value of properties around it -- possibly reducing the value of those properties which are adversely affected by noise, and increasing the value of those which benefit from greater transportation convenience.

Wherever possible, the agents of change will attempt to garner the maximum rents from the change;  but full capture will never be practicable.  Indeed, it is the lack of full capture of rents that has been responsible for much of the "trickle down" of wealth from those generating increases in wealth to the community in general.  Even where -- as in the case of intellectual properties -- new forms of rights have been developed, the full capture of the benefits of inventions is barely conceivable.  It would require the inventor to charge each user a separate price based on the user's "willingness to pay".


DETERMINING THE CORRECT OVERALL LEVEL OF PERMITTED EMISSIONS

The various approaches to emission control which have been discussed are alternatives to "command and control" approaches.  Only a pure Coasian approach makes full use of markets.  Both taxes and tradeable emission rights require limits to be specified by governments, rather than traded off between polluters and pollutees as in true markets.

For all other approaches, the government, at a minimum, specifies the level of tolerable emissions by examining the costs and benefits.  The costs of pollution include its impacts on health and on the senses.  One way of avoiding the imposition of officials' choices in this process is to attempt to measure individuals' subjective values.  This involves constructing shadow prices based on willingness to pay.  This, the contingent valuation method, uses market research techniques in an attempt to determine appropriate values.

Commonly, questionnaires are devised describing the goods under scrutiny and asking how much respondents would be prepared to pay.  The techniques of market research are well known and used extensively both in business and politics.

Some of the difficulties of attempting to assign values in this way are exemplified in a study by Tolley and Randall (1985).  Researchers inquiring in the Chicago area about the value of preserving air quality in the Grand Canyon expressed the question in two different ways:

  1. for the Grand Canyon alone, after respondents had been shown photographs;  and
  2. as part of a three part sequence which sought values for cleaner air in Chicago, in the Eastern United States and in the Grand Canyon.

In the first study, the value per head for clean air in the Grand Canyon was $90, whilst in the second it was $16.  Values were calculated by asking each respondent how much they would be willing to pay for a clear view.  Individual valuations were then summed and divided by the number of respondents to give the "average" benefit each would receive from eliminating air pollution.

The study exemplifies the pitfalls inherent in contingent valuation.  The approach's first deficiency is that it assigns values based upon average utilities, whereas the supply of and demand for goods are in general determined by marginal costs and marginal benefits.  Thus, a given consumer may value Bounty Bars at $20 and, at a market price of $1, obtain surplus value of $19;  but this surplus value is irrelevant to decisions about whether or not the good is produced -- or even how much of it is produced.  Supply of any good, whether it be clean air or Bounty Bars, should continue up to the point where the extra marginal costs of supplying it exactly equal the extra benefits obtained.  The consequence of taking a decision based on average costs and average benefits is an all or nothing outcome;  but it is more likely that we would trade off some benefits for some costs.

As suggested by Freeman, a more accurate replication of markets can be constructed using "choke-off" prices.  "Choke-off" prices are determined by attempting to construct a demand curve of the marginal benefits from pollution reduction.  People are asked what they would be prepared to pay for a range of marginal improvements in air quality.  Individuals' "willingness to pay" measures are summed for each marginal improvement.  The demand curve generated is a vertical aggregation of individual marginal valuations following the methodology set out by Samuelson (1954).  Vertical aggregation is required, in contrast to the normal method under which demand curves are summed horizontally, because pollution is a non-rival good -- one enjoyed or suffered by all.  The pleasure one person obtains from a clear view (in the absence of congestion) does not stop others from also obtaining satisfaction from it.

The synthetic demand curve may then be used to determine where the marginal costs equal the marginal benefit.  However, the methodology is still suspect, because respondents are not obliged to make real world choices within the constraints of their budgets.
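
A minimal sketch of the vertical-aggregation procedure just described is given below;  the willingness-to-pay figures are invented purely for illustration.  Each respondent's stated value for each successive marginal improvement in air quality is summed across respondents (vertically, since clean air is non-rival), and abatement is extended only while the aggregated marginal benefit covers the marginal cost.

    # Vertical aggregation of willingness to pay for a non-rival good (after Samuelson).
    # Each row: one respondent's stated value for each successive marginal improvement.
    # All figures are hypothetical.
    willingness_to_pay = [
        [30, 20, 10, 4],   # respondent 1
        [25, 15,  8, 3],   # respondent 2
        [40, 22, 12, 5],   # respondent 3
    ]
    marginal_cost = [50, 50, 50, 50]  # cost of each successive improvement

    # Non-rivalry: sum valuations vertically for each improvement,
    # not horizontally across quantities as for private goods.
    aggregate_mb = [sum(step) for step in zip(*willingness_to_pay)]  # [95, 57, 30, 12]

    # Supply each improvement only while aggregated marginal benefit covers marginal cost.
    efficient_steps = 0
    for mb, mc in zip(aggregate_mb, marginal_cost):
        if mb < mc:
            break
        efficient_steps += 1

    print(aggregate_mb, efficient_steps)  # [95, 57, 30, 12] 2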

In determining their choices of goods and services, people face a galaxy of options but are able to satisfy only a limited number of needs.  If this approach were applied consistently, then in addition to the air pollution questions in the Tolley and Randall study cited above, respondents would need to be asked about preservation of wildlife, forests, river purity, parkland in their neighbourhood and the whole host of other facets of life which could be considered externalities.  They should be confronted with the real trade-offs between these goods and correspondingly fewer of the goods they would normally purchase.  Were such a study possible, the values measured would likely be only a tiny fraction of those obtained from seeking answers to single issues with no trade-offs involved.

Freeman expresses a scepticism about the non-use values reported in these studies.  He says:

At issue is whether the responses are measuring a true willingness to pay as defined in our basic theory of individual preferences or whether they are indicators of a general sentiment for environmental protection or preservation that is only imperfectly related to the willingness to commit resources in a true market or quasimarket setting.

These reservations have undoubted merit.  However, survey methods do allow values to be placed on particular resources by those seeking their preservation.  And, by placing upper boundaries on the amounts of resources available, more rational choices may be possible.


APPROACHES TO REDUCING UNWANTED RESIDUALS

Almost all activities impose some cost or benefit on other parties;  and, though externalities have been the subject of a lengthy literature, the normal procedure (both for the community as a whole and in economic analysis) has been to neglect them.  To a considerable degree this corresponds with efficiency.  Arranging and monitoring contracts can be expensive;  to attempt to build ledgers of all cross-payments each of us owes and is owed would impose excessive costs and inflexibilities.

This explains one reason why the broad sweep of externalities has received only cursory attention in the past -- they have not much mattered.  Clean air was abundantly available.  Where externalities became important, as in the case of downstream pollution or noise, law developed to take them into account by adapting property rights.  Often the increased importance of such externalities, and the reductions in asset values they brought, generated incentives to develop new techniques to control them better.

Where unwanted outputs are to be reduced this can be accomplished by:

  • lower production of the good of which it is a by-product (this might entail changing the composition of national income so that resources are redirected to outputs having a lower level of deleterious by-products);
  • improved efficiency in producing the good so that fewer adverse side effects accompany its output;
  • recovery of residual materials and recycling them;
  • dilution of the "bad" so that its effect is less concentrated.  As Huber (1985) puts it "dilution may in fact be a very good control strategy.  As countless cancerous rats might attest, many things are harmful in large concentrations but innocent or even beneficial in small ones".

Bernstam (1989) demonstrates how the relationship between residuals and output is non-linear and varies over time and between economic systems.  Thus, in the US, between 1940 and 1970, prior to major efforts to reduce pollution, emissions increased by 30.1% while GNP increased by 212%.  From 1970 to 1986, total pollution declined (by one third), in part because of regulatory action.  In the main, however, both periods' trends were attributable to shifting composition of national income and technological improvement, spurred on by the ceaseless contest of competitive firms to improve their profitability -- one means to which is conserving use of material inputs.
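
The implied change in emissions intensity follows directly from those figures:  if emissions rose by 30.1% while GNP rose by 212%, emissions per unit of GNP fell by roughly three-fifths even before the regulatory push of the 1970s.  A trivial check, using only the percentages quoted above:

    # Emissions per unit of GNP implied by Bernstam's 1940-1970 figures.
    emissions_growth = 0.301   # +30.1%
    gnp_growth = 2.12          # +212%

    intensity_change = (1 + emissions_growth) / (1 + gnp_growth) - 1
    print(f"{intensity_change:.1%}")  # about -58.3%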

Bernstam also shows that this same pattern is not in evidence in the Soviet Union, where emissions of air pollution in 1987 were more than twice those of the US, even though national income was probably less than one third that of the US and population only 17% higher.

In the Soviet Union, the economic system is driven by forces other than a profit based meeting of consumer needs at the lowest price.  As a result, two factors bring about a considerably higher level of waste and unwanted residues than in comparable market economies.

First, excessive inputs are allocated to capital production due, in part, to inefficient machinery.  Thus, the share of consumer goods in national expenditure has fallen from 60.5% in 1928 to 24.9% in 1987;  and capital investment's share is at least twice that of typical market economies, without growth compensating for this denial of immediate consumption.  In short, excessive resource inputs are spent on machines and residual outputs emerge while providing little contribution to well-being.

Secondly, the "command and control" mechanisms used in the absence of profit related measures must focus upon inputs:  the productive unit has its price controlled and its output levels established for it and the only way it can obtain a higher "profit" is to raise its production costs by requiring increased inputs.  In this way the firm is able to pressure the planners into lowering its production quotas and raising its output prices.

Use of market instruments to combat air pollution combines the power of individual self-interest with the best sources of information on how to reduce emissions to the levels sought at the lowest cost.  The firms and individuals who produce the emissions have the knowledge on how to reduce their outputs most economically.  They will seek to take opportunities for gain (or for reducing their losses).  In doing so, they ensure a more frugal use of resources in meeting standards than would be possible for regulatory authorities, whose information on economical means of reducing pollution cannot be as complete.  Because of their vested interest and operational familiarity, firms are likely to be much better informed about the available techniques for reducing residues in the most cost effective manner than are government officials.


PREFERRED APPROACHES

It is useful to categorise air pollution according to its three primary causes:  automotive, household energy generation and industrial facilities.  In each case the economist's solution would be to impose a tax or introduce tradeable rights.  The decision between these and outright regulation of inputs should depend upon policing costs.  Clearly, such costs are greater with a multiplicity of sources.  Just as it makes sense for electricity authorities to strike separate deals with major users but charge generally available rates to domestic consumers, so it is appropriate for pollution controls to be tailored differently for minor, as opposed to major, emission sources.  For the former, the transaction costs of monitoring market based approaches may be prohibitive, given current technological capabilities.

Automotive and domestic sources are far more important than industrial sources, as Chart 9.6 shows with respect to Sydney (which is typical of other Australian cities).

Chart 9.6:  Sources of emissions in Sydney -- percentage of total emissions for each pollutant


INDUSTRIAL POLLUTION

In the case of industrial pollution, economies are available if trading of pollutants between different sources is allowed.  Crandell (1983), for example, found that the cost of controlling emissions from paper mills was three times that of controlling similar emissions from metal working factories -- a greater reduction in pollutants could be achieved at the same social cost by concentrating on the latter.

Industrial pollutant trading can take a number of forms, including netting, offsets, bubbles and banking.  Netting sets emission standards for one business but allows trading within plants.  Offsets allow new pollutant sources if compensatory reductions can be obtained from other sources.  Bubbles are defined for a particular area and allow different pollutants within a given aggregate limit.  Banking enables credits to be earned for over-performance and subsequently used or traded.
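
The cost saving from letting sources with different abatement costs trade can be shown with a small sketch.  The figures below are hypothetical, loosely echoing Crandell's 3:1 cost ratio rather than reproducing his data:  two sources must jointly cut 100 tonnes, a uniform rule splits the cut equally, while trading shifts the whole burden to the cheaper abater.

    # Two sources with different (constant) marginal abatement costs -- hypothetical figures.
    cost_per_tonne = {"paper_mill": 300.0, "metal_works": 100.0}  # $ per tonne abated
    required_cut = 100.0  # tonnes to be abated in aggregate

    # "Command and control": each source must abate the same amount.
    uniform_cost = sum(c * required_cut / 2 for c in cost_per_tonne.values())

    # Trading: the low-cost source sells abatement to the high-cost source,
    # so the entire cut is made where it is cheapest.
    cheapest = min(cost_per_tonne.values())
    trading_cost = cheapest * required_cut

    print(uniform_cost, trading_cost)  # 20000.0 versus 10000.0, for the same air quality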

Hahn and Hester (1987) explain how the EPA has allowed:

  • netting since 1974;
  • offsets since 1976;
  • bubbles since 1981;
  • and banking.

Their estimates of the effect of these measures, 1979 through 1985, are set out in Table 9.3.

Table 9.3:  Estimated effects of emission trading 1979-1985

                       Bubbles     Offsets     Netting             Banking
Number                 132         1,000       8,000               100
Cost saving ($m)       145         n.a.        4,000               small
Air quality impact     neutral     neutral     slightly negative   slightly positive

They consider the effects to have been less than satisfactory, because of challenges by environmental groups which have thwarted some proposals and because of uncertainty on the part of firms.  They attribute this latter effect to the need for EPA approval to be specifically granted and the discretion the EPA has (and is thought likely to use, especially with regard to bankable emissions).  Moreover, since 1986 the EPA has insisted that bubble trades be "taxed" so that there is a net reduction in emissions of 20%.

These problems notwithstanding, the notion of emission trading as a cost effective interventionary tool is gaining increased currency.  Congress has agreed to a scheme which splits the US into two areas with unlimited trading of SO2 emissions permitted within each of them.

Hahn and Hester (1989) also estimate there to be 132 "bubbles" within which trading takes place, and that these brought cost savings of $435 million between 1979 and 1985 while having a neutral effect on pollution levels.  These savings are in addition to savings of up to $12 billion estimated to have accrued from "netting" -- allowing firms the flexibility to over-perform in some areas of a plant to compensate for under-performance in others.

Levin (1985) quotes some specific examples where these approaches have resulted in gains.  Dupont, facing a requirement to reduce emissions by 85% at each of 119 stacks, negotiated instead to reduce emissions by 99% at seven stacks;  this proved faster, over-achieved the aggregate goal, and saved $12 million in capital cost and $4 million a year in operating costs.  General Electric was allowed to forgo $1.5 million in capital expenditure and $300,000 in operating costs, required to meet emission controls in Louisville, by negotiating with International Harvester, which was able to over-perform mandatory requirements relatively cheaply.

The Pigovian approach, which for long had been preferred by economists, is to apply pollution taxes.  Like trading, this allows greater flexibility for firms in designating the appropriate means of meeting output levels.  It also allows compensation of the community at large for residuals in excess of those considered appropriate.

Buchanan (1988) sets out the strict conditions under which an externality can legitimately be countered by the imposition of a tax.  He suggests that:

  • all persons must be equally damaged;
  • all must be consumers/buyers purchasing in equal quantities;
  • the revenue must be equally shared.

In such rare cases, he suggests, the price would rise so that the "bad" would be economised upon.

Buchanan maintains correctly that without his strict conditions for governmental action to combat an externality there will be distributional consequences -- consequences which will be determined by the political market, and generate social costs via lobbying and government failure.  Some redistribution is inevitable where any departure from equal usage and production occurs.

Others, for example Terkla (1984), take the view that effluent taxes improve welfare because they charge the polluter the true economic cost, and allow the replacement of other taxes which are designed to raise revenue and unintentionally distort economic choice and resource allocation.  Terkla suggests that the total revenue raised from an efficient tax, which in 1982 he estimated would optimally be set to raise between $1.8 and $8.7 billion, would generate considerable efficiency gains.  Based on Browning's estimates of the welfare losses generated by levying income tax, $0.35 per dollar collected, he estimated effluent taxes would raise welfare by $630 million - $3.05 billion.

Terkla's effluent tax is therefore seen as more efficient than the alternative means of raising revenue.  Like others, he sees the merit of such a tax system in allowing the market to discover the most efficient means of adapting to a new incentive structure.  His estimates assume that effluent taxes do not generate the sort of losses from work/leisure substitution which Browning's estimates project.  Hence, although many would dispute the predicted cost effectiveness, there is wide agreement that making use of the market in this way will generate economies.
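
Terkla's welfare figures follow directly from Browning's $0.35-per-dollar estimate:  applying it to the $1.8-$8.7 billion revenue range reproduces the $630 million to $3.05 billion gain.  A one-line check:

    # Reproducing Terkla's welfare-gain range from the figures quoted above.
    browning_loss_per_dollar = 0.35          # deadweight loss per dollar of income tax
    revenue_range = (1.8e9, 8.7e9)           # optimal effluent-tax revenue, 1982 estimates

    welfare_gain = tuple(browning_loss_per_dollar * r for r in revenue_range)
    print(welfare_gain)  # (630000000.0, 3045000000.0) -- i.e. $630m to about $3.05bn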

In practice, taxation of residuals has not found favour.  Those facing the taxes have, of course, objected whilst environmental activists have opposed this policy approach because of the apparent endorsement it implies to the generation of pollution.

In addition, there are several practical difficulties in devising a workable taxation regime.  Buchanan (1988) points to one major difficulty:  the various parties are likely to have different interests.  Those producing, or using most intensively, the output of the polluting facilities are likely to wish to see the tax levied at as low a rate as possible;  those on whom the impacts of the residues fall most heavily are likely to favour a prohibitive level of tax;  taxpayers who are relatively indifferent to the pollution and the output the facilities produce are likely to favour a tax which maximises the revenue raised so that other taxes might be reduced.  How are these differences to be reconciled?  The obvious gains will lead the parties to engage in wasteful lobbying exercises to promote their particular interests.  Can we be confident that governments will arbitrate dispassionately?

Pollution taxes are based upon the fundamental principle that the polluter pays.  Yet this tends to treat the polluter as the malefactor and the pollutee as the victim.  In fact there is a mutuality of interest.  The unowned resource did not belong to the pollutee in the first instance.  As soon as mankind breathes air some impression is made on the natural environment.  It is no more certain that the population surrounding a polluting factory has the rights to clean air than the owners of the factory have the right to soil it.  This statement is even more graphic in situations where the factory owner was there first and the "victims" moved in later (perhaps to take advantage of the opportunities to improve their well-being offered by locating close to the factory).

Industrial air pollution is more easily combated by granting tradeable rights to pollute.  Means of monitoring major sources of pollution, and the technology to effect this, are readily available.  Hartley and Porter (1990) draw attention to the application of deuterated methane, a chemical tracer which mimics SO2, to detect sources of pollution in southern Utah.  Tietenberg (1990) assembles eleven empirical studies comparing market based approaches to pollution control with the "command and control" approach.  Each study estimates the cost of a "command and control" strategy limiting pollutants in a localised area.  These approaches are contrasted with the least cost allocative method, involving either trading or taxes.  Table 9.4 gives the ratios of the "command and control" outcome to the estimated least cost allocative mechanism.

Table 9.4:  Empirical studies of air pollution control

Study                 Pollutants covered         Geographic area           CAC benchmark         Ratio of CAC cost
                                                                                                 to least cost
Atkinson and Lewis    Particulates               St Louis                  SIP regulations       6.00 a
Roach et al.          Sulphur dioxide            Four Corners in Utah,     SIP regulations       4.25
                                                 Colorado, Arizona and
                                                 New Mexico
Hahn and Noll         Sulphates                  Los Angeles               California emission   1.07
                                                                           standards
Krupnick              Nitrogen dioxide           Baltimore                 Proposed RACT         5.96 b
                                                                           regulations
Seskin et al.         Nitrogen dioxide           Baltimore                 Proposed RACT         14.40 b
                                                                           regulations
McGartland            Particulates               Baltimore                 SIP regulations       4.18
Spofford              Sulphur dioxide            Lower Delaware Valley     Uniform percentage    1.78
                                                                           regulations
                      Particulates               Lower Delaware Valley     Uniform percentage    22.00
                                                                           regulations
Harrison              Airport noise              United States             Mandatory retrofit    1.72 c
Maloney and Yandle    Hydrocarbons               All domestic DuPont       Uniform percentage    4.15 d
                                                 plants                    reduction
Palmer et al.         CFC emissions from         United States             Proposed emission     1.96
                      non-aerosol applications                             standards

Notes:

CAC = command and control, the traditional regulatory approach.

SIP = state implementation plan.

RACT = reasonably available control technologies, a set of standards imposed on existing sources in non-attainment areas.

a Based on a 40 µg/m3 at worst receptor.

b Based on a short-term, one-hour average of 250 µg/m3.

c Because it is a benefit-cost study instead of a cost-effectiveness study, the Harrison comparison of the command-and-control approach with the least-cost allocation involves different benefit levels.  Specifically, the benefit levels associated with the least-cost allocation are only 82% of those associated with the command-and-control allocation.  To produce cost estimates based on more comparable benefits, as a first approximation, the least-cost allocation was divided by 0.82 and the resulting number was compared with the command-and-control cost.


As Tietenberg points out, the estimated savings are theoretical -- they are gains achievable on the basis that sunk costs have not been incurred, perfect information is available and multilateral trades take place.  Moreover, if emission credits are traded on a pollutant-by-pollutant basis, rather than on an amalgam of pollutants, the trades themselves are rendered considerably more complex.  Nonetheless, the large number of studies (each of which demonstrates considerable gains from applying market principles) presents powerful evidence against "command and control" methods.

There is little use made of economic instruments to control Australian industrial emissions.  This may change.  Both the New South Wales Government and the Commonwealth Treasury have placed on record their favouring of market based measures where these are possible.  Indeed, the New South Wales Government has announced its intention to place a greater priority on "pollution taxes and charges, pricing of services based on true costs, tradeable emission rights and government subsidies" (Greiner, 1990).

At the present time, however, pollution in Australia is combated only by "command and control" methods.  Some flexibility is provided in certain circumstances.  Thus, the Victorian Government, in introducing more stringent requirements for the control of conveyor equipped coating lines in 1988, specified input controls in detail but allowed firms to meet the standard in other ways, provided they could demonstrate that these achieve equivalent results.  Even in these cases, however, the regulations contain rigidities over and above the absence of provision for trading.  These include grandfathering provisions which discourage the replacement of equipment.


AUTOMOTIVE AND HOUSEHOLDS

It has previously been suggested that, for these sources, opportunities to make use of market mechanisms are limited.  Monitoring and other transaction costs may make even partial market type approaches inapplicable for determining efficient household and vehicle pollution behaviour.

In the case of motor vehicles, governments the world over have introduced standards for emissions, more recently by promoting the use of lead-free petrol.  A general approach may be the rational solution, notwithstanding that, in Australia, citizens of places like Albury-Wodonga with little pollution would not obtain value from the increased capital and operating costs involved (the latter partly hidden by the governmental requirement that lead free petrol be cross subsidised by leaded petrol). (1)

The move to lead free petrol has contributed to a marked reduction in the level of lead in urban areas.  Thus, both in the centre of Melbourne (where it was previously at double the level set as acceptable) and in the suburbs, lead levels have exhibited a considerable decline.

Grenning (1985) is critical of the mandatory emission control standards adopted in Australian Design Rule 37 (ADR 37) during 1986.  He favours emission charges over the technical solutions introduced.  Grenning maintains that the standards adopted were overkill because:

  • any problem which occurs is confined to Sydney and, to a lesser degree, Melbourne (which together, on the widest interpretation, might account for 30% of the vehicle population);
  • pollution levels in these cities had begun to decline anyway as a result of industry restructuring and relocation.

ADR 37 meant a cost per vehicle, at 1985 prices, of $70-160, in addition to a slightly higher impost introduced by the previous standard.  An important shortcoming of a standard like this is that it applies only to new vehicles -- and perhaps, therefore, to only about 12% of the vehicle stock each year.  In addition, the increased cost (and reduced performance) creates disincentives to replace existing vehicles and therefore, to some extent at least, has perverse effects.  Furthermore, achieving the targeted output of emissions by using a "command and control" approach is far from certain.  It depends crucially upon the vehicles being properly maintained and is totally negated if owners disconnect the control mechanism.
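
The slow bite of a new-vehicle standard can be illustrated with a rough calculation.  The 12 per cent annual turnover figure is the one cited above;  the assumption of uniform scrappage is an illustrative simplification, not a fleet model.

    # Share of the vehicle fleet meeting a new-vehicle standard, assuming roughly
    # 12% of the stock is replaced each year (uniform turnover -- an assumption).
    turnover = 0.12

    for years in (1, 5, 10):
        compliant_share = 1 - (1 - turnover) ** years
        print(years, round(compliant_share, 2))
    # 1 -> 0.12, 5 -> 0.47, 10 -> 0.72

Even a decade after introduction, on these assumptions, more than a quarter of the fleet would remain outside the standard.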

Grenning favours a charge based on the outcome of emissions as measured at the annual vehicle test.  Although this would make use of more direct and effective controls, it would also have shortcomings:

  • the annual inspection is only a once-a-year measure, and ways will be discovered of allowing vehicles to demonstrate short-term acceptability in measured emissions;
  • it may entail greater costs if owners are required to have modifications undertaken retrospectively;
  • it entails some administrative costs, and if cars can be registered in areas where these additional costs are not imposed there will be considerable incentives for evasion.

Leaving aside the issue of whether or not mitigation of emissions specified for Australia was necessary, it is not clear that the "command and control" solution is inferior to the generally preferred output based solution in this particular case.

In the case of households, many locales have banned coal and wood burning, though the latter somewhat ironically has more recently shown an increase in popularity because it is thought to be a more natural fuel.  Prohibition would not be the preferred solution of most economists yet it may well be more efficient than imposing an easily evadable tax.  As with garbage, it is not easy to see how contractual difficulties can be overcome to allow tradeable rights to operate with respect to this source of air pollution.


RURAL POLLUTION

Aside from the issue of urban air pollution -- one which largely involves health and unpleasant smells -- there are issues of rural air pollution.  In the main these involve maintaining a pristine air quality.  Such issues have not assumed any importance in Australia to date, and major industrial sources of air pollution in areas of high natural value are most unlikely to be economically justified.  A contemporary exception is the controversy over the location of a high temperature incinerator to service the southeast corner of the continent.


ENVIRONMENTAL REGULATORY COSTS

It is often pointed out that ecology and economics have much in common in so far as both start from the premise that everything is interconnected.  Many point to a comity of interests between the two frameworks.  Some, like Hamrin (1981), suggest that the application of environmental standards on emissions required by government regulations will actually benefit both the environment and the economy by saving energy, virgin resources, and so on.  Such assessments glide over the costs, in terms of resources, of meeting the standards.

Others, more conventionally, suggest that such a comity of interests exists, and would be the natural outcome of market forces if property ownership rights could be adequately defined to prevent excessive use of "unowned" resources and the consequent externalities generated.  Fred Smith (1989) goes further than this and maintains that modern technology can allow all externalities to be internalised.

The mainstream view is that for some goods the externality looms so large, and the difficulties of internalising it are so great, that interventions by government are essential.  Such interventions can only be legitimate where they are based upon the construction of shadow markets for evaluating the worth of those activities where externalities inhibit provision by natural markets.  Although there have been no analyses of the aggregate costs of environmental regulation in Australia, a number of studies of the costs of air and water pollution control have been conducted in the US.  These include estimates of the effect of environmental protection costs on economic growth by Crandell;  Christainson and Haveman;  and Conrad and Morrison.  But perhaps the most rigorous has been that of Hazilla and Kopp (1989).

Taking the Environmental Protection Agency's (EPA) cost estimates of federally mandated pollution controls ($425 billion in 1981 dollars;  $648 billion in 1981-1990 current dollars), the authors apply elasticities of substitution to both the economy's outputs and inputs.  Because of substitution, their initial estimates are lower than the EPA's engineering-based estimates:  both consumers and producers take actions to alleviate the cost burden which regulation imposes, for example by switching purchases to goods which do not carry additional cost requirements, and in this way the aggregate cost imposition is muted.  But the dynamic, secondary impacts of these costs must also be factored in, and the effects of the intervention cannot be confined to one particular time period but flow on into subsequent periods.  The resulting costs, once these adjustments are made, are estimated at $977 billion, which by 1990 translates into a diminution of real GNP of 5.9% and of investment of 8.4%.
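
The logic of these two adjustments can be made concrete with a toy calculation.  The sketch below is purely illustrative:  the substitution offset, growth rate and annual "drag" are invented for exposition and are not Hazilla and Kopp's parameters, but they show how substitution mutes an engineering estimate and how a seemingly small annual drag compounds, over a decade, into a GNP shortfall of roughly the magnitude cited.

    # Toy illustration of the two-stage adjustment described above.  All
    # parameters other than the $648 billion figure quoted in the text are
    # hypothetical and chosen only for exposition.

    engineering_cost = 648e9       # EPA estimate, 1981-1990 current dollars (from the text)
    substitution_offset = 0.25     # hypothetical share of cost avoided by substitution
    direct_cost = engineering_cost * (1 - substitution_offset)
    print(f"Direct (post-substitution) cost: ${direct_cost / 1e9:.0f} billion")

    # Dynamic effects: a modest annual drag on growth compounds over 1981-1990.
    baseline_growth, drag = 0.030, 0.006   # hypothetical growth rate and annual drag
    gnp_without, gnp_with = 1.0, 1.0
    for _ in range(10):
        gnp_without *= 1 + baseline_growth
        gnp_with *= 1 + baseline_growth - drag
    print(f"GNP shortfall after a decade: {1 - gnp_with / gnp_without:.1%}")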

This approach, however, tends to take preferences and technical capabilities as given.  In fact, both consumers and producers can make rapid adjustments.  And over time alternative needs, and new means of meeting them, are found, while both supply and demand curves for a particular good tend to become flatter.

In the case of consumers, for example, the introduction on congested bridges of express lanes which may only be used by multiple-occupancy cars brought considerable behavioural changes both in Sydney and in San Francisco.  Consumer adjustment to picking up, or accepting rides from, total strangers was remarkably swift, and it is difficult to argue that the costs estimated at the outset prevail in anything like their original magnitude after a short transition period.  On a larger scale, adjustments following the implementation of major infrastructural changes within cities -- changes which were expected to affect property values markedly -- have been absorbed without lasting declines in those values.  For example, converting vehicular roads within cities to pedestrian malls has often brought very rapid and unanticipated behavioural changes on the part of shoppers which totally negated the adverse effects previously expected.

For producers, the very rapid adjustment of some industries to the four-fold increase in oil prices, which took place in the 1970s, reveals great flexibility.  The Japanese steel industry converted from oil firing to making more efficient use of coal, a formidable energy-saving innovation.  Entrepreneurial reactions like this cannot be incorporated into general equilibrium models except by using non-scientific "fudge factors".  Indeed, the inability to account for the role of the entrepreneur in seeking out opportunities constitutes perhaps the greatest shortcoming of all economic modelling.

Because of such compensating adjustments, it is probable that the economic analysis of environmental quality regulation overstates the costs to society.

There are, however, other factors which would tend to operate in the opposite direction.  One is that entrepreneurship itself is a scarce resource and energies directed at ameliorating an intervention might be energies which would otherwise be directed at discovering new means of adding value.  In addition, modelling typically assumes zero transaction costs, perfect factor mobility and other notional attributes which we tend to group under the heading of perfect markets.

The work of Hazilla and Kopp, originally commissioned by the EPA, has a strong following within the agency, even though its findings have not been formally endorsed.  Publicly, the EPA quotes a more conservative, less comprehensive cost of environmental interventions, amounting to only 1.7% of GNP.  Nonetheless, the Hazilla and Kopp work constitutes the state of the art in estimating environmental costs and, because it examines the total picture, is superior to estimates which confine their impacts to specific sectors.

The costs of environmental regulation estimated by Hazilla and Kopp incorporate only the costs of those regulations falling under the control of the US Environmental Protection Agency.  These cover air and water pollution and waste disposal.  They do not include other regulatory interventions which fall within the environmental embrace, such as use of forests and wilderness, protection of flora and fauna and measures to combat soil erosion.


CONCLUDING COMMENTS

The magnitude of the costs involved in pollution control makes the means by which it is undertaken of considerable importance to general well-being, not least to industrial competitiveness.  Compared with other countries, Australian levels of pollution are low -- in part because our cities tend to have fewer industrial facilities and their populations are less concentrated.

In addition, pollution levels have been reduced, notwithstanding industrial growth and far greater numbers of automobiles.  This outcome, welcome as it is, is the result of "command and control" policies.  Such approaches may well be unavoidable, and the most effective means of combating the minor-source domestic and automobile emissions which together account for the preponderance of urban pollution.  But they have been demonstrated not to offer the lowest-cost strategies for controlling emissions from major industrial sources.  Australian authorities, however, have not sought to apply market-based solutions, except in limited cases where offsets within major sources have been negotiated.



REFERENCES

Bernstam, M.S. (1989) "Productivity of Resources, Economic Systems, Population and the Environment:  Is the Invisible Hand Too Short or Crippled?", Centre of Policy Studies, Monash University.  To be included in Davis, K. and Bernstam, M.S. (eds.) (1990) "The Endless Frontier and Resources", Cambridge University Press, New York.

Buchanan, J.M. (1988) "Market Failure and Political Failure", Cato Journal 8, 1: 1-14.

Coase, R.H. (1960) "The Problem of Social Cost", Journal of Law and Economics, 3: 1-44.

Crandell, R.W. (1983) "Controlling Industrial Pollution", Brookings, Washington.

Freeman, M., Haveman, R.H. and Kneese, A.V. (1973) "The Economics of Environmental Policy", John Wiley, New York.

Greiner, N.F. (1990) "The New Environmentalism".

Grenning, M. (1985) "Australian Motor Vehicle Emission Policy:  A Costly Mistake", CEDA Monograph 80, Melbourne.

Hahn, R.W. and Hester, G.L. (1987) "The Market for Bads", Regulation, 3, 4, pp48-53.

Hahn, R.W. and Hester, G.L. (1989) "Marketable Permits", Ecology Law Quarterly, 16, 2, pp361-406.

Hamrin, R. (1981) "Environmental Quality and Economic Growth", Council of State Planning Agencies, Washington.

Hartley, P.R. and Porter, M.G. (1990) "A Green Thumb for the Invisible Hand", Tasman Institute, Melbourne.

Hazilla, M. and Kopp, R.J. (1989) "The Societal Cost of Environmental Quality Regulations:  A General Equilibrium Analysis", Resources for the Future, Washington.

Huber, P. (1985) "The I Ching of Acid Rain", Regulation, September-November.

Levin, M.H. (1985) "Building a Better Bubble at EPA", Regulation, March-April, pp33-42.

Pole, N. (1973) "An Interview with Paul Ehrlich", The Ecologist, 3, 1: 18-24.

Samuelson, P. (1954) "The Pure Theory of Public Expenditure", Review of Economics and Statistics, 36: 387-9.

Smith, F. (1989) "Environmental Policy:  A Free Market Proposal", Tulanian, pp32-37.

Terkla, D. (1984) "The Efficiency Value of Effluent Tax Revenues", Journal of Environmental Economics and Management, 2, pp107-123.

Tietenberg, T.H. (1990) "Economic Instruments for Environmental Regulation", Oxford Review of Economic Policy, 6, 1, pp17-33.

Tolley, G.S. and Randall, A., "Establishing and Valuing the Effects of Improved Visibility in the Eastern United States", Report to the US EPA.

Verall, F.N. and Simpson, R.W. (1988) "Trends in Ambient Air Quality in Brisbane", paper presented to ANZAAS.



ENDNOTE

1.  Interestingly, however, a survey about environmental concern conducted by the Australian Bureau of Statistics (Cat. No. 4115.0) found concern about pollution to be highest in the two territories, the Australian Capital Territory and the Northern Territory, where problems of this kind would be much less evident than elsewhere.  This may reflect the preferences of people living in those two territories.  It may also reflect a heightened awareness of environmental matters generally, a possibility which is perhaps corroborated by the enhanced levels of concern registered there about other environmental issues, including nuclear power, nature conservation, old-growth and rain forests, soil erosion and water salinity.