Current Issues
ACKNOWLEDGEMENTS
I am very much indebted to the work of Dr J.R. Johnstone on the previous NHMRC Report (1986). I am also indebted to a variety of writings by Dr G. Gori which have substantially enhanced my understanding of epidemiology. Finally, I am most appreciative of the suggestions of the reviewers of this paper, several of which have improved its clarity and the force of its arguments.
EXECUTIVE SUMMARY
The NHMRC's Draft Report, The Health Effects of Passive Smoking, can be seen as an attempt, no doubt in the interests of what the authors take to be a good cause, to corrupt the public policy process with respect to passive smoking through the use of either bad science or corrupt science.
BAD OR CORRUPT SCIENCE
Bad science is incompetent science, science that has unconsciously failed to do its work properly in some essential sense. Corrupt science, on the other hand, is bogus science, science that knows that its data misrepresent reality and its processes are deviant, but nonetheless attempts to pass itself off as genuine science.
There are two major sorts of problems within the NHMRC Report that suggest the label incompetent or corrupt science: (A) problems about the evidence and its use and (B) problems about the logic of drawing conclusions from what evidence there is.
(A) Problems of evidence:
- the NHMRC Report does not provide its readers with a clear understanding of the status of epidemiology, the special way in which "cause" is used in epidemiology, the problems that plague the epidemiology of multifactoral diseases, or the nature of the risk assumptions used within the Report.
- the NHMRC Report makes substantial and uncritical use of the United States EPA's own report of 1992. The US EPA's report is critically flawed in a number of ways:
- At the level of substance: the EPA was selective in its use of the available data, used domestic studies to justify workplace bans while ignoring the available workplace studies, conducted a methodologically dubious "meta-analysis" on only 11 of the 30 available studies, and worse still, employed a novel confidence interval to obtain its results -- a confidence interval without which it could not have reached the "right" conclusion.
- At the process level: the EPA promoted health messages that ignored the incomplete or ambiguous nature of the scientific evidence, violated its own guidelines for carcinogenic risk assessment, violated standards of objectivity by utilising anti-smoking activists, produced a policy guide before the assessment was finished, excluded two studies that would have negated its results, misrepresented the results of two important scientific papers, and ignored several other studies which would have confounded the EPA's results.
- the NHMRC Report does nothing to correct the errors and omissions of its previous report on passive smoking, produced in 1986. In particular, it fails to acknowledge scientific criticism of some of the key studies it relied upon in the 1986 report.
- the NHMRC Report misrepresents the findings of three of the four most recent large studies when it suggests that "the evidence that passive smoking causes lung cancer is strong". Worse, even on the Report's own criteria for assessing causal links, its conclusions do not follow from the evidence.
(B) Problems of logic:
- In terms of policy, the NHMRC Report is incoherent -- even if one accepted its scientific findings as real. For example, the largest Risk Ratio that the Report cites is for non-smoking wives of smokers in domestic settings (RR = 1.31). This weak risk would, at best, justify some public policy recommendations for residential settings. The Risk Ratio for workplace settings is even weaker (RR = 1.01). Yet without any further evidence, the Report claims that the nature of the hazard "is likely to be similar in all enclosed settings".
THE CONSEQUENCES FOR SOCIETY AND FOR DEMOCRATIC PUBLIC POLICY
Using incompetent or corrupt science creates a number of problems for society and public policy. At the general level, such science raises a number of ethical concerns:
- Legitimising misrepresentation in the service of a "good cause" is a difficult practice to restrict once it becomes the norm.
- Corrupt science stifles dissent and the free exchange of ideas -- features vital to healthy science and democratic life.
- Corrupt science hurts otherwise innocent people. Without the corrupted science there is no "harm-to-others" justification for limiting smokers in the enjoyment of their pleasure. The only alternative grounds would be blatantly paternalistic.
- In particular, there are no grounds for treating smokers as mere means to health promoters' ends; nor for making smokers into social outcasts in an attempt to build up a constituency for a "smoke-free" society. In short, corrupt science erodes the normal standards of respect and tolerance for individual difference, and poses a threat to that most fragile aspect of social capital -- trust.
Corrupt science also creates disastrous consequences for democratic public policy. Viewed against a backdrop of core liberal-democratic values -- diversity, respect for the priority and autonomy of individuals, the paramountcy of reason, and the virtue of fairness -- the use of corrupted science to shape public policy:
- Undermines the status of science as an objective and rational undertaking and in doing so diminishes the usefulness of science as a means to shape public policy.
- Threatens the wider processes of rationality -- those that rely upon compelling argument based upon solid evidence -- which are at the very basis of all legitimate democratic public policy-making.
- Damages the credibility that can and should be given to risk assessments in all areas of public policy -- not simply passive smoking.
- Compromises the fundamental requirements of fairness by its selectivity of evidence and its intolerance of dissent.
- Stigmatises, excludes and degrades innocent members of civil society for not living or thinking along approved lines.
- Diminishes the core values of autonomy, respect and diversity in its paternalistic presumption that some other value (for example, health promotion) is more important than the truth.
In short, the NHMRC Draft Report's "only usefulness in the public policy process is as an example of how not to produce legitimate, democratic public policy".
SMOKESCREEN (1)
Another kind of credibility is more worrying. This is rigid, insensitive application of scientific rigour that disregards the weight of circumstantial evidence, calling into question the validity of epidemiological findings when it is not in the public interest to do so.
John Last (2)
For if the trumpet give an uncertain sound, who shall prepare himself to the battle? (1 Corinthians 14:8) The scientific "yes, but" is essential to research but for modifying the behaviour of the population it sometimes produces an "uncertain sound" that is all the excuse needed by many to cultivate and tolerate an environment and lifestyle that is hazardous to health.
The Hon. Marc Lalonde (3)
The Health Effects of Passive Smoking, the Draft Report (4) of the NHMRC Working Party, represents a careful and sophisticated development of the principles of two Canadians, Professor John Last, a distinguished epidemiologist, and Marc Lalonde, a distinguished former Minister of National Health and Welfare. Professor Last's principles are taken from his plenary address to the International Epidemiological Association, while Minister Lalonde's are taken from a 1974 document, "A New Perspective on the Health of Canadians". Together, Last and Lalonde provide the unstated but nonetheless pervasive "logic" that binds the Report together: namely, that the scientific "yes, but" and the "insensitive application of scientific rigour" must not be brought to bear in the public policy process about Environmental Tobacco Smoke (ETS), as it is "not in the public interest" to do so. Put most directly, the Report appears to be an attempt, no doubt in the interests of what the authors take to be a good cause, to corrupt the public policy process with respect to ETS through the use of either bad science or corrupt science.
This response to the Report is divided into two sections. In the first section we outline what we mean by the claim that the Report is an instance of either bad science or corrupt science, and in the second section we examine the consequences of the use of such instances of bad or corrupt science on the public policy process.
1. BAD OR CORRUPT SCIENCE
At the outset it is important to define what we mean by bad science and corrupt science. By bad science we mean incompetent science, science that has unconsciously failed to do its work properly in some essential sense. Corrupt science, on the other hand, is bogus science, science that knows that its data misrepresent reality and its processes are deviant, but nonetheless attempts to pass itself off as genuine science. It is science that has an institutionalised motivation and justification for allowing ends extrinsic to science to determine the findings of science, for allowing science to be subject to an agenda not its own, for allowing science to tell lies with a clear conscience. It is essentially science that wishes to claim the public policy advantages of genuine science without conforming to the scientific process or doing the work of real science. There are at least four characteristics that mark corrupt science off from simply incompetent science. First, corrupt science is science that moves not from hypothesis and data to conclusion but instead from mandated/acceptable conclusion to selected data back to mandated/acceptable conclusion. That is to say, it is science that fundamentally distorts the scientific process through using selected data to reach the "right" conclusion, a conclusion that by the very nature of the data necessarily misrepresents reality.
Second, corrupt science necessarily misrepresents the state of what it seeks to explain. Rather than acknowledging alternative evidence, or problems with its evidence that would cast doubt on its conclusiveness, and rather than admitting the complexity of the issue under review and the limits of its evidence, corrupt science presents what is at best a carefully chosen partial truth as the whole truth necessary for public policy. Third, corrupt science misrepresents not only reality but also its own processes in arriving at its conclusions. Instead of acknowledging the selectivity of its process and the official compulsion to demonstrate the correct pre-determined conclusion, it invests both its process and its conclusions with a mantle of indubitability. Fourth, and perhaps most importantly, whereas good science deals with dissent on the basis of the quality of its evidence and argument and considers ad hominem argument inappropriate in science, corrupt science seeks to create formidable institutional barriers to dissent by excluding dissenters from the process of review and contriving to silence dissent not by challenging its quality, but by questioning its character and motivation.
These four characteristics manifest themselves in a variety of ways which include: claiming that a statistical association is a causal relationship; a highly selective use of data; a highly selective practice of citation and referencing; claiming that a risk exists regardless of exposure level; assuming that a large number of statistically non-significant studies constitute a significant evidentiary trend; claiming that a series of inconclusive or weak studies justify a strong conclusion; suggesting that weak evidence warrants decisive regulatory action; being unwilling to consider seriously non-conforming data; failing to contextualise risk in terms of its significance; implying that the status of an authority justifies its evidence and policy recommendations; suggesting that the mean between two Risk Ratios (RRs) is the reasonable real RR; (5) claiming that a finding based on one population is necessarily true of a different population; confusing the roles of public policy advocate and scientist; suggesting that certain risks need not conform to the normal public policy process; advancing public policy measures as "scientifically justified" without respect to potential efficacy.
There are at least two major patterns of problems within the Report that suggest the label incompetent or corrupt science: the Report's understanding of what counts as evidence and indeed its use of evidence, and the Report's understanding of logic, particularly coherence and consistency.
Let us begin with the problems of evidence. It is remarkable that a document published by a body that "aims to produce high quality, evidence-based information and advice for use in a range of settings" (6) would proceed without providing its readers with any clear understanding of what its authors take to be the status of epidemiology, the central scientific discipline upon which the weight of its evidence rests. It is equally remarkable that the authors would produce a document that takes no clear position on the assumptions employed in its risk assessments; for example, does it use conservative assumptions and default options? These two issues, one about the status of epidemiology in general and the other about risk assessment assumptions in particular, are not academic trifles, inasmuch as they go to the heart of two central questions: whether ETS really does pose a significant public health problem, and whether the authors are prepared to be forthright about how certain risk assessment assumptions can conceal the uncertainties of the data.
What the authors of the Report appear to believe is that ETS causes lung cancer based on the weight of the epidemiological evidence. (7) For instance, the Report argues that "the evidence that passive smoking causes lung cancer is strong -- it is biologically plausible, reasonably consistent, and demonstrates dose-response relationships". (8) But apart from the specific question of whether the evidence adduced in the Report can justify this conclusion of causality, there is the prior question of whether epidemiology can support any causal conclusions. As McCormick and Skrabanek have noted in their Follies and Fallacies in Medicine:
It is not uncommon for epidemiological data regarding associations to be abused by assuming that an association implies causation. This is particularly likely in the case of diseases of unknown cause. In modern epidemiology, the concept of "cause" has been replaced by statistical associations with so-called risk factors. As Stehbens points out, risk factors, such as high levels of cholesterol in the blood, are not causes of coronary heart disease, but associated phenomena such as cough, shortness of breath, or fever in pneumonia. (9)
It is certainly arguable that the question that the Report seeks to answer is what has been called a trans-scientific question, a question that goes beyond the powers of science. (10) As Ken Rothman has noted:
Despite philosophic injunctions concerning inductive inference, criteria have commonly been used to make such inferences. The justification offered has been that the exigencies of public health problems demand action and that despite imperfect knowledge causal inferences must be made. (11)
Indeed, this "exigencies of public health" approach is precisely the one taken by the Report, (12) and in taking it the Report fails both to warn its readers of the inherent and perhaps insurmountable limitations of epidemiological evidence and to provide any convincing rationale as to why it should make a "judgement about cause and effect". (13) The Report's reasoning on this point seems utterly specious. First, the Hill guidelines (14) might provide epidemiological "causation", but this begs the question of whether genuine causation is at issue, for the reasoning is transparently circular: the canons of epidemiology guarantee causation, therefore causation obtains. Second, the Report's conclusions do not meet even the Hill criteria of causality; thus there is no justified action that "must be taken to protect health". (15)
The larger issue, surely, is whether it is proper to use the word "cause" in this instance at all, particularly when the word, at least in the mind of the public, carries a meaning that appears to confound its epidemiological use.
For instance, when we say that Robert entered the room, pulled out a gun and shot Sam dead, and conclude that Robert caused Sam's death, we mean "caused" in the sense that A caused B to happen. Causality in this sense is direct and certain. The causation of epidemiology is, however, not the causation of "A caused B" but the "causation" of statistical association, which can be something quite different. It is crucially important therefore for a Report of this sort, one which offers policy advice to both policy makers and the general public, to be precise about the limitations of its central evidentiary tool. To this end, the Report should make it clear that its use of the word "cause" has two caveats. First, the word is not used in the normal sense of cause -- scientists have not watched ETS enter the lungs of any non-smoker and cause lung cancer. Second, the epidemiological hypotheses of the possible associations between multifactoral diseases and low-level environmental exposures are extremely difficult to establish, as opposed to the epidemiological hypotheses about infectious diseases. As the late Petr Skrabanek has noted:
The main preoccupation of epidemiologists is now the association game. This consists of searching for associations between "diseases of civilisation" and "risk factors". The "diseases of civilisation" are heart disease and cancer. ... The "risk factors" studied by epidemiologists are either personal characteristics (age, sex, race, weight, height, diet, habits, customs, vices) or situational characteristics (geography, occupation, environment, air, water, sun, gross domestic product, stress, density of doctors).
Important associations, such as liver cirrhosis or Korsakoff's psychosis in alcoholism, retinopathy or foot gangrene in diabetes, aortic lesions or sabre tibias in syphilis, lung cancer in uranium ore miners, bladder cancer in workers with aniline dyes, are not discovered by epidemiologists but by clinicians, and they are not called "associations" but the manifestations, signs, or complications of diseases which are their causes. (16)
This sort of scepticism about the ability to verify the hypotheses of epidemiology with respect to multifactoral diseases is shared by some of the discipline's most eminent practitioners. For instance, Sir Richard Doll himself noted that: "Epidemiological observations, however, also have serious disadvantages that limit their value. First, they can seldom be made according to the strict requirements of experimental science, and consequently the available observations may be open to a variety of interpretations". (17) The failure of epidemiology to meet the "strict requirements of experimental science" lies in the fact that epidemiology does not always proceed on the basis of randomised trials, which are the unquestioned standard for drug studies and other medical research. As Gary Taubes, writing last year in an article in Science on the limits of epidemiology, observes: "Assign subjects to random test and control groups, alter the exposure of the test group to the suspected risk factor, and follow both groups to learn the outcome. Often, both the experimenters and the subjects are 'blinded' -- unaware who is in the test group and who is a control". (18) But the "strict requirements of experimental science" cannot be applied to most risk factors, not simply because of the length of time such trials would take and the corresponding expense, but because of the significant ethical objection to exposing healthy subjects to suspected risks. This means that the epidemiology of multifactoral diseases must proceed on the basis of observational studies, which are either case-control studies or cohort studies. Case-control studies select a population that has a particular disease, select a population that does not have the disease, and then systematically compare the two groups for differences. Cohort studies take a large population, interview them about their lifestyle and environment, and then follow them for a period of time in order to determine who gets which diseases.
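The difference between the two designs can be caricatured in a few lines of code. All the counts below are invented for illustration; the point is only what each design lets you compute. A cohort study observes incidence directly, so a risk ratio can be calculated; a case-control study fixes the numbers of cases and controls in advance, so incidence is unobservable and only an odds ratio (a stand-in for the risk ratio that is accurate only when the disease is rare) can be computed.

```python
# Caricature of the two observational study designs. All counts are invented.

def cohort_rr(exposed_cases, exposed_n, unexposed_cases, unexposed_n):
    """Cohort design: follow exposed and unexposed groups forward in time,
    then compare disease incidence directly as a risk ratio."""
    return (exposed_cases / exposed_n) / (unexposed_cases / unexposed_n)

def case_control_or(cases_exposed, cases_unexposed,
                    controls_exposed, controls_unexposed):
    """Case-control design: cases and controls are sampled separately, so
    incidence cannot be observed; the odds ratio stands in for the risk
    ratio (a good approximation only for rare diseases)."""
    return ((cases_exposed / cases_unexposed) /
            (controls_exposed / controls_unexposed))

print(round(cohort_rr(13, 10_000, 10, 10_000), 2))   # hypothetical cohort
print(round(case_control_or(60, 40, 50, 50), 2))     # hypothetical case-control
```

Both functions return a ratio whose "null" value is 1.0 (no association); it is these ratios, not direct observations of causation, that the studies discussed in this paper report.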
But these approaches are themselves open to a series of other problems -- namely, confounders (the unobserved variables in the populations being studied which are not controlled for) and biases (the problems inherent in the designs of the studies). For many epidemiologists, these create formidable if not insurmountable problems. Alvin Feinstein of Yale observes: "In the laboratory you have all kinds of procedures for calibrating equipment and standardising measurement procedures. In epidemiology ... it's all immensely prey to both the vicissitudes of human memory and the biases of the interview". (19) Again, as Harvard's Alex Walker suggests: "I have trouble imagining a system involving a human habit over a prolonged period of time that could give reliable estimates of [risk] increases that are of the order of tens of percent". (20) These conclusions are supported by Gio Gori, President of the International Society of Regulatory Toxicology and Pharmacology, who notes that:
In studies of open populations many factors trigger, facilitate, or impede particular outcomes, and introduce extreme difficulties in reaching specific causal attributions. In current practice, these difficulties are circumvented by ignoring the multiplicity of field variables, and by presuming that only the few of contingent interest matter. Adding uncertainty, most studies of multifactoral conditions are not randomised experiments but observational surveys that are virtually impossible to replicate under equal conditions. Such observational studies are untestable by the standard rules of the scientific method and their reports could not be objectively validated. ... The ethics of science negates causal statements of fact if experimental predictivity is absent. (21)
I am not suggesting that epidemiology is not a science, only that the epidemiology of multifactoral diseases is plagued with substantial problems which rule out the use of claims about causation, and about which the readers of the Report deserve to be informed.
But it is not merely that the Report's authors do not forthrightly share their epidemiological assumptions, alert the reader to the methodological problems surrounding the epidemiology of multifactoral diseases, or indicate the special sense in which "cause" functions in current epidemiology; it is also that they do not explicitly expose their risk assessment assumptions, even though such assumptions do function to determine the Report's directions. For instance, the Report seems to proceed from conservative assessment assumptions; that is, it is intentionally biased towards finding more risk. Moreover, it is also driven by a common default option assumption that in the absence of complete scientific knowledge it will fill in the gap between scientific knowledge and public policy. (22) In this case, the Report has assumed that in the absence of convincing evidence to the contrary, it will proceed as if there is no threshold of ETS-caused cancer and that the dose-response is linear. (23) Only such a default option could explain the Report's policy recommendations for dealing with ETS. Yet this default option is an assumption: it is not scientific knowledge because it cannot be confirmed or disproved by science. (The default option does appear to be inconsistent with what the majority of cancer specialists believe that science knows about carcinogenesis.) The difficulty with such a position is that it significantly lessens the scientific credibility of the Report inasmuch as it builds into the Report a non-scientific assumption. Additionally, by avoiding the issue of conservative assumptions and default options, the Report avoids the most central issue of risk assessment.
Taken together, the unsatisfactory and misleading discussion of the limits of epidemiology with respect to causation and the incorporation of significant bias through conservative risk assumptions and default options, tend to suggest not bad science but something closer to corrupt science. It is closer to corrupt science because it misrepresents the complexity of the issue and the nature of the evidence about it through failing to acknowledge both its biases and the inherent uncertainties of the risk assessment exercise itself.
The second evidentiary problem, which again suggests the Report's proximity to corrupted science, is its uncritical use of the US Environmental Protection Agency (EPA) 1992 Report. The Report refers to the EPA work as the "most extensive recent review of the evidence of passive smoking and lung cancer". (24) Quite surprisingly, the Report fails to mention the extensive criticism directed at the EPA's ETS analysis, or the fact that the EPA's findings are the subject of court action. (25) Given the uncritical attention devoted to the EPA's work, it is worth noting precisely how strong the evidence is which suggests that the EPA's science on ETS is corrupt science.
The evidence that the EPA science on ETS is corrupt science falls into two categories: evidence about the substance of the science, and evidence about the processes involved in creating and using the science.
A. THE SUBSTANTIVE ISSUE
The EPA report, Respiratory Health Effects of Passive Smoking: Lung Cancer and Other Disorders, (26) claims that "Based on the weight of the available scientific evidence, the U.S. Environmental Protection Agency has concluded that the widespread exposure to environmental tobacco smoke in the United States presents a serious and substantial public health impact". Is this, in fact, the case? In order to answer this question one must first know something about the data on which the EPA decision is based. The EPA report refers to the 30 epidemiological studies on spousal smoking and lung cancer that have been published from 1982-1990. It is important to note that while EPA Administrator Reilly in referring to the report spoke about ETS and cancer in children and in the workplace, and though the report has been used as a basis for demanding smoking bans both in public places and in workplaces, the EPA did not examine those studies that look at workplace ETS exposure. The overwhelming majority of those workplace studies do not find a statistically significant association between exposure to ETS and lung cancer in non-smokers: a fact that by itself destroys the legitimacy of any harm-based demand for public or workplace smoking bans.
Thus, to begin with, the EPA's case is based not on workplace or public place ETS exposure, but on the risks of non-smoking spouses contracting lung cancer from their smoking spouse. But what of the 30 epidemiological studies? Those 30 studies come from different countries and vary substantially in size. Some have fewer than 20 subjects, others are based on larger populations, with the largest study involving 189 cancer cases. Of the 30 studies, 24 reported no statistically significant association, while six reported a statistically significant association, that is, a positive relative risk for those non-smoking spouses. Now, relative risks are further classified into strong risks or weak risks depending on their magnitude. Within the 30 studies on ETS and lung cancer none reported a strong relative risk. Moreover, whenever the assessment of relative risk is weak there is a substantial possibility that the finding, the assessment, is artificial rather than real. That is to say, there is a strong likelihood that even the weak relative risk is a reflection not of some real world risk, but of problems with confounding variables or interpretative bias. There are, for instance, at least 20 confounding factors ranging from nutrition to socioeconomic status that have been identified as important to the development of lung cancer. Yet none of the 30 studies attempts to control for all of these factors. So in assessing the global scientific evidence about ETS and lung cancer, the crucial conclusion is that none of the studies reports a strong relative risk for non-smokers married to smokers.
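To make concrete what a "weak" relative risk means, here is a minimal sketch. The counts are invented for demonstration and are not taken from any of the 30 spousal-smoking studies; the active-smoking comparison at the end reflects the conventional epidemiological distinction between weak and strong associations.

```python
# Illustrative relative-risk calculation from a 2x2 exposure/disease table.
# The counts below are invented; they are not data from any of the 30 studies.

def relative_risk(exposed_cases, exposed_total,
                  unexposed_cases, unexposed_total):
    """Risk ratio: incidence among the exposed divided by incidence
    among the unexposed."""
    risk_exposed = exposed_cases / exposed_total
    risk_unexposed = unexposed_cases / unexposed_total
    return risk_exposed / risk_unexposed

# Hypothetical cohort: 131 cancers per 100,000 exposed versus
# 100 per 100,000 unexposed.
rr = relative_risk(131, 100_000, 100, 100_000)
print(round(rr, 2))  # 1.31 -- a "weak" relative risk by the conventions above

# By contrast, the relative risks usually reported for active smoking are on
# the order of 10 or more -- what epidemiologists call a "strong" risk, and
# the kind of magnitude that is robust to modest confounding or bias.
```

A weak ratio such as 1.31 sits close enough to the null value of 1.0 that, as the text notes, uncontrolled confounders or interview bias can plausibly account for the whole of it.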
Now the EPA report discusses all of these 30 studies, but it limits its statistical analysis to only 11 U.S. studies of spouses of smokers. What do the 11 studies show? Of the eleven, 10 reported no statistically significant association between ETS exposure and lung cancer, while only one reported a statistically significant association. The EPA analysis of these 11 studies claims that they show a statistically significant difference in the number of lung cancers occurring in the non-smoking spouses of smokers: 119 such cancers for every 100 in the non-smoking spouses of non-smokers. It is this finding of statistical significance, a finding based on only the 11 U.S. studies, 10 of which found no significant association, that is the basis for the EPA decision to classify ETS as a "Group A" carcinogen. (27)
In order to arrive at its "conclusion", the EPA combined the data from the 11 studies into a more comprehensive data assessment called a "meta-analysis". Meta-analysis is governed by its own rules, as not every included study is a candidate for such combined analysis. In general, meta-analysis is appropriate only when the studies being analysed together have the same structure. The difficulty with the EPA's use of meta-analysis of the 11 ETS studies is that it has failed to provide the requisite information about the structure of the 11 studies, information crucial for an independent assessment of whether the studies are indeed candidates for meta-analysis. Thus, the EPA conclusion is based on a meta-analysis that is difficult, if not impossible, to verify.
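For readers unfamiliar with the technique, the mechanics of a simple meta-analysis can be sketched in a few lines. This is a generic inverse-variance (fixed-effect) pooling on the log-risk-ratio scale, a standard textbook method; the (RR, CI low, CI high) triples below are invented for illustration and are not the EPA's 11 studies.

```python
import math

# Minimal sketch of inverse-variance (fixed-effect) meta-analysis on the
# log-risk-ratio scale. The study triples below are invented illustrations.

def pooled_rr(studies, z=1.96):
    """Combine per-study risk ratios (each with a 95% CI) into one pooled RR.

    Each study is (rr, ci_low, ci_high). The standard error of log(RR) is
    recovered from the CI width, and studies are weighted by 1/SE^2, so
    larger (more precise) studies dominate the pooled estimate.
    """
    weighted_sum = total_weight = 0.0
    for rr, lo, hi in studies:
        log_rr = math.log(rr)
        se = (math.log(hi) - math.log(lo)) / (2 * z)  # back out the SE
        weight = 1 / se**2                            # inverse-variance weight
        weighted_sum += weight * log_rr
        total_weight += weight
    return math.exp(weighted_sum / total_weight)

studies = [(1.2, 0.8, 1.8), (0.9, 0.6, 1.35), (1.5, 0.9, 2.5)]
print(round(pooled_rr(studies), 2))  # 1.13
```

The validity of such pooling depends entirely on the input studies being structurally comparable, which is precisely the information the text says the EPA failed to supply for its 11 studies.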
But even more crucial to the question of assessing the quality of the EPA's ETS science is the issue of confidence intervals, for even by limiting its analysis to only 11 studies, and even by lumping these studies together through a meta-analysis, the EPA could not have achieved the "right" result if it had not engaged in a creative use of "confidence intervals". Essentially, confidence intervals express the likelihood that a reported association could have occurred by chance. The generally accepted confidence interval is 95%, which means that there is a 95% confidence that the association did not occur by chance. Inasmuch as most epidemiologists use the 95% confidence interval, the EPA itself, until the 1992 ETS report, always used this interval. Indeed, every one of the individual ETS studies reviewed by the EPA used a 95% confidence interval. Curiously, the EPA decided that in this instance it would use a 90% confidence interval, something that effectively doubles the chance of being wrong. Without using this 90% standard, the EPA could not have found that the 11 U.S. studies were "statistically significant". Without employing a novel standard, without, in effect, changing the accepted rules of epidemiological reporting, the EPA result, already painfully coaxed into existence, would not have existed, and ETS could not have been labelled a "Group A" carcinogen.
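The arithmetic behind this point is easy to demonstrate. A risk ratio is conventionally called statistically significant when its confidence interval excludes 1.0, and a 90% interval is narrower than a 95% one, so the same point estimate can be "significant" at 90% but not at 95%. The numbers below (a pooled RR of 1.19 with a log-scale standard error of 0.10) are invented for illustration and are not the EPA's figures.

```python
import math

# Sketch: the same point estimate can be "significant" at 90% but not 95%.
# An RR is conventionally significant when its CI excludes 1.0.
# The rr and se_log values are hypothetical, not the EPA's.

def ci_for_rr(rr, se_log, z):
    """Confidence interval for a risk ratio, given the SE of its log
    and the z-value for the desired coverage (1.96 -> 95%, 1.645 -> 90%)."""
    return (math.exp(math.log(rr) - z * se_log),
            math.exp(math.log(rr) + z * se_log))

rr, se_log = 1.19, 0.10
ci95 = ci_for_rr(rr, se_log, 1.96)   # 95% interval
ci90 = ci_for_rr(rr, se_log, 1.645)  # 90% interval

print([round(x, 2) for x in ci95])  # lower bound below 1.0 -> not significant
print([round(x, 2) for x in ci90])  # narrower interval excludes 1.0 -> "significant"
```

Shrinking the interval in this way does not change the data at all; it simply relaxes the standard of proof, which is the substance of the complaint against the EPA's choice of a 90% interval.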
Thus, despite all of its careful selection of the right data, its meta-analysis and finally its relaxed confidence intervals, the conclusive point remains, as Huber, Brockie, and Mahajan had already noted in Consumers Research in the United States, that "No matter how the data from all of the epidemiological studies are manipulated, recalculated, 'cooked', or 'massaged', the risk from exposure to spousal smoking and lung cancer remains weak ... No matter how these data are analysed, no one has reported a strong risk relationship for exposure to spousal smoking and lung cancer." (28)
B. THE PROCESS ISSUES
A careful look at the substance of the EPA's ETS claims shows why this science can be called nothing other than corrupt: it uses highly selected data, manipulates those data in breach of accepted scientific norms, and offers no cogent explanation, all in order to reach the "right" conclusion. But an examination of the process underlying this "science" demonstrates its wholly corrupted character even more clearly. There are at least nine specific process issues worth noting, each of which highlights a slightly different dimension of the corrupted character of the EPA's ETS science.
First, EPA science issues from a health promotion perspective that finds its conceptual home in the Lalonde Doctrine as propounded by former Canadian Minister of National Health and Welfare, Marc Lalonde. Lalonde argued that health messages must be vigorously promoted even if the scientific evidence was incomplete, ambiguous, and divided. Health messages must be "loud, clear and unequivocal" even if the evidence did not support such clarity and definition. What we have in the EPA is simply the Lalonde Doctrine as an institutionalised process. Clearly, the substance of the ETS data does not support its "Group A" carcinogen status, nor does it support public and workplace smoking bans on the grounds that ETS threatens the health of non-smokers. But the substance of the ETS data is to be ignored because the Lalonde Doctrine places the process of using such substance ahead of the substance itself; indeed, it requires that the substance be portrayed as something that it is not in order to further the health agenda. What this inevitably does is to build into the heart of the scientific enterprise an institutionalised motivation and justification for allowing ends extrinsic to science to determine the findings of science, for allowing science to be subject to an agenda not its own, for allowing science to tell lies with a clear conscience. Once one has come to see science as something which of necessity happens within the context of health promotion, then the process corruptions of the EPA follow quite "naturally".
This explains why, at one level, those involved with the EPA decision on ETS are quite frank about their process. For instance, an EPA official responsible for the revised ETS risk assessment was quoted as admitting that "she and her colleagues engaged in some fancy statistical footwork" to come up with an "indictment" of ETS. (29) (The footwork to which she referred is the novel 90% confidence interval.) Or to take another process example, the Science Advisory Board which reviewed the initial draft risk assessment on ETS, and found the case against ETS based on its association with lung cancer to be unconvincing, actually urged the EPA staff to attempt to "make the case" against ETS on the basis of the similarities between ETS and mainstream smoke. (30) To be fair, the consequences of the Lalonde Doctrine are not confined to the EPA's anti-smoking agenda. For instance, an article in the Journal of the American Medical Association in July 1989 reported a study that claimed to show a link between ETS exposure and increased risk of cervical cancer. In response to critics who noted that such a link was biologically implausible and that the study had ignored confounding factors, the authors replied that the study was justified simply on the grounds that it might reinforce the dangers-of-smoking message: "While we do not know of a biologic mechanism for either active ... smoking or ETS to be related to cervical cancer, we do know that cigarette smoking is harmful to health. The message to the public, as a result of this study, is one that reinforces the message that smoking is detrimental to health." (31) It would be difficult to find a more succinct example of the Lalonde Doctrine at work. There is no compelling evidence to support their claim, the authors all but admit, but it is important, in the interests of health promotion, that the public be made to think that there is scientific evidence of harm.
But second, while those involved in the EPA process are at one level open about the process, at another level they are profoundly dissembling. For instance, the EPA fails to mention that the "Group A" carcinogen status for ETS was arrived at using a process that violates its own Guidelines For Carcinogenic Risk Assessment. Instead of accepting that this suggested both a corrupted process and a corrupted conclusion, the Science Advisory Board suggested that the Guidelines For Carcinogenic Risk Assessment be changed. Given that the "right" conclusion must be reached and the data do not support that conclusion, one must manipulate the data and revise the guidelines governing the process and the conclusion.
Third, the ETS risk-assessment process has been corrupted from the outset by the fact that it has repeatedly violated the standards of objectivity required by legitimate science by utilising individuals with anti-smoking biases. One member of the group working on the ETS issue at the EPA is an active member of U.S. anti-smoking organisations, while the Science Advisory Board that examined the EPA's ETS work included not only a leading anti-smoking activist, but several others strongly opposed to tobacco use. Finally, the EPA contracted some of the work on certain documents related to the ETS risk assessment to one of the founders of a leading anti-smoking group.
Fourth, the EPA changed the accepted scientific standard with respect to confidence intervals, without offering any compelling justification, in order to make its substantive findings statistically significant.
Fifth, the EPA's Workplace Policy Guide which, as a policy document, would, in the course of normal scientific process, be developed only after the scientific evidence was in, was actually written before the scientific risk assessment was even completed, let alone reviewed and finalised. (32) Quite obviously, science was to be made to fit with policy, rather than policy with science.
Sixth, the EPA fails to note that had the two most recent U.S. ETS studies been included along with the other eleven, the resulting risk assessment would not have been statistically significant, even using the novel 90% confidence interval. With its entire "conclusion" at risk, the EPA had exceedingly compelling process reasons to exclude these two studies from its analysis.
Seventh, exclusion, however, was apparently insufficient, because the EPA does more than simply omit the studies: it actually refers to them in an appendix (33) and misrepresents one by claiming that it supports the EPA's ETS conclusions. The study, by Brownson, et al., (34) which appeared in November 1992 in the American Journal of Public Health, reported no statistically significant increase in risk between lung cancer and ETS exposure. In order to get round this politically unacceptable conclusion, the EPA quotes Brownson as concluding that "Ours and other recent studies suggest a small but consistent increased risk of lung cancer from passive smoking". But this is not the issue, as the EPA well knows. The question is not whether one observes a small increased risk but whether, in addition, one can legitimately assert that its magnitude is unlikely to be simply a chance effect, that is, whether there is a statistically significant risk, which in this case Brownson concluded one could not do. In effect, the EPA misrepresents a scientific finding by changing the terms of reference from statistical significance to plain risk.
This penchant for misrepresentation is not, however, just confined to recent studies. For instance, the EPA analysis consistently makes reference to the Garfinkel, et al., study. The EPA claims that this study presents "at least suggestive evidence of an association between ETS and lung cancer ..." (35) But a careful reading of Garfinkel does not confirm this at all. Garfinkel actually says that "we found an elevated risk of lung cancer, ranging from 13-31 per cent, in women exposed to smoke of others, although the increase was not statistically significant." (36) The entire question of suggestive evidence is bogus: the relevant question is whether Garfinkel found a risk that was statistically significant. He did not, and the EPA misrepresents his finding.
Eighth, the EPA represents its process as a comprehensive and objective analysis of the ETS data. In the usual course of events, this would imply a careful examination of the criticisms that have been levelled at the studies used to reach its conclusions. However, a careful examination of the Bibliography accompanying its report suggests that this is not the case. Although the note with the Bibliography indicates that it is not a "comprehensive list of all references available on the topic" it is still a list of all references cited and reviewed for the report. Yet, to take but one example, one would never know from the report or its Bibliography that the work of Trichopoulos had been subjected to significant criticism by both Burch and Heller, since neither is mentioned in the Bibliography. Nor indeed, despite its lengthy examination of the Hirayama studies, would one learn from either the discussion or the Bibliography that Hirayama was devastatingly criticised by Rutsch, who noted that one could infer from Hirayama's data that lung cancer was more common in unmarried non-smoking women than in the non-smoking wives of smokers -- a somewhat curious result.
Now the possible explanations for such selectivity are that:
- the authors of the study are not familiar with such criticisms, which would suggest incompetence, or
- they are familiar with the criticisms but have misunderstood them, ignored them or discounted them.
But even if one were to discount or ignore them, it is still odd, if one is committed to objectivity and openness, not to cite them. Not to cite them suggests a wish to act as if they did not exist, and to do this is to invite more than a suspicion that the EPA's ETS work is really an instance of closed-loop process abuse. In a closed loop the circle is never opened to divergent, dissenting views, views that challenge the orthodox conclusion. It is not simply that such views are discounted; it is rather that, as the EPA discussion and its Bibliography indicate, they are never heard at all -- indeed, judging from the Bibliography, they do not exist. What this demonstrates is a closed-loop process in which voices of dissent are simply not allowed inside the decision-making circle, a circle never opened to those who do not share the agreed perspective.
Ninth, significant difficulties have already been raised about the quality of EPA science, particularly by the Expert Panel in its report Safeguarding The Future: Credible Science, Credible Decisions, a report which noted that:
- EPA "science is of uneven quality";
- the "EPA has not clearly conveyed to those outside or even inside the Agency its desire and commitment to make high-quality science a priority";
- "the science advice function -- that is the process of ensuring that policy decisions are informed by clear understanding of relevant science -- is not well defined or coherently organised within EPA";
- the "Agency does not have a uniform process to ensure a minimum level of quality assurance and peer review for all the science developed in support of Agency decision making";
- the "Agency lacks the critical mass of externally recognised scientists needed to make EPA science generally credible to the wider scientific community";
- "science should never be adjusted to fit policy". (37)
Despite this, the EPA process is incapable of correcting itself.
This is perhaps the most significant process corruption of all: a process that is quite conscious of its problems but unwilling and unable to address them. Of course, even this characterisation is perhaps too kind, given that what the Expert Panel describes as problems are really, for the anti-smoking movement, just the normal way that science must proceed if it is to make the anti-smoking case. If so, there is no conscious sense of process problems at all. What the Expert Panel's Report actually provides is another description of corrupted science: corrupted in its substance and its process, driven by a pre-determined policy agenda, based on inadequate data, of uneven quality and inadequately peer-reviewed, lacking critical validation by outside scientists representative of the "wider scientific community", and, finally, fully aware of its corruption but unwilling to heal itself.
This is not to suggest that the NHMRC Report should exclude all reference to the EPA. It is, however, to suggest that it should acknowledge that serious objections have been raised against the EPA's report, reference those objections and indicate its own position on them. To take but one instance, the Report could have noted that the EPA's departure from the accepted 95% confidence level is not accepted epidemiological practice, rather than merely repeating the EPA's self-serving explanation about an a priori hypothesis. The hypothesis is what should have been tested, not assumed. Are we to assume that the Report's authors countenance, as accepted epidemiological practice, the re-calculation of results at a lower significance level and the declaration that the new result supports one's case? Given how fundamental the EPA analysis is to the ETS debate, to fail to acknowledge that objections have been raised, and to fail to give them some attention, is to appear to accept the process and substantive abuses that are the hallmark of corrupted science.
The third evidentiary problem is really a cluster of problems which relate to the Report's understanding and use of the epidemiological evidence. It is also a series of problems about what the Report chooses not to cite or count as relevant to its work. These problems have been ably demonstrated by Johnstone, (38) whose account here is fundamental. As he has pointed out, the problem essentially centres on the fact that the previous NHMRC Report (1986) (39) contained significant errors with respect to the quality of the epidemiological evidence, errors that go unacknowledged and uncorrected in the 1995 Report. To take one instance, the 1986 Report considered the work of Trichopoulos (40) but it provided the wrong calculations of risk ratios, even though these had been drawn to Trichopoulos' attention in 1983 by Heller and accepted by Trichopoulos in 1984. (41) It is astonishing that this error is not mentioned and that Heller's critique of Trichopoulos is not included in the Report's references. Equally strange is the fact that Trichopoulos' 1984 paper is not included in the Report's references. One has the uncomfortable feeling that the Report is not favourably disposed to acknowledging error in those who argue that the evidence suggests that ETS presents a health hazard. If this were an isolated instance of incorrectly-cited evidence that failed to take into account serious questions as to the evidence's validity, it could perhaps be excused. Instead, one is confronted with what appears to be a pattern of such problems.
Take, for instance, the work of Hirayama, author of some of the earliest and most widely-quoted studies. Hirayama's 1981 data were analysed using a test developed by Mantel. After the publication of Hirayama's paper, Mantel (42) published a criticism of Hirayama's method. In response to Mantel's criticism, Hirayama produced additional information which Rutsch (43) showed led to the conclusion that lung cancer was more common for non-smoking unmarried women than for non-smoking wives of smokers. Neither the NHMRC's 1986 Report nor the present Report mention these significant difficulties with Hirayama's work. Indeed, they do not cite these papers by Mantel or Rutsch.
Equally surprising is that the Report's references omit one of Hirayama's 1981 papers, (44) in which Hirayama acknowledged, in response to Lee, (45) that his confidence levels contained errors by factors of over 100%. Of course the Lee paper does not appear in the references either. Given that the Report devotes significant attention to Hirayama's work, it is nothing less than astonishing that none of these substantial criticisms of his work is mentioned or referenced. This is precisely what characterises corrupt science -- the unwillingness to acknowledge alternative evidence or problems with evidence that might undermine its credibility.
A similar reluctance to acknowledge evidentiary mistakes is found in the fact that the Report fails to acknowledge or correct errors in reporting the results of Correa, et al., (46) and Garfinkel, et al., (47) that were contained in the 1986 Report. The 1986 Report claimed that Correa found a positive trend in lung cancer in non-smokers with ETS. Yet Correa reported no such finding. The 1986 Report also claimed that Garfinkel found a "positive association in non-smoking females; statistically significant". But Garfinkel reports that the elevated risk was not statistically significant and the 1995 Report nowhere acknowledges its predecessor's glaring error.
The fourth evidentiary problem is the problem of what, if anything, the "evidence" adduced by the Report suggests. The Report's authors believe that the epidemiological studies "provide direct empirical evidence of the effect of exposures on the development of cancer". (48) But this is surely an impossibility as the language of effect is inextricably linked to the language of cause, and epidemiological evidence gives us at most statistical associations. But let us leave causal quibbles aside and focus solely on the question of what the evidence actually shows. Let us focus on the four largest and most recent case control studies -- Kabat, (49) Fontham, (50) Brownson, (51) and Stockwell, (52) three of which are discussed in the Report. The Report's discussion of Brownson and Stockwell is curious, if not misleading. The Brownson study shows no increased average risk. In plain language this means that using only the Brownson data one should conclude that ETS causes no annual lung cancer deaths. But this crucial point is not mentioned in the Report. Instead of being provided in the text with this information we are told that Brownson found "elevated risks for adult domestic exposure of greater than 40 pack-years". But the results of the upward dose-response trend are not definitive and, more crucially, even at the highest exposure levels, risks are subject to significant uncertainty.
A similarly curious presentation is accorded to Stockwell. The Report tells us that the Stockwell findings are statistically significant. (53) But this is simply untrue as the Stockwell study shows an increased average risk which, at the 95% level, is not statistically significant. What this means is that of the 40 studies presented in Table 5.2 of the Report only twelve reach statistical significance. Of the four most recent large studies -- Kabat, Fontham, Brownson and Stockwell -- two show no increased average risk (Kabat, Brownson), one shows a barely statistically significant increased risk (Fontham), and one shows an increased average risk which is nonetheless not statistically significant at the 95% confidence level (Stockwell). (54) Moreover, if we are to take the Fontham study, which alone of these four shows a statistically-significant average increased risk, and ask the relevant question about the degree of risk suggested, then in the words of the Congressional Research Service (CRS), the chance of dying of lung cancer over one's lifetime "for a person exposed only to background ETS, the number drops to about 7/100 of one percent". (55) It is on the strength of this "evidence" that the Report concludes that the "evidence that passive smoking causes cancer is strong". (56) But this is the conclusion of what can only be described as corrupted science, science that misrepresents the data, the strength of the data and what conclusions should be drawn from the data. Indeed, it is difficult not to conclude that if this "evidence" were about anything other than tobacco smoke, the suggestion that it provided the basis for public policy remedies would be immediately dismissed.
This conclusion is strengthened when two additional factors which the Report mentions are carefully examined: misclassification and the Hill criteria for causation. There are several sorts of misclassification errors that could occur in the studies cited in the Report. As the CRS noted: "It is clear that misclassification and recall bias plague ETS epidemiology studies. It is also clear from the simulations that modest, possible misclassification and recall bias rates can change the measured relative risk results, possibly in dramatic ways." (57) This is ignored in the Report's claim that "Smoker misclassification is a potential source of positive ... bias which has not been proven to have affected the relevant epidemiological studies". (58) But as the CRS notes:
[P]ossible combinations of small rates -- below 10 per cent -- could drive ETS relative risks in the highest exposure groups to values no longer distinct from 1.0. While these results are obtained from the Fontham study, similar results are likely from the Brownson study. Even smaller values of these rates -- below 3 per cent -- could be combined to reduce the lower bounds of the 95% confidence intervals well below 1.0 for these studies. (59)
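The misclassification mechanism the CRS describes can be sketched with a few lines of arithmetic. The model and every rate in it are assumptions for illustration only: if even a small fraction of self-reported never-smokers are actually smokers, and such misclassification is more common among people married to smokers, a spurious ETS relative risk appears even when the true ETS effect is nil:

```python
def spurious_rr(k, f_exposed, f_unexposed):
    """Observed ETS relative risk when the true ETS effect is nil (RR = 1.0).

    k: lung-cancer risk multiplier for smokers relative to never-smokers.
    f_exposed, f_unexposed: fraction of misclassified smokers among the
    ETS-exposed and unexposed "never-smoker" groups respectively.
    """
    # Each group's observed rate is a mixture of true never-smokers (rate 1)
    # and misclassified smokers (rate k), so the baseline rate cancels out
    return (1 + (k - 1) * f_exposed) / (1 + (k - 1) * f_unexposed)

# Illustrative assumptions: smokers at 10x risk, 5% misclassification among
# the exposed versus 2% among the unexposed
rr = spurious_rr(10, 0.05, 0.02)  # roughly 1.23, with no true ETS effect
```

Under these invented but modest rates, the entire weak association reported in the literature could in principle be manufactured by misclassification alone, which is precisely the CRS's point.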
An examination of the Report's use of the Hill criteria reveals a similarly curious result. Table 5.1 in the Report sets out the Hill criteria for causality and their application to the studies cited as evidence. What is so surprising about this table is its consistent misrepresentation of what the data in fact show. For example, with respect to strength of association, Hill argued that a strong association lent greater credence to a causal claim than a weaker one, with a strong association being an RR of at least three. On this criterion, the Report's RR of 1.3 fails the association test. Similarly with dose-response: the data from Brownson and Stockwell suggest that the results of the upward dose-response trend are not definitive, and even at the highest exposure levels, risks are uncertain. As for coherence and consistency, it is difficult to understand how the evidence presented in the Report, if critically examined rather than selectively discussed, could lead an independent observer to conclude that, taken together, the epidemiological evidence consistently and coherently pointed to ETS as a source of lung cancer. Finally, the claim that the criterion of analogous reasoning supports causality, inasmuch as exposure to ETS is analogous to active smoking, is rejected even by the EPA. (60)
What this means is that there are really two central evidentiary issues on which the Report clearly founders: the methodology that supports the evidence, and the evidence itself. With respect to the methodology, I have argued that the epidemiology of multifactorial diseases does not permit the claims about causation that the Report indulges in, and that the Report, far from being forthright about this limitation, ignores it. With respect to the evidence itself, I have argued that, leaving aside the question of whether epidemiology can provide us with causal knowledge about ETS, the evidence provided by the Report itself, taken on its own terms, does not support its conclusions. The four most recent case-control studies yield two studies with no increased average risk, one study with a barely statistically significant increased risk, and one study with an increased risk that is not statistically significant at the 95% confidence level. Taken together with the fact that only twelve of the 40 studies presented in Table 5.2 reach statistical significance, it is readily apparent that the evidence does not support the conclusion that the Hill criteria of coherence and consistency have been met.
The authors of the Report in part anticipate this line of objection for they note that "RRs which are considered relatively small in epidemiology ... can nevertheless be relatively large in demographic terms. Even a 10% excess in risk may translate into a considerable number of additional cases if the people carrying that risk are common in the community". (61) This reply, however, misses the point. The question about what the RR shows is really a question about what we can scientifically know, not a question about the quantity of risk across the population. Epidemiology is simply too crude a tool to allow us to claim that we know that something with an RR of 1.5 is a risk. In one sense the discussion about the effect on large populations is disingenuous, for it begs the question of whether there is any risk. The size of the population is completely irrelevant to the question of whether we have good evidence that there is a risk at all.
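The demographic arithmetic the Report invokes is simple enough to make explicit. Every number below is hypothetical, chosen only to show the form of the calculation; as argued above, the output is only as meaningful as the RR fed into it:

```python
def excess_cases(baseline_rate, rr, exposed_population):
    """Annual excess cases implied by a relative risk, taken at face value.

    baseline_rate: annual disease rate among the unexposed.
    rr: relative risk attributed to the exposure.
    exposed_population: number of people carrying that risk.
    """
    # Excess rate is the baseline rate scaled by the excess risk (rr - 1)
    return baseline_rate * (rr - 1.0) * exposed_population

# Hypothetical inputs: baseline of 5 per 100,000 per year, a 10% excess
# risk (RR = 1.10), and 2 million exposed people
cases = excess_cases(5e-5, 1.10, 2_000_000)
```

The multiplication is valid only if the excess risk is real; if the RR itself is not distinguishable from chance, scaling it across a large population manufactures cases out of statistical noise.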
The second sort of problem which plagues the Report and suggests the label of corrupted science is its level of logical coherence and consistency. The question here is, what does the Report's evidence suggest be done as a matter of public policy? While we have argued previously that the Report's evidence will not sustain its findings, at this point we shall assume that the evidence does support it. We are concerned now not with the evidence itself, but whether the Report's policy recommendations cohere with its findings.
The major problem with the Report's recommendations is that they are strikingly at odds with its evidence. Indeed, it is not wrong to say that the evidence and the policy recommendations seem completely disconnected. At the very best the epidemiological evidence suggests an RR which is extremely weak, an RR for never-smoking women living with smokers of 1.31. (62) The evidence at best suggests a slight risk in domestic settings, yet the policy recommendations relate to public settings. This means that if any regulation of ETS is justified it would be justified on evidentiary grounds solely in homes, not in public places. Indeed, the one piece of evidence adduced by the Report, the LeVois and Layard meta-analysis, suggests a workplace ETS exposure and lung cancer RR of only 1.01. (63) Moreover, cotinine levels in non-smokers suggest that residential ETS exposure is probably more important than workplace exposure. (64) As the CRS concludes: "If, on average, workplace ETS exposure is lower than residential exposure, then it is likely that relatively few workers would be exposed to sufficient ETS to be at increased risk for lung cancer". (65) All of this contradicts the Report's claim that "the nature of the hazard ... is likely to be similar in all enclosed settings," (66) a claim for which no scientific evidence is offered.
Here then we come in a different way to see how the Report demonstrates the qualities of corrupted science. While previously the evidence itself was misrepresented and misjudged under the guise of objectivity, here the "evidence" is used to come to recommend a course of public policy that is completely incoherent. Not only does the Report provide no evidence that the alleged hazards of ETS are the same in all enclosed spaces, but the evidence about the hazards of the workplace that it does present suggests that the hazard is inconsequential. In the end we are left with a policy that leaves unregulated the area in which the Report's evidence suggests there may be hazard and regulates the area in which the Report's evidence suggests there is no hazard!
It is, of course, not difficult to understand why such incoherent policy rationale is advocated. The real justifications for the policy recommendations have nothing to do with the health threats to non-smokers but instead with two other factors: the "loss of amenity" which ETS causes to non-smokers and the "comprehensive strategy to control active smoking both in giving a clear message that non-smoking is normal behaviour and in protecting young people, in particular, from smoking exemplars". (67) The Report's authors are to be congratulated for being so forthright in their reasoning, however repugnant such reasoning might be to both good sense and democratic values. What emerges from these claims is a clear view of the process of corrupted science at work: that is, the claim that ETS threatens the health of non-smokers is only a crude attempt to use a harm-to-others argument to conceal an offensive and unjustified paternalism. It is not that ETS is harmful, rather it is that smoking is harmful to smokers and seeing smokers is potentially harmful to young persons.
In reality the science of ETS has nothing to do with the policy recommendations of the Report since these are not derived from even the inconclusive evidence that is produced. Controlling ETS is really a policy designed to force smokers to stop smoking and, further, to stigmatise smokers as morally unacceptable role models. It is a public policy that is morally reprehensible on at least two grounds: one, that it attempts to intervene, through manipulation of the facts, in the lives of competent adults; and two, that it characterises those who engage in a legally-permitted behaviour as inappropriate exemplars for young persons. While it is certainly open to the authors of the Report to hold these views about the "exemplar qualities" of smokers, it is not open to them to attempt to make these subjective moral judgements law in the guise of legitimate public policy.
2. THE CONSEQUENCES OF CORRUPTED SCIENCE FOR SOCIETY AND FOR DEMOCRATIC PUBLIC POLICY
In the first section of this paper I argued that there was substantial evidence that the Report is an instance of corrupted science. In this second section, I examine the consequences of using corrupted ETS science in order to frame public policy.
At the outset it is important to note that, apart from the direct implications for public policy, there is a wider social issue about the Report and its claims, namely, the moral questions which it raises.
The first sort of question is obviously the question of the legitimacy of misrepresentation, for corrupted science is at bottom science that misrepresents the state of reality. And what a careful analysis of the scientific claims of the Report reveals is a profound and systematic disregard for the truth about the possible dangers from ETS. Not only are data manipulated to produce the desired results and suppressed or dismissed when they do not fit with the standards of political correctness, but the fact that accepted standards are changed without justification goes unnoted. In effect, one has an ethic that legitimises misrepresentation in the service of a good cause -- "a smoke-free society". But is a smoke-free society a sufficient justification for a public health movement founded on unreliable science and blatant misrepresentation? The frightening thing about deceit -- whether in the allegedly righteous cause of eliminating smoking, or in the service of any number of other worthy ends -- is both that it is so easy to justify and that it is so difficult to restrict its use to the ends that originally justified it.
But there is a second moral question here that goes beyond the morality of misrepresentation into what might be called the morality of suppressing dissent. Both the process of producing corrupted science and of utilising it as the basis for public policy demand a fundamental intolerance of dissent, both scientific and otherwise. The imperatives of health promotion are such that the ambiguities and uncertainties that form a legitimate part of science -- and, most importantly, the questions about the quality of the evidence and whether it justifies the proposed public policy measures -- cannot be tolerated. This means that scientific and public policy dissent must be suppressed through portraying dissenters as either in the pay of the tobacco industry or at the margins of the scientific establishment, a strategy that raises a host of subsidiary moral questions. Whatever the cost, "science" must be seen to provide a conclusive and united answer to the question of tobacco and harms to the innocent. Thus, despite the vital role of questions, argument and dissent in science as well as in democratic life, the process of corrupted science seeks to silence dissent in the interests of protecting not the truth, but its misrepresentation of the truth.
By far the most morally objectionable aspect of the Report is its readiness to use corrupted science to deprive smokers not only of their right to pursue their pleasure in public, but quite possibly of the opportunity to gain or retain employment, or to advance their prospects. Put more bluntly, it is the question of whether it is morally justifiable to use bad science to hurt people. What should never be lost sight of in this debate is that without the alleged scientific justification of harm to innocent parties, there is no compelling public policy rationale for banning or restricting smoking in public places or workplaces. Once the corrupted science is stripped away, there simply are no harms, and without those harms, smoking becomes a self-regarding behaviour, interventions against which can only be advanced on patently paternalistic grounds. The Report might still argue that public and workplace smoking should be banned in order to discourage smokers from smoking, but this argument loses its compelling harm-to-others character and becomes instead nothing more than an argument about the State intervening in the private lives of competent adults.
What is so morally offensive here is that truly morally blameless people -- not the alleged victims of smokers, but smokers themselves -- are to be harmed in significant ways on the basis of bogus science and for no good reason. What makes the morality of the Report as corrupt as its science is that it is prepared to exploit for its own ends our readiness to deprive individuals of certain rights, if the exercise of those rights appears to harm others, by explicitly manufacturing harms to others. In doing so, the Report simultaneously violates perhaps the two most fundamental moral principles, first by treating persons, in this case smokers and their alleged victims, merely as means to the end of a smoke-free society and not as ends in their own right, and second by inflicting substantial pain on an entire class of people without their consent and for no compelling reason.
But the question of the moral justifiability of using corrupted science to hurt people goes beyond the question of depriving individuals of their right to a significant pleasure, or even of a job, to something far more crucial, namely, the justifiability of depriving individuals of their moral standing through stigmatising them as moral outcasts. In the end, this is, of course, the logical outcome of ETS science, to make smokers a class of moral miscreants who see themselves, and are seen by others, as so ruthlessly intent on pursuing their own interests that they are blind to the harm they inflict on others. It is indeed but a short way from the claim "Smoking kills" to the conclusion that "Smokers kill". But then, such a conclusion is the public policy justification for bans on public smoking.
Surely there is nothing more morally loathsome than to attempt to manipulate public policy to create a class of citizens who, on the basis of bad science, come to despise themselves for what they mistakenly believe they do to others and who in turn are despised by their fellow citizens for allegedly threatening their well-being. The ultimate moral corruption of the Report lies in its considered use of corrupted science to erode the normal standards of respect and tolerance for individual difference. It does this through attempting to use science to fashion a society which believes that smokers are thoughtless menaces to the health of their fellow citizens, and that by their thoughtlessness they place themselves on the periphery of, if not outside, the moral community.
At the end of the day, the Report's foundation of corrupted science poses a threat to that most fragile aspect of social capital -- trust. To the extent that its evidence is shown to be contrived or disconnected from its conclusions, it lessens every thoughtful citizen's trust and respect for both science and government. To the extent that it misrepresents reality through creating a risk, it makes smokers distrustful of themselves and their actions while at the same time creating a distrust of smokers by non-smokers.
For all of the moral problems that the use of corrupted science raises, the equally, if not far more, disturbing issue that it introduces is the implication of bad science for the public policy process. What, then, are the consequences of introducing corrupted science into the democratic public policy process? In a word, they are nothing less than disastrous.
The operative word here is, of course, democratic. We are not concerned with the uses of bad science in a non-democratic society. In fact, with the appropriate degree of space we might wish to argue that non-democratic societies, or more specifically totalitarian societies, might be peculiarly receptive to the use of corrupted science. Our concern here, however, is with the effects of using corrupted science as a mechanism for framing and justifying democratic public policy. Our claim is that the effects of using such science are fundamentally at odds with the character of a democratic society.
The easiest way to understand the threat that corrupted science poses to democratic public policy and to democratic life as a whole is to understand what it is that democratic public policy tries to do. The goal of democratic public policy is to minimise public harms insofar as this is possible within the context of such foundational democratic values as diversity, autonomy, respect, rationality and fairness.
THE VALUE OF DIVERSITY
This is the recognition that the persons who make up democratic society bring a diversity of beliefs and values to the community, a diversity whose rich complexity more often than not goes uncaptured by the social science theories and data that are the tools of choice in the public policy process. This diversity, moreover, is not simply reflected in conflicting notions of what direction society should take, but more basically in differing pictures of what makes a good personal life. By accepting diversity as a foundational value, the democratic society and democratic public policy process accept it not just as a fact but as something to be encouraged, enhanced and celebrated as a strength.
THE VALUE OF AUTONOMY
This is the recognition that, subject to the acceptance of certain minimal core values necessary for any society to exist, the individuals who make up democratic society are the best judges of the shape that they wish their lives to take and they should be accorded the maximum liberty, compatible with similar liberty for everyone else, to think, believe, and live as they choose.
This means that the State will resist the impulse, however well-intentioned, to misuse the public policy process to undermine and intrude upon its citizens' capacities and inclinations for self-governance, and to engineer the lives of its citizens by collectivising the conscience into one communal vision of the good life.
THE VALUE OF RESPECT
This is the recognition that the citizens of a democratic society possess a human and moral standing equal to that of the State. It is the recognition that the democratic state sees its citizens as persons of intrinsic worth, equivalent in dignity and standing with itself, with lives not to be managed or saved, but to be allowed to develop in ways of their own choosing. It is a recognition that the State's role should be to encourage its citizens to define themselves and their life-projects in widely varying ways, to foster the development of self-respect by refraining, to the greatest extent possible, from moral judgements about these self-definitions and life-projects, and to create the conditions which allow its citizens' lives the greatest possible chance of fulfilment.
THE VALUE OF RATIONALITY
This is the recognition that the public policy process must be grounded in democracy's respect for the rational, that is, adherence to the standards of rationality with respect to the assumptions that it employs, the evidence that it considers and the analysis that it brings to bear on that evidence. None of its elements may be based on irrational considerations: all must meet the minimal test of reasonableness through being clear, coherent, and compelling. The evidence supporting public policy measures must be substantial; the measures must be coherent and consistent with what the evidence shows; and the measures proposed must have a significant promise of being effective. At the same time, the recognition of the value of rationality is also the recognition that the truth is frequently complex, something that places its own limits on the scope of rationality. Reality will often be richer and denser than evidence and theories, and some problems will not have easily identifiable causes and solutions, but this does not justify public policy formulated in the absence of reason and on the basis of surmises, hunches or appeals to emotion or intuition.
THE VALUE OF FAIRNESS
This is the recognition that democracy entails a foundational commitment to elicit, examine and consider, within an objective, open and non-arbitrary framework, the views of all of those whose legitimate interests are likely to be affected by collective decision-making. It is the recognition that a democratic society will not unfairly discriminate through structuring the power process so as to preclude the statement and examination of certain perspectives.
***
What this suggests is that both the agenda of legitimate public policy in a democracy and the process used to argue about that agenda are constrained by certain non-negotiable values. What marks certain policy options and certain policy processes out as illegitimate and non-democratic is their conflict with these core non-negotiable values.
To take an example, a public policy that significantly undermined the autonomy routinely accorded to citizens in a democratic society, or one that tended to eliminate the diversity and tolerance that are marks of democratic life, or a public policy process that failed to consider fairly divergent points of view, or used poor argument or flawed evidence to advance policy options would fail to qualify as legitimate democratic public policy inasmuch as it conflicted with one or more of the key values on which not only democratic public policy but democratic society is founded.
Placed within this context it is clear that the Report's use of corrupted science is a threat, not at some peripheral point, but at the very centre, to democratic values and to democratic public policy. Corrupted science and the use of corrupted science in the creation of policy threatens each of the process and substance values -- diversity, autonomy, respect, rationality and fairness -- that characterise democratic public policy. And this is something that should be of concern to everyone, whether non-smoker or smoker.
First, the use of bogus ETS science in an attempt to determine the policy agenda on smoking imperils the distinguishing characteristic of science -- its objectivity -- and threatens to render science worthless for public policy purposes. Though science is never completely objective, if indeed complete objectivity is possible, it at least, in distinction from much of the political process, professes a fundamental interest in reason, evidence and bias-free judgement. In fact, much of science's standing in contemporary society derives from its objective character, as does much of its usefulness in the public policy process. In effect, we have a high degree of confidence in the scientific process as providing a careful, evidenced and (to some degree) value-free assessment of certain questions relating to public policy. And it is precisely this utility that the use of corrupted science threatens. If science ceases to work outside of the political and policy process, if it ceases to be a tool available to all sides of an issue, if it becomes politicised and ideologically sensitive, then it ceases to be valuable in the policy process because it becomes nothing more than another form of special pleading rather than the voice of reason.
In this sense, to use corrupted science, for however allegedly worthy an end, is inevitably to corrupt science itself. No one who genuinely cares about good public policy, policy crafted on the basis of careful argument, cogent reasoning, and compelling data, policy that can stand the test of careful probing and consistent dissent, will countenance the corruption of science.
But the use of bogus science to attempt to manipulate the public policy debate on smoking threatens not just science, but also the standards of rationality that distinguish legitimate public policy. Adherence to the norms of rationality requires that the identification of problems, causes and solutions be based on empirical evidence of the most rigorous sort, evidence that is specific, strong, consistent, and coherent, and on rational arguments that are clear and logically compelling. Problems and solutions that cannot meet this standard of argument have no place in the public policy process; to admit them is to abandon commitment to reason as a foundational democratic value. Yet the use of corrupted ETS science as a basis of public policy is nothing less than an abandonment of rationality as a measure of legitimate public policy. As we noted above, the Report's ETS "science" cannot meet any of the tests of rationality that determine legitimate public policy problems and solutions. The ETS "evidence" is not specific, strong, consistent, or coherent, and if it fails these tests, it cannot provide compelling rational reasons -- as opposed to rhetorical and emotional reasons -- for its public policy recommendations. The use of corrupted ETS science is, however, more than simply an abandonment of reason in the public policy process; it is also something far more frightening: the attempt to institutionalise a particular irrational view of the world as the only legitimate perspective, to replace rationality with dogma as the legitimate basis of public policy. If the use of corrupted ETS science by the Report represented simply the abandonment of reason, then its actions would be merely non-rational. But the Report's efforts go beyond the non-rational to the irrational, to an assault on reason itself.
By refusing to include evidence of scientific dissent from the officially determined "truth" about ETS -- as evidenced in the omission from key bibliographies of any references to criticisms of key findings and studies -- and by manipulating and misreporting data, the Report's authors reveal themselves as enemies of the open and self-correcting process of reason. In a very real sense, the "truth" about ETS ceases to be open to rational assessment and assumes instead the status of revealed dogma. And only those who ultimately fear, if not loathe, reason are comfortable with dogma as the basis of public policy.
There is, however, a third peril that the use of corrupted ETS science poses to democratic public policy, and that is through its treatment of the question of risk. The question of risk is central to any modern discussion of harm and public policy. If everything depends on science, then everything depends on the science of risk assessment. And this places a special moral burden on those who use the notion of risk and risk assessment in public policy debates to be certain that the concept is used with integrity and not simply as a lever to frighten. In one sense the misuse of the notion of risk is simply another instance of a fundamental contempt for reason in public life because it is an attempt to gain, through irrational means, something that careful argument denies one.
To use the notion of risk with integrity in public policy discussions would minimally involve:
- stating risk assessments in a way that does not exaggerate harms and allows for individuals to make their own decisions about balancing the risks and rewards of various courses of action;
- placing particular risk assessments within a general risk context in a way that allows one to answer the question of how significant is this risk compared with other risks associated with everyday living (the question thus becomes not simply "is this activity risky", but "does it carry a risk level that in other circumstances we would consider worrisome"?); and
- conveying the full sense of both the inexactness of risk assessment and the complexity of risk assessment -- even in the face of the popular preference for simplicity.
(As C.P. Snow observed about complexity and ambiguity: "Even at the highest level of decision, men do not really relish the complexity of brute reality, and they will hare after a simple concept whenever one shows its head".)
Given the imperatives of the Lalonde Doctrine and the general aversion of the anti-smoking movement to individual autonomy, it is obvious that these standards of responsible risk discussion will be ignored in the ETS debate. Indeed, given the flimsy nature of the ETS "science", the last thing that the Report wishes to have is a careful public policy discussion of the real risks that ETS exposure poses. Everything hinges on the public thinking that, for example, the non-smoking wives of smoking men have a 30 per cent increase in their risk of getting lung cancer without stopping to think what this figure, even if it were true, might mean. As Peter Finch explains, what the figure really means is something quite different, once the risk involved is explained and contextualised, from what the individuals using the figure want the public to think that it means:
The annual death rate from lung cancer among non-smoking wives of non-smoking men is of the order of six per 100,000. Among non-smoking wives of smoking men the corresponding figure is eight per 100,000. Thus, two in every 99,994 non-smoking wives of smoking husbands die of lung cancer that, it is claimed, should be attributed to the effects of passive smoking. This is an exposure risk of almost one in 50,000, about the chance of tossing 16 heads in a row. To put such a small exposure risk into perspective note that, in Australia, the death rate from injuries and poisonings for males aged 15-24 years, at 101 per 100,000 is about 50 times as great ... by emphasising a 30 per cent increase in lung cancer due to passive smoking, health activists have led people to think that this is a high risk situation when that is not the case. (68)
Simple statements of risk, even assuming that the risk assessment itself is reliable, can be significantly misleading. Suppose, for instance, that I wish to promote the use of a certain heart drug. I claim that the drug reduces the death rate among patients from 4 in every 100 to 3 in every 100, which means that the drug offers a relative risk reduction of 25 per cent. What I fail to tell you is that the absolute risk difference is only one percentage point, and that I would have to treat 100 patients to save just one extra life. Although there is some truth to my original claim of a 25 per cent risk reduction, my reliance on this figure alone, and my failure to place the reduction in context, significantly distorts reality.
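The arithmetic behind these two examples can be checked in a few lines. The sketch below uses only the figures quoted above (8 versus 6 lung cancer deaths per 100,000; 4 versus 3 heart deaths per 100); the function names are illustrative, not drawn from any source in the text:

```python
def relative_risk(exposed_rate, unexposed_rate):
    """Ratio of the event rate in the exposed group to that in the unexposed group."""
    return exposed_rate / unexposed_rate

def absolute_risk_difference(exposed_rate, unexposed_rate):
    """Extra events per person attributable to the exposure."""
    return exposed_rate - unexposed_rate

# Finch's lung cancer figures: 8 vs 6 deaths per 100,000 non-smoking wives.
rr = relative_risk(8 / 100_000, 6 / 100_000)
ard = absolute_risk_difference(8 / 100_000, 6 / 100_000)
print(f"relative risk: {rr:.2f} (roughly a 30 per cent increase)")
print(f"absolute difference: about 1 in {round(1 / ard):,}")

# The heart-drug example: deaths fall from 4 to 3 per 100 patients.
rrr = 1 - relative_risk(3 / 100, 4 / 100)
nnt = 1 / absolute_risk_difference(4 / 100, 3 / 100)
print(f"relative risk reduction: {rrr:.0%}")
print(f"patients treated to save one extra life: {nnt:.0f}")
```

The same pair of functions exposes both rhetorical moves: a large-sounding relative figure (30 per cent, 25 per cent) sits alongside a tiny absolute difference (1 in 50,000, one percentage point), and only the absolute figure tells a reader how much the risk actually changes.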
In effect, the Report's use of corrupted ETS science corrupts the entire discussion of risk in public policy through failing to note the inexactness of the assessment, through failing to contextualise the risk and compare it with other accepted forms of risk in everyday life, and through failing to acknowledge the complexity of the entire issue.
The fourth peril that the use of corrupted ETS science presents for democratic public policy is that it undermines the value of fairness that is so central to democratic life. Fairness is undermined in at least two crucial senses. First, the debate about the nature of the evidence, its complexity and its contentiousness is never fairly acknowledged. The existence of significant dissent is rarely admitted, and truth is made to appear simple, easy and unambiguously pointing in one policy direction, when in reality, as we have observed above, none of this is the case. Second, the fundamental requirements of fairness -- the commitment to elicit, examine and consider, within an objective, open and non-arbitrary framework, all views, without using the power of authority to suppress and exclude -- are flouted both in letter and in spirit by the substance and process of the corrupted ETS science. Indeed, perhaps the defining characteristic of corrupted ETS science is its fundamental non-objectivity, its procedural unfairness.
The fifth peril that corrupted ETS science presents for democratic public policy is that it establishes an official, State-sanctioned scientific ideology that is used to morally stigmatise, degrade and exclude certain citizens from the civil community. For that is the ultimate public policy purpose of ETS "science": to provide a compelling public policy justification for marking out certain behaviour as morally unacceptable. The quite horrible consequences, in our own century, of state-sanctioned science that prescribed certain acceptable ways of living and thinking and that singled out certain individuals as moral reprobates should provide warning enough against the dangers of giving public policy legitimacy to a "scientific" dogma that morally degrades and excludes.
Finally, and perhaps most significantly, the use of corrupted ETS science in the public policy process threatens the central democratic values of autonomy, respect and diversity. The key to these values is the belief that individuals are equal in moral standing with the State, that they are the best judges of the shape of their own lives and should be encouraged to develop genuine diversity, and finally that they are also capable of understanding and participating in the life of the community, including choosing those who are to exercise authority. But inasmuch as ETS science is based on a selective and ultimately untrue reading of reality, and given that it is fundamentally designed to manipulate both individuals and the policy process into believing and doing certain things, it is impossible for such science to value autonomy, respect or diversity. The very nature of corrupted science is such as to deny that individuals can, or at least can be trusted to, make important and informed decisions about themselves, and most especially about their health. The agenda of corrupted science is at its core based on the paternalistic assumption that only a few can think and act correctly, that only a few know the "truth", a truth that must be carefully nuanced for the many, that only a few must ultimately chart one moral or healthy or rational way to live, a way that the State will in turn enforce. In the end, perhaps paradoxically, corrupted ETS science goes wrong at the very start, for when it decides that the interests of health promotion and the interests of a smoke-free society take precedence over the truth, it sets itself on a course of manipulation, fabrication and misrepresentation that cannot but collide with the values of autonomy, diversity and respect.
In the final analysis, it is difficult indeed to find anything substantial in the Report which would tend to mitigate the conclusion that it is an instance of flawed science and flawed public policy produced by authors who fail to appreciate the difference between the objectivity of scientific analysis and advice, and the rhetoric and partisanship of advocacy. Consequently its only usefulness in the public policy process is as an example of how not to produce legitimate, democratic public policy.
ENDNOTES
1. This is a revised version of a paper submitted to the NHMRC and commissioned by the Tobacco Institute of Australia.
2. John Last, "New pathways in an age of ecological and ethical concerns", International Journal of Epidemiology, 1994, 23, pages 1-4.
3. "A New Perspective on the Health of Canadians", Health and Welfare Canada, 1974.
4. NHMRC, The Health Effects of Passive Smoking, The Draft Report of the NHMRC Working Party, Canberra, 1995. Hereafter cited as the Report.
5. "Risk Ratio" (or "Relative Risk") measures the prevalence of exposure to a risk among a group of people who have a particular disease -- referred to as the cases -- and a group of people who do not have the disease -- referred to as the controls. Risk Ratio is a statistical representation of the risk arrived at by dividing the prevalence of exposure among the cases by the prevalence of exposure among the controls. As Steven Milloy notes: "Let's say you've studied the association between high-fat diets and lung cancer. You've calculated a relative risk of 6. The correct interpretation of this relative risk is that the incidence of high-fat diets in the study population was six times greater among those persons with lung cancer than those without lung cancer". Steven Milloy, Science Without Sense, Cato Institute, Washington DC, 1995, page 12.
6. NHMRC, Report, frontispiece, page unnumbered.
7. For example, NHMRC, Report, pages 77, 101, 105.
8. Ibid., page 105.
9. J. McCormick and P. Skrabanek, Follies and Fallacies in Medicine, Tarragon Press, Glasgow, 1989, pages 90-91.
10. See A. Weinberg, "Science and Trans-Science", Minerva, 10, 1972, pages 209-222.
11. Ken Rothman, Modern Epidemiology, Little Brown, Boston, 1986, page 11. (Emphasis added.)
12. See NHMRC, Report, page 5.
13. Loc. cit.
14. The NHMRC's nine criteria for causation, drawn from Hill's textbook, Principles of Medical Statistics, are: Strength of association; Dose-Response; Consistency; Specificity; Temporal relationship; Biological plausibility; Experimental evidence; Reasoning by analogy; and Coherence of evidence. See NHMRC, Report, page 88.
15. NHMRC, Report, page 5.
16. P. Skrabanek, "Risk-Factor Epidemiology: Science or Non-Science?" in Health, Lifestyle and Environment, Manhattan Institute, New York, 1991, pages 47-48.
17. R. Doll and R. Peto, The Causes of Cancer, Oxford University Press, New York, 1981, page 1218.
18. Gary Taubes, "Epidemiology Faces Its Limits", Science, Vol. 269, 14 July 1995, page 165.
19. Ibid., page 168.
20. Loc. cit.
21. G. Gori, "Epidemiology, Risk Assessment, and Public Policy: Restoring Epistemic Warrants" in Risk Analysis, in press.
22. In risk assessment language, "conservative" refers to the assumption that 1) there is probably more risk than the data demonstrate and 2) one should act as if any risk constitutes an unacceptable threat. A "default option" is any assumption that bridges a gap in the scientific data or knowledge about any risk which cannot be proved or disproved by science. For further detail, see A Blueprint for Constructing a Credible Environmental Risk Assessment Policy in the 104th Congress, 1994, The Institute for Regulatory Policy, Washington DC; and K. Harrison and G. Hoberg, Risk, Science and Politics, McGill-Queen's University Press, Montreal, 1994.
23. "Dose-response linearity" means that the relationship between the dose and the response is proportionate such that a change in one -- the dose -- brings about a proportionate change in the other -- the response.
24. NHMRC, Report, page 98.
25. See R. Kluger, Ashes to Ashes: America's Hundred-Year Cigarette War, the Public Health and the Unabashed Triumph of Philip Morris, Knopf, New York, 1996, pages 690-699 and 737-740.
26. Office of Health and Environmental Assessment, Office of Research and Development, US Environmental Protection Agency, Respiratory Health Effects of Passive Smoking: Lung Cancer and Other Disorders, Washington DC, December 1992, page 1-1.
27. See, for example, ibid., pages 1-4 and 1-8.
28. G. Huber, R. Brockie and V. Mahajan, Consumers Research in the United States, 1991.
29. Science, 31 July 1992, page 607.
30. The Science Advisory Board within the EPA involves outside experts who provide peer review for Agency research. The less-than-objective nature of the SAB with respect to the EPA's ETS report is outlined by M. Perske, "The Politicized Science of Tobacco Policy", Regulation, 3, 1995, pages 11-15. Perske's account reproduces verbatim discussions from the SAB about the relationship between ETS and active smoking.
31. Journal of the American Medical Association, 29 July 1989, page 499.
32. See Kluger, op. cit., page 692.
33. See EPA, op. cit., Appendix A, Addendum.
34. R. Brownson, M. Alavanja, et al., "Passive smoking and lung cancer in nonsmoking women", American Journal of Public Health, 82, 1992, pages 1525-1530.
35. EPA, op. cit., page 5-48.
36. L. Garfinkel, et al., "Involuntary smoking and lung cancer: a case-control study," Journal of the National Cancer Institute, 75, 1985.
37. Safeguarding The Future: Credible Science, Credible Decisions, EPA, Washington DC, 1991, page 17.
38. Johnstone, J.R., Health Scare: The Misuse of Science in Public Health Policy, AIPP, Perth, 1991, pages 7-13.
39. NHMRC, Report of the working party on the effects of passive smoking on health, 1986.
40. D. Trichopoulos, A. Kalondidi, et al., "Lung cancer and passive smoking: conclusion of Greek study", Lancet, ii, 1983, pages 677-678.
41. W. Heller, "Lung cancer and passive smoking", Lancet, ii, 1983, page 1309; D. Trichopoulos, "Passive smoking and lung cancer", Lancet, March 1984, page 684.
42. N. Mantel, "Nonsmoking wives of heavy smokers have a higher risk of lung cancer", British Medical Journal, 282, 1981, pages 914-915.
43. M. Rutsch, "Nonsmoking wives of heavy smokers have a higher risk of lung cancer", British Medical Journal, 282, 1981, page 985.
44. T. Hirayama, "Nonsmoking wives of heavy smokers have a higher risk of lung cancer", British Medical Journal, 282, 1981, pages 916-917.
45. P. Lee, "Nonsmoking wives of heavy smokers have a higher risk of lung cancer", British Medical Journal, 283, 1981, pages 1465-1466.
46. P. Correa, et al., "Passive smoking and lung cancer", Lancet, 2, 1983, pages 595-597.
47. Garfinkel, op. cit.
48. NHMRC, Report, page 78.
49. G. Kabat, et al., American Journal of Epidemiology, 142, No. 2, 1995, pages 141-148.
50. E. Fontham, et al., Journal of the American Medical Association, 271, No. 22, 1994, pages 1752-1759.
51. Brownson, op. cit.
52. H. Stockwell, et al., Journal of the National Cancer Institute, 84, No. 18, 1992, pages 1417-1422.
53. NHMRC, Report, page 94.
54. See C. Redhead and R. Rowberg, Environmental Tobacco Smoke and Lung Cancer Risk, Congressional Research Service, 1995, page 2.
55. Loc. cit.
56. NHMRC, Report, page 105.
57. Redhead and Rowberg, op. cit., page 45.
58. NHMRC, Report, page 19.
59. Redhead and Rowberg, op. cit., page 40.
60. EPA, op. cit., page 6-6.
61. NHMRC, Report, pages 99-100.
62. Ibid., page 101.
63. See NHMRC, Report, page 97.
64. Cotinine is the most commonly used ETS biomarker. Cotinine is the metabolite of nicotine inside the body.
65. Redhead and Rowberg, op. cit., page 4.
66. NHMRC, Report, page ix.
67. NHMRC, Report, page 209.
68. P.D. Finch, "Creative Statistics", Health, Lifestyle and Environment, The Social Affairs Unit and the Manhattan Institute, New York, 1991, pages 80-81.