December 23, 2018

Critique of onscreen smoking harm: Less than meets the eye

On November 5, 2018, Christopher Ferguson and colleagues published “Movie Smoking and Teen Smoking Behavior: A Critical Methodological and Meta-Analytic Review” in Psychology of Popular Media Culture. The paper argues that the conclusion of the US Surgeon General and National Cancer Institute, accepted by the World Health Organization, that exposure to onscreen smoking causes kids to smoke is wrong. Ferguson subsequently publicized this paper in an op-ed in the New York Daily News.

Before getting into the details of Ferguson et al.'s analysis, it is important to note that — in the end — they found a statistically significant increase in the odds of smoking among youth exposed to smoking in movies, with an odds ratio of 1.37 (95% CI 1.19, 1.56). This ratio is a little lower than the US Surgeon General's estimate for longitudinal studies of youth initiation (Figure 5.12 of the 2012 Surgeon General report), which was 1.76 (95% CI 1.31, 2.37). Indeed, the 95% confidence intervals overlap: Ferguson et al.'s assessment of the magnitude of the effect of onscreen smoking is not all that different from the Surgeon General's 2012 estimate.
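
To make "not all that different" concrete, here is a quick back-of-the-envelope check (my own arithmetic, not a calculation from either paper) comparing the two odds ratios on the log scale, with each standard error recovered from its published confidence interval:

import math

def log_or_and_se(or_, lo, hi):
    # SE of the log-OR implied by a 95% CI: (ln(hi) - ln(lo)) / (2 * 1.96)
    return math.log(or_), (math.log(hi) - math.log(lo)) / (2 * 1.96)

ferguson_lor, ferguson_se = log_or_and_se(1.37, 1.19, 1.56)
sg_lor, sg_se = log_or_and_se(1.76, 1.31, 2.37)

# Two-sample z-test for a difference between the two log odds ratios
z = (sg_lor - ferguson_lor) / math.sqrt(ferguson_se**2 + sg_se**2)
p = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
print(f"z = {z:.2f}, p = {p:.2f}")    # z is about 1.5, p is about 0.13

The difference between the two estimates does not come close to statistical significance, which supports reading them as compatible.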

Nevertheless, Ferguson et al. dismiss the statistically significant elevation in risk found in their analysis as “trivial.” The authors are free to make that value judgment, but my guess is that most parents and public health officials would consider a 37 percent increase in the risk of a kid starting to smoke to be a problem.

Unless you agree with Ferguson that his estimated 37 percent elevation in the risk of youth smoking isn't worth worrying about, this paper really doesn't affect the debate over smoking in movies.

Some other comments on the paper

Ferguson et al. devote substantial attention to quibbling with the word “cause,” arguing that epidemiology can only yield “correlations.” At the same time, they acknowledge that it would be unethical to conduct experiments in which youth were randomized to see different amounts of smoking onscreen to test whether this exposure affects their smoking, and that longitudinal studies are therefore the best available way to address the question.

They also acknowledge that the “Dartmouth method” — asking kids what movies they have seen, then measuring how much smoking is in those films — is the best method out there. However, they criticize all of Dartmouth's studies because of possible response bias, in which smokers are more likely to remember movies with smoking. They missed the fact that the Dartmouth method explicitly tests for response accuracy by including a made-up movie title to see how many kids recall seeing it; the Dartmouth investigators (and others who use the method) consistently find that only a few percent of kids report having seen the nonexistent movie.

Ferguson et al. found that studies using the Dartmouth method reported higher risks than the other studies, and they implied that this could reflect bias on the part of the investigators. A more reasonable explanation is that the Dartmouth method yields a better measure of exposure than older approaches, which reduces measurement error and increases the power of a study to detect an effect. Indeed, that is why the Dartmouth method has been so widely adopted.
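
The attenuation point can be illustrated with a small simulation (a sketch of my own with made-up parameters, not an analysis from any of these studies): the noisier the exposure measure, the more the estimated odds ratio shrinks toward 1.

import random, math

random.seed(1)
TRUE_OR = 1.8     # assumed true odds ratio, for illustration only
BASE_ODDS = 0.10  # assumed baseline odds of smoking among the unexposed

def observed_or(misclass_rate, n=200_000):
    table = [[0, 0], [0, 0]]  # indexed by [measured exposure][smoker]
    for _ in range(n):
        exposed = random.random() < 0.5
        odds = BASE_ODDS * (TRUE_OR if exposed else 1.0)
        smoker = random.random() < odds / (1 + odds)
        # the measured exposure flips with probability misclass_rate
        measured = exposed if random.random() > misclass_rate else not exposed
        table[int(measured)][int(smoker)] += 1
    (a, b), (c, d) = table  # a,b: unexposed non/smokers; c,d: exposed
    return (d * a) / (c * b)

for rate in (0.0, 0.1, 0.2, 0.3):
    print(f"misclassification {rate:.0%}: observed OR = {observed_or(rate):.2f}")

With no misclassification the simulation recovers roughly the true odds ratio; as the exposure measure gets noisier, the estimate drifts toward 1, exactly the pattern one would expect if the non-Dartmouth studies were measuring exposure less accurately.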

Ferguson et al. recognized that the studies they included in their analysis "uniformly made attempts to control for reasonable third variables," which is a good thing.

They also checked for publication bias in the data, because one always has to worry that many negative studies never got published, which would cause a meta-analysis to overestimate the risk. They didn't find any evidence of publication bias.
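
For readers unfamiliar with such checks, one standard diagnostic is Egger's regression test for funnel-plot asymmetry (the paper may have used a different test; the sketch below runs on synthetic data purely for illustration):

import math, random

random.seed(2)
true_log_or = math.log(1.4)  # hypothetical underlying effect
studies = []
for _ in range(15):
    se = random.uniform(0.05, 0.4)          # hypothetical study SEs
    log_or = random.gauss(true_log_or, se)  # hypothetical study effects
    studies.append((log_or, se))

# Egger's test: regress standardized effect (log-OR / SE) on precision
# (1 / SE); the slope estimates the effect, and an intercept far from
# zero suggests small-study (possibly publication) bias.
xs = [1 / se for _, se in studies]
ys = [lo / se for lo, se in studies]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx
print(f"Egger intercept = {intercept:.2f} (near 0 suggests little asymmetry)")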

Ferguson et al. ignored the fact that the Surgeon General and other health authorities do not rely only on statistical associations when drawing causal conclusions. They also consider related evidence, such as short-term experiments and focus groups. (The Surgeon General's 2014 report has an entire chapter on the rigorous standards for reaching a conclusion of causality.) In addition, the Surgeon General reports involve hundreds of people and a peer review process that goes far beyond what an individual paper receives from a journal.

Ferguson et al. also severely limited which studies they included, which reduces the power of the analysis to detect an effect. In addition, they completed their literature search in May 2017 even though the paper was not submitted until April 2018; several more studies on smoking published during that time were not even considered.

In contrast, the Surgeon General considered the entire literature available at the time the 2012 report was being prepared (taking care to count each study only once), including cross-sectional studies and longitudinal studies of current or established smoking (in addition to smoking initiation). On that broader evidence, the Surgeon General found that exposure to onscreen smoking nearly doubled the odds of youth smoking (OR 1.93; 95% CI 1.64, 2.27). The evidence base has grown since then, consistently showing elevated risks of smoking among kids who see smoking onscreen.
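
For context, a pooled estimate like this is typically produced by inverse-variance weighting of the individual studies' log odds ratios. The sketch below shows the mechanics on three hypothetical studies; the numbers are placeholders, not the actual studies behind the Surgeon General's estimate.

import math

# (odds ratio, lower 95% CI, upper 95% CI) for each hypothetical study
studies = [(1.8, 1.2, 2.7), (2.1, 1.4, 3.2), (1.6, 1.1, 2.3)]

weighted_sum = 0.0
total_weight = 0.0
for or_, lo, hi in studies:
    log_or = math.log(or_)
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # SE recovered from the CI
    w = 1 / se**2                                    # inverse-variance weight
    weighted_sum += w * log_or
    total_weight += w

pooled_log_or = weighted_sum / total_weight
pooled_se = math.sqrt(1 / total_weight)
print(f"pooled OR = {math.exp(pooled_log_or):.2f} "
      f"(95% CI {math.exp(pooled_log_or - 1.96 * pooled_se):.2f}, "
      f"{math.exp(pooled_log_or + 1.96 * pooled_se):.2f})")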

Finally, Ferguson et al. even tested “whether articles appeared to endorse government regulation or censorship or other public policy efforts likely to reduce speech” as a test of investigator bias. They didn't find any such evidence. Could it be that Ferguson et al.'s dismissal of the statistically significant increase in risk they found as “trivial” reflects a bias on their part?