By Ajay K. Singh, MBBS, FRCP, MBA
August 24, 2017
The use of the term “fake news” by President Trump and others raises the question of whether medical news about disease or its treatment can be faked. Robert McNutt, in a provocative article in The Health Care Blog, asks: How Can I Tell if Medical News is Fake or Not?
Dr. McNutt recommends asking three questions in evaluating medical news:
- Is the item being reported measurable?
- What additional human traits or actions may cloud or confound the relationship between the item being studied and the outcome being touted?
- How was the study done?
McNutt uses the example of the purported health benefits of coffee. In a blog piece about one year ago, one of my Harvard colleagues, Dr. Sanjiv Chopra, wrote the following about the benefits of coffee drinking:
“The facts are indisputable; coffee appears to offer a great variety of benefits, including substantial protection against liver cirrhosis, type 2 diabetes, heart disease, Parkinson’s disease, cognitive decline and dementia, gall stones, tooth decay and a host of common cancers, including prostate, colon, endometrial, and skin cancer. There also is a lower rate of suicide among coffee drinkers.”
Since I am not an expert on this, I will refrain from opining on the merits of coffee drinking, although I am very skeptical that the facts are “indisputable.” In McNutt’s example, he interrogates the benefit of coffee drinking using the three stated questions. More generally, he then states:
“Observational comparison studies, rather than randomized studies, are nearly always fake, as observational studies cannot prove an independent contribution of the item being studied to the outcome of interest. In other words, if they happen to be true, we can’t prove it. Hence, they are fake.”
Of course, Dr. McNutt must realize that his statement goes too far. While confounding is an important issue in any association study, it is manifestly wrong to state that results from these studies are fake. Observational studies have limitations, but so do randomized trials.
Observational studies do not prove causation, but they can provide valuable data that, when analyzed with sophisticated statistical methods, can mimic the results of randomized trials. Take the example of postmenopausal hormone therapy and coronary heart disease. Hernán and colleagues conceptualized observational data as a sequence of non-randomized trials, demonstrating that such an analysis can arrive at conclusions mirroring those of a randomized trial of postmenopausal hormone therapy.
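To make the confounding problem concrete, here is a minimal sketch with entirely invented counts (none of these numbers come from any real coffee study): a hypothetical confounder (smoking) is associated with both coffee drinking and heart disease, so a crude comparison shows an association even though coffee has no effect within either smoking stratum. This is exactly the bias that stratified or adjusted analyses of observational data try to remove.

```python
# Hypothetical, invented counts illustrating confounding.
# Assumed scenario: smokers drink more coffee AND have more heart disease,
# while within each smoking stratum coffee has no effect at all.

def risk(events, total):
    """Simple incidence proportion: events / persons at risk."""
    return events / total

# stratum -> (coffee events, coffee total, no-coffee events, no-coffee total)
strata = {
    "smokers":     (90, 300, 30, 100),   # risk 0.30 in both arms
    "non-smokers": (10, 100, 30, 300),   # risk 0.10 in both arms
}

# Stratum-specific risk ratios: both exactly 1.0 (no effect of coffee).
for name, (ce, ct, ne, nt) in strata.items():
    print(name, risk(ce, ct) / risk(ne, nt))

# Crude (pooled) comparison that ignores smoking:
ce = sum(s[0] for s in strata.values())   # 100 events among 400 drinkers
ct = sum(s[1] for s in strata.values())
ne = sum(s[2] for s in strata.values())   # 60 events among 400 non-drinkers
nt = sum(s[3] for s in strata.values())
print("crude risk ratio:", risk(ce, ct) / risk(ne, nt))  # 0.25 / 0.15 ≈ 1.67
```

The crude analysis suggests coffee raises risk by roughly two thirds, while the stratified analysis shows no effect at all; randomization avoids this by balancing smokers across arms, whereas observational analyses must measure and adjust for the confounder.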
The early epidemiologic science around the association between smoking and lung cancer underscores the powerful impact that observational data can have in reducing the burden of disease and saving lives. Writing in Tobacco Control, Robert Proctor states:
“Scholars started noting the parallel rise in cigarette consumption and lung cancer, and by the 1930s had begun to investigate this relationship using the methods of case-control epidemiology. Franz Hermann Müller at Cologne Hospital in 1939 published the first such study, comparing 86 lung cancer ‘cases’ and a similar number of cancer-free controls. Müller was able to show that people with lung cancer were far more likely than non-cancer controls to have smoked, a fact confirmed by Eberhard Schairer and Eric Schöniger at the University of Jena in an even more ambitious study from 1943. These German results were subsequently verified and amplified by UK and American scholars: in 1950 alone, five separate epidemiological studies were published, including papers by Ernst Wynder and Evarts Graham in the USA and Richard Doll and A Bradford Hill in England. All confirmed this growing suspicion, that smokers of cigarettes were far more likely to contract lung cancer than non-smokers. Further confirmation came shortly thereafter from a series of prospective ‘cohort’ studies, conducted to eliminate the possibility of recall bias. The theory here was that by following two separate and initially healthy groups over time, one smoking and one non-smoking, matched by age, sex, occupation and other relevant traits, you could find out whether smoking was a factor in the genesis of lung disease. The results were unequivocal: Doll and Hill in 1954 concluded that smokers of 35 or more cigarettes per day increased their odds of dying from lung cancer by a factor of 40. Hammond and Horn, working with the American Cancer Society on another large cohort study, concluded that same year that the link had been proven ‘beyond a reasonable doubt’.”
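In a case-control design like Müller’s, investigators cannot measure disease risk directly (the cases are selected because they already have the disease), so the standard measure of association is the odds ratio. A minimal sketch with invented counts, not Müller’s actual data, shows the arithmetic:

```python
# Hypothetical case-control counts (invented for illustration only;
# these are NOT the figures from Müller's 1939 study).
cases_smokers, cases_nonsmokers = 80, 6        # lung cancer "cases"
controls_smokers, controls_nonsmokers = 50, 36  # cancer-free controls

# Odds of smoking among cases, divided by odds of smoking among controls.
odds_ratio = (cases_smokers / cases_nonsmokers) / (
    controls_smokers / controls_nonsmokers
)
print(f"odds ratio: {odds_ratio:.1f}")  # 9.6 with these invented counts
```

An odds ratio well above 1 indicates that cases were far more likely to have smoked than controls, which is the pattern Müller, Schairer and Schöniger, and the 1950 studies all reported; the later prospective cohort studies confirmed it while removing the recall bias inherent in asking patients about past exposure.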
And there are many other examples in the arenas of public health and delivery science.
Dr. McNutt’s summary condemnation of epidemiologic research, however well-meaning, demonstrates a fundamental misunderstanding of the value of this science. Indeed, well-conducted epidemiologic studies not only provide the foundation for hypotheses that can be tested in randomized trials, but have themselves had a tremendous impact on public health.
Dr. Ajay K. Singh is the Senior Associate Dean for Global and Continuing Education and Director, Master in Medical Sciences in Clinical Investigation (MMSCI) Program at Harvard Medical School. He is also Director, Continuing Medical Education, Department of Medicine and Renal Division at Brigham and Women’s Hospital in Boston.
Dr. Singh teaches the online CME course: Developing Essential Skills in Clinical Research