What journalists are missing when covering cancer research
Reporting on cancer research can be intimidating. Studies are published daily on dozens of different cancers, hundreds of treatments and thousands of potential carcinogens or other environmental factors.
One challenge is reporting accurately on these studies while including appropriate context of existing research, since a single paper usually addresses one question. But before that challenge, journalists have to decide what studies to report on in the first place. A November 2020 study in PLOS ONE looked at research covered by four outlets in the U.S., U.K. and Australia and identified several areas that merit improvement.
Why does this matter? As the study’s authors wrote, “Poor reporting may hinder informed decision-making about modifiable risks and treatment selection, generate false or unmet expectations and undermine trust in science.” Novelty and effect sizes seemed to drive the selection of studies journalists covered, and basic research studies were particularly susceptible to being sensationalized.
“While experienced scientists and many journalists likely know to view these papers as potentially useful pieces in a much greater puzzle, the general population may not have the experience or specialist knowledge to interpret individual reports critically in a broader context,” the authors wrote.
But the authors acknowledged that journalists aren’t the only ones perpetuating this problem, since research articles themselves also include hype, as I’ve previously reported here.
Still, if we want our reporting on research to be credible and meaningful, it needs to be high quality. Here are a few key takeaways from the results:
- Men were over-represented as senior authors of the studies and especially as outside experts. It’s important to include more female-authored research and more female independent experts.
- Few of the articles noted study limitations, funding sources or conflicts of interest, all of which should be prominently included in news reports about a study.
- Most of the coverage was of single studies — clinical trials or observational research — rather than meta-analyses or systematic reviews, which provide a broader, more reliable window into the evidence on a topic.
- Nearly half the studies covered were not peer-reviewed, which raises concerns about their reliability (though quoting independent experts can partly mitigate this).
- When reporting on single studies, be sure to include adequate context to help readers make sense of the findings since a single study’s results rarely hold up long-term on their own.
- “Both journalists and scientists should also take care to mention the limitations and caveats of novel ideas in research and be mindful of accurately conveying uncertainty,” authors of the PLOS ONE study wrote.
What the study involved
The research analyzed the “distribution of study types, research sources, reporting quality, gender bias, and national bias in online news reports” published from March to September 2017 by the U.K. edition of The Guardian, The New York Times, The Sydney Morning Herald, and the Australian Broadcasting Corporation. Each of the 80 articles analyzed (20 from each outlet) focused on a single specific study; syndicated articles were excluded. The full list of articles is in this Excel file.
The researchers determined whether the study being covered was basic research (animal studies and other preclinical research), clinical research (typically randomized controlled trials), epidemiological research (primarily observational studies), or a meta-analysis or systematic review (generally the most reliable types of evidence on the evidence pyramid). The quality of the news report was then assessed based on whether:
- The source was peer-reviewed
- Conflicts of interest and/or funding sources were identified
- Independent experts were quoted
- The story contained a direct link to the study
- The story contained enough information about the study that a reader could find the source on their own
- The story noted limitations of the study
- The story included broader research context
- The story quantified absolute risks or benefits
- The headline was misleading
- The headline and the body of the story accurately reflected the study’s main purpose, outcomes and implications
- Both the headline and story avoided overgeneralizing the findings (such as implying the results applied to a broader group of people than the study allowed for, or to people at all if it was an animal study).
Researchers also looked at the genders of the lead author (first person named in the author list), the senior author (the last person named), and the experts quoted in the stories based on pronouns used in the article. They used the primary academic affiliation of the corresponding author to determine the country the research came from.
What they found
Nearly 93% of the articles were based on primary research studies. The secondary studies covered were four systematic reviews, two meta-analyses and one study classified as “other.” Covering a single primary study increases the need for context, since an individual study’s results may not hold up once additional research is conducted. More than a third of the studies were observational, among the least reliable study types. Clinical research studies comprised around 29% of the studies, and approximately 24% were basic research articles.
The New York Times scored highest in quality, the two Australian news sources scored lowest, and The Guardian fell in the middle. The most common omissions were study limitations (especially in stories about basic research) and funding sources and conflicts of interest, which none of the articles on clinical research included. Nearly half of the clinical trials and more than half of the observational studies covered were not peer-reviewed.
The Guardian drew from the most geographically diverse set of studies: only half of those it covered came from the U.K. By contrast, nearly three-quarters of the studies covered by The New York Times and the Australian sources came from the U.S. and Australia, respectively.
As other studies have found, the gender distribution heavily favored men, which “may compromise high-quality coverage of research by limiting diversity of opinion, reinforces stereotypes and skews public visibility and recognition towards male scientists,” the authors wrote. Overall, 60% of the studies covered had male senior authors, and 68% of the quoted experts were men.
One in five news reports wasn’t focused on a specific cancer. Among those that were, the cancers most often covered were breast, melanoma, lung and blood cancers; the least covered were less common cancers like gastric, testicular, brain and pancreatic.
The authors noted that prostate and colorectal cancer were under-represented while “cervical cancer was reported more frequently than would be expected relative to incidence.” However, a glance at the articles analyzed shows that all but one of the “cervical cancer” stories were about the HPV vaccine, which is recommended in all four countries represented by the study. It’s therefore a little misleading on the authors’ part to suggest cervical cancer was over-represented in the coverage, especially when a new formulation of the HPV vaccine was released during the period they studied. The one non-vaccine article was about which cervical cancer screening method is most effective, which is arguably more relevant to most readers than many of the other articles, considering cervical cancer screening is recommended for all women.
This study had other limitations as well. It covered only a six-month period, and since cancer research coverage may peak around major cancer conferences, that window may not be representative of the full year. The analysis included just 20 stories from each of only four outlets. Though the authors argue that the outlets they chose “likely provide a reasonable indication of broader trends,” the reality is that these findings tell us little to nothing about how well journalists are covering research at other outlets. Still, the gaps they identified are a good reminder of what needs to be included in stories about medical studies.