“Imagine the following scenario: your news ticker ‘spits out’ two studies. One claims that vitamin C can help fight the common cold. The other claims that people who take vitamin C get more infections. The question is: which of the two gets published in the paper? No contest, the second, because from a journalist’s point of view it’s the better one. We’ve been hearing about the health benefits of vitamin C for decades, so a story along those lines won’t catch anyone’s attention. But hearing that vitamin C actually makes people more ill makes a great story. Not just because ‘bad news sells,’ but because the information is shocking, unexpected and new – i.e. it can actually call itself ‘news.’ With this in mind, the huge media response to certain studies is understandable – something that, when only the data is considered and not the overall message, has probably been the cause of a great deal of media hype.
Let’s assume that both studies – the ‘vitamin C is good’ study (heard 1000 times before) and the ‘vitamin C makes you ill’ study – are equally sound methodologically and rest on data of equal quality. The only difference between them, therefore, is the end result. We’ve already decided it’s the second study that will end up in the paper. Out of necessity, the other will be thrown away: ‘It would only confuse readers to have two contradictory studies in the same paper. Readers expect clear messages.’ But the journalist has already made their first mistake – they’re biased, as they’re giving a one-sided account of the research currently available. They risk becoming even more biased if they base their statements on studies of poor quality. In that case, they report something that is not representative of the general situation. Perhaps the study in question only serves to generate hypotheses, which then need to be confirmed or refuted by further studies. Where possible, academics will exercise caution and use ‘could’ in their conclusions, i.e. talk about possibility rather than probability. But such statements are hedged, and hedging is something journalists generally prefer to avoid.
It’s becoming clear that studies are a tricky thing. Regardless of what studies have found or not found, the role of a journalist is to make the results understandable to the public – with as little bias and journalistic embellishment as possible. This can be achieved by pre-selecting studies very carefully. Journalists often believe they are simply reporting on what’s going on. But more often than not, what they write is full of interpretative nuance. This bias, whether or not they are aware of it, plays a crucial role in how they handle studies, as do thoroughness and clarity. These factors are themselves surrounded by question marks. Not many journalists are trained statisticians. For the most part, they rely on the information that others, for example news agencies, have published about certain studies. A survey of professional journalists showed that very few read even the abstracts of the scientific papers they report on, let alone the entire paper, or spoke with the authors. This is often more a matter of efficiency than of superficiality or laziness. Very few journalists actually have the time to delve into the original work, and those who do quickly find that their knowledge of statistics is limited. Ultimately, many journalists develop a split opinion about studies over the course of their careers. While they see them as a great source of scientific evidence on the one hand, on the other they know, and have probably experienced, how easily they can be misused.
As far as studies are concerned, journalists stand at the end of a long chain of people. First come the researchers, who conceive and carry out the study – and who can be driven by multiple interests (research trends, sponsors, making a name for themselves in science, etc.). The chain continues with the decision-makers – those who choose which studies are put forward for publication and which are not; the resulting distortion is known as publication bias. Then come the journal editors, who, due to the sheer mass of scientific studies out there, can only publish a fraction of them. Next in line are the news agencies, which filter the selection down even further. Then and only then do the journalists come into play. They can’t do anything about the way the research has been judged on its way to their desk. It’s their role to convey the research responsibly and honestly. Impartial, thorough and critical – these are three qualities of a good journalist.
It doesn’t matter whether they’re writing for a leading paper or the yellow press – before a journalist takes to the megaphone and announces the latest news, they should take a (self-)critical look at a study’s reliability (e.g. the number of participants and the study length). After all, it’s not just about a good story but about passing on serious and reliable information – in the true spirit of the Enlightenment, which held reason to be the source of knowledge, and critical thinking and scientific work to be the measure of all things.
How far off the mark a knowledge multiplier and journalist can be in their interpretations of scientific publications will be demonstrated in the next three ‘Expert Opinion’ contributions, with the help of five frequently cited studies dealing with the effects of micronutrients.”
Based on: Prof. Dr. Manfred Wilhelm and Martin Braun, ‘People unsettled by lack of communication’, in: Vitamin Report 2012 – Articles on the provision of micronutrients. TRIAS-Verlag, 2012.