Why Should we Trust Scientists?

Numerous societal issues and questions rely heavily on scientific information, and so too do most of their answers. Scientists continuously reveal new issues facing mankind, such as climate change and the breakdown of the ozone layer, and these revelations, along with scientific discoveries, inventions and theories, such as vaccination and genetically modified plants and animals, are constantly under the scrutiny of society. These debates often pose the question: why should we trust scientists? The answer is itself constantly debated, and this post will explore the views and ideas of three prominent figures: Naomi Oreskes, a famous American historian of science, and Karl Popper and Thomas Kuhn, two famous philosophers of science.

It is necessary to point out that a subset of this debate is whether science is something to be ‘believed’, something to place one’s ‘faith’ in, or something to be ‘accepted’. Oreskes states that science should be contrasted with faith and that belief is the domain of faith.[1] Popper argues that accepting science always involves a degree of faith, as a scientific theory can never truly be ‘confirmed’.[2] This debate will not be explored in this post, merely recognised, as it merits a whole other post in itself.

Firstly, Oreskes[3] points to the idea of Pascal’s Wager[4]. She states that some scientists believe this argument, traditionally confined to religious debate, can be applied to science. In short, it holds that the negative ramifications of not ‘believing’ in science are greatly outweighed by the positive results of ‘believing’ it. This is, indeed, a pragmatic argument, but it says nothing about the validity of science itself. The criticism of the argument is precisely that it bases ‘belief’ on the consequences of that ‘belief’, rather than on the actual soundness of the science.

Oreskes points out that, in most cases, society’s ‘acceptance’ of science is based on a ‘leap of faith’, because the general populace does not have sufficient knowledge to fully understand the reasoning behind a theory, discovery or idea, let alone the knowledge needed to challenge it. She goes on to say that this is actually true for most scientists as well, owing to the incredible branching and diversity now present in the field of science. Scientists’ specialties can often be so far removed from one another that their knowledge of a different branch of science is equivalent to that of the general populace. Oreskes asks, ‘So, if even scientists themselves have to make a leap of faith outside their own fields, then why do they accept the claims of other scientists? Why do they believe each other’s claims? And should we believe those claims?’[5]

Oreskes[6] explores the argument that society should ‘believe’ science because of the scientific method. This method is hypothetical deduction, often referred to as the hypothetico-deductive model. According to the standard model, scientists develop hypotheses, deduce the consequences of those hypotheses, and then go out into the world to test whether those consequences hold. If the evidence supports the hypothesis, the hypothesis is said to be confirmed. The reliability of this model is the basis for most societal ‘belief’ in science and ‘acceptance’ of the validity of the theories and laws that arise from it.

However, as Popper[7] argues, the idea that the deductive scientific method proves validity is false, owing to a cognitive bias known as confirmation bias. Confirmation bias, or myside bias, is the tendency to search for or interpret information in a way that confirms one’s belief, hypothesis or worldview. It is a systematic error of inductive reasoning that can lead to information being interpreted in a biased way and, in turn, to biased conclusions. It is linked to congruence bias: an overreliance on directly testing a given hypothesis while neglecting indirect testing, that is, attempts to falsify it. One searches for the consequences and information one would expect if one’s hypothesis were true, rather than seeking contradictions that would falsify it.

The term ‘confirmation bias’ was coined by Peter Wason, an English psychologist, who conducted numerous experiments into the phenomenon and its effects.[8] In 1960, Wason ran an experiment in which he challenged participants to identify a rule applying to triples of numbers, telling them only that the sequence ‘2, 4, 6’ fitted the rule. The participants were allowed to generate their own triples and were told whether or not each fitted the rule. Wason found that participants formed a hypothesis and then generated triples that would confirm it, rather than triples that might falsify it. This led to guesses such as ‘numbers ascending by 2’ or ‘ascending even numbers’. Upon being told neither of these was correct, the participants continued guessing, going as far as ‘the first two numbers in the sequence are random, and the third number is the second number plus 2’ or ‘the middle number is the mean of the other two’, or sometimes simply re-wording their original hypothesis, such as ‘each number is 2 greater than its predecessor’.

The actual rule was simply ‘any ascending sequence’, but because participants sought to confirm their hypotheses through direct testing, rather than to falsify them through indirect testing, they repeatedly came to the wrong conclusion. The experiment makes clear that confirmation is instinctively preferred over falsification.
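The logic of the 2-4-6 task can be sketched in a few lines of code (the function names and example triples here are illustrative, not Wason’s actual protocol):

```python
def true_rule(triple):
    """Wason's actual rule: any strictly ascending sequence."""
    a, b, c = triple
    return a < b < c

def guessed_rule(triple):
    """A typical participant's hypothesis: numbers ascending by 2."""
    a, b, c = triple
    return b == a + 2 and c == b + 2

# Confirming tests: triples deliberately chosen to fit the guess.
confirming = [(2, 4, 6), (10, 12, 14), (1, 3, 5)]
# A falsifying test: fits the true rule but not the guess.
falsifying = (1, 2, 50)

# Every confirming triple satisfies BOTH rules, so such tests can
# never distinguish the guessed hypothesis from the actual rule...
assert all(true_rule(t) and guessed_rule(t) for t in confirming)
# ...whereas a single falsifying test separates them immediately.
assert true_rule(falsifying) and not guessed_rule(falsifying)
```

The point of the sketch is that every triple chosen to confirm the guess is consistent with both rules at once; only a triple the guess would reject can reveal the difference, which is exactly the indirect test participants avoided.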

A particular example of how confirmation bias resulted in societal acceptance of an incomplete scientific theory can be seen in Einstein’s challenge to Newtonian physics. Newtonian physics was the most successful and important scientific theory ever to have been advanced and accepted. Everything in the observable world seemed to confirm it. For more than two centuries its laws were corroborated, not just by observation, but by creative use. They became the foundation of Western science and technology, yielding marvellously accurate predictions, and generation after generation of Western man was taught Newton’s laws as definitive, incorrigible fact.

Yet, after all this, Einstein put forward a theory contrary to Newton’s that claimed serious attention and went beyond Newton’s theory in its range of application. How was it possible that a theory that, although extremely accurate and by no means simply wrong, nevertheless had flaws, was left unchallenged and unrefined for more than two centuries? The answer can be attributed directly to confirmation bias and congruence bias. For more than two centuries, these biases had created a focal point around Newtonian physics, so much so that it was considered wrong to explore anything contrary to the status quo. Einstein’s challenge, eventually vindicated by experiment, shows not only that confirmation bias and congruence bias were obstructing the evolution of knowledge, but also that the success of the deductive scientific method is not, by itself, a valid reason to place one’s trust in science and scientists.

In philosophy, the reasoning above is known as the fallacy of affirming the consequent: a false theory can still make correct predictions, so correct predictions cannot prove a theory true. This, state Popper[9] and Oreskes[10], is why the hypothetico-deductive method cannot be the basis of one’s ‘belief’ in science.
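The contrast between the two inference patterns can be written out explicitly (a standard logic sketch, not drawn from the sources cited here):

```latex
% Affirming the consequent -- invalid: a hypothesis H predicting
% evidence E, together with observing E, does not entail H,
% because some rival theory may predict E equally well.
\[ (H \rightarrow E) \wedge E \;\not\Rightarrow\; H \]

% Modus tollens -- valid, and the engine of Popper's falsificationism:
% if H predicts E and E is not observed, H is refuted.
\[ (H \rightarrow E) \wedge \neg E \;\Rightarrow\; \neg H \]
```

Newtonian physics sat on the first, invalid pattern for two centuries: its predictions kept coming true, which confirmed but could never prove it.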

Oreskes[11] goes on to explore how the presence of auxiliary hypotheses further exposes flaws in the deductive scientific model, and further shows why it cannot be the basis for one’s ‘belief’ in science. Auxiliary hypotheses are assumptions scientists make without necessarily being aware that they are making them. An example can be seen in the evolution of the scientific understanding of the universe. The Ptolemaic model was held as fact for many centuries because many of its predictions were verified.

When Copernicus proposed that the Earth is not the centre of the universe, but rather moves around the sun at the centre of the solar system, scientists deduced that if Copernicus were right, one ought to be able to detect the motion of the Earth around the sun. This deduction gave rise to a concept known as stellar parallax[12].

The concept is as follows: if one observes a nearby star against the backdrop of distant stars in December, the same observation six months later, in June, should show the star in a slightly different position against that backdrop. The angular difference produced is the stellar parallax. When astronomers looked for stellar parallax, they found nothing, which led to the scientific belief that the Copernican model had been proved false.

However, in hindsight, it is known that astronomers were making two auxiliary hypotheses, both now known to be incorrect. The first was an assumption about the size of the Earth’s orbit: astronomers assumed that the orbit was large relative to the distance of the stars. The opposite is true. The Earth’s orbit is tiny compared with stellar distances, so the stellar parallax is very small and very hard to detect.
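Just how small can be seen from the standard astronomical relation between parallax and distance: a star with a parallax angle of p arcseconds lies roughly 1/p parsecs away. The figure below is illustrative, using 61 Cygni, the first star whose parallax was successfully measured (by Bessel, in 1838), at roughly 0.3 arcseconds:

```python
import math

def distance_parsecs(parallax_arcsec):
    """Standard small-angle relation: distance in parsecs = 1 / parallax in arcsec."""
    return 1.0 / parallax_arcsec

# 61 Cygni's parallax, roughly 0.3 arcseconds.
p = 0.3
print(distance_parsecs(p))  # ~3.3 parsecs away

# The angle itself is tiny: convert arcseconds to radians.
arcsec_to_rad = math.pi / (180 * 3600)
print(p * arcsec_to_rad)  # ~1.5e-6 radians, an angle far beyond
# the resolving power of 17th-century telescopes, which is why
# the parallax went undetected for so long.
```

Even for one of the sun’s nearest neighbours, the apparent shift is on the order of a millionth of a radian, so a null result said more about the instruments than about Copernicus.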

The second auxiliary hypothesis was that the telescopes of the day were sensitive enough to detect the parallax. Given how small the parallax actually is, this too was false: it was not until the 19th century that instruments became sensitive enough to detect it. The existence of auxiliary hypotheses makes clear that, although the deductive scientific method is useful, it is not, by itself, a sufficient basis for one’s belief in science.

As both Oreskes[13] and, in particular, Popper[14] explore, a lot of science is not conventional, in that it does not follow the deductive scientific method. A lot of science is inductive: scientists do not necessarily start with theories and hypotheses; often they simply start with observations of things happening in the world. Scientists also often rely on modelling, both physical and virtual (such as computer simulations), as opposed to direct testing. As the philosopher Paul Feyerabend famously said, “The only principle in science that doesn’t inhibit progress is: anything goes.” Because so much scientific knowledge is not based on the conventional deductive method, ‘trust’ in that method cannot amount to ‘trust’ in all of science.

All this undermines the argument that society should ‘believe’ science because of the scientific method. The reliability of the model has been shown to be questionable at times, and so too the validity of the theories and laws that arise from it. So how can one decide whether or not to ‘believe’ science? There is no definitive answer to this question; however, Oreskes[15] proposes one.

She asks, ‘If scientists don’t use a single method, then how do they decide what’s right and what’s wrong? And who judges?’ Her answer is that scientists judge, and they judge by judging evidence. Scientists collect evidence in many different ways, but however they collect it, they have to subject it to scrutiny. The sociologist Robert Merton argued that scientists scrutinise data and evidence through ‘organised scepticism’[16]: the scrutiny is organised, done collectively, as a group, and from a position of distrust. The burden of proof therefore falls on the person with the novel claim and, in this sense, science is intrinsically conservative.

Because scientists judge evidence collectively, historians have come to focus on the question of consensus, and to say that, at the end of the day, what scientific knowledge is, is the consensus of the scientific experts who, through this process of organised scrutiny, have judged the evidence and come to a conclusion about it, either true or false.

We can therefore, argues Oreskes[17], think of scientific knowledge as the consensus of experts, like the verdict of a jury. Unlike a conventional jury, which has only two choices, guilty or not guilty, the scientific jury has several: scientists can say yes, something is true; no, it is false; or that it might be true but more evidence is needed to decide conclusively (the question is said to be ‘intractable’).

But this leads to one final problem, states Oreskes[18]: if science is what scientists say it is, then isn’t that just an appeal to authority? And isn’t appeal to authority a logical fallacy? Herein lies the paradox of modern science: science really is an appeal to authority, but not to the authority of any individual, no matter how smart that individual is. It is the authority of the collective community, based on the collective wisdom, knowledge and work of all the scientists who have worked on a particular problem. One’s basis for ‘trust’ in science is therefore the same as one’s basis for ‘trust’ in technology, and in anything else: experience.

However, this trust shouldn’t be ‘blind trust’, any more than we would trust anything else blindly. One’s trust in science, like science itself, should be based on evidence, which means that scientists must continue to become better communicators and society must become a better listener.


Confirmation Bias. September 4, 2010. https://explorable.com/confirmation-bias (accessed November 14, 2015).

Holt, Tim. Pascal’s Wager. http://www.philosophyofreligion.info/theistic-proofs/pascals-wager/ (accessed November 14, 2015).

Magee, Bryan. Popper. Penguin Books, 1973.

May, Robert. “Science as organized scepticism.” Philosophical Transactions of the Royal Society, 2011.

Oreskes, Naomi. Why we should trust scientists. May 2014. https://www.ted.com/talks/naomi_oreskes_why_we_should_believe_in_science/transcript?language=en (accessed November 14, 2015).

Parallax and Distance Measurement. 2015. http://lcogt.net/spacebook/parallax-and-distance-measurement/ (accessed November 14, 2015).

[1] (Oreskes, Why we should trust scientists, 2014)

[2] (Magee, Popper, 1973)

[3] (Oreskes 2014)

[4] (Holt n.d.)

[5] (Oreskes 2014)

[6] (Oreskes 2014)

[7] (Magee 1973)

[8] (Grosjean n.d.)

[9] (Magee 1973)

[10] (Oreskes 2014)

[11] (Oreskes 2014)

[12] (Parallax and Distance Measurement 2015)

[13] (Oreskes 2014)

[14] (Magee 1973)

[15] (Oreskes 2014)

[16] (May 2011)

[17] (Oreskes 2014)

[18] (Oreskes 2014)

Published by

Alexander BRUCE (Student Year 11)

Stanmore Year 11 Manton Student
