2 schools, 50 students, 10 creative solutions… the Group 4 Project 2015

From 5 to 6 November, Year 11 IB Science students joined forces with 15 students from Al Zahra College to complete the IB Group 4 Project – a mandatory component of the IB Science Course. This Project allows students to gain experience testing out their scientific ideas and to work collaboratively with others across various disciplines – physics, chemistry, biology and environmental science.
This year’s Project looked at some of the challenges boys at our brother school, Tupou College in Tonga, might face.

Year 11 student Eric Sheng (11/ME) said his group investigated clean water. “My group looked at the need for clean drinking water. We designed and made a model of a solar-based water purification system.”
Some other student initiatives included a solar water heating system so that students could have warm showers, wind turbine generators to create power, biogas collection, and desks that double up as hammocks.
Eric said that while it took two weeks of planning to be able to put their designs to the test, the results were not always what they expected.
“Not all the ideas survived this rigorous process [of testing that took place after the two weeks of preparation]; every group encountered problems and had to think of new ways to do things. Unlike most of what we do in classrooms, there are no tried and tested instructions or correct answers in the Group 4 Project,” he said.
“I for one found improvising solutions to unexpected problems the most exciting part of the Project… As the IB emphasises, the focus is on the processes rather than the products of the experience. The Group 4 Project was certainly a valuable learning experience.”
For fellow classmate Fletcher Howell (11/JN), the Project tested his ability to be open-minded, flexible and considerate towards the working styles of others in the group. “Students were also required to be good communicators, creating a short film for presentation at the conclusion of the day. This was aimed at outlining the processes and the methods through which any thinking was applied,” he said.
“For me personally, I realised that getting a group of five people motivated towards achieving a collective goal is difficult. There is a lot of work that goes into generating a solution, let alone ensuring it is practical in that it is cost-effective, safe and made out of the appropriate materials. The project was a worthwhile thinking exercise and outlined how the theory and skills we develop in class actually apply to the real world.”
A 30-minute reflection task summarising the task and results marked the conclusion of the intensive two days of testing and discovery. The reflection allowed students to consolidate their thinking and consider their personal development. Because the Project is entirely student-centred, it was invaluable not only for the students’ learning about teamwork and the skills of management and delegation, but also as a chance to plan, test and reflect on real-world solutions to real-world problems.

Why Should We Trust Scientists?

Numerous societal issues and questions rely heavily on scientific information, and so do most of their answers. Scientists continually reveal new issues facing mankind, such as climate change and the breakdown of the ozone layer, and these revelations, along with scientific discoveries, inventions and theories – the effects of vaccinations and of genetically modified plants and animals, for example – are constantly under the scrutiny of society. These debates often pose the question: why should we trust scientists? The answer is itself constantly debated, and this post will explore the views and ideas of three prominent names: Naomi Oreskes, a famous American historian of science, Karl Popper, a famous philosopher, and Thomas Kuhn, another famous philosopher.

It is necessary to point out that a subset of this debate is whether science is something to be ‘believed’, something in which to place one’s ‘faith’, or something to be ‘accepted’. Oreskes states that science should be contrasted with faith and that belief is in the domain of faith.[1] Popper argues it is something to put one’s faith in, as it can never truly be ‘confirmed’.[2] This debate will not be explored in this post, merely acknowledged, as it merits a whole post of its own.

Firstly, Oreskes[3] points to the idea of Pascal’s Wager[4]. She states that some scientists believe this idea, traditionally confined to religious debates, can be applied to science. In short, it holds that one should ‘believe’ science because the potential consequences of not ‘believing’ far outweigh anything lost by ‘believing’. This is a valid practical argument, but it says nothing about the validity of the science itself: it bases ‘belief’ on the consequences of that ‘belief’, rather than on the actual validity of the science.

Oreskes points out that, in most cases, society’s ‘acceptance’ of science rests on a ‘leap of faith’, because the general populace does not have sufficient knowledge to fully understand the reasoning behind a theory, discovery or idea, let alone the knowledge needed to challenge it. She goes on to say that this is actually true of most scientists as well, given the incredible branching and diversity now present in science: scientists’ specialties can often be so far removed from one another that their knowledge of another branch of science is equivalent to that of the general populace. Oreskes asks, ‘So, if even scientists themselves have to make a leap of faith outside their own fields, then why do they accept the claims of other scientists? Why do they believe each other’s claims? And should we believe those claims?’[5]

Oreskes[6] explores the argument that society should ‘believe’ science because of the scientific method. This method is hypothetico-deduction, sometimes referred to as the deductive-nomological model. According to the standard model, scientists develop hypotheses, deduce the consequences of those hypotheses, and then go out into the world to test whether those consequences hold. If the evidence supports the hypothesis, the hypothesis is said to be true. The reliability of this model is the basis for most societal ‘belief’ in science and ‘acceptance’ of the validity of the theories and laws that arise from it.

However, as Popper[7] argues, the idea that this deductive method proves validity is false, because of confirmation bias. Confirmation bias, or myside bias, is the tendency to search for or interpret information in a way that confirms one’s belief, hypothesis or worldview. It is a systematic error of inductive reasoning that leads to information being interpreted in a biased way and, in turn, to biased conclusions. It is linked to congruence bias: an over-reliance on directly testing a given hypothesis while neglecting indirect testing, that is, attempts to falsify it. One searches for the consequences and information one would expect if the hypothesis were true, rather than seeking contradictions that would falsify the conclusion.

The term ‘confirmation bias’ was coined by Peter Wason, an English psychologist who conducted numerous experiments into the phenomenon and its effects.[8] In 1960, Wason ran an experiment in which he challenged participants to identify a rule applying to triples of numbers, telling them that the sequence ‘2, 4, 6’ fitted the rule. The participants were allowed to generate their own triples and were told whether or not each one fitted the rule. Wason found that participants formed a hypothesis and then generated triples designed to prove or confirm it, rather than to disprove or falsify it. This led to guesses such as ‘numbers ascending by 2’ or ‘ascending even numbers’. Upon being told that neither was correct, participants continued guessing, going as far as ‘the first two numbers in the sequence are random, and the third number is the second number plus 2’ or ‘the middle number is the mean of the other numbers’, or sometimes just re-wording their original hypothesis, such as ‘each number is 2 greater than its predecessor’.

The actual rule was simply ‘any ascending sequence’, but because participants sought to prove or confirm their hypotheses through direct testing, rather than to disprove or falsify them through indirect testing, they repeatedly came to the wrong conclusion. The experiment makes clear that confirmation is preferred over falsification.
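To make the failure of the confirming strategy concrete, here is a minimal sketch in Python. The rule and the participant’s hypothesis are the ones described above; the code itself is purely illustrative.

```python
# Wason's 2-4-6 task in miniature: the participant's hypothesis
# ("each number is 2 greater than its predecessor") is narrower than
# the actual rule ("any ascending sequence"), so triples chosen to
# CONFIRM the hypothesis can never expose the mistake.

def actual_rule(triple):
    a, b, c = triple
    return a < b < c  # the experimenter's rule: any ascending sequence

def hypothesis(triple):
    a, b, c = triple
    return b == a + 2 and c == b + 2  # the participant's guess

# Direct (confirming) tests: triples chosen because they fit the hypothesis.
for t in [(2, 4, 6), (10, 12, 14), (1, 3, 5)]:
    print(t, "fits the rule:", actual_rule(t))  # True every time

# Indirect (falsifying) test: a triple that violates the hypothesis.
t = (1, 2, 3)
print(t, "fits the hypothesis:", hypothesis(t), "| fits the rule:", actual_rule(t))
# -> fits the hypothesis: False | fits the rule: True
```

Every confirming triple fits both the hypothesis and the real rule, so direct testing can never distinguish between them; only the final, hypothesis-violating triple carries any information, which is exactly the falsification Popper advocates.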

A particular example of how confirmation bias resulted in societal acceptance of a flawed scientific theory can be seen in Einstein’s challenge to Newtonian physics. Newtonian physics was the most successful and important scientific theory ever to be advanced and accepted. Everything in the observable world seemed to confirm it. For more than two centuries its laws were corroborated, not just by observation but by creative use. They became the foundation of Western science and technology, yielding marvellously accurate predictions. To generation after generation, Newton’s scientific laws were taught as definitive, incorrigible fact.

Yet, after all this, Einstein put forward a theory contrary to Newton’s that claimed serious attention and went beyond Newton’s theory in its range of application. How was it possible that a theory which, although extremely accurate, had flaws, was left untouched and unrefined for more than two centuries? The answer can be attributed directly to confirmation bias and congruence bias. For more than two centuries these biases had created a focal point around Newtonian physics, so much so that it was considered wrong to explore anything contrary to the status quo. Einstein’s challenge, eventually vindicated, shows not only that confirmation bias and congruence bias were obstructing the evolution of knowledge, but also that the success of the deductive scientific method is not a valid reason to place one’s trust in science and scientists.

In philosophy, the argument above is known as the fallacy of affirming the consequent: if the hypothesis is true, then certain consequences will be observed; the consequences are observed; therefore the hypothesis is true. The inference is invalid, because a false theory can still make correct predictions, and confirmation bias makes such theories hard to dislodge. This, state Popper[9] and Oreskes[10], is why hypothetico-deduction, the deductive-nomological model, cannot be the basis of one’s ‘belief’ in science.

Oreskes[11] goes on to explore how the presence of auxiliary hypotheses further exposes flaws in the scientific deductive model, and why it is not a basis for one’s ‘belief’ in science. Auxiliary hypotheses are assumptions scientists make, often without being aware that they are making them. An example can be seen in the evolution of the scientific understanding of the universe: the Ptolemaic model was held as fact for many centuries because many of its predictions were verified.

When Copernicus proposed that the Earth is not the centre of the universe, and that the sun is the centre of the solar system and the Earth moves around the sun, scientists deduced that if Copernicus was right, one ought to be able to detect the motion of the Earth around the sun. This deduction gave rise to a concept known as stellar parallax[12].

The concept is that if one observes a star in December against the backdrop of distant stars, the same observation six months later, in June, should show a different backdrop. The angular difference produced is defined as the stellar parallax. When astronomers looked for the stellar parallax, they found nothing, which led to the belief that the Copernican model had been proved false.

However, in hindsight, it is known that astronomers were making two auxiliary hypotheses, both of which are now known to be incorrect. The first was an assumption about the size of the Earth’s orbit: astronomers assumed the orbit was large relative to the distance of the stars. In fact the Earth’s orbit is quite small relative to that distance, so the stellar parallax is very small and very hard to detect.
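To get a sense of just how small: a star’s parallax angle in arcseconds is the reciprocal of its distance in parsecs, p = 1/d. The quick Python sketch below uses standard modern distance figures, added purely for illustration – they are not part of the original argument.

```python
# Back-of-the-envelope stellar parallax: p (arcseconds) = 1 / d (parsecs).
# Distances are standard modern values, included only for illustration.

stars = {
    "Proxima Centauri (nearest star)": 1.30,                    # parsecs
    "61 Cygni (first parallax measured, by Bessel in 1838)": 3.50,
    "Polaris": 133.0,
}

for name, distance_pc in stars.items():
    parallax_arcsec = 1.0 / distance_pc
    print(f"{name}: parallax = {parallax_arcsec:.3f} arcseconds")

# Even the nearest star shifts by under one arcsecond -- roughly the
# apparent width of a small coin seen from a few kilometres away --
# far below what pre-19th-century telescopes could resolve.
```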

The second auxiliary hypothesis was that the telescopes of the day were sensitive enough to detect the parallax. Given what we now know about the size of the parallax, this too was false: it was not until the 19th century that scientists were able to detect it. The existence of auxiliary hypotheses makes clear that, although the deductive scientific method is useful, it is certainly not a sufficient basis for one’s belief in science.

As both Oreskes[13] and, particularly, Popper[14] explore, a lot of science is not conventional, in that it does not follow the deductive scientific method. A lot of science is inductive: scientists do not necessarily start with theories and hypotheses; often they simply start with observations of things happening in the world. Scientists also often engage in modelling, both physical and virtual (such as computer simulations), as opposed to actual testing. As the philosopher Paul Feyerabend famously said, “The only principle in science that doesn’t inhibit progress is: anything goes.” The amount of scientific knowledge that is not based on the conventional deductive method means that ‘trust’ in that method cannot produce ‘trust’ in all of science.

All this undermines the argument that society should ‘believe’ science because of the scientific method: the reliability of the model has been shown to be questionable at times, and so too the validity of theories and laws that arise from it. So how can one decide whether or not to ‘believe’ science? There is no single correct answer to this question; however, Oreskes[15] proposes one.

She asks, ‘If scientists don’t use a single method, then how do they decide what’s right and what’s wrong? And who judges?’ Her answer is that scientists judge, and they judge by judging evidence. Scientists collect evidence in many different ways, but however they collect it, they have to subject it to scrutiny. The sociologist Robert Merton argued that scientists scrutinise data and evidence through ‘organised scepticism’[16]: the scrutiny is organised, done collectively, as a group, from a position of distrust. The burden of proof is therefore on the person with a novel claim, and in this sense science is intrinsically conservative.

Because scientists judge evidence collectively, historians have come to focus on the question of consensus, and to say that, at the end of the day, scientific knowledge is the consensus of the scientific experts who, through this process of organised scrutiny, have judged the evidence and come to a conclusion about it: either true or false.

We can therefore, argues Oreskes[17], think of scientific knowledge as the consensus of experts, like a jury. Unlike a conventional jury, which has only two choices, guilty or not guilty, the scientific jury has several: scientists can say yes, something is true; no, it is false; or that it might be true but more evidence is needed to decide conclusively (the question is said to be ‘intractable’).

But this leads us to one final problem, states Oreskes[18]: if science is what scientists say it is, then isn’t that just an appeal to authority? And isn’t the appeal to authority a logical fallacy? Herein lies the paradox of modern science: science actually is an appeal to authority, but not to the authority of any individual, no matter how smart that individual is. It is the authority of the collective community, based on the collective wisdom, the collective knowledge and the collective work of all the scientists who have worked on a particular problem. One’s basis for ‘trust’ in science is actually the same as one’s basis for ‘trust’ in technology, and the same as one’s basis for ‘trust’ in anything: experience.

However, this trust should not be ‘blind trust’, any more than our trust in anything else should be. One’s trust in science, like science itself, should be based on evidence, and that means that scientists have to continue to become better communicators and society has to become a better listener.

Bibliography

Confirmation Bias. September 4, 2010. https://explorable.com/confirmation-bias (accessed November 14, 2015).

Holt, Tim. Pascal’s Wager. http://www.philosophyofreligion.info/theistic-proofs/pascals-wager/ (accessed November 14, 2015).

Magee, Bryan. Popper. Penguin Books, 1973.

May, Robert. “Science as Organized Scepticism.” Philosophical Transactions of the Royal Society, 2011.

Oreskes, Naomi. Why we should trust scientists. May 2014. https://www.ted.com/talks/naomi_oreskes_why_we_should_believe_in_science/transcript?language=en (accessed November 14, 2015).

Parallax and Distance Measurement. 2015. http://lcogt.net/spacebook/parallax-and-distance-measurement/ (accessed November 14, 2015).

[1] (Oreskes 2014)

[2] (Magee 1973)

[3] (Oreskes 2014)

[4] (Holt n.d.)

[5] (Oreskes 2014)

[6] (Oreskes 2014)

[7] (Magee 1973)

[8] (Grosjean n.d.)

[9] (Magee 1973)

[10] (Oreskes 2014)

[11] (Oreskes 2014)

[12] (Parallax and Distance Measurement 2015)

[13] (Oreskes 2014)

[14] (Magee 1973)

[15] (Oreskes 2014)

[16] (May 2011)

[17] (Oreskes 2014)

[18] (Oreskes 2014)

Is ignorance more important than knowledge?

Scientific uncertainty is an accepted and important component of scientific research; it could even be argued that it is an essential driver of research. We research because we don’t know everything. The results of that research then have to be assessed for validity, and scientists estimate how confident they are in the picture revealed. Their level of confidence depends on the level of uncertainty in the data.

But is this how uncertainty is perceived in the community? Does the general public regard uncertainty as a deficiency? When making public policy, is uncertainty worrying and a reason to be cynical about scientific data?

What does the choice of wording and language in the following two statements indicate about the authors’ belief systems?

[Image: two quoted statements on scientific uncertainty]

It is also clear that some science is more certain than other science, and the level of uncertainty may vary even within one issue. David Stainforth speaks below about the topical issue of climate change. He maintains that there is some data on climate change that the world of science is very certain of and in which there is a great deal of confidence.

[Image: quotation from David Stainforth on certainty in climate data]

Marlys H. Witte was a professor of surgery at the University of Arizona. In the mid-1980s she proposed teaching a class called “Introduction to Medical and Other Ignorance”, wanting students to work on interesting ambiguities. Her idea was not well received.

Einstein was skeptical about the value of knowledge and Voltaire about the value of certainty.

[Image: quotation from Einstein]

[Image: quotation from Voltaire]

Jamie Holmes authored an article for the New York Times in which he made the following claim:

“Presenting ignorance as less extensive than it is, knowledge as more solid and more stable, and discovery as neater also leads students to misunderstand the interplay between answers and questions.

People tend to think of not knowing as something to be wiped out or overcome, as if ignorance were simply the absence of knowledge. But answers don’t merely resolve questions; they provoke new ones.”

You can read his New York Times article here: The Case for Teaching Ignorance

So what does this mean for you as students of science? Does it mean that teachers should engage with the subject matter in a more tentative fashion? Should they be less sure that they have all the answers to your questions? Should you view your own ignorance about your subject as cause for celebration? Will this ignorance allow you to explore curiously? Should the work of educators be more about ignorance and uncertainty and less about certainty in what we know?


Stating the uncertainties in the scientific data you collect is one way for you to communicate the level of confidence you have in your data, based on the precision of the method you chose. At least now you can celebrate that uncertainty and, rather than perceiving it as a deficiency, embrace its power to keep you asking questions!
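As a concrete illustration of what stating an uncertainty looks like in practice, here is a small Python sketch of one common school-level convention, the half-range rule. The readings are invented for illustration, not real data.

```python
# A hypothetical example of stating experimental uncertainty:
# repeated measurements of a pendulum's period (invented numbers).
import statistics

readings = [2.05, 2.11, 2.08, 2.03, 2.09]  # period in seconds

mean = statistics.mean(readings)
# A common school-level estimate of uncertainty: half the range.
uncertainty = (max(readings) - min(readings)) / 2
percent = uncertainty / mean * 100

print(f"period = {mean:.2f} ± {uncertainty:.2f} s  ({percent:.1f}%)")
# -> period = 2.07 ± 0.04 s  (1.9%)
```

The half-range rule is only one convention (the standard deviation of the mean is another); whichever you use, the stated ‘± 0.04 s’ is what tells a reader how much confidence to place in the 2.07 s figure.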

 

Bibliography:

“Scientific uncertainty and global warming.” NAGT. N.p., n.d. Web. 26 Aug. 2015.
<http://serc.carleton.edu/NAGTWorkshops/affective/dilemmas/16699.html>.

Sense About Science. “Making Sense of Uncertainty: Why Uncertainty Is Part of Science.”
<http://www.lse.ac.uk/CATS/Media/SAS012-MakingSenseofUncertainty.pdf>