Good science is objective. This has long been the belief of scientists and non-scientists alike, at least since the philosopher and statesman Francis Bacon outlined his vision of what we might now call the ‘scientific method’ in his influential 1620 work Novum Organum. Bacon argued that we should investigate nature through inductive reasoning, carefully observing the world while maintaining scepticism so as not to be misled by our own mental impediments. Over 250 years later, Charles Darwin described himself as having “worked on true Baconian principles and without any theory collected facts on a wholesale scale”. This view of science, as the disinterested accumulation of facts free from personal prejudice, has remained a persistent force through modern history – but how accurate, or even desirable, is it really?

Given that ‘objectivity’ is such a broad and nebulous term, there are many different things that people might mean when they claim that science is – or should be – objective. For some, science is objective as long as it follows a rigorous empirical method which irons out idiosyncrasy and bias. For others, objectivity is secured through the willingness of scientists to expose themselves to peer review and have their errors painstakingly scrutinised by anonymous experts. One prominent understanding of objectivity holds that scientists should refrain from making so-called ‘non-epistemic value judgements’ – in other words, they should simply focus on facts and truth, without letting social, political or moral judgements influence their hypotheses.

This seemingly commonsensical maxim, often referred to as the ‘value-free ideal’, plays a significant role both in how scientists go about their work and in how the public perceives it. After all, it is the supposed objectivity of science that is responsible for widespread public trust in the findings of a discipline that most people don’t really understand. We are generally happy to take scientists at their word, because we assume that non-epistemic values have not factored into their research. However, if you start looking for examples of truly ‘value-free’ science, things get a little more complicated.

To understand why, consider what happens when a scientist conducts some research and is faced with the choice to accept or reject their hypothesis. The scientist might be fairly confident in their findings, but certainty is impossible – there are no deductive means of determining whether hypotheses are true or false based on experimental evidence. This means that a scientist always runs the risk of accepting a false hypothesis, or rejecting a true one. Without any objective means of choosing, the scientist must make a judgement based on their relative distaste for false positives or false negatives. In simpler terms, the question is this: would you rather take a chance on a theory and be wrong, or err on the side of caution and miss out on something right?
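
To see the trade-off in miniature, consider a small simulation (a sketch in Python, with entirely hypothetical numbers rather than data from any real study). A scientist repeatedly decides whether to accept a hypothesis on the basis of noisy evidence; wherever they set the threshold of evidence required, lowering it lets in more false positives, while raising it produces more false negatives – the Type I and Type II errors familiar from statistics.

```python
import random

random.seed(0)  # reproducible runs for this illustration

def run_trials(threshold, n=100_000):
    """Return the false-positive and false-negative rates (as fractions
    of all trials) for a given evidence threshold."""
    false_pos = false_neg = 0
    for _ in range(n):
        truth = random.random() < 0.5           # is the hypothesis actually true?
        signal = 1.0 if truth else 0.0          # true hypotheses shift the data
        evidence = signal + random.gauss(0, 1)  # noisy measurement
        accept = evidence > threshold
        if accept and not truth:
            false_pos += 1                      # accepted a false hypothesis
        elif not accept and truth:
            false_neg += 1                      # rejected a true one
    return false_pos / n, false_neg / n

# Three candidate thresholds: the evidence itself cannot tell us which
# one to use, because each simply trades one kind of error for the other.
for t in (0.2, 0.5, 0.8):
    fp, fn = run_trials(t)
    print(f"threshold={t:.1f}  false positives={fp:.3f}  false negatives={fn:.3f}")
```

Nothing in the data says whether a cautious or a permissive threshold is ‘correct’; that depends on how costly each kind of mistake would be, and weighing those costs is a value judgement rather than a measurement.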

This dilemma, which has received extensive attention from philosophers of science in recent decades, is known as the problem of inductive risk. The philosopher Heather Douglas has argued that choices such as these unavoidably require a turn to non-epistemic values, and indeed that scientists have a moral obligation to make these decisions themselves rather than leaving them to policymakers, given the immense authority that scientists carry in our society and their important practical role in decision-making. This suggests that value-free science is neither a realistic model nor even an appealing one.

If this seems somewhat abstract, it might be helpful to consider the example of climate science, a field in which questions of objectivity and inductive risk have been actively debated and have real consequences. Climate scientists grapple with high levels of uncertainty and regularly face criticism about a lack of objectivity. Despite uncertain findings and disputes over what to do about them, the stakes are so high and the need for action so urgent that scientists are often called on to provide testimony at a speed which outpaces the formation of consensus. This creates a tension: climate scientists must report their findings in a way that allows policymakers to act on them effectively and respond to the potentially devastating impacts of climate change, yet they must also be seen as objective, maintaining their scientific authority so that policymakers will continue to consult them.

This requirement for climate scientists to appear objective can create problems in cases where a more emphatic endorsement of a particular policy, or plainer and less technical language, would be more helpful for policymakers and stakeholders. As the philosopher Torbjørn Gundersen found in his 2020 interview study of Norwegian climate scientists, many felt that their “hands were tied” and that they had to be overly cautious in their recommendations so as to be seen as objective. Reportedly, this led to the exclusion of potentially policy-relevant information from their reports. In addition, some expressed a feeling of personal conflict: although as scientists they recognised the need to be impartial and not take sides, they also felt a responsibility to act as more vocal climate advocates, given their knowledge of the risks facing humanity.

A rigid adherence to unhelpful standards of objectivity might not be the best approach for scientists, but at present they are often constrained by public expectations. This can have seriously problematic consequences when the public gets to peer behind the curtain and see how science really works. A famous example is the ‘Climategate’ controversy of 2009, in which more than a thousand emails were leaked from the University of East Anglia’s Climatic Research Unit. Climate sceptics claimed that the emails revealed scientists engaging in improper, ‘non-scientific’ practices, such as the inference of causation from correlation or the refusal to include certain data sets in their analyses. In reality, however, these were for the most part legitimate, respectable scientific practices. The problem stemmed from the public belief that scientists do not – and should not – usually act in this way.

Scandals such as these can be extremely harmful, leading people to withdraw their warranted trust in valid scientific claims when they learn of scientific practices which do not align with their expectations. As the philosopher Stephen John has argued, public trust in science can be fragile when it is based on a false ‘folk philosophy of science’, which assumes that scientists pursue the truth in an objective manner and are uninfluenced by non-epistemic values relating to social or political considerations.

A bit of disenchantment with the idea of science as objective might therefore be a good thing, both for scientists and for society. If we imagine that scientists are in the business of dispassionately investigating reality and uncovering facts in a value-free manner, we hold them to an impossibly high standard and are guaranteed to be disappointed when they inevitably fall short. In their book The Golem: What You Should Know about Science, the sociologists Harry Collins and Trevor Pinch argue that scientists are experts like any other, be they economists, weather forecasters or plumbers. Though none of these people are perfect, we do not expect them to be, and are still willing to rely on their expertise. As Collins and Pinch note, society is not rife with anti-plumbers because being anti-plumbing is not a choice available to us, and the alternative (“plumbing as immaculately conceived”) is simply not realistic. If we can accept that the same is true for science, we might all be better off.

How could we achieve such a shift in the public perception of science and objectivity? As with so many things, the solution may begin with education – there are vast untapped opportunities to teach children about the role of science in society, and expose them to the messy reality of scientific practice. Science isn’t always objective in the way we think it is, but that’s part of what makes it so valuable.

Zak Lakota-Baldwin recently graduated from St John's College, having completed a master's in History and Philosophy of Science.