Certainty in complex scientific research an unachievable goal: U of T study

Physics professor says study can help researchers better analyze their data and encourage more realistic expectations among both scientists and the public about the accuracy of scientific research
The researcher looked at data – from the mass of an electron to the carbon dating of a sample – and found that anomalous observations happened up to 100,000 times more often than expected (photo by janneke staaks via Flickr)

A University of Toronto study on uncertainty in scientific research could shed light on anomalies that arose in early attempts to discover the Higgs boson or even in predicting the outcome of the recent U.S. presidential election.

Published this week in the journal Royal Society Open Science, the study suggests that research in some of the more complex scientific disciplines, such as medicine or particle physics, often doesn’t eliminate uncertainties to the extent we might expect.

“This is due to a tendency to underestimate the chance of significant abnormalities in results,” said study author David Bailey, a physics professor in the Faculty of Arts & Science.  

He believes his study can help researchers better analyze their data, motivate more care with results and encourage more realistic expectations among both scientists and the public about the accuracy of scientific research.

“These insights can be beneficial given the inherently complex nature of scientific research,” Bailey said. “But avoiding being wrong in some way, on some level, is almost impossible.”

Looking at 41,000 measurements of 3,200 quantities – from the mass of an electron to the carbon dating of a sample – Bailey found that anomalous observations happened up to 100,000 times more often than expected.  

“The chance of large differences does not fall off exponentially as you’d expect in a normal bell curve,” Bailey said.  
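
For a rough sense of scale, the sketch below compares how often a five-standard-deviation outlier should occur under a normal bell curve versus a heavy-tailed alternative. It is a minimal illustration: the Student’s t model with 3 degrees of freedom and the 5-sigma cutoff are assumptions chosen for demonstration, not details reported in the article.

```python
from scipy import stats

# Chance of a measurement landing more than 5 standard deviations from
# the truth under a normal (Gaussian) model vs. a heavy-tailed Student's t
# model. The t distribution with 3 degrees of freedom is illustrative only.
sigma = 5
p_normal = 2 * stats.norm.sf(sigma)    # two-sided Gaussian tail probability
p_heavy = 2 * stats.t.sf(sigma, df=3)  # two-sided heavy-tailed probability

print(f"P(outlier), normal model:       {p_normal:.1e}")  # ~5.7e-07
print(f"P(outlier), heavy-tailed model: {p_heavy:.1e}")   # ~1.6e-02
print(f"ratio: {p_heavy / p_normal:,.0f}x")               # tens of thousands
```

Under these assumed numbers, the heavy-tailed model produces five-sigma outliers tens of thousands of times more often than the bell curve predicts – the same kind of discrepancy Bailey reports.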

A long tail of uncertainty

“The study shows that researchers in many fields do a good job of estimating the size of typical errors in their measurements but usually underestimate the chance of large errors,” said Bailey, noting that the larger-than-expected frequency of large differences may be an almost inevitable consequence of the complex nature of scientific research.

He added that as measurements become more and more accurate, ever-smaller effects matter more and more.

“If two measurements agree, you’re happy. If not, you see there’s something you need to investigate,” he said. “You track down the cause of the variation and report the cause. Or you say that you don’t know the cause, and this reduces the trust in your result.”

But with finite time and financial resources, researchers often have to choose between a large sample of data – tens of thousands of people in a survey, say – and a large number of variables.

“You start with a very large sample that just lumps everyone together. You then might have to ask if your result is the same for both men and women. Is it the same for different backgrounds – Canadians versus Americans, for example?” Bailey said. “At that point, you have to ask if your results hold for the smaller data set. Your sample is getting smaller and more can go wrong.”
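
A back-of-the-envelope sketch illustrates the trade-off. The sample sizes, the 50/50 response split and the subgroup fractions below are hypothetical, not figures from the study; the point is only how the margin of error of a simple yes/no survey estimate widens each time the sample is carved into a smaller subgroup.

```python
import math

# Hypothetical numbers, for illustration only: the standard error of a
# yes/no proportion scales as 1/sqrt(n), so each cut into a smaller
# subgroup widens the ~95% margin of error.
p = 0.5  # assumed 50/50 response, the worst case for a yes/no question

for label, n in [("full sample", 40_000),
                 ("one sex", 20_000),
                 ("one sex, one nationality", 10_000),
                 ("one sex, one nationality, one age band", 2_000)]:
    margin = 1.96 * math.sqrt(p * (1 - p) / n)
    print(f"{label:<40} n={n:>6,}  margin ~ +/-{margin:.1%}")
```

Each split answers a sharper question with a blurrier estimate.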

Impossible not to be a little wrong?

Physics studies did not fare significantly better than medical research and other fields. However, the highly quantifiable way in which physics values and uncertainties are reported may make the field more useful for gauging the degree of reproducibility that researchers should reasonably expect, he said.

“Scientists will still aim for the most accurate results, but their expectations of how well those aims are met may be tempered in light of this research,” Bailey said.
