Tag Archives: Nature of Science

Absolute Certainty Is Not Scientific

That’s the title of an editorial by Daniel Botkin, president of the Center for the Study of the Environment and professor emeritus at the University of California, in today’s Wall Street Journal.

With the ongoing polarization of science in today’s political environment, it’s more important than ever to remember that science is filled with uncertainty.  Everything that scientists know about how the world works has been discovered through observation and experimentation.  None of us were around at the very beginning, so we can never be absolutely certain about how the world works, though we can be very confident in our current understanding of it.

You can’t prove anything to be true in science.  This seems counterintuitive to many people, including many of my former students, who used to insist that they had proven their point because the data supported their hypotheses.  But since we will never be absolutely certain about how the world works, we can never prove that any particular hypothesis or theory is absolutely true.  That’s why good scientists design experiments to disprove their hypotheses.  While you can’t prove anything to be true, you can prove things to be false.

So good scientists are forever questioning their assumptions, looking for evidence that their hypotheses and theories are wrong, open to the idea that they may have misinterpreted the data.  It’s vitally important for science teachers to remind their students to have this kind of healthy skepticism; scientific progress cannot easily proceed if people entrench themselves in opposing camps without regard for the data.

This is what the High-Adventure Science investigations aim to do: immerse students in the data on climate change, the search for extraterrestrial life, and freshwater resources, without making all-or-nothing judgments about the current state of the science.

“If you think that science is certain–well that’s just an error on your part.” ~Richard Feynman


Good Science/Bad Science

How can you tell when a scientific claim is bad?

Look at the results.  Compare the model’s predictions with what happened in real life.

An August 2010 study published in Science claimed that drought induced a decline in global plant productivity during the past decade, posing a threat to global food security.  Zhao and Running, the authors of that study, set up their model based on their expectations that global plant productivity would continue to increase, as it had in the 1980s and 1990s.

A new study has found that Zhao and Running’s 2010 model was flawed.

… According to the new study, their model failed miserably when tested against comparable ground measurements collected in these forests. “The large (28%) disagreement between the model’s predictions and ground truth imbues very little confidence in Zhao and Running’s results,” said Marcos Costa, coauthor, Professor of Agricultural Engineering at the Federal University of Viçosa and Coordinator of Global Change Research at the Ministry of Science and Technology, Brazil.

What went wrong?

The authors of the original study included poor-quality data and did not test trends for statistical significance.  They also didn’t test their assumptions against real-life measurements.  There was a 28% disagreement between the model’s results and real-life results–far too much for a useful model!
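To make the 28% figure concrete, here is a sketch, with made-up numbers, of how such a model-versus-ground comparison might be scored (the study’s actual metric may differ; this just illustrates the idea of quantifying disagreement):

```python
# Hypothetical model predictions vs. ground measurements (made-up numbers,
# chosen only to illustrate the calculation; not data from the study):
model_npp  = [1.20, 0.95, 1.10, 1.30]   # modeled plant productivity
ground_npp = [0.90, 1.25, 0.85, 1.05]   # measured on the ground

# Mean absolute percent disagreement between model and ground truth:
errors = [abs(m - g) / g for m, g in zip(model_npp, ground_npp)]
mape = 100 * sum(errors) / len(errors)
print(f"disagreement: {mape:.0f}%")  # prints "disagreement: 28%" for these values
```

A disagreement that large means the model’s outputs can’t be trusted as a stand-in for reality until the model is revised.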

So what’s the lesson from all this?  Don’t trust scientists?  Don’t trust models?

No.  The lesson is that scientific progress is made when scientists question their own and each other’s assumptions about what they think should happen.

Could all of this have been avoided?  Yes, if Zhao and Running had tested their model more thoroughly against real-life measurements, removing their biases from their work as much as possible.

Scientists, like all other humans, make errors.  Question the basic assumptions of each claim, and see how the models hold up to a real-life test.  That’s how you’ll know when you’re dealing with good science.

Learn some good science in the High-Adventure Science investigations on climate, water, and space.


What makes scientists more certain?

Over the past five days, Hurricane Irene has affected the weather for residents of the East Coast.  In the northeastern United States, the forecasts of the storm’s intensity turned out to be wrong; the storm weakened more than meteorologists had expected.

At the same time, the prediction of where the storm would go was very good.  Why was there such a difference between the two forecasts?

“People see that and assume we can predict everything,” National Hurricane Center senior forecaster Richard Pasch said.

“It’s frustrating when people take our forecasts verbatim and say, ‘This is where it’s going to be at this time and this is how strong it’s going to be,’” Pasch said. “Because even though the track is good it’s not certain.”

What will improve the forecasts?  More data.

The computer models that did so well at predicting Irene’s path use large-scale data.  “The keys to intensity changes are usually too small for big computer models,” said Georgia Tech meteorology professor Judith Curry.

Retired hurricane center director Max Mayfield says what’s needed is better real-time, small-scale information, like Doppler radar.  NOAA used old propeller planes to take Doppler radar data inside Irene, and that information will be used to design better intensity forecasts in the future, he said.

With more data, meteorologists can build better models, which will more accurately predict the intensity of future storms.  This is applicable across all fields of science: more data leads to better models, and better models lead to more accurate predictions.
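The “more data leads to better models” point can be illustrated with a toy simulation: estimate a fixed quantity from n noisy measurements, and the typical error shrinks roughly like 1/sqrt(n).  (All numbers here are made up for illustration; real forecast models are far more complex.)

```python
import random

random.seed(2)
true_value = 10.0   # the "right answer" that our noisy measurements approximate

# Average estimation error when a quantity is estimated from n noisy
# measurements (measurement noise sd = 2.0), averaged over 500 repeats.
# The typical error shrinks roughly like 1 / sqrt(n): more data, better estimate.
avg_error = {}
for n in (10, 100, 1000):
    runs = 500
    avg_error[n] = sum(
        abs(sum(random.gauss(true_value, 2.0) for _ in range(n)) / n - true_value)
        for _ in range(runs)) / runs
    print(f"n = {n:>5}: average error = {avg_error[n]:.3f}")
```

Each tenfold increase in data cuts the typical error by roughly a factor of three, which is why collecting better small-scale observations inside storms pays off in forecast accuracy.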

Learn about how scientists use new data to make better models of Earth’s future climate and fresh water availability with High-Adventure Science investigations.


Causality: How to Interpret Graphs

Graphs are often used to present data; they provide a powerful way to show numerical trends.  But graphs can also be constructed poorly and misinterpreted.

(Source:  http://xkcd.com/925/)

In the comic, the man in the hat has made a graph that plots the incidence of cancer in the United States alongside the number of cell phone users.  The incidence of cancer has been fairly steady over the past 30 years, while the number of cell phone users has increased.

This means that cancer causes cell phones, right?  The graph shows cell phone use rising just as cancer incidence starts to plateau, so that conclusion seems to fit the data.  Or does it?

Is there another–better–way to interpret this graph?  What does that graph really show?
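One way to see why “the lines move together” is weak evidence: two quantities that merely share a trend over time will show a strong correlation even when neither has anything to do with the other.  Here is a small Python sketch with made-up numbers (both series are just “trend plus independent noise”):

```python
import random

random.seed(1)

# Two made-up quantities that both drift upward over 30 "years" but are
# generated completely independently of each other (trend + random noise):
years = range(30)
series_a = [2 * t + random.gauss(0, 3) for t in years]
series_b = [5 * t + random.gauss(0, 8) for t in years]

def correlation(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# The correlation is high even though neither series causes the other;
# they only share a common driver (time).
r = correlation(series_a, series_b)
print(f"r = {r:.2f}")
```

Correlation alone can’t tell you which causal story, if any, is true; that takes controlled experiments and a mechanism.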

Explore how good scientists draw conclusions from data in our High-Adventure Science investigations in climate, space, and water.

Wanted: Cause of the End of “Snowball Earth”

A new study has been published disproving the previous explanation for the end of the Marinoan ice age, also known as “Snowball Earth.”  That ice age ended abruptly about 600 million years ago.

The debunked explanation stated that methane bubbled up from the oceans and was consumed by microbes, which released carbon dioxide into the atmosphere, warming the Earth.  Earlier scientists had interpreted “bubbles” in the rocks as evidence of the ancient microbial activity.

A new study on those rocks showed that they were formed under very high temperatures–temperatures at which no microbes are known to survive.  In addition, better dating of the rocks showed that the “bubbles” were formed millions or tens of millions of years after the end of the ice age.

So scientists still don’t have an explanation for the end of “Snowball Earth.”  But they do know a couple of things that didn’t cause the end of the ice age.

As scientists come up with new explanations for the end of the ice age, those explanations will be tested by other scientists.  When explanations can be disproved with evidence, science moves forward.  We may never discover the true cause of the end of “Snowball Earth,” but one thing’s for sure–we’ll know a lot more about how the Earth works by trying to craft a good explanation.  That’s the way science works!


Thinking like a scientist

Nearly every day, newspapers report on new scientific breakthroughs.  Scientists report the uncertainty in their results, often expressed as a p-value.

The p-value is the probability of obtaining a result at least as extreme as the one observed if chance alone were at work; a lower p-value indicates that the reported result is unlikely to be due to chance.  In scientific studies, a p-value below 0.05 (less than a 5% probability that such a result would occur by chance alone) is conventionally considered significant.  Put another way, if there were no real effect and the same test were run 100 times, a result that extreme would be expected to turn up only about 5 times.

(Caption from the accompanying xkcd comic: “So we did the study again, and got no link. It was probably a–” “RESEARCH CONFLICTED ON GREEN JELLY BEAN/ACNE LINK; MORE STUDY RECOMMENDED!”)


Scientists test their hypotheses multiple times to be sure of the significance of their results.  Even when a single test reaches a significant p-value, there is still up to a 5% chance that the result is a fluke.
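A quick simulation makes this concrete.  The sketch below (Python, standard library only; the z-test setup and all numbers are illustrative) shows that even when nothing real is going on, about 1 test in 20 crosses the p < 0.05 line, and running 20 tests at once makes a fluke “significant” result very likely:

```python
import math
import random

random.seed(0)

def z_test_p_value(sample):
    """Two-sided p-value for the null hypothesis 'the true mean is 0',
    assuming the data come from a normal distribution with sd = 1."""
    n = len(sample)
    z = (sum(sample) / n) * math.sqrt(n)
    phi = 0.5 * (1 + math.erf(abs(z) / math.sqrt(2)))  # normal CDF at |z|
    return 2 * (1 - phi)

# 10,000 "experiments" in which nothing real is happening (the mean is 0):
trials = 10_000
hits = sum(
    z_test_p_value([random.gauss(0, 1) for _ in range(30)]) < 0.05
    for _ in range(trials)
)
rate = hits / trials
print(f"false positive rate: {rate:.1%}")  # close to 5%

# Run 20 such tests at once (like the comic's 20 jelly-bean colors) and the
# chance that at least one looks "significant" by luck alone is much higher:
print(f"chance of at least one fluke in 20 tests: {1 - 0.95**20:.0%}")  # 64%
```

This is exactly the trap in the jelly bean comic: test enough hypotheses and chance alone will hand you a headline.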

Unfortunately, that doesn’t make for good newspaper headlines.  So, when you read news about science, think like a scientist and look at the data and results with a scientifically critical eye.


From xkcd: http://xkcd.com/263/

Question: How can we trust ourselves (or scientists) to know the truth about anything?

Answer: We look at the evidence.

Scientists back up their claims with evidence.  If the evidence doesn’t fit the claim, then the claim is rejected and revised.  New evidence can result in changes to long-held understandings about how the world works–it is the evidence that rules in the scientific process!

Through experiments and models, scientists test their hypotheses to learn more about how the world works.

Will we ever be totally certain about how the world works?  Nope.  But that just means that there will always be something to discover!

Test your own hypotheses with models in the High-Adventure Science curriculum modules: “What will Earth’s climate be in the future?” and “Is there life outside of Earth?”.