Introduction to
Hypoxia or Hubris?
How Computer-Models Are Creating Bad Science
Errors of mathematics in predicting hypoxia in the Gulf of Mexico

by Douglas Moreman of Baton Rouge, Louisiana

The Warning of von Neumann's Elephant

Warning of a possible misuse of computer-models, John von Neumann said to physicist Enrico Fermi:

"With four parameters I can fit an elephant,
and with five I can make him wiggle his trunk."

John von Neumann has been judged to be one of the most creative mathematicians of all time. One of his many creations was a fundamental part of the Computer Revolution we see in motion around us. When computers were still very young, von Neumann warned us of a specific misuse of them. Sciences and economies are suffering today from a failure to heed his warning. The failure is widespread - nearly universal, it seems, in some of the academic/governmental complexes of science.

In some areas of science, rather than heeding that warning, researchers take the wiggling of the trunks of imaginary elephants as a sign of advanced and reliable "research" - when the opposite is true.
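To see the warning in miniature, consider this sketch in Python, with made-up numbers standing in for data: a model with as many free parameters as there are data points will fit those points exactly, no matter what generated them - even pure noise.

    import numpy as np

    # Five hypothetical "observations" - here, pure random noise,
    # with no underlying law behind them at all.
    rng = np.random.default_rng(0)
    x = np.arange(5.0)
    y = rng.normal(size=5)

    # A degree-4 polynomial has five free parameters, one per data
    # point, so it reproduces the noise exactly - trunk-wiggle and all.
    coeffs = np.polyfit(x, y, deg=4)
    fit = np.polyval(coeffs, x)

    print("largest misfit:", np.abs(y - fit).max())  # ~0: a "perfect" fit to nothing

The perfect fit tells us nothing about the world; it tells us only that the model had enough knobs.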

This web site dissects one really good example of how bad modeling works. As you see how it works, bear in mind that the lead author of the example was the Chief Scientist of a division of NOAA and that others of its authors had international reputations.

This is a graph of a quantity Z with respect to a quantity N. As you can see without arithmetic, the correlation of Z to N is about 0. Yet, it has been strongly believed among a grouping of eminent scientists that Z increases with N. Advice has been given to governments based upon this belief, and turned into government action. You might ask: is it possible for an entire field of science, albeit a small field, to run, herd-like, into a blind alley? And to take part of the government of the United States with it? If you have enough sense of mathematics to understand this graph, you will be able to see that the herd failed to see the trap. And their first error was in not including that graph in their article.
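For readers who do want the arithmetic, near-zero correlation is trivial to verify. A minimal sketch, with placeholder arrays standing in for the published N and Z values (which are not reproduced here):

    import numpy as np

    # Placeholder arrays - substitute the actual published N and Z series.
    N = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
    Z = np.array([3.1, 2.8, 3.3, 2.9, 3.2, 3.0])

    # Pearson correlation coefficient of Z against N.
    r = np.corrcoef(N, Z)[0, 1]
    print("r =", round(r, 2))  # near zero for data like those in the graph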

We will see in the main example-article, taken from the science of the Gulf of Mexico, that "successful" multi-parameter curve-fitting swelled an already exaggerated certainty in a plausible but unproven theory.

In addition to the clear error of von Neumann's Elephant, the acclaimed article has other errors, detailed in Hypoxia or Hubris, that seem to have gone unnoticed by the "peers":

* mere curve-fitting of past events was mistaken for confirmational "prediction,"
* testable implications of the model were ignored,
* historical graphs that, clearly viewed, argue against the long-term predictions were seen as favorable to them, and
* ill-conceived predictions far into the future were packaged into advice given to a powerful agency of government.

The model, Scavia 2003, dissected in this web site, ought never to have been taken seriously, because it is clearly too simple for the phenomena it represents. At least one of the authors seems to have been pleasantly astonished that such a simple model could "predict" the existing data so successfully.

But, in its simplicity, the model implies impossible physical conditions. Had anyone thought carefully about its equations, this implication might have warned him to fix or to abandon the model. Rather, in the error that von Neumann had warned us against, the model's parameters were selected so that the model "successfully" matched some data. This matching of old data was called "prediction." The model and its theory were then assumed to have been confirmed well enough to be used in advising government.
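The difference between fitting and predicting is easy to exhibit. A minimal sketch, again with hypothetical numbers standing in for a real record: tuning a model on an entire historical record and then scoring it on that same record (a hindcast) always flatters the model; an honest test scores it on observations withheld from the tuning.

    import numpy as np

    # A hypothetical 16-year record - substitute real observations.
    rng = np.random.default_rng(1)
    t = np.arange(16.0)                      # years since the record began
    obs = 10 + 0.1 * t + rng.normal(scale=2.0, size=t.size)

    # "Hindcast": tune a 4-parameter model (a cubic) on ALL the data,
    # then score it on the very data it was tuned to match.
    c_all = np.polyfit(t, obs, deg=3)
    hindcast_err = np.mean((np.polyval(c_all, t) - obs) ** 2)

    # Genuine prediction: tune on the first 12 years only,
    # then score on the 4 withheld years.
    c_fit = np.polyfit(t[:12], obs[:12], deg=3)
    forecast_err = np.mean((np.polyval(c_fit, t[12:]) - obs[12:]) ** 2)

    print("in-sample (hindcast) error:", round(hindcast_err, 2))
    print("out-of-sample (forecast) error:", round(forecast_err, 2))
    # The forecast error is usually far larger: matching old data
    # is not the same thing as predicting.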

The model implies that the concentration of oxygen is sometimes negative. Why would this not disprove the model?
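How can an oxygen model compute negative oxygen? The model is widely described as a Streeter-Phelps oxygen-sag formulation, and nothing in equations of that family keeps the computed concentration non-negative. A minimal sketch, assuming that form and using illustrative parameter values (not the article's):

    import numpy as np

    # Classical Streeter-Phelps oxygen-sag model with illustrative values.
    DO_sat = 8.0    # saturation dissolved oxygen, mg/L
    L0 = 50.0       # initial oxygen demand, mg/L
    kd = 0.5        # decay rate of the demand, 1/day
    ka = 0.6        # reaeration rate, 1/day
    D0 = 2.0        # initial oxygen deficit, mg/L

    t = np.linspace(0.0, 10.0, 1001)  # time in days
    # Oxygen deficit below saturation; concentration is DO_sat - deficit.
    deficit = ((kd * L0 / (ka - kd)) * (np.exp(-kd * t) - np.exp(-ka * t))
               + D0 * np.exp(-ka * t))
    DO = DO_sat - deficit

    print("minimum computed oxygen:", round(DO.min(), 1), "mg/L")
    # Prints a negative number: the equations happily drive the computed
    # "concentration" below zero, which no real water can do.

A model intended for real water should either be constrained away from negative values or be rejected when it produces them.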

The model implies that, for several numbers Y, Y is the same as Y - 1 - an outright falsehood.

You can study this implication in the larger article Hypoxia or Hubris. For very practical reasons, no real science allows its theories to imply falsehoods.

As hinted at in the graph above, a proper graphing of the raw data, available even before the computer-model existed, would have suggested to anyone not blinded by reasoned expectation that the theory is false.


Are we being told something important by this fact: clear errors of mathematical reasoning were not detected by "the peers," and the article was published in a prominent journal? What other errors lurk in our environmental sciences? Are any of their theories and models that depend upon careful reasoning actually correct? How much of the nation's economy is being squandered because of their bad advice to governments?

The example dissected in this website, though complex in some ignorable details, might be the simplest of examples of widely acclaimed, peer-accepted bad science. Thus, it might provide a good learning experience and warning for thinking people - even those who have no particular interest in "dead zones." The errors might be trivial compared to some similar errors in other environmental sciences. But, they are accessible and suggest, clearly I think, that profound errors are being made routinely in modern science.

Being blinded by Belief-Based Hubris is a human condition. Scientists, too, are human. Some of our Beliefs are made particularly plausible by reasoning from empirical roots - but history shows that objectivity and rationality are not perfect defenses against powerful, mind-melding social instincts.

The multiply-flawed model shows signs of being the product of a peer-clique that is blinded by a shared certainty. It
* passed "peer-review,"
* was extolled and referenced by 145 papers,
* was listed for years on a government web-site as an example for environmental scientists to emulate, and
* was used as a basis of decisions in government.

The societal success of the model, Scavia 2003, and of its theory of a single springtime cause of hypoxia in the Gulf of Mexico, reminds us that people often move in mutually confirming "herds." I believe that participating in "herds" and "cliques" is instinctive in all of us, and, therefore, forgivable.
A peer-clique in Science is a clique whose members, reviewing prospective journal-articles, share a defining beliefplex. Heavy jargon and peer-snobbery are barriers they put up, instinctively it seems, to communication with the outside world. Such barriers insulate peer-cliques from detection of their shared errors. In the case of some Environmental Sciences, the unintentional shutting out of people of real mathematical ability and training has allowed the accumulation of significant errors that might otherwise have been avoided.
Some cliques in Science, though having no evil intent, have been giving bad advice to government for decades. Harm is being done by a systemic failure of Science. Much of what we have been told is so just isn't so. How can such mistakes happen?

Study Hypoxia or Hubris? and find out.

Withering rebuttals are welcome:
doug@dolpnininspiredsonar.com