
Teotwawki: Okay If We Confirm That Tomorrow?

What if the science of prediction is a fictive process and nothing more? Is the future, then, even if loosely knowable, specifically unpredictable?

There is little disputing Robert S. Cohen’s statement that “much of our intellectual life, and increasingly large portions of our social and political life, rest on the assumption that we (or, if not we ourselves, then someone whom we trust in these matters) can tell the difference between science and its counterfeit”.

But what if we cannot? What if the respect we pay to the science of prediction is respect to a fictive process and little more? What if the untrained human mind is unable to tell the difference between “science and its counterfeit” (just as it is unable to tell the difference between magic and its counterfeit, sleight-of-hand [essentially, between one counterfeit and its counterfeit])?

In predictive mechanisms (as in just about everything else today), the problem is that of boundaries. The demarcation problem in the philosophy of science seeks to address what is science and what is non-science (including anti-science, pseudo-science, beliefs, the arts and literature). By virtue of what it focuses on, this article disdains anti-scientific predictive mechanisms. By virtue of the same, it must depend upon Larry Laudan’s prescription that “above all, to have science one must have apodictic certainty.” (‘The Demise of the Demarcation Problem’, in Cohen, R.S.; Laudan, L., eds, Physics, Philosophy and Psychoanalysis: Essays in Honor of Adolf Grünbaum)

In effect, science must have, as both witness and componentry, the character of the evidence upon which it stands. As ever, Aristotle beat us to it: αποδεικτικός χαρακτήρας (‘apodeiktikós charaktíras’) translates as “the character of evidence”. It refers to propositions that are clearly demonstrable or clearly undemonstrable. In predictive science, the character of evidence is important, seeing as how evidence of a future predicted will work backwards to establish the verity of the evidence used to predict that future.

And this is where we come to the crux of the problem: The future, being what it is—an enigma behind a veil across a sea of suppositions and desires—is also clearly undemonstrable. But is the future, though broadly knowable, specifically unpredictable? Perhaps, just like the past, the future is hidden by a swirling storm of pixellated data.

There is little point here in placing the tatters of Nostradamus under an electron microscope (the usual one having failed to spot anything convincing). Nor is there any point in pushing the Mayan Weltenende (end-of-the-world) scenario, which, having failed to polish humankind off in 2012, has now been reread to mean 2047.


Except that 2047 deserves a mention: It is the year (give or take a year or two) when everything is set to change irrevocably. The planetary climate regime: Camilo Mora of the University of Hawaii finds that by then climatic temperatures will be “unprecedented”. In The Singularity Is Near, Ray Kurzweil has runaway AI pegged at 2045—and it probably won’t be all good, although it might; who knows. Certainly, oil, according to a BP prediction, will last us until 2047. After that year, everything seems up for grabs. The problem with this is specificity—and the fact that previous scientific predictions have all bombed. Take oil:

  • In 1914, the US Bureau of Mines predicted a consolidated future production limit of 5.7 billion barrels of oil—a 10-year supply.
  • In 1939, the Department of the Interior predicted that oil reserves in the continental United States would last no more than 13 years.
  • In 1951—12 years later—the Department of the Interior’s oil and gas division extended its oil-end doomsday prediction by another 13 years.
  • In 1972, within four years of its founding, in its landmark publication, The Limits to Growth, the Club of Rome predicted: “The world will run out of oil by 1991.”
  • In 2007, BP’s Statistical Review of World Energy showed that the world had enough “proven” reserves to last another 40 years, at current rates of consumption.

Nobody who is not a right-wing nut and has not financially benefited from the open-handed largesse of oilcos is in any doubt that these catastrophes lie in wait in humankind’s future. But, equally, nobody knows when it will all come to pass.

***

TEOTWAWKI (The End Of The World As We Know It) is an absolute favourite of science fiction writers, readers, environmentalists, political scientists and wuthering wonks. It is also a favourite of data-believers such as I: We know the end of the world is nigh; we just don’t presume to know when ‘nigh’ is.


In an issue of Wired magazine (April 2000), Bill Joy, co-founder and chief scientist of Sun Microsystems, predicted the end of humankind by 2030, at the outside. The very cutting-edge technologies that Kurzweil cites as components of the Singularity—genetech, robotics, nanotech—Joy marked as harbingers of the human extinction event.

There is solid scientific grounding for the essential nonsensicality of prediction models. In their 2006 paper, ‘Callimachus: Avoiding the Pitfalls of XML for Collaborative Text Analysis’ (Literary and Linguistic Computing)—a follow-up to ‘Callimachus: A Virtual Archivist for Electronic Markup Projects’—Jeff Smith, Joel DeShaye and Peter Stoicheff managed to shoot down predictive models as inherently hamstrung by their reliance on past data to generate future data.

The Callimachus project at the University of Saskatchewan was conceived as a way to merge “the robust scalability of formal database technologies with the expressive power and humanist-friendly accessibility of HTML and XML schema”. It was first applied to a hypertext of William Faulkner’s The Sound and the Fury.


“Callimachus was designed to allow the possibility of using a true database application to mark up Faulkner’s 1929 novel (The Sound and the Fury) on a token (or word) level.... With this custom database and interface, we can discover when and how a concept appears in the novel. We can discover which characters dwell on what concepts and to what extent. We can discover how many words (how much narrative space) characters use when talking or thinking about specific topics. We can display relationships with charts and graphs computed with any combination of variables.... Faulkner, telling this part of the story through the mind of an idiot, normally provides only obscure clues to mark the mnemonic flashing from one event to another. The narrative does not follow the chronological sequence of events in the novel. However, using HTML and JavaScript to tag each event, we built an interface that links events in the narrative sequence with events in a chronologically correct version of the text. For the first time, readers could reorient themselves in the chronology by clicking a button, leaving behind the much more confusing original narrative. The hypertext edition helped us clarify our understanding of the novel and yielded some surprising results. We knew that the idiot narrator, named Benjy, would relive an event (such as his grandmother’s death), would trigger a sequence of flashbacks, and would often repeatedly return to that initial event. Benjy’s memory of his grandmother’s death is interrupted 17 times by other flashbacks. When we isolate this event from the interruptions, we notice that it is transmitted chronologically. Hidden in the chaos of so many relived events are small, coherent, chronological narratives.” (Vincent Neyt, Review of Stoicheff, Muri, Deshaye, et al (eds): The Sound and the Fury: A Hypertext Edition; Literary and Linguistic Computing, 2004)
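The mechanics the review describes (tagging each narrative fragment with both its position in the telling and its position in story time, then letting the reader flip between the two orderings) can be sketched in a few lines. The toy model below is written in Python rather than the project’s HTML and JavaScript, and the event records are invented for illustration; it is a sketch of the idea, not the Callimachus code.

```python
# Toy model of the hypertext edition's event tagging (illustrative only;
# the fragments and positions below are invented, not Faulkner's text).
from dataclasses import dataclass

@dataclass
class Fragment:
    event_id: str       # which remembered episode the fragment belongs to
    narrative_pos: int  # order in which the narration presents it
    chrono_pos: int     # order in which it happened in story time
    text: str

fragments = [
    Fragment("damuddy_death", 3, 1, "..."),
    Fragment("branch_scene", 1, 2, "..."),
    Fragment("damuddy_death", 7, 4, "..."),   # interrupted, then resumed later
    Fragment("golf_present", 2, 9, "..."),
    Fragment("damuddy_death", 12, 5, "..."),
]

# "Reorient the reader in the chronology": re-sort every fragment by story time.
chronological = sorted(fragments, key=lambda f: f.chrono_pos)

# Isolate one relived event from its interruptions, as the project did with the
# grandmother's death, and check whether it unfolds chronologically on its own.
damuddy = sorted(
    (f for f in fragments if f.event_id == "damuddy_death"),
    key=lambda f: f.narrative_pos,
)
is_internally_chronological = all(
    a.chrono_pos < b.chrono_pos for a, b in zip(damuddy, damuddy[1:])
)
print(is_internally_chronological)  # True: a coherent micro-narrative hidden in the chaos
```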

It all reads well—except that two years after this review was published, Smith, DeShaye and Stoicheff went on to coin the now ubiquitous term, “the fallacy of prescience”. It means that the Callimachus project used iffy predictive mechanicals—making inferences about ideas and micro-narratives yet to be ‘discovered’ in order to build the very schema, the mechanism, needed to describe them.

In other words, the mechanism of prediction predicts the future. If the basis of the mechanism is ballsed-up, one can expect the reading of the future to be muddled, too. (It gets worse if we bring into the picture Big Data, the much-glorified basis of predictive science today—because few things muddle the statistical waters like Big Data.)

The fallacy of prescience, of course, has illustrious foreparents, all of them applicable, in one form or another, to speculative fiction and non-speculative, ‘predictive’ science. The fallacy of circular reasoning (circulus in probando) occurs when “the reasoner begins with what he or she is trying to end up with”. The fallacy of the irrelevant conclusion (ignoratio elenchi) is an argument that prima facie seems to address an issue head-on but, in fact, dodges and ducks like crazy. It is a standing favourite on social media. This fallacy partakes hungrily of the gambler’s fallacy, which holds that because something has occurred often within a given period, it will not occur in future; or because it has not occurred frequently, it is bound to happen. Murphy’s law of the raw deal—“If something rotten can happen, it will”—is an extension of the gambler’s fallacy. As is the ‘extinction event prediction’, which holds that because there have already been five extinction events, another (the Homocene, or Anthropocene, event) is just about here.
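The gambler’s branch of that family tree is easy to put to a numerical test. The sketch below (Python; the coin-flip simulation, function name and parameters are mine, invented for illustration) estimates the probability of heads on the flip that immediately follows a run of five heads. It comes out at roughly 0.5, exactly as if the streak had never happened.

```python
import random

random.seed(42)  # reproducible illustration

def heads_after_streak(streak_len=5, flips=2_000_000):
    """Estimate P(heads) on the flip that immediately follows a run of
    `streak_len` consecutive heads in one long sequence of fair flips."""
    streak = 0
    follow_ups = heads = 0
    for _ in range(flips):
        is_heads = random.random() < 0.5
        if streak >= streak_len:   # the preceding flips formed a full streak
            follow_ups += 1
            heads += is_heads
        streak = streak + 1 if is_heads else 0
    return heads / follow_ups

print(round(heads_after_streak(), 3))  # ~0.5: the streak tells us nothing
```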

In a paper, ‘Accelerated modern human-induced species losses: Entering the sixth mass extinction’ (Science Advances, June 2015), the authors, including Stanford biologist Paul Ehrlich, wrote that “the number of species that have gone extinct in the last century would have taken, depending on the vertebrate taxon, between 800 and 10,000 years to disappear”. The paper went on to emphasise: “These estimates reveal an exceptionally rapid loss of biodiversity over the last few centuries, indicating that a sixth mass extinction is already under way.”

But, as ever, arithmetic does us no good. In the paper quoted above, positing a period—which is nominal, anyway—of “between 800 and 10,000 years” is equal to scientific whatiffery. There is no denying that something earth-altering is under way—geosocietal anecdotal evidence is on firmer ground here than scientific evidence—but there is no saying what shape it will eventually take or when it will happen.

This is where another fallacy—a superfallacy, by any yardstick—comes into play: Nassim Nicholas Taleb’s ‘Ludic Fallacy’—explicated in his 2007 book, The Black Swan—which is “the misuse of games to model real-life situations”. This is something that Taleb figured out on the back of an envelope in a casino. Not surprisingly, then, Taleb called his Ludic Fallacy the act of “basing studies of chance on the narrow world of games and dice” (which, being one of a kind with Einstein’s huffy “God does not play dice with the universe”, makes one wonder if scientists are not, fundamentally, gamblers).

The adjective ludic originates from the Latin noun ludus (play, sport, pastime; plural ludi, games). Ludus also means prank, jest, and diversion, so ‘the ludic fallacy’ might actually be more critical in both intent and content than is usually presumed.

In an earlier book, Fooled by Randomness: The Hidden Role of Chance in Life and the Markets (2001), Taleb had pretty much definitively established that predictive qualities are often perceived—but not present—in “random patterns”, formed by pure chance.
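A toy simulation makes the point concrete. In the sketch below (Python; the numbers and names are arbitrary and mine, not Taleb’s), ten thousand ‘fund managers’ each beat the market in a given year with a fair-coin probability of 0.5 and no skill whatsoever; a handful of them will still post a perfect ten-year record, a ‘pattern’ that looks like foresight and is nothing but chance.

```python
import random

random.seed(7)  # reproducible illustration

MANAGERS, YEARS = 10_000, 10

# Each "manager" is a pure coin-flipper: every year they beat the market
# with probability 0.5, independently of everything else.
perfect_records = sum(
    all(random.random() < 0.5 for _ in range(YEARS))
    for _ in range(MANAGERS)
)
expected = MANAGERS * 0.5 ** YEARS
print(f"{perfect_records} of {MANAGERS} skill-free managers beat the market "
      f"{YEARS} years running (expected about {expected:.0f})")
```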

Explaining Taleb, François Daniel Sicart, founder-chairman of Tocqueville Management Corporation, wrote (2007) that “a generalisation such as ‘all swans are white’ merely means that, up to now, only white swans have been observed. But it is enough for one black swan to appear for this conclusion to become entirely false. The odds of encountering a white swan are then irrevocably altered. Unfortunately, until a black swan has been observed, no amount of information collected about white swans can help us assess the odds of there being black swans.”

And there’s the point: Until black swans were ‘discovered’ in Australia in 1697, no one knew they existed, because nobody had seen them before. Thereafter, however, Cygnus atratus became part of—of all things—European culture: black swans became emblematically common. Suddenly, everyone knew someone who knew someone who had seen or kept a flock.

Taleb set a fox among the hens. He ripped apart the book on predictive science. He separated scientific predictability from its prime motivator: human expectation. And he established the “non-computability of the probability of the consequential rare events using scientific methods”.

Taleb’s prescription worked, even in hindsight. In December 1995, in a well-known piece for InfoWorld, Robert Melancton ‘Bob’ Metcalfe predicted that, in 1996, the internet would collapse under “gigalapses” caused by massive data traffic jams. As co-inventor of Ethernet and founder of 3Com, Metcalfe wasn’t fooling around with rubbish doomsaying: the state of the Net had, indeed, raised flags among the Big Data doyens of the digitalverse.

And, yet, no collapse came to pass. If anything, the internet only grew stronger. Among other things, the internet arrived in India on August 15, 1995.

And, at the Sixth Annual World Wide Web Conference in 1997, Metcalfe ate his words—literally, by putting his article in a blender and downing it.

***

Almost everything in modern society is about predictive analysis, or how to beat the odds: education (what must I study today to land a good job a decade hence?); jobs (what must I do today to ensure promotion to a great position five years hence?); health (what vitamins must I pop to be in perfect health when my peers are falling like ninepins all around me?); marketing (I gotta go intercontinental two years from now, so what data do I gather and how must I crunch it?); human expansion (next stop Mars: where’s the astrophysical data?); population (if the census is so exact, why is it always wrong?); development policies (why can’t I get the right data to formulate working decadal policies?). Et cetera.

All these hairy decisions require some degree of data-crunching. The problems become hairier when the data become too many and the crunching too complex. All development macro-policies are, in essence, predictive: they need, and utilise, mountains of data in order to ensure relevancy sans cesse. And, yet, development policies always implode, because Big Data—which policies depend upon for direction, formulation and execution—hasn’t served them well in terms of predicting the future.

For the issue with Big Data is small data; the issue with small data is micro-data. And the Achilles’ heel of micro-data is Datenfälschung—data-falsification, whether deliberate or inadvertent. A smidgen of data-diddling can cause an avalanche of falling dominoes. What the world is facing today is—to use the word immortalised by Bret Swanson—an “exaflood” of both raw and doctored data. Meanwhile, Big Data has been getting a proper knocking as a vehicle for, well, anything.

Speaking before the US House of Representatives Subcommittee on Technology, Information Policy, Intergovernmental Relations, and the Census (March 25, 2003), Jen Que Louie, president of Nautilus Systems Inc, pointed out four data-mining fallacies:

  1. Data-mining tools set free will accomplish everything. (No such self-propelled data-mining tools exist.)
  2. The data-mining process works independently of human oversight. (It doesn’t. No software does—yet.)
  3. Intuition can be built into data-mining software. (Not yet, and perhaps never.)
  4. Once set loose, data-mining pays its own way. (It doesn’t: It needs periodic infusions of upgrades.)

Add to these fallacies two others:

  1. Data-mining will automatically identify the problems with databases. (It won’t. It is not designed to.)
  2. Data-mining can clean up fuggled databases. (It cannot. Data-mining works with what it is given.)

In effect, nothing—no data, no science, no help from (putative) gods—quite helps predict the future. Humankind is stumbling forward more or less blind.

(Kajal Basu is a senior journalist.)
