Sunday, June 17, 2012

15 June 2011 – Nonlinearity and the indispensability of data

Colleagues,

For some years now, I’ve been beaking off (and, from time to time, writing) about the unreliability of methodologies designed to enable analysis of “alternative futures” as a means of informing the policy and force development and planning processes.  I don’t object to the whole “speculative fiction” part of the enterprise; as an avid sci-fi fan who traces his fascination with “what-if” literature all the way back to Danny Dunn and the Automatic House, I love a good yarn, and the more convincingly-told, the better.  Heck, Asimov’s Foundation series is predicated on the idea that rigorous, methodologically valid science (“psychohistory”) will someday be able to predict the human future; and some of Robert Heinlein’s most famous narratives derive from his “future history” series of novels.

My problem with the present mania for FSA derives from a methodological concern that can be summed up in the question, “how do we get there from here?”  The answer is simple: you can either use some form of projection of existing trends, or you can guess.  I won’t bother addressing the latter, as there’s no methodology to critique.  In trend assessment, however, the methodologies boil down to two: projection of historical trends, or modelling with varying degrees of rigour.  Both methodologies are crippled by the same flaw: the fact that non-linear trends are virtually impossible to project or model, because they tend to be interdependent (often with trends about which we may know little or nothing), and because they may be exquisitely sensitive to minute changes in initial conditions.  Edward Lorenz sketched out the first inklings of what we refer to today as “chaos theory” when he attempted to model weather patterns for purposes of prediction in the 1960s.  He noticed that infinitesimal changes in the starting conditions of an experiment can quickly lead to vast divergences between sequential experimental runs.

Here’s an example.  One of the simplest equations that demonstrates instability in response to a slight change in initial conditions is the “logistic equation”, which in a mapping function can be expressed as:

X_{n+1} = R·X_n·(1 - X_n)   [Note A]

Setting the value of R to 4 (any higher value quickly drives the iterates out of the interval between 0 and 1, after which they run away without bound), let’s look at two curves, both of which follow this equation through 100 iterations for a starting value of 0.300000:

Remember, there are two curves on that chart - a blue one and a pink one.  We only see the pink one because the two curves are identical - they follow the same equation from the same initial conditions.  However, if we alter the initial conditions for the two curves only very slightly, we see something quite different.

By increasing the starting value for the second curve by only 0.000001, or three ten-thousandths of a percent, the result is a line which, although it closely resembles the original line for the first dozen or so iterations, begins noticeably to diverge by the 14th or 15th iteration.  By the 22nd iteration, the curves are virtually opposed; the value for the first (blue) curve is 0.94, and for the second (pink) curve, 0.03.  Beyond this point, there is no noticeable similarity between the curves.  A 30% divergence between the curves shows up at iteration 19, and a divergence of more than 100% shows up at iteration 21.
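For anyone who wants to reproduce the experiment rather than take the charts on faith, a few lines of Python will do the trick.  This is just an illustrative sketch (not the tool used to generate the charts above), and because the exact iteration at which each threshold is crossed depends on floating-point rounding, the numbers it prints may differ slightly from the figures quoted here.

```python
# Iterate the logistic equation X_{n+1} = R*X_n*(1 - X_n) with R = 4 from two nearly
# identical starting values, and report when the trajectories first diverge by 30%
# and then by 100% of the starting value.

R = 4.0

def logistic_trajectory(x0, steps=100):
    """Return [x0, x1, ..., x_steps] for the logistic map."""
    xs = [x0]
    for _ in range(steps):
        xs.append(R * xs[-1] * (1.0 - xs[-1]))
    return xs

blue = logistic_trajectory(0.300000)
pink = logistic_trajectory(0.300001)   # starting value nudged upward by 0.000001

for fraction in (0.30, 1.00):          # 30% and 100% of the starting value
    threshold = fraction * 0.3
    n = next((i for i, (a, b) in enumerate(zip(blue, pink))
              if abs(a - b) > threshold), None)
    print(f"divergence first exceeds {fraction:.0%} of the starting value at iteration {n}")
```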

And what happens if we increase the starting value by another three ten-thousandths of a percent?  Check it out:

Once again, the curves are indistinguishable for the first dozen or so iterations, but by the 15th, they are beginning to diverge, and thereafter their behaviour appears to be entirely independent and offers very different results.  For example, at the 19th iteration, the curves come very close to each other, with values of 0.97, 0.99, and 0.98; but only three iterations later, their values are far apart again, at 0.94, 0.03, and 0.67.

This is a demonstration of the sensitivity to initial conditions displayed by nonlinear phenomena.[Note B]  This was Lorenz’s key insight: that because weather systems are chaotic and nonlinear, their long-term behaviour is exquisitely dependent upon minute variations in starting conditions, and attempts to predict that behaviour therefore become rapidly less reliable the further ahead they look.  The nonlinearity of the system means that variations in starting conditions that are so small as to be undetectable can very quickly result in massive differences in outcome.  This is why weather forecasts are more or less reliable a day or two into the future, pretty spotty about a week out, and utterly worthless beyond two weeks.  No matter how good your understanding of the equations, or how minutely you are able to measure initial conditions, predictive accuracy rapidly goes out the window because the system is nonlinear.

To further illustrate the problem, imagine we were able to achieve a one hundred-fold increase in our ability to accurately measure initial conditions; i.e., the difference between the starting values for the two curves is now 0.00000001, or, in the case of the next chart, three millionths of one percent.

Once again, the two curves are very similar through the first few iterations - in this case, up to about iteration #17.  The first serious divergence (>50% of the starting value) shows up at iteration 25.  By iteration 27, the divergence exceeds 100% of the starting value, and by iteration 29, the divergence is close to 300%.  Comparing this to the first divergence chart, we find that when the starting conditions differed by only three ten-thousandths of a percent, a 100% divergence first showed up at iteration 20.  This means that a 100-fold improvement in one’s ability to measure initial conditions yields only 7 more iterations (from 20 to 27) before divergence reaches 100% and predictive confidence consequently goes right out the window.  What this means is that even the most enormous and technologically unattainable improvements in our ability to measure the initial conditions of a nonlinear system will yield only tiny incremental increases in predictive utility, because the system by its very nature is nonlinear.
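There’s a simple back-of-the-envelope way to see why the payoff is so meagre.  For the logistic equation with R set to 4, a small error roughly doubles with every iteration (in the jargon, the Lyapunov exponent is ln 2), so the number of usable iterations grows only with the logarithm of your measurement precision.  A quick sketch of the arithmetic - offered as an illustration, not as anything taken from the charts above:

```python
import math

# For the logistic map at R = 4, a small initial error roughly doubles each iteration,
# so it reaches the size of the signal after about log2(signal / initial_error) steps.

x0 = 0.3
for delta in (1e-6, 1e-8):
    horizon = math.log2(x0 / delta)   # iterations before the error is as large as x0 itself
    print(f"initial error {delta:g}: roughly {horizon:.1f} usable iterations")

# Shrinking the initial error 100-fold therefore buys only log2(100), or about 6.6,
# extra iterations - consistent with the 7-iteration gain described above.
print(f"gain from a 100-fold improvement: about {math.log2(100):.1f} iterations")
```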

The bottom line is that anyone who claims predictive accuracy in long-term modelling of the behaviour of nonlinear systems isn’t doing science; they’re doing something else.

Why does this matter?  Because it demonstrates a crippling weakness inherent in attempts to model the future behaviour of nonlinear systems, and it highlights the far greater predictive validity of methodologies that employ historical trend projection based on observed data.  I mention this because yesterday, researchers speaking at a meeting of the American Astronomical Society’s (AAS) Solar Physics Division dropped a fairly significant bombshell when they stated that all major empirical measures of what’s going on with the Sun suggest that we are in for a major and prolonged decline in solar activity, leading to cooling likely to last for decades, with consequent impacts on Earth’s environment.[Note C]

The Sun exhibits a number of important cycles; the one most folks are familiar with is the 11-year sunspot cycle.  We’re currently in the midst of Cycle 24 (the numbered cycles begin with Cycle 1 in 1755, although telescopic sunspot records stretch back to the early 17th Century), and the Sun has been working up towards the peak of that cycle, exhibiting (for example) increasing if somewhat erratic sunspot activity.

However, this is only one of the Sun’s major cycles; the overall trend in sunspot activity and irradiance is beginning to decline.  First, the average magnetic field strength of sunspots is declining, and when it drops below 1500 Gauss, sunspots will no longer be visible on the Sun’s surface.  Based on observed data, that’s likely to happen in the next decade or so.
Second, the polar jetstreams associated with sunspot formation in a given cycle begin to form in the preceding cycle, and at present, the jetstreams that should herald Cycle 25 are absent.  Third, the surge of magnetic activity that normally migrates towards the Sun’s poles as a cycle approaches its maximum - the “rush to the poles” - has been unusually weak and slow in Cycle 24.


The absence of polar jetstreams points to a very weak, possibly missing, Cycle 25.  The combination of all three phenomena - missing jetstreams, declining magnetic field strength in sunspots, and lower activity near the solar poles - suggests that even as the Sun becomes more active on its way to the peak of Cycle 24, it is almost certainly heading for a very weak Cycle 25, and very likely for a long-term period of dormancy.  Richard Altrock, the manager of the US Air Force’s Coronal Research Programme, put it this way:

“Cycle 24 started out late and slow and may not be strong enough to create a rush to the poles, indicating we’ll see a very weak solar maximum in 2013, if at all. If the rush to the poles fails to complete, this creates a tremendous dilemma for the theorists, as it would mean that Cycle 23’s magnetic field will not completely disappear from the polar regions (the rush to the poles accomplishes this feat). No one knows what the Sun will do in that case.”

All three of these lines of research point to the familiar sunspot cycle shutting down for a while.

Dr. Frank Hill of the US National Solar Observatory had this to say about the observed results and analysis:

“This is highly unusual and unexpected,” Dr. Frank Hill, associate director of the NSO’s Solar Synoptic Network, said of the results. “But the fact that three completely different views of the Sun point in the same direction is a powerful indicator that the sunspot cycle may be going into hibernation.”

[...]

“If we are right,” Hill concluded, “this could be the last solar maximum we’ll see for a few decades. That would affect everything from space exploration to Earth’s climate.”

Of course, this is only a problem if you happen to believe that “solar forcings” spawned by that big ball of fusion in the sky have anything to do with Earth’s climate.  The IPCC has dismissed solar forcing effects as a potential driver, calling the impact of the Sun on Earth’s climate “very small compared to the differences in radiative forcing estimated to have resulted from human activities.”[Note D]

If the IPCC says the Sun doesn’t matter, then should we really be concerned?  Well, the AAS results aren’t the only bit of research to come out recently.  Harking back to my first point - that it’s really hard to figure out where you’re going if you have no idea where you’ve been - I was fascinated by a new paper out of Brown University last week, which, using lake sediment core data, attributes the extinction of the medieval Viking settlements in Greenland to “rapid climate change” - i.e., cooling.  It’s not the first time that this has been posited, but it IS the first time that somebody has supported the argument with local empirical data.

According to this paper [Note E], beginning around 1100 AD, temperatures in the lakes around the Viking settlements dropped by 4 degrees centigrade over a period of about 80 years - a cooling rate of roughly 5 degrees per century (observe that Al Gore and the IPCC are sounding the tocsin because the average global temperature has risen about 0.6 degrees centigrade over the past 100 years):

“You have an interval when the summers are long and balmy and you build up the size of your farm, and then suddenly year after year, you go into this cooling trend, and the summers are getting shorter and colder and you can’t make as much hay. You can imagine how that particular lifestyle may not be able to make it,” D’Andrea said.

Archaeological and written records show the Western Settlement persisted until sometime around the mid-1300s. The Eastern Settlement is believed to have vanished in the first two decades of the 1400s.

What was going on?  Well, the Medieval Warm Period - which enabled the Viking settlements to be established in the first place in the 10th Century, and which the more ardent proponents of the anthropogenic global warming thesis have tried so valiantly to make disappear (see the controversy over Michael Mann’s infamous “Hockey Stick” graph, which purported to show that global temperatures were unchanging for the past 1000 years until humans started burning lots of oil in the 1900s) - was followed by the Little Ice Age.  While all civilizations suffer from cooling, it’s important to note that the ones that went extinct were not the civilizations with high technology or excess capacity or forgiving climates, but rather those that existed on the outskirts of more clement regions, surviving on the narrowest margins of crop and livestock viability.

A 10% reduction in the length of a growing season might not be important in a tropical country, but as we’ve seen in the past few weeks, with millions of acres in the US Midwest going unplanted due to unseasonal cold and rains, it can make the difference between subsistence and catastrophe.  Look to see food prices skyrocketing this fall, by the way - and this time, it won’t just be because the US government has mandated ethanol inclusions in motor gasoline.  It’ll be because the cold, wet spring prevented grain from being planted in the optimal growing season in the countries that traditionally produce a surplus of grains.  When you live where Canadians and Americans live, or where the Greenland Vikings lived, “global warming” is not the problem, especially when it isn’t happening.  Global cooling is.

And what causes cooling?  Well, climate, as noted above, is a complex interdependent system, so it’s probably the result of a lot of interrelated things.  But the only phenomenon that the long-term cooling trends have been demonstrated to correlate with is lowered solar activity (and, at much longer time scales, the Earth’s axial tilt, and the passage of the Solar System through the galactic spiral arms - but that’s a topic for another day).  The Little Ice Age correlated with a stark decline in solar activity that culminated with the Maunder Minimum, when the Sun’s visible activity all but ceased for more than half a century.  During the Medieval Warm Period, when the Vikings colonized Greenland, grapes grew in England; during the Little Ice Age, which killed off the Greenland settlements, Britons regularly walked across the frozen Thames.  History is a great teacher, especially when it correlates with what we know about the Sun’s behaviour over the past four centuries.

That’s NASA data, by the way - and the reason it doesn’t go beyond 2007 is that sunspot numbers show another massive plunge that’s something of an “inconvenient truth” for those insistent upon fingering human-produced carbon dioxide as the principal culprit in climate change.  You see, carbon dioxide concentrations have been climbing spectacularly, but both temperatures and solar activity have been declining.  Correlation may not be causation, but it certainly suggests where you ought to be looking for causal relationships.

If the sunspot cycle, as historical trends and observed data suggest, is indeed going “into hibernation” for a prolonged period, then the implications for mankind are significant.  If you want to see “negative impacts of climate change”, you won’t have to wait for the postulated four-degree-per-century warming that the IPCC says will happen (and which isn’t happening); you simply have to wait for the cooling that measured solar data suggest is on the way, and that temperature measurements tell us has been going on for more than a decade:

Comparing the IPCC’s low, “best”, and high predictions for future temperatures to actual measured temperatures gives us a pretty good idea of how useful models really are when the modellers don’t understand the system they’re attempting to model.

Or to put it a different way, as Steve Goddard likes to do:


This shows James Hansen's temperature predictions based on CO2 concentration in the atmosphere.  The solid black line shows "business as usual" CO2 emissions (actual emissions have been even higher than that), while the Scenario C line shows, roughly, what would happen if human beings had disappeared from the planet in the year 2000.  The red and blue lines track measured temperatures.  In other words, measured temperatures are now lower than Hansen predicted they would be even if we had drastically curtailed CO2 emissions 12 years ago - which we know did not happen.

Empirical data demonstrate that there is no correlation, and therefore no possibility of a causal linkage, between human-produced CO2 and global temperatures.  There is simply no data to support the principal contention of the anthropogenic global warming theorists.

It comes down to a choice between basing your analysis on something concrete, like measured data, and basing it on something artificial, like model outputs.  In my opinion, our responsibility as strategic analysts is to ground our guesses about the future in empirical data and the parsimonious, conservative projection of demonstrable historical trends, always acknowledging that nonlinear systems - whether the logistic equation, climate, or human society - are exquisitely sensitive to minor variations in initial conditions, and that predictive confidence therefore drops off rapidly the further out we look.  Sure, using actual data and acknowledging the constraints of this kind of methodology is a lot more challenging than just citing the IPCC, the Stern Report, or second- and third-hand analytical derivations penned by organizations of dubious scientific credentials and objectivity (it’s also harder than simply making stuff up, which is both easy AND fun); but working from hard data has the virtue of being methodologically rigorous, and the added bonus that you might possibly be proven right by design rather than simply by luck.

Because that last chart shows what happens when people rely on models of nonlinear systems to try and predict the future.  It doesn't work.

Which sort of brings us to the last principle of trend analysis: when the data don’t confirm our predictions, which do we change? The data?  Or the way we make predictions?  How you answer that question determines whether you’re doing science...or something else.

Cheers,

B) There’s an excellent discussion, using some of the same examples, of how chaos theory invalidates the AGW hypothesis, here [http://wattsupwiththat.com/2011/06/13/the-chaos-theoretic-argument-that-undermines-climate-change-modelling/#more-41556].

C) [http://wattsupwiththat.com/2011/06/14/all-three-of-these-lines-of-research-to-point-to-the-familiar-sunspot-cycle-shutting-down-for-a-while/]

D) IPCC 4th Assessment Report, Report of Working Group 1, Chapter 2, p. 137.

E) [http://news.brown.edu/pressreleases/2011/05/vikings]