Climate models for the layman

The Global Warming Policy Foundation, a British think tank, has just published an excellent review of climate models and their problems and uncertainties, all of which show that the models are inadequate for policy formulation. The paper is written by Dr. Judith Curry, the author of over 180 scientific papers on weather and climate. She recently retired from the Georgia Institute of Technology, where she was Professor and Chair of the School of Earth and Atmospheric Sciences. She is currently President of the Climate Forecast Applications Network.

You can read the 30-page paper here:

Here is the executive summary:

There is considerable debate over the fidelity and utility of global climate models (GCMs). This debate occurs within the community of climate scientists, who disagree about the amount of weight to give to climate models relative to observational analyses. GCM outputs are also used by economists, regulatory agencies and policy makers, so GCMs have received considerable scrutiny from a broader community of scientists, engineers, software experts, and philosophers of science. This report attempts to describe the debate surrounding GCMs to an educated but nontechnical audience.

Key summary points

• GCMs have not been subject to the rigorous verification and validation that is the norm for engineering and regulatory science.

• There are valid concerns about a fundamental lack of predictability in the complex nonlinear climate system.

• There are numerous arguments supporting the conclusion that climate models are not fit for the purpose of identifying with high confidence the proportion of the 20th century warming that was human-caused as opposed to natural.

• There is growing evidence that climate models predict too much warming from increased atmospheric carbon dioxide.

• The climate model simulation results for the 21st century reported by the Intergovernmental Panel on Climate Change (IPCC) do not include key elements of climate variability, and hence are not useful as projections for how the 21st century climate will actually evolve.

Climate models are useful tools for conducting scientific research to understand the climate system. However, the above points support the conclusion that current GCMs are not fit for the purpose of attributing the causes of 20th century warming or for predicting global or regional climate change on timescales of decades to centuries, with any high level of confidence. By extension, GCMs are not fit for the purpose of justifying political policies to fundamentally alter world social, economic and energy systems. It is this application of climate model results that fuels the vociferousness of the debate surrounding climate models.

Another climate model failure – there were no unusual extremes in drought or rainfall in the 20th century

It has been a tenet of the UN’s “authoritative consensus” that global warming will cause more weather extremes, such as dry areas becoming drier and wet areas becoming wetter. (Amazing, isn’t it, how global warming can cause both drought and extreme rainfall?) Alas, a new study compared the rainfall and drought conditions of the 20th century to those of the past 1,200 years in the Northern Hemisphere and found that extremes of rainfall and drought were much more prevalent in both warmer and cooler periods of the past. The study was based upon geologically preserved evidence of stream flow, lake levels, marine and lake sediments, tree rings, and historical records. Could it be that something other than CO2 and temperature is causing these conditions?

Here is a summary of the paper written by the authors (edited for readability):

Accurate modeling and prediction of the local to continental-scale hydroclimate response to global warming is essential given the strong impact of hydroclimate on ecosystem functioning, crop yields, water resources, and economic security. However, uncertainty in hydroclimate projections remains large, in part due to the short length of instrumental measurements available with which to assess climate models.

Here we present a spatial reconstruction of hydroclimate variability over the past twelve centuries across the Northern Hemisphere derived from a network of 196 at least millennium-long proxy records. We use this reconstruction to place recent hydrological changes and future precipitation scenarios in a long-term context of spatially resolved and temporally persistent hydroclimate patterns.

We find a larger percentage of land area with relatively wetter conditions in the ninth to eleventh and the twentieth centuries, whereas drier conditions are more widespread between the twelfth and nineteenth centuries. Our reconstruction reveals that prominent seesaw patterns of alternating moisture regimes observed in instrumental data across the Mediterranean, western USA, and China have operated consistently over the past twelve centuries.

Using an updated compilation of 128 temperature proxy records, we assess the relationship between the reconstructed centennial-scale Northern Hemisphere hydroclimate and temperature variability. Even though dry and wet conditions occurred over extensive areas under both warm and cold climate regimes, a statistically significant co-variability of hydroclimate and temperature is evident for particular regions. We compare the reconstructed hydroclimate anomalies with coupled atmosphere–ocean general circulation model simulations and find reasonable agreement during pre-industrial times. However, the intensification of the twentieth-century-mean hydroclimate anomalies in the simulations, as compared to previous centuries, is not supported by our new multi-proxy reconstruction. This finding suggests that much work remains before we can model hydroclimate variability accurately, and highlights the importance of using palaeoclimate data to place recent and predicted hydroclimate changes in a millennium-long context.

The study is:

Fredrik Charpentier Ljungqvist, Paul J. Krusic, Hanna S. Sundqvist, Eduardo Zorita, Gudrun Brattström, David Frank. Northern Hemisphere hydroclimate variability over the past twelve centuries. Nature, 2016; 532 (7597): 94 DOI: 10.1038/nature17418

These results should not be a surprise to those who deal with real data rather than output of computer models.

For instance, in the paper Verosub, K. 2015. Don’t worry about climate change; California’s natural climate variability will probably “get us” first. Quaternary International 387: 148, the author reports that “at least once and probably several times in the last few thousand years, there have been droughts severe enough to drop the level of Lake Tahoe by several tens of meters, which allowed Douglas fir trees to grow to maturity on exposed lake beds.” Furthermore, other data indicate episodes of extreme flooding, such as the water year of 1861-1862, which brought extensive rainfall from Oregon down through southern California, and “given the historic periodicity of these events, there would be no way to prove that they weren’t natural.” (source)

By the way: the frequency of 90-degree days and 100-degree days has plummeted across most of the US since the 1930s (see graphs).

[Graphs: percent of USHCN days over 90°F; percent of USHCN stations over 100°F]


See also: NOAA can’t find link between global warming and extreme weather

Failure of climate models shows that carbon dioxide does not drive global temperature

More evidence that climate models are wrong

“It doesn’t matter how beautiful your theory is; it doesn’t matter how smart you are. If it doesn’t agree with experiment, it’s wrong.” – Richard Feynman

“No amount of experimentation can ever prove me right; a single experiment can prove me wrong.” – Albert Einstein

Dr. Roy Spencer and Dr. John Christy have published a graph comparing the predictions of 73 climate models against radiosonde and satellite observations of tropical mid-troposphere temperatures. On the graph, the “spaghetti” lines are the model predictions and the heavy black line is the average of the models. Actual observed temperatures (boxes and circles) from four balloon-borne radiosonde data sets and two satellite data sets are shown to be lower than even the lowest model prediction since 1998, and lower than the average model prediction since 1979.
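The model-versus-observation comparison in such a graph boils down to fitting linear trends to temperature series. A minimal sketch of that calculation, using ordinary least squares and purely hypothetical anomaly numbers (not the actual Spencer-Christy data):

```python
def linear_trend(years, values):
    """Ordinary least-squares slope, in units of `values` per year."""
    n = len(years)
    mean_x = sum(years) / n
    mean_y = sum(values) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(years, values))
    den = sum((x - mean_x) ** 2 for x in years)
    return num / den

# Hypothetical annual temperature anomalies (deg C), purely illustrative:
years = list(range(1979, 1989))
model_mean = [0.03 * (y - 1979) for y in years]  # a 0.30 deg C/decade model trend
observed = [0.01 * (y - 1979) for y in years]    # a 0.10 deg C/decade observed trend
print(round(linear_trend(years, model_mean), 3))  # 0.03 (deg C per year)
print(round(linear_trend(years, observed), 3))    # 0.01
```

A divergence of this kind is what the spaghetti plot summarizes: the slope of the model-average line exceeds the slope fitted to the observational series.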


It is obvious from the graph that model predictions diverge markedly from reality. Why? The models are programmed with a false assumption, namely, that carbon dioxide is a major driver of global temperature. That hypothesis further assumes that as carbon dioxide warms the atmosphere, more water will evaporate (water vapor is a much stronger greenhouse gas), thereby producing “an enhanced greenhouse effect,” i.e., a strong positive feedback.

It appears, however, that the feedback is very small and perhaps negative. The models are probably wrong because water vapor has a net negative feedback: clouds reflect sunlight, and water vapor removes heat by convection. NASA says that even carbon dioxide can act as a coolant at the top of the atmosphere; see here.
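The competing feedback claims can be made concrete with the standard linear-feedback relation, total warming = direct warming / (1 - f), where f > 0 amplifies the direct response and f < 0 damps it. A sketch, using the commonly cited ~1.1 °C no-feedback response to a CO2 doubling purely as an illustrative input:

```python
def equilibrium_warming(direct_warming_c, feedback_factor):
    """Linear feedback model: dT = dT0 / (1 - f).
    f > 0 amplifies the direct (no-feedback) warming; f < 0 damps it.
    Only valid for f < 1 (f >= 1 would imply a runaway response)."""
    if feedback_factor >= 1:
        raise ValueError("f >= 1: linear feedback model does not apply")
    return direct_warming_c / (1 - feedback_factor)

direct = 1.1  # commonly cited no-feedback CO2-doubling response, deg C (illustrative)
print(equilibrium_warming(direct, 0.5))   # strong positive feedback -> 2.2
print(equilibrium_warming(direct, 0.0))   # no feedback -> 1.1
print(equilibrium_warming(direct, -0.5))  # net negative feedback -> ~0.73
```

The whole dispute over climate sensitivity reduces, in this simplified picture, to the sign and size of f.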

We see also that although atmospheric carbon dioxide has been rising, global relative humidity and specific humidity have been decreasing, according to data from the NOAA Earth System Research Laboratory (see graphs below). That, too, contradicts the modeling assumption. Relative humidity is the percentage of water vapor in the air relative to the maximum possible water-vapor content at a given temperature. Specific humidity is the mass of water vapor per unit mass of air. See here for a more detailed explanation.
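For readers who want those definitions in operational form, here is a minimal sketch of how relative and specific humidity are computed from vapor pressure. The saturation vapor pressure uses the standard Tetens approximation; the specific numbers below are illustrative, not NOAA data:

```python
import math

def saturation_vapor_pressure(t_celsius):
    """Saturation vapor pressure in hPa over water (Tetens approximation)."""
    return 6.112 * math.exp(17.67 * t_celsius / (t_celsius + 243.5))

def relative_humidity(vapor_pressure_hpa, t_celsius):
    """Relative humidity (%): actual vapor pressure vs. the maximum
    possible at this temperature."""
    return 100.0 * vapor_pressure_hpa / saturation_vapor_pressure(t_celsius)

def specific_humidity(vapor_pressure_hpa, pressure_hpa):
    """Specific humidity (g of water vapor per kg of moist air);
    0.622 is the ratio of the molar masses of water and dry air."""
    eps = 0.622
    q = eps * vapor_pressure_hpa / (pressure_hpa - (1 - eps) * vapor_pressure_hpa)
    return 1000.0 * q  # kg/kg -> g/kg

# Illustrative parcel: 25 C, vapor pressure 15.8 hPa, surface pressure 1013 hPa
print(round(relative_humidity(15.8, 25.0)))        # ~50 %
print(round(specific_humidity(15.8, 1013.0), 1))   # ~9.8 g/kg
```

Note that specific humidity can rise even while relative humidity falls (if warming raises the saturation ceiling faster than the actual vapor content), which is why the two trends are plotted separately.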




The models are wrong because of wrong assumptions. Unfortunately, much government policy, and billions of dollars in spending, are based on these faulty assumptions.

Temperature and humidity are not the only failures of climate models.  See: As Floods Hit Eastern Germany, Recent Potsdam Climate Institute Model Warned Of Summertime “Water Shortages”!

A new paper in Science, the journal of the American Association for the Advancement of Science, says that models fail because they do not provide “an adequate description of basic processes like cloud formation, moist convection, and [air] mixing.”

See also:

A Basic Error in Climate Models

Failure of the Anthropogenic Global Warming Hypothesis

The Case Against the IPCC and Proponents of Dangerous Anthropological Global Warming

A Basic Error in Climate Models

Earlier this week the Obama administration put out a major report by the U.S. Global Change Research Program (USGCRP) that predicts dire consequences if we don’t curb carbon dioxide emissions. The report is timed to influence major bills in the House and Senate, and it was the object of many media headlines, including a gloom-and-doom story by Tom Beal in the Arizona Daily Star today.

However, the report is pure junk science because it cherry-picks data to conform to policy and ignores much of the evidence, even evidence published by USGCRP in 2006.

In science, a hypothesis is a working assumption that must be tested by observation and experiment, and changed according to new information. In the realm of climate change, the scientific method of objective observation has been too often replaced by an ideology devoid of objective inquiry, one that embraces only evidence which supports the hypothesis and ignores conflicting information.

Climate models used by the USGCRP and the UN’s Intergovernmental Panel on Climate Change (IPCC) make a basic assumption about carbon dioxide that is wrong. Their assumption is one of positive feedback.

Their hypothesis goes like this: carbon dioxide warms the oceans, which, in turn, causes water vapor to enter the atmosphere. Water vapor is a much stronger and more abundant “greenhouse” gas than carbon dioxide, so water vapor enhances the warming effect of carbon dioxide – a positive feedback.

The USGCRP and IPCC greenhouse models hold that the tropics should provide the most sensitive location for validating the models. According to the models, temperature trends (the rate of warming, not the absolute temperature) should increase by 200-300% with altitude, peaking at around 10 kilometers – a characteristic “fingerprint” of greenhouse warming. This “fingerprint” should look like the graph below.


This graph is from the report, U.S. Climate Change Science Program (CCSP) Synthesis and Assessment Product 1.1. 2006, Temperature Trends in the Lower Atmosphere. This graph was adopted by the IPCC and appeared in its most recent report in 2007. Note: CCSP is now known as The U.S. Global Change Research Program (USGCRP). This graph appears on page 21 of the new USGCRP report issued Tuesday (June 16, 2009): Global Climate Change Impacts in the U.S.
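The “200-300% with altitude” prediction is simply a multiplicative amplification of the surface warming rate. A trivial sketch, reading the claim as a 2x to 3x amplification and using a hypothetical surface trend:

```python
def predicted_upper_trend(surface_trend_c_per_decade, amplification):
    """Model 'hot spot' fingerprint: the tropical upper-troposphere warming
    rate expressed as a multiple of the surface warming rate."""
    return surface_trend_c_per_decade * amplification

surface = 0.13  # hypothetical surface trend, deg C per decade (illustrative)
low = predicted_upper_trend(surface, 2.0)
high = predicted_upper_trend(surface, 3.0)
print(round(low, 2), round(high, 2))  # 0.26 0.39
```

On this reading, the fingerprint test asks whether the upper-troposphere trend actually falls in that amplified range, or merely matches (or undershoots) the surface trend.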

The graph above shows the hypothesis. The graph below shows the reality. Measurements from balloon-borne radiosondes and from satellites show no increasing temperature trend with altitude.


Real observations show that the model-predicted “fingerprint” of anthropogenic, greenhouse warming is absent in nature.

[Additional source: Douglass, D.H. et al. 2007, A comparison of tropical temperature trends with model predictions, International Journal of Climatology DOI:10.1002/joc.1651].

The “no greenhouse signature” graph above appeared in the 2006 CCSP report but is missing from the new USGCRP report, perhaps because it conflicts with current policy.

This result of hypothesis versus real observations means that the models are wrong and greenhouse gases are not responsible for any significant 20th Century warming.

The basic error was the assumption of positive feedback. In actuality, increased water vapor in the atmosphere produces clouds, which reflect sunlight back into space and thus have a cooling effect. This negative feedback, reflection, is, according to actual measurements, much stronger than the hypothesized positive feedback.

The USGCRP and IPCC modelers put out many scary scenarios from the “what if” games they play on computers, but all the scenarios are fatally flawed because of the erroneous assumption.

This point is emphasized by Dr. John Christy, Professor of Atmospheric Science and Director of the Earth System Science Center at the University of Alabama in Huntsville (and IPCC Nobel laureate). “We have seen a rise in surface temperature, but whether or not that is due to CO2 is subject to debate. However, both satellite and radiosonde measurements show that rise in tropospheric temperature has been less than half of the surface temperature rise, not more as predicted.” “This is important,” says Christy, “because the quantity examined here, lower tropospheric temperature, is not a minor aspect of the climate system. This represents most of the bulk mass of the atmosphere, and hence the climate system. The inability of climate models to achieve consistency on this scale is a serious shortcoming and suggests projections from such models be viewed with great skepticism.” [Source: The 13 May 2003 Testimony of Dr. John Christy before the U.S. House of Representatives’ Committee on Resources.]

It seems that politics is driving government science. In a CCSP report published in June 2008 (Weather and Climate Extremes in a Changing Climate), the program concluded:

1. Over the long term, U.S. hurricane landfalls have been declining.

2. Nationwide there have been no long-term increases in drought.

3. Despite increases in some measures of precipitation, there have not been corresponding increases in peak streamflows (high flows above 90th percentile).

4. There have been no observed changes in the occurrence of tornadoes or thunderstorms.

5. There have been no long-term increases in strong East Coast winter storms (ECWS), called Nor’easters.

6. There are no long-term trends in either heat waves or cold spells, though there are trends within shorter time periods in the overall record.

End Post