
Trump, the National Climate Assessment report, and fake news

The New York Times recently obtained a draft of the upcoming National Climate Assessment report. The NYT is worried that the Trump administration will suppress the report. However, according to scientists who worked on the report, it has been available online since last January. (See Daily Caller story) You can download the 545-page 3rd draft report here, but don’t bother.

Besides the “fake news” story in the New York Times, we have a “fake news” story from the Associated Press printed by the Arizona Daily Star. Within that story is this sentence: “Contradicting Trump’s claims that climate change is a ‘hoax,’ the draft report representing the consensus of 13 federal agencies concludes that the evidence global warming is being driven by human activities is ‘unambiguous.’”

Definition of unambiguous: “Admitting of no doubt or misunderstanding; having only one meaning or interpretation and leading to only one conclusion.”

Because of that statement, and because of this one from the IPCC itself: “In climate research and modeling, we should recognize that we are dealing with a coupled non-linear chaotic system, and therefore that the prediction of a specific future climate state is not possible” (final chapter, draft Third Assessment Report, IPCC, 2000), I downloaded the report to see just how unambiguous the evidence is. Here is what I found.

1) All their evidence consists of computer modeling. There is no physical evidence. That’s just like the previous National Climate Assessment report. They are, in essence, claiming that evidence of warming is evidence of the cause of warming.

2) On page 139, they discuss how they attribute causes:

Detection and attribution of climate change involves assessing the causes of observed changes in the climate system through systematic comparison of climate models and observations using various statistical methods. An attributable change refers to a change in which the relative contribution of causal factors has been evaluated along with an assignment of statistical confidence.
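To make the quoted description concrete, here is a toy sketch of the kind of model-versus-observations comparison attribution studies perform: a least-squares scaling factor fitted between a simulated warming signal and an observed series. All numbers are made up; this is an illustration, not the report’s actual methodology.

```python
# Toy sketch of "detection and attribution" as a statistical comparison of
# model output with observations: fit a least-squares scaling factor beta
# between a simulated warming signal and an observed series.
# All numbers are made up; this is not the report's methodology.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1900, 2001)

# Hypothetical model-simulated response to greenhouse forcing (deg C)
model_signal = 0.008 * (years - years[0])

# Hypothetical observations: scaled signal plus internal variability
observations = 0.9 * model_signal + rng.normal(0.0, 0.1, size=years.size)

# Least-squares scaling factor: observations ~ beta * model_signal
beta = model_signal @ observations / (model_signal @ model_signal)

# Crude 2-sigma uncertainty from the residual scatter
residuals = observations - beta * model_signal
beta_se = np.sqrt(residuals.var(ddof=1) / (model_signal @ model_signal))
print(f"scaling factor beta = {beta:.2f} +/- {2 * beta_se:.2f}")
# Attribution studies ask whether beta is consistent with 1 and distinct from 0.
```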

3) Beginning on page 144, they discuss “major uncertainties.” Oops, not so “unambiguous.”

The transient climate response (TCR) is defined as the global mean surface temperature change at the time of CO2 doubling in a 1%/year CO2 transient increase experiment. The TCR of the climate system to greenhouse gas increases remains uncertain, with ranges of 0.9° to 2.0°C (1.6° to 3.6°F) and 0.9° to 2.5°C (1.6° to 4.5°F) in two recent assessments. The climate system response to aerosol forcing (direct and indirect effects combined) remains highly uncertain, because although more of the relevant processes are being included in models, confidence in these representations remains low. Therefore, there is considerable uncertainty in quantifying the attributable warming contributions of greenhouse gases and aerosols separately. There is uncertainty in the possible levels of internal climate variability, but current estimates (a likely range of +/- 0.1°C, or 0.2°F, over 60 years) would have to be too low by more than a factor of two or three for the observed trend to be explainable by internal variability.

Does that sound like the evidence is unambiguous?
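As a quick aside, the numbers quoted in point 3 can be checked with simple arithmetic: the time to CO2 doubling at 1% per year compound growth, and the Fahrenheit equivalents of the two TCR ranges given in the draft.

```python
# Back-of-the-envelope check of the numbers quoted above: the time to CO2
# doubling at 1% per year compound growth, and the Fahrenheit equivalents
# of the two TCR ranges given in the draft.
import math

years_to_double = math.log(2) / math.log(1.01)
print(f"CO2 doubles after about {years_to_double:.0f} years at 1%/yr")  # ~70 years

for low_c, high_c in [(0.9, 2.0), (0.9, 2.5)]:
    low_f, high_f = low_c * 9 / 5, high_c * 9 / 5
    print(f"TCR range {low_c}-{high_c} deg C = {low_f:.1f}-{high_f:.1f} deg F")
```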

“There is something fascinating about science. One gets such wholesale returns of conjecture out of such a trifling investment of fact.” – Mark Twain, Life on the Mississippi

UPDATE: The material above refers to the third draft of the report. The fifth draft has just become available. One analyst noticed “that the latest draft climate report, published in June, had seemingly left out a rather embarrassing table from the Executive Summary, one that had previously been written into the Third Draft, published last December.” What has been omitted is the fact “that the hottest temperatures (averaged over the US) were not only much, much higher in the 1930s. They were also higher during the 1920s. Indeed there have been many other years with higher temperatures than most of the recent ones.” (Source)

I would not call it a hoax, as President Trump does; I’d call it a scam. The National Climate Assessment itself is fake news: a political, rather than a scientific, document.

“It is difficult to get a man to understand something, when his salary depends on his not understanding it.” – Upton Sinclair.

Additional reading:

Alan Carlin, a former senior EPA analyst, says computer models fail because: “The bottom-up GCM was a bad approach from the start and should never have been paid for by the taxpayers. All that we have are computer models that were designed and then tuned to lead to the IPCC’s desired answers and have had a difficult time even doing that.

“So not only are the results claiming that global temperatures are largely determined by atmospheric CO2 wrong, but the basic methodology is useless. Climate is a coupled, non-linear chaotic system, and the IPCC agrees that this is the case. It cannot be usefully modeled by using necessarily limited models which assume the opposite.” Read more
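The point about chaotic systems is easy to demonstrate. The snippet below uses the logistic map, a textbook example (not anything from Carlin’s article), to show how two trajectories that start almost identically diverge completely within a few dozen steps.

```python
# Textbook illustration of sensitivity to initial conditions in a nonlinear
# system: the logistic map. Two starting values differing by one part in a
# million diverge completely within a few dozen iterations.
def logistic_trajectory(x0, r=3.9, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.500000)
b = logistic_trajectory(0.500001)

for step in (0, 10, 20, 30, 40, 50):
    print(f"step {step:2d}: {a[step]:.6f} vs {b[step]:.6f}  diff {abs(a[step] - b[step]):.6f}")
```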

Dr. Tim Ball: Uncovered: decades-old government report showing climate data was bad, unfit for purpose. In 1999, the National Research Council, the research arm of the National Academy of Sciences, released a study expressing concern about the accuracy of the data used in the debate over climate change. They said there are,

“Deficiencies in the accuracy, quality and continuity of the records,” that “place serious limitations on the confidence that can be placed in the research results.”

See also:

A Simple Question for Climate Alarmists – where is the physical evidence

Evidence that CO2 emissions do not intensify the greenhouse effect

My comments on the previous National Climate Assessment:

https://wryheat.wordpress.com/2014/11/15/national-climate-assessment-lacks-physical-evidence/

 


Climate models for the layman

The Global Warming Policy Foundation, a British think tank, has just published an excellent review of climate models, their problems and uncertainties, all of which show that they are inadequate for policy formulation. The paper is written by Dr. Judith Curry, the author of over 180 scientific papers on weather and climate. She recently retired from the Georgia Institute of Technology, where she held the positions of Professor and Chair of the School of Earth and Atmospheric Sciences. She is currently President of Climate Forecast Applications Network.

You can read the 30-page paper here:

http://www.thegwpf.org/content/uploads/2017/02/Curry-2017.pdf

Here is the executive summary:

There is considerable debate over the fidelity and utility of global climate models (GCMs). This debate occurs within the community of climate scientists, who disagree about the amount of weight to give to climate models relative to observational analyses. GCM outputs are also used by economists, regulatory agencies and policy makers, so GCMs have received considerable scrutiny from a broader community of scientists, engineers, software experts, and philosophers of science. This report attempts to describe the debate surrounding GCMs to an educated but nontechnical audience.

Key summary points

• GCMs have not been subject to the rigorous verification and validation that is the norm for engineering and regulatory science.

• There are valid concerns about a fundamental lack of predictability in the complex nonlinear climate system.

• There are numerous arguments supporting the conclusion that climate models are not fit for the purpose of identifying with high confidence the proportion of the 20th century warming that was human-caused as opposed to natural.

• There is growing evidence that climate models predict too much warming from increased atmospheric carbon dioxide.

• The climate model simulation results for the 21st century reported by the Intergovernmental Panel on Climate Change (IPCC) do not include key elements of climate variability, and hence are not useful as projections for how the 21st century climate will actually evolve.

Climate models are useful tools for conducting scientific research to understand the climate system. However, the above points support the conclusion that current GCMs are not fit for the purpose of attributing the causes of 20th century warming or for predicting global or regional climate change on timescales of decades to centuries, with any high level of confidence. By extension, GCMs are not fit for the purpose of justifying political policies to fundamentally alter world social, economic and energy systems. It is this application of climate model results that fuels the vociferousness of the debate surrounding climate models.

Another climate model failure – there were no unusual extremes in drought or rainfall in the 20th century

It has been a tenet of the UN’s “authoritative consensus” that global warming will cause more extremes in weather, such as dry areas becoming drier and wet areas becoming wetter. (Amazing, isn’t it, how global warming can cause both drought and extreme rainfall?) Alas, a new study compared the rainfall/drought conditions of the 20th century to those of the past 1,200 years in the Northern Hemisphere and found that extremes of rainfall and drought were much more prevalent in both warmer and cooler periods of the past. The study was based upon geologically preserved evidence of stream flow, lake levels, marine and lake sediments, tree rings, and historical records. Could it be that something other than CO2 and temperature is causing these conditions?

Here is a summary of the paper written by the authors (edited for readability):

Accurate modeling and prediction of the local to continental-scale hydroclimate response to global warming is essential given the strong impact of hydroclimate on ecosystem functioning, crop yields, water resources, and economic security. However, uncertainty in hydroclimate projections remains large, in part due to the short length of instrumental measurements available with which to assess climate models.

Here we present a spatial reconstruction of hydroclimate variability over the past twelve centuries across the Northern Hemisphere derived from a network of 196 at least millennium-long proxy records. We use this reconstruction to place recent hydrological changes and future precipitation scenarios in a long-term context of spatially resolved and temporally persistent hydroclimate patterns.

We find a larger percentage of land area with relatively wetter conditions in the ninth to eleventh and the twentieth centuries, whereas drier conditions are more widespread between the twelfth and nineteenth centuries. Our reconstruction reveals that prominent seesaw patterns of alternating moisture regimes observed in instrumental data across the Mediterranean, western USA, and China have operated consistently over the past twelve centuries.

Using an updated compilation of 128 temperature proxy records, we assess the relationship between the reconstructed centennial-scale Northern Hemisphere hydroclimate and temperature variability. Even though dry and wet conditions occurred over extensive areas under both warm and cold climate regimes, a statistically significant co-variability of hydroclimate and temperature is evident for particular regions. We compare the reconstructed hydroclimate anomalies with coupled atmosphere–ocean general circulation model simulations and find reasonable agreement during pre-industrial times. However, the intensification of the twentieth-century-mean hydroclimate anomalies in the simulations, as compared to previous centuries, is not supported by our new multi-proxy reconstruction. This finding suggests that much work remains before we can model hydroclimate variability accurately, and highlights the importance of using palaeoclimate data to place recent and predicted hydroclimate changes in a millennium-long context.

The study is:

Fredrik Charpentier Ljungqvist, Paul J. Krusic, Hanna S. Sundqvist, Eduardo Zorita, Gudrun Brattström, David Frank. Northern Hemisphere hydroclimate variability over the past twelve centuries. Nature, 2016; 532 (7597): 94 DOI: 10.1038/nature17418
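The centennial-scale co-variability assessment described in the authors’ summary can be illustrated with a toy correlation calculation. The numbers below are synthetic, standing in for the paper’s 196 hydroclimate and 128 temperature proxy records; only the form of the calculation is shown.

```python
# Toy correlation between two reconstructed centennial-mean series, standing
# in for the paper's hydroclimate and temperature reconstructions. The data
# are synthetic; only the form of the calculation is illustrated.
import numpy as np

rng = np.random.default_rng(1)
n_centuries = 12
temperature = rng.normal(0.0, 1.0, n_centuries)
hydroclimate = 0.4 * temperature + rng.normal(0.0, 1.0, n_centuries)

r = np.corrcoef(temperature, hydroclimate)[0, 1]
print(f"correlation over {n_centuries} centennial means: r = {r:.2f}")
# With only 12 points, |r| must exceed roughly 0.58 to be significant at the
# 5% level (two-sided), which is why long, well-dated proxy networks matter.
```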

These results should not be a surprise to those who deal with real data rather than output of computer models.

For instance, in the paper Verosub, K. 2015. Don’t worry about climate change; California’s natural climate variability will probably “get us” first. Quaternary International 387: 148, the author reports that “at least once and probably several times in the last few thousand years, there have been droughts severe enough to drop the level of Lake Tahoe by several tens of meters, which allowed Douglas fir trees to grow to maturity on exposed lake beds.” Furthermore, other data indicate episodes of extreme flooding, such as the water year of 1861-1862 that brought extensive rainfall from Oregon down through southern California, and “given the historic periodicity of these events, there would be no way to prove that they weren’t natural.” (source)

By the way: The frequency of 90-degree days and 100-degree days has plummeted across most of the US since the 1930s; see the graphs below.

[Graphs: percent of USHCN days over 90°F; percent of USHCN stations over 100°F]

 

See also: NOAA can’t find link between global warming and extreme weather

Failure of climate models shows that carbon dioxide does not drive global temperature

More evidence that climate models are wrong

“It doesn’t matter how beautiful your theory is; it doesn’t matter how smart you are. If it doesn’t agree with experiment, it’s wrong.” – Richard Feynman

“No amount of experimentation can ever prove me right; a single experiment can prove me wrong.”– Albert Einstein

Dr. Roy Spencer and Dr. John Christy have published a graph comparing the predictions of 73 climate models with observations from radiosondes and satellites of tropical mid-troposphere temperatures. On the graph, the “spaghetti” lines are the model predictions and the heavy black line is the average of the models. Actual observed temperature measurements (boxes and circles) from four balloon-borne radiosonde data sets and two satellite data sets are shown to be lower than even the lowest model prediction since 1998 and lower than the average model prediction since 1979.

[Graph: CMIP5, 73 models vs. observations, 20°N–20°S mid-troposphere, 5-year means]
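A minimal sketch, with made-up numbers, of how such a comparison is assembled: many model series, their multi-model mean, and an observed series, each reduced to a linear trend. The trend values chosen here are hypothetical, not figures read from the Spencer and Christy graph.

```python
# Made-up numbers illustrating the structure of the comparison in the graph
# above: many model runs, their multi-model mean, and an observed series,
# each reduced to a linear trend in deg C per decade.
import numpy as np

rng = np.random.default_rng(2)
years = np.arange(1979, 2016)

# 73 hypothetical model runs with trends near 0.27 deg C/decade
model_runs = np.array([
    0.027 * (years - years[0]) + rng.normal(0.0, 0.05, years.size)
    for _ in range(73)
])
model_mean = model_runs.mean(axis=0)

# Hypothetical observed series with a weaker trend (~0.12 deg C/decade)
observed = 0.012 * (years - years[0]) + rng.normal(0.0, 0.05, years.size)

def trend_per_decade(series):
    return 10.0 * np.polyfit(years, series, 1)[0]

print(f"multi-model mean trend: {trend_per_decade(model_mean):.2f} deg C/decade")
print(f"observed trend:         {trend_per_decade(observed):.2f} deg C/decade")
```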

It is obvious from the graph that model predictions diverge markedly from reality. Why? The models are programmed with a false assumption, namely, that carbon dioxide is a major driver of global temperature. That hypothesis further assumes that as carbon dioxide warms the atmosphere, more water will evaporate (water vapor is a much stronger greenhouse gas), thereby producing “an enhanced greenhouse effect,” i.e., a strong positive feedback.

It appears, however, that the feedback is very small and perhaps negative. The models are wrong, probably because water vapor has a net negative feedback: clouds reflect sunlight and water vapor removes heat by convection. NASA says that even carbon dioxide can act as a coolant at the top of the atmosphere; see here.
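The feedback argument can be expressed with the standard gain formula, an illustration rather than a calculation from the post: a base warming is amplified by a positive net feedback and damped by a negative one. The no-feedback warming used below is a hypothetical, commonly cited figure.

```python
# Standard feedback gain formula (illustrative only): total warming
#   dT = dT0 / (1 - f)
# where dT0 is the no-feedback warming and f is the net feedback factor.
def warming_with_feedback(dT0, f):
    return dT0 / (1.0 - f)

dT0 = 1.1  # hypothetical no-feedback warming for doubled CO2, deg C

for f in (0.5, 0.0, -0.5):
    kind = "positive" if f > 0 else ("zero" if f == 0 else "negative")
    print(f"f = {f:+.1f} ({kind} feedback): dT = {warming_with_feedback(dT0, f):.2f} deg C")
```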

We see also that although atmospheric carbon dioxide has been rising, global relative humidity and specific humidity have been decreasing, according to data from the NOAA Earth System Research Laboratory (see the graphs below). That, too, contradicts the modeling assumption. Relative humidity is the percentage of water vapor in the air relative to the maximum possible water content at a specific temperature. Specific humidity is the ratio of water vapor to dry air in a particular mass. See here for a more detailed explanation.

[Graphs: global relative humidity; global specific humidity]
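For readers unfamiliar with the two humidity measures defined above, here is a rough numerical illustration using the Magnus approximation for saturation vapor pressure. The formula and the numbers are assumptions for illustration, not NOAA data.

```python
# Rough illustration of the two humidity measures defined above, using the
# Magnus approximation for saturation vapor pressure. The formula and the
# numbers are assumptions for illustration, not NOAA data.
import math

def saturation_vapor_pressure_hpa(temp_c):
    # Magnus approximation, roughly valid for ordinary surface temperatures
    return 6.112 * math.exp(17.62 * temp_c / (243.12 + temp_c))

temp_c = 25.0           # air temperature, deg C
pressure_hpa = 1013.25  # surface pressure
relative_humidity = 0.50

e = relative_humidity * saturation_vapor_pressure_hpa(temp_c)  # vapor pressure
specific_humidity = 0.622 * e / (pressure_hpa - 0.378 * e)     # kg water per kg moist air

print(f"saturation vapor pressure at {temp_c:.0f} C: {saturation_vapor_pressure_hpa(temp_c):.1f} hPa")
print(f"specific humidity at 50% relative humidity: {specific_humidity * 1000:.1f} g/kg")
```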

The models are wrong because of wrong assumptions. Unfortunately, much government policy, and billions of dollars in spending, are based on these faulty assumptions.

Temperature and humidity are not the only failures of climate models.  See: As Floods Hit Eastern Germany, Recent Potsdam Climate Institute Model Warned Of Summertime “Water Shortages”!

A new paper in Science, the journal of the American Association for the Advancement of Science, says that models fail because they do not provide “an adequate description of basic processes like cloud formation, moist convection, and [air] mixing.”

See also:

A Basic Error in Climate Models

Failure of the Anthropogenic Global Warming Hypothesis

The Case Against the IPCC and Proponents of Dangerous Anthropological Global Warming

A Basic Error in Climate Models

Earlier this week the Obama administration put out a major report by the U.S. Global Change Research Program (USGCRP) which predicts dire consequences if we don’t curb carbon dioxide emissions. The report is timed to influence major bills in the House and Senate, and it was the object of many media headlines, including a gloom-and-doom story by Tom Beal in the Arizona Daily Star today.

However, the report is pure junk science because it cherry-picks data to conform to policy and ignores much of the evidence, even evidence published by USGCRP in 2006.

In science, a hypothesis is a working assumption that must be tested by observation and experiment, and changed according to new information. In the realm of climate change, the scientific method of objective observation has been too often replaced by an ideology devoid of objective inquiry, one that embraces only evidence which supports the hypothesis and ignores conflicting information.

Climate models used by USGCRP and the UN’s Intergovernmental Panel on Climate Change (IPCC) make a basic assumption about carbon dioxide that is wrong. Their assumption is one of positive feedback.

Their hypothesis goes like this: carbon dioxide warms the oceans, which, in turn, causes water vapor to enter the atmosphere. Water vapor is a much stronger and more abundant “greenhouse” gas than is carbon dioxide, so water vapor enhances the warming effect of carbon dioxide – a positive feedback.

The USGCRP and IPCC greenhouse models hold that the tropics should provide the most sensitive location for validation of the models. According to the models, temperature trends (rate of warming, not absolute temperature) should increase by 200-300% with altitude, peaking at around 10 kilometers – a characteristic “fingerprint” of greenhouse warming. This “fingerprint” should look like the graph below.

[Graph: the predicted greenhouse warming signature]

This graph is from the report U.S. Climate Change Science Program (CCSP) Synthesis and Assessment Product 1.1 (2006), Temperature Trends in the Lower Atmosphere. This graph was adopted by the IPCC and appeared in its most recent report in 2007. Note: CCSP is now known as the U.S. Global Change Research Program (USGCRP). This graph appears on page 21 of the new USGCRP report issued Tuesday (June 16, 2009): Global Climate Change Impacts in the U.S.

The graph above shows the hypothesis. The graph below shows the reality. Measurements from balloon-borne radiosondes and from satellites show no increasing temperature trend with altitude.

[Graph: the observed “no greenhouse signature” result]

Real observations show that the model-predicted “fingerprint” of anthropogenic, greenhouse warming is absent in nature.

[Additional source: Douglass, D.H. et al. 2007, A comparison of tropical temperature trends with model predictions, International Journal of Climatology DOI:10.1002/joc.1651].
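The “fingerprint” test can be reduced to a single number: the ratio of the tropospheric trend to the surface trend. The figures below are hypothetical, chosen only to contrast the predicted 2-3x amplification quoted above with a ratio at or below one; they are not measurements from the report or from Douglass et al.

```python
# The "fingerprint" test as a single ratio: tropical tropospheric trend
# divided by surface trend. All numbers here are hypothetical, chosen only
# to contrast the predicted 2-3x amplification with a ratio at or below 1.
def amplification(troposphere_trend, surface_trend):
    return troposphere_trend / surface_trend

surface_trend = 0.13           # hypothetical surface trend, deg C/decade
modeled_troposphere = 0.30     # what a 200-300% amplification would imply
observed_troposphere = 0.06    # hypothetical observed mid-troposphere trend

print(f"modeled amplification:  {amplification(modeled_troposphere, surface_trend):.1f}x")
print(f"observed amplification: {amplification(observed_troposphere, surface_trend):.1f}x")
```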

The “no greenhouse signature” graph above appeared in the 2006 CCSP report but is missing from the new USGCRP report, perhaps because it conflicts with current policy.

This mismatch between hypothesis and observation means that the models are wrong and that greenhouse gases are not responsible for any significant 20th-century warming.

The basic error was the assumption of positive feedback. In actuality, increased water vapor in the atmosphere produces clouds, which reflect sunlight back into space and thus have a cooling effect. This negative feedback, reflection, is, according to actual measurements, much stronger than the hypothesized positive feedback.

The USGCRP and IPCC modelers put out many scary scenarios from the “what if” games they play on computers, but all the scenarios are fatally flawed because of the erroneous assumption.

This point is emphasized by Dr. John Christy, Professor of Atmospheric Science and Director of the Earth System Science Center at the University of Alabama in Huntsville (and IPCC Nobel laureate). “We have seen a rise in surface temperature, but whether or not that is due to CO2 is subject to debate. However, both satellite and radiosonde measurements show that rise in tropospheric temperature has been less than half of the surface temperature rise, not more as predicted.” “This is important,” says Christy, “because the quantity examined here, lower tropospheric temperature, is not a minor aspect of the climate system. This represents most of the bulk mass of the atmosphere, and hence the climate system. The inability of climate models to achieve consistency on this scale is a serious shortcoming and suggests projections from such models be viewed with great skepticism.” [Source: The 13 May 2003 Testimony of Dr. John Christy before the U.S. House of Representatives’ Committee on Resources.]

It seems that politics is driving government science. In a CCSP report published in June 2008 (Weather and Climate Extremes in a Changing Climate), the authors concluded:

1. Over the long term, U.S. hurricane landfalls have been declining.

2. Nationwide there have been no long-term increases in drought.

3. Despite increases in some measures of precipitation, there have not been corresponding increases in peak streamflows (high flows above 90th percentile).

4. There have been no observed changes in the occurrence of tornadoes or thunderstorms.

5. There have been no long-term increases in strong East Coast winter storms (ECWS), called Nor’easters.

6. There are no long-term trends in either heat waves or cold spells, though there are trends within shorter time periods in the overall record.
