Experiments – what is the real temperature?

There has been much controversy lately over the “hottest” years or months, mostly citing differences of less than one degree Fahrenheit. In a sane world such arguments would be academic, but policy advocates inflate any inkling of apparent support for their position into dire predictions that demand immediate action.

What is the real temperature? The answer is fraught with great uncertainty because it depends on where you put the thermometers, which readings are counted, and how they are averaged.

Two experiments demonstrate the problem. The newest, still ongoing, experiment is being conducted at Oak Ridge National Laboratory. Researchers set out an array of five thermometers in a field adjacent to a building (see photo below). They found that nighttime temperatures become warmer as the building is approached, and this is true whether the wind is blowing toward or away from the building. The researchers attribute this to infrared radiation from the building. So, which thermometer records the “true” temperature? (See more of the story here.)

I’ve reported on the second experiment before (see here). NOAA, keeper of the official temperature record, maintains two sets of stations: the older U.S. Historical Climatology Network (USHCN) and the newer U.S. Climate Reference Network (USCRN). The newer stations are located away from warming urban influence. These stations record temperatures 0.5°C (0.9°F) lower on average, and up to almost 4.0°C (7.2°F) lower, than the older stations upon which the official record is based.

These experiments support a 5-year study by Anthony Watts which showed that, with a combination of poor station siting and “adjustments” performed by NOAA, the official U.S. temperature record is warm-biased and does not reflect the true temperature.

Perceived trends in the surface temperature record are used by advocates to attribute a cause, such as carbon dioxide emissions, and to pretend there is a crisis. But, as these data show, the causes are complex and very uncertain, too uncertain to be the basis of major policy decisions. But much money depends on maintaining the mythical crisis, from subsidies for wind and solar projects to the indulgence of renting “clean” air through the trading of carbon credits.

One other question: What is the “right” temperature for this planet?

See also:

Cooking the books – was 2012 really the hottest ever in the US?

New study shows that 50% of warming claimed by IPCC is fake

NOAA temperature record “adjustments” could account for almost all “warming” since 1973


2 comments

  1. Fair enough. I get that the temperature measured a few feet above the surface of the Earth depends on the thermometer’s placement relative to radiative structures. But wouldn’t you get the same problem putting the thermometer too close to large rock outcroppings? Or above bare dirt vs. grass? Wouldn’t lots of thermometers placed in all types of locations – urban, suburban, rural and wild – produce accurate mean and median temperatures for a region?

    But don’t climatologists also use methods for determining current temperatures similar to the ones paleoclimatologists use to determine prehistoric (or pre-thermometer) temperatures, such as O-16 to O-18 isotope ratios?

    In other words, if we can’t tell the current temperature because the placement of thermometers creates skewed results, then how do we know what the temperature was before thermometers were invented? Seems to me if we can say there was a Medieval Warm Period, we should be able to say there was a late 20th/early 21st centuries Warm Period. Or not.

  2. Good questions. Placing a thermometer too close to a large rock would cause the same problem as putting it near a building, because of radiant heat at night. NOAA has five classes of stations, each with a different estimated error:

    Class 1 (CRN1) – Flat and horizontal ground surrounded by a clear surface with a slope below 1/3 (<19°). Grass/low vegetation ground cover <10 centimeters high. No shading when the sun elevation is >3 degrees. This setting is thought to produce an error of less than 1°C. This class contains 1.2% of all stations in USHCN.

    Class 2 (CRN2) – Same as Class 1 with the following differences: surrounding vegetation <25 centimeters, no artificial heating sources within 30 meters, and no shading when the sun elevation is >5°. Error less than or equal to 1°C. 6.7% of USHCN stations.

    Class 3 (CRN3) (error ≥1°C) – Same as Class 2, except no artificial heating sources within 10 meters. 21.5% of USHCN stations.

    Class 4 (CRN4) (error ≥2°C) – Artificial heating sources <10 meters. 64.4% of USHCN stations.

    Class 5 (CRN5) (error ≥5°C) – “Temperature sensor located next to/above an artificial heating source, such as a building, roof top, parking lot, or concrete surface.” 6.2% of USHCN stations.
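As a rough illustration of what the class shares above imply, here is a sketch that weights each class’s quoted error figure by its share of USHCN stations. Treating each class’s bound as its typical error is an assumption (the Class 1–2 figures are upper limits while Classes 3–5 are lower limits), so the result is only suggestive, not a real error estimate.

```python
# Hypothetical sketch: weight each station class's quoted error figure
# (degrees C) by its share of USHCN stations, as listed above.
shares = {1: 0.012, 2: 0.067, 3: 0.215, 4: 0.644, 5: 0.062}
error_bound_c = {1: 1.0, 2: 1.0, 3: 1.0, 4: 2.0, 5: 5.0}  # assumed typical errors

weighted_error = sum(shares[c] * error_bound_c[c] for c in shares)
print(f"Share-weighted error figure: {weighted_error:.2f} C")  # -> 1.89 C
```

With most stations in Class 4, the share-weighted figure lands near 2°C, which is the point of the classification: the network’s error budget is dominated by the poorly sited majority.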

    “Wouldn’t lots of thermometers placed in all types of locations – urban, suburban, rural and wild – produce accurate mean and median temperatures for a region?” No, because artificial highs over cities would skew the regional results. Suppose a thermometer in a wild place and one in a rural place show essentially the same temperature, a suburban thermometer reads 2 degrees higher, and a city thermometer reads 10 degrees higher. Would an average give a true picture of the regional temperature?
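The scenario above can be put in numbers. Here is a small sketch with four invented readings (wild and rural equal, suburban +2, urban +10, in degrees C) showing how the single urban reading pulls the mean, while the median resists it somewhat:

```python
# Illustrative readings for the wild/rural/suburban/urban example above;
# the specific values are invented for the sketch.
from statistics import mean, median

readings = {"wild": 15.0, "rural": 15.0, "suburban": 17.0, "urban": 25.0}

print(mean(readings.values()))    # -> 18.0, pulled up 3 degrees by one urban site
print(median(readings.values()))  # -> 16.0, less affected but still shifted
```

Neither number matches the 15.0°C that most of the region’s area actually experiences, which is the commenter’s point: averaging mixed siting does not cancel the urban bias.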

    Temperature proxies such as isotopes, sediments, etc., tend to give an average temperature for a certain place over long periods of time – years rather than days – and those temperatures are approximations. But because of the different thermometer errors, the instrumental temperature, too, is an approximation of the “real” temperature. But what is the real temperature?

    You touch on another controversial subject: how do you calculate the average temperature of planet Earth at any given time, using thermometers? Would you use all the thermometers or just some of them? Would you average the daily averages, or the daily highs or lows – daily, weekly, monthly? If on a given day the temperature is -50°C in Antarctica and +50°C in the Sahara Desert, is the average for the planet zero? It is more complicated than you may think at first. Does the short-term average really have any significance?
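The Antarctica/Sahara question can be sketched numerically. A plain mean of two such stations is indeed zero, but stations at different latitudes represent very different amounts of the Earth’s surface, so any real global average must weight by area. The two “stations,” their latitudes, and the cosine-of-latitude weighting below are illustrative assumptions, not anyone’s actual method:

```python
# Sketch: naive mean vs. a crude area-weighted mean of two stations.
# Area of a latitude band is roughly proportional to cos(latitude).
import math

stations = [(-50.0, 80.0), (50.0, 20.0)]  # (temp C, latitude deg): polar-like, desert-like

naive = sum(t for t, _ in stations) / len(stations)
weights = [math.cos(math.radians(lat)) for _, lat in stations]
weighted = sum(t * w for (t, _), w in zip(stations, weights)) / sum(weights)

print(naive)               # -> 0.0, the "is the average zero?" answer
print(round(weighted, 1))  # -> 34.4, dominated by the low-latitude station
```

The two answers differ wildly, which is the point: the “average temperature of the planet” depends heavily on how the averaging is done, not just on what the thermometers read.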
