Sunday, 10 May 2015

Climate change: the science is settled

The slogan "the science is settled" is mainly a rhetorical tool of the mitigation sceptics: A nice strawman to shoot down. When someone does say "the science is settled", what they are telling is "I do not want to talk to you about this".

Climatology is a mature science by now:
The basic results have not changed in decades,
the basics are based on natural science, where clear answers are possible,
a lot of scientists are involved,
climatology is well networked into the other sciences and
colleagues from other disciplines regularly contribute.

In such a situation, I find it perfectly normal for a citizen or politician to respond to outlandish claims of mitigation sceptics with: "Listen chap, you and I are both not qualified to discuss such details. For me it is good enough that almost all scientists agree that there is a problem. If you really have a problem with some detail of the science, go write a scientific paper about it, but do not bother me with it." Or for short: "The science is settled".

You will have to search long and hard to find a scientist saying "the science is settled". Mostly because that is not how we scientists think: we are working on trying to improve our understanding and focus on the interesting parts that are not well understood. But when someone claims to have refuted the greenhouse effect, I feel perfectly entitled to say "the science is settled".

Some people even claim that greenhouse gases cool the atmosphere. Sorry, my life is too short to waste on such people. Some people even claim that the increases in CO2 are not man-made. Sorry, my life is too short to waste on such people.

Not only is there an overwhelming consensus on these topics, these are also topics that are pure natural science and they deal with the present or recent past. For such questions, an overwhelming consensus signals that the evidence is clear. I do not overestimate myself and think I can just barge in and explain to the local experts what they are doing wrong. If someone else wants to do so, fine, they can write a scientific article about it. If they do not have the skills to write a scientific article, it is a legitimate question how it is possible that they are so sure that the science is wrong.

When someone asks me about the removal of non-climatic changes (inhomogeneities) from climate data, I can naturally not just answer "just trust me". That is my field of expertise and people have a justified expectation that I am able to answer such questions.

There are limits to this expectation. Eric Worrall recently asked on this blog: "How do you know the climate didn't actually cool?", without providing any arguments. That signals such a degree of extremism that it makes no sense to talk to someone like that. A similar question from a scientist would be another matter, but Worrall comes from a group that is known for misinformation and deceit. A reasonable answer would thus be: "the science is settled". Because it was here at my blog and he is somewhat known in mitigation sceptical circles, I actually mentioned the various lines of independent evidence that the world is warming. After that he was no longer interested in the discussion.

"The science is settled" is especially well suited for politicians. And they seem to have picked up this strawman lately. Seepage? Mass media are important for politicians and a short snappy slogan makes it more likely that your message is transported. Politicians are unfortunately supposed to extrude confidence and thus prefer not to explicitly say that they are not qualified. Good scientists should know the humble limits of their expertise. They can signal that they are good at science by acknowledging that the "question" is not their field of expertise. Unfortunately, scientists are also humans and some age groups and some fields of study seem to have more problems with admitting any limits.

Politicians may also like it that the slogan is rather ambiguous. The greenhouse effect itself is well understood. That greenhouse gas concentrations are increasing due to human activities is well understood. That CO2 emissions from fossil fuels, deforestation and cement production are the main reason is also well understood. That it will continue if we do not change our energy system is also clear. However, what exactly the future will bring and how much the temperature will increase is uncertain, if only because no one can predict how much CO2 we will emit before action is taken.



Which impacts will be how big is also quite uncertain. That is actually what worries me most. We are taking the climate system, on which our civilisation depends, outside the range we understand. There will be many (unhappy) surprises. The uncertainty monster is not humanity's friend.

Thus, dear reader, please do not ask me why CO2 is a greenhouse gas, why the greenhouse effect does not cool the atmosphere, or even less interesting mitigation sceptical questions from the list of Skeptical Science. The science is settled!! But feel free to ask any questions about homogenization methods, about non-climatic changes in the historical station measurements or why I am sure the climate is not cooling.

[UPDATE. I just remembered a memorable quote by Justin Haskins, a blogger and editor at Heartland, a pro-smoking and pro-fossil-fuel think tank: “The real debate is not whether man is, in some way, contributing to climate change; it’s true that the science is settled on that point in favor of the alarmists.” If he can, anyone can use the slogan: the science is settled.]


Related reading

The US Republicans signal that the science is settled, that there is no need for further climate research, and in doing so put the American public in harm's way.

Climatology is a mature field

A collection of early citations on settled science

Thou shalt not commit logical fallacies

Thursday, 7 May 2015

Extending the temperature record of southeastern Australia

This is a guest post by Linden Ashcroft. She did her PhD studying non-climatic changes in the early instrumental period in Australia and now works at the homogenization powerhouse, the Centre on Climate Change (C3) in Tarragona, Spain. She blogs weekly about homogenization and life in Spain.

This guest post was originally written for the Climanrecon blog. Climanrecon is currently looking at the non-climatic features of the Bureau of Meteorology’s raw historical temperature observations, which are freely available online. As Neville Nicholls recently discussed in The Conversation, the more the merrier!


Southeastern Australia is the most highly populated and agriculturally rich area in Australia. It’s home to our tallest trees, our highest mountains, our oldest pubs and most importantly, our longest series of instrumental weather observations. This makes southeastern Australia the most likely place to extend Australia’s instrumental climate record.

The official Australian Bureau of Meteorology was formed in 1908, bringing standard observing practices into effect across the country. Before this time, Australian weather station coverage was not as dense as today’s network, and there was no nationally-standard procedure for recording temperature.

The uncertain quality of the pre-1910 data and the lack of readily available information about observation techniques is why the current high-quality temperature dataset available for Australia does not begin until 1910. However, this does not mean that valuable observations were not taken in the 19th century, or that the spatial coverage of these data is too poor to be useful for studies of regional climate. Here, I explain what colleagues and I did to extend the temperature record of southeastern Australia.

When is the temperature not the temperature?

First of all, it’s important to know what ‘non-climatic’ influences can affect temperature observations. Some influences are fairly obvious: if you move a thermometer 20km down the road, change the time of day that the temperature is recorded, or replace the screen the thermometer is housed in, that will most likely cause changes in the data recorded at that station which are not caused by actual changes in the surrounding air.

Other non-climatic influences are more subtle. The slow growth of a tree next to a thermometer for example, or changes to the irrigation system in nearby paddocks can cause gradual changes to temperature observations that are not a reflection of the temperature in the wider area.

Adelaide_screens

Three thermometer screens used in Adelaide, South Australia, in 1890: a Stevenson screen (left), a thermometer shed (centre), and a Glaisher stand (right). Charles Todd famously recorded the temperature in each of these screens for around 40 years, giving us invaluable information about the effect of different screens on temperature observations. Image: Meteorological Observations Made at the Adelaide Observatory, Charles Todd (1907).

Finding and reducing the impacts of these non-climatic influences is an important part of any climate change research. A multitude of statistical methods have been developed over the last 30 or so years to do this, ranging from the beautifully simple to the mind-bogglingly complex. Most of these methods rely on reference series, or a version of the truth with which you can compare data from the station you’re interested in. Reference series are often made using data from neighbouring weather stations that experience a similar climate. Of course this is much harder if you don’t have many neighbouring stations, or if a change in observation method happens at all stations at once! This is another reason why Australia’s national temperature record does not extend before 1910.
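To make the reference-series idea concrete, here is a minimal sketch in Python with made-up station names and numbers (an illustration only, not the code used in any of the studies discussed here):

```python
import pandas as pd

# Hypothetical monthly mean temperatures (°C); names and values are invented.
stations = pd.DataFrame(
    {
        "wagga":      [15.1, 15.3, 14.8, 15.6, 15.2],
        "neighbour1": [15.4, 15.5, 15.0, 15.9, 15.4],
        "neighbour2": [15.0, 15.2, 14.7, 15.5, 15.1],
    },
    index=pd.period_range("1890-01", periods=5, freq="M"),
)

# A simple reference series: the average of the neighbouring stations.
reference = stations[["neighbour1", "neighbour2"]].mean(axis=1)

# Candidate minus reference removes the shared regional climate signal;
# any jumps that remain point to non-climatic changes at the candidate.
difference = stations["wagga"] - reference
print(difference)
```

Real homogenization methods are more careful about how the reference is built and tested, but the candidate-minus-reference difference series is the basic ingredient.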

As well as these statistical tests, it’s also really useful to have information, or metadata, about the maintenance and changes that occur at a weather station. Metadata help you understand why a non-climatic influence might occur in the climate data, and how big it could be. In reality though, metadata can be hard to find.

Extending southeastern Australia’s temperature record

My work with the South Eastern Australian Recent Climate History project (SEARCH) set out to explore the quality and availability of climate data for Australia before 1908. Our aim was not to look at extreme events, or the exact temperature value at a particular location in a particular month, but to see how the average temperature across southeastern Australia varied over years and decades. This is important to note because if we were studying extremely hot months for example, or only the climate of Wagga, we might have used different methods.

The first step was looking for long-term temperature stations in southeastern Australia. Although the Bureau of Meteorology only started in 1908, a number of weather stations were set up in capital cities and key country towns from the late 1850s, thanks to the dedication of Australia’s Government Astronomers and Meteorologists.

My colleagues and I also uncovered some sources of pre-1860 instrumental climate data for southeastern Australia that we painstakingly digitised and prepared for analysis. You can read more about that here.

Next, we spent a lot of time collecting information from the Bureau of Meteorology and previous studies about possible changes in station locations and other things that might affect the quality of the observational data. For some stations we found a lot of photos and details about changes to the area around a weather station. For others, particularly in the pre-1860 period, we didn’t find much at all.

After getting all this information and removing some outlying months, we tried to identify the non-climatic features of the temperature observations, and remove their influence. For the pre-1860 data this was particularly difficult, because we did not have a lot of metadata and there were no nearby stations to use as a ‘truth’. In the end we looked for large changes in the data that were supported by metadata and by the behaviour of other variables.

For example, a drop in temperature in the 1840s at some stations occurred at the same time as an increase in rainfall, suggesting that the temperature change was real, not due to some non-climatic influence. We also noted any other issues with the data, such as a bias towards rounding temperature values to the nearest even number (a common issue in early observations).
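A simple way to spot such digit preference (a toy check, not the procedure used in the study) is to tabulate the final digits of the recorded values; an even-number bias shows up immediately:

```python
import numpy as np

# Hypothetical whole-degree observations, as a 19th-century observer
# might have logged them (values are invented).
observations = np.array([78, 80, 76, 82, 79, 84, 80, 78, 81, 80, 76, 82])

# Tabulate the final digits; a strong excess of even digits (or of 0 and 5)
# is the fingerprint of observer rounding habits.
digits, counts = np.unique(observations % 10, return_counts=True)
for d, c in zip(digits, counts):
    print(f"final digit {d}: {c} observations")
even_share = counts[digits % 2 == 0].sum() / counts.sum()
print(f"share of even final digits: {even_share:.0%}")
```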

For the post-1860 data, we applied a 2-stage process of removing the non-climatic factors, using the statistical RHtest developed by members of the international Expert Team on Climate Change Detection and Indices (ETCCDI). The first stage looked for absolute non-climatic features: big jumps in the data that were clearly instrument problems or station changes, and that were supported by the metadata.

The second stage used reference series made from the average of data from up to five highly correlated neighbouring stations. Both stages involved carefully comparing the statistical results with the metadata we had, and making a decision on whether or not a non-climatic feature was present. You can read more about the methods here.
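As an illustration of that neighbour selection (a toy sketch with random data, not the actual SEARCH or RHtest code), correlations are usually computed on first-differenced series so that shared trends and breaks do not inflate them:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
months = pd.period_range("1860-01", periods=240, freq="M")

# Invented anomaly series for a candidate station and its potential neighbours.
network = pd.DataFrame(
    rng.normal(size=(240, 7)),
    index=months,
    columns=["candidate", "n1", "n2", "n3", "n4", "n5", "n6"],
)

# Correlate the first-differenced series, keep the five most similar
# neighbours, and average them into a reference series.
corr = network.diff().dropna().corr()["candidate"].drop("candidate")
best_neighbours = corr.sort_values(ascending=False).head(5).index
reference = network[best_neighbours].mean(axis=1)
print(corr.sort_values(ascending=False))
```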

Newcastle_Tmax

An example of the non-climatic features, or jumps, found in Newcastle maximum temperature data from 1860–1950 using the RHtest method. The supporting metadata are also shown. Image supplied by author.

So what did we find?

Our results for the 1860–1950 period found 185 non-climatic features in the maximum temperature data, and 190 in the minimum temperature data over the full network of 38 stations. Over 50% of these non-climatic features identified were supported by metadata, and most of them occurred from 1880 to 1900, when the thermometer shelters that were used in Australia were changed from Glaisher stands to Stevenson screens.

You can see in the figures below that removing the non-climatic features had a big impact on the variance of the observations, making the spread of temperatures across the region much more similar to the post-1910 period, when observations are more reliable. Our results also agreed very well with southeastern Australia data from the Bureau of Meteorology’s monthly and daily temperature datasets that have been tested for non-climatic features using different methods.

This good agreement allowed us to combine the area-average of my data with that of the Bureau’s best temperature dataset (ACORN-SAT), building a monthly temperature record for southeastern Australia from 1860 to the present. Combining the two series showed that the current warming trends in Australia are the strongest and most significant since at least 1860.

SEA_Tmin_orig_homog

Original_homogenised_data

Area-averaged SEA annual anomalies (°C, relative to the 1910–1950 base period) of original data from the historical sources (1860–1950, dashed line), adjusted data from historical sources (1860–1950, solid red line) and data from ACORN-SAT, the Bureau of Meteorology’s daily temperature data over SEA (1910–1950, solid blue line) for maximum and minimum temperature over 1860–1950. The maximum and minimum anomalies for each year across the network are also plotted for the original historical data (grey shading with black outline), adjusted historical data (pink shading with red outline) and ACORN-SAT data (blue dotted lines). Adapted from Ashcroft et al. 2012.

The 1788–1860 data had some non-climatic influences as well, but these were much harder to untangle from the true climate signal because there is not as much ‘truth’ to compare to. We identified five clear non-climatic features of the pre-1860 temperature data from three stations, and found that the year-to-year temperature changes of the adjusted data matched up pretty well with rainfall observations, and newspaper reports at the time. The quality and distribution of the pre-1860 data made it impossible to accurately combine the pre- and post-1860 observations, but hopefully in the future we will uncover more observations and can look more closely at the 1788–1860 period.

Our work is one of the first projects to tackle Australia’s pre-1910 temperature data, shedding more light on Australia’s climatic past. This research also makes use of the hard work that was done by some of the country’s scientific pioneers, which is very rewarding to me as a scientist.

But science is always moving forwards, and our work is just one link in the chain. As more statistical methods are developed and additional data uncovered, I’m sure that we, professional and citizen scientists alike, will be able to build on this work in the future.





More information and references:

Ashcroft, L., Gergis, J. and Karoly, D.J., 2014. A historical climate dataset for southeastern Australia. Geosciences Data Journal, 1(2): 158–178, DOI: 10.1002/gdj3.19 (html and PDF). You can also access the 1788–1860 data at https://zenodo.org/record/7598.

Ashcroft, L., Karoly, D.J. and Gergis, J., 2012. Temperature variations of southeastern Australia, 1860–2011. Australian Meteorological and Oceanographic Journal 62: 227–245. Download here (PDF).

Sunday, 3 May 2015

Gavin Schmidt, welcome to Hamburg



Bad Astronomer is mad:
A passel of anti-science global warming denying GOP [USA Republican party] representatives have put together a funding authorization bill for NASA that at best cuts more than $300 million from the agency’s current Earth science budget. At worst? More than $500 million. ... The authorization bill passed along party lines (19 Republicans to 15 Democrats).
Bad Astronomer also reported that last year the Republicans shifted the climate research funding for NOAA towards weather prediction.

John Timmer of Ars Technica adds:
The bill comes a week after the same committee reauthorized the America COMPETES act, which includes funding for the National Science Foundation and Department of Energy. As at NASA, geoscience funding takes a hit, down 12 percent at the NSF, with environmental research from the DOE taking a 10 percent hit.

There seems to be a pattern.

One wonders if they know what they are doing.

I would personally interpret a reduction in the budget for climate research as claiming: the science is settled. If these Republicans were sceptical about climate science, they would want to fund research to find the reason for the misunderstanding. The consensus of 97% of climate scientists, myself included, that global warming is happening, is caused by us and will continue if we do not do something, will not go away by itself. That will require research, arguments, evidence. In this light it would make sense if Democrats shifted science funding from climate research to climate solutions, but they may realise that there are more considerations.

It also goes against the motto of the mitigation sceptics that we should help ourselves and adapt to climate change, rather than reduce greenhouse gas emissions. Because then we need to know what to adapt to.

When it comes to the relationship between greenhouse gases and global mean temperature, our understanding of climate change is pretty solid. Not perfect, science never is, but pretty solid. For adaptation, however, you need local information; global means are not enough. That is a lot harder; it requires that all the changes in the circulation of the oceans and the atmosphere are predicted correctly. It likely also depends on aerosol concentrations (small atmospheric particles), on changes in vegetation, water tables and sea ice.

Many impacts of climate change will be due to severe and extreme weather. Sea level rise, for example, endangers low-lying regions, but the sea dikes will breach on a stormy day, so you also need to know how storms change. For adaptation you thus not only need to know the annual or decadal average temperature, you need to know the changes in atmospheric events that happen on short time scales: from minutes to days for severe weather, from weeks to months for heat waves and droughts. This is hard for the same reasons why predicting local changes is hard.

It is not enough to know what happens to temperature; especially precipitation and storms are very hard to predict accurately. They are, however, very important for agriculture, infrastructure, flood prevention, dikes, and landslides. When it comes to infrastructure or long-term private investments, we would need to know such changes decades in advance. Alternatively, you could "adapt" to any possible change, but that is very expensive.

This is the kind of detail we need to prepare our communities to adapt to climate change. This is hard and very much ongoing research. If you are taking the bet that adaptation is enough, it does not seem wise to leave the American public unprepared.

Earth observation is also much more than climate. The same satellites, the same understanding of these measurements and the same information products derived from them are used in meteorology. One of the main reasons why casualties due to severe weather are decreasing is good weather predictions: we see the bad weather coming and can respond in time. Good weather predictions start with a good description of the state of the atmosphere at the start of the forecast (called [[data assimilation]]). More computer power, better models, better assimilation methods and detailed global Earth observations are responsible for the improved modern weather predictions. My guess would be that the better observations are easily responsible for half of the improvements. While I work on ground-based measurements, I must admit that for accurate weather predictions beyond one or two days the global overview of satellites is essential.
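For readers who wonder what data assimilation actually does: schematically, it blends the model's previous forecast with the new observations, weighted by their respective uncertainties. In its simplest (Kalman-type) textbook form the analysis state is

```latex
x_a = x_b + K\,(y - H x_b), \qquad
K = B H^{\mathsf{T}} \left( H B H^{\mathsf{T}} + R \right)^{-1}
```

where x_b is the model background, y the observations, H the operator mapping the model state to the observed quantities, and B and R the background and observation error covariances. This is a generic sketch, not the specific scheme of any particular weather centre.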

Other applications of Earth observations:
- assisting wildfire managers in wildfire recovery
- supplying farmers with knowledge about when to grow which crops and where
- drought/famine prediction
- the effects of deforestation and natural disasters (such as landslides, earthquakes, hurricanes, etc.) on local communities and surrounding ecosystems
Earth observation is also important to organize the rescue work after catastrophes. Think Hurricane Katrina.



The funding reductions also give the impression that climatologists are punished for their politically inconvenient message. Maybe these Republicans think that they can influence the state of the science by beating scientists into submission. This will not work. Science is not organised like a think tank, which is there to write any bunk that the big boss wants written.

Science is a free market of ideas. Like the free market uses distributed information to organize an economy efficiently, science is highly distributed and cannot be controlled from the top. Every researcher is a small entrepreneur, trying to search for problems that are interesting and solvable. Science is organised in small groups. If your group does not function, you'd better get out before your reputation and publication record suffer. Multiple such groups are at one university or research institute. In one country you will find many universities and institutes. All these groups in many countries are competing and collaborating with each other. Competing for the best ideas, because it is fun and because good ideas bring more possibilities to do research. The currency is reputation.

Your articles are peer reviewed by several anonymous colleagues selected by a journal editor, research proposals are reviewed by several senior anonymous colleagues selected by the funding agency, and the university groups and institutes are regularly reviewed by groups of senior scientists. You are competing to be able to collaborate with better groups. This web of competitive and collaborative relations is designed to let the best ideas float up and to make it hard to apply pressure top down. Add to this researchers who are fiercely independent, intrinsically motivated and do science because they want to challenge themselves, understand the world and measure themselves against the best. At least the majority of people starting out in research are intrinsically motivated. The climate "debate" gives the impression that for some the joy of science fades with age.

Even if it were better for American scientists to shut up and spread Republican propaganda, no one could enforce that, while there are strong enforcement mechanisms for the quality of science. I am sure that most American scientists who were pressured by the Republicans would still stick to the truth. You get into science because you want to understand reality. Scientists accept low pay and bad labour conditions to do so. If they are no longer allowed to tell the truth, I would expect most scientists to move to another group, to another country or simply to stop.



Punishing scientists for their inconvenient message also sends a bad signal to the world. A country with a government meddling in the results of scientific research is not attractive. A large part of the scientists in America come from abroad. New high potentials may now think twice before they go to America.

Researchers may go to my birth country (The Netherlands) or home country (Germany) instead. They have freedom of science and research in their constitutions. In both countries you only need English to do your work and nowadays you can also get by in daily life with English (although I would still advise learning the local language for better social integration). For Germany I know that we are always looking for good scientists. Too few students start studying meteorology to fill the vacancies. They even took me as a physicist because they could not get any better.

In the 17th century, when most of Europe's rulers did not tolerate deviating thoughts, The Netherlands was a haven of tolerance and experienced a golden century. This little country attracted an enormous influx of the best scholars, scientists and artists from all over Europe, introduced many free market innovations, and became a world power. Before and during the Second World War many Jewish and German scientists migrated to America because of the repression in Europe. This kick-started American science. The repression of scientific freedom has real economic and cultural consequences.

One wonders if they know what they are doing. The GOP representatives, that is. They almost unanimously have trouble accepting that we are responsible for (almost) all the warming seen in the last century. The normal Republicans (except for the Tea Party) fit into the American mainstream when it comes to accepting that climate change is real. The very vocal mitigation sceptics on the net who give America a bad name abroad only represent a few percent of the population. One wonders when these normal people will tell their politicians to get their act together.

In the realm of climate research, my guess would be that Europe is already a little stronger than the USA. The aggressive mitigation sceptics do not make America more attractive. The FOIA harassment in response to inconvenient science does not make America more attractive. If the political radicals are determined to hurt American interests and pass this bill, I would like to invite Gavin Schmidt to Germany. I am sure the Max Planck Institute in Hamburg would be interested. The Max Planck Society was specially founded to attract the best researchers from all over the world by providing them with a lot of freedom of research. After all, top researchers know best what is important for science. Gavin, welcome to Hamburg, your [[ICE]] is waiting.





Related reading

Phil Plait (Bad Astronomy blog): House GOP Wants to Eviscerate NASA Earth Sciences in New Budget

Elizabeth Kolbert in The New Yorker: The G.O.P.’s War on Science Gets Worse. "Ignoring a problem does often make it more difficult to solve. And that, you have to assume, in a perverse way, is the goal here."

Discover: House GOP to Humanity on Global Warming: Put on This Blindfold and Keep Marching

John Timmer of Ars Technica: House Science Committee guts NASA Earth sciences budget

Stop all harassment of all scientists now

Peer review helps fringe ideas gain credibility

The value of peer review for science and the press

The Tea Party consensus on man-made global warming

Do dissenters like climate change?

Climate myths translated into econ talk



* Photo at the top of Alster by André H. (An der Alster) [CC BY 2.0], via Wikimedia Commons
* Photo of St. Pauli by Heidas is licensed under the Creative Commons Attribution-Share Alike 3.0 Unported license.
* Photo of Altonaer Balkon in Hamburg by Udo Herzog (http://flickr.com/photos/udo/142872336/) [CC BY 2.0], via Wikimedia Commons
* Photo of Jenisch by Wolfram Gothe (Transferred from de.wikipedia) [Public domain], from Wikimedia Commons
* Photo of the main station of Hamburg by Thomas Fries / Lizenz: CC-BY-SA-3.0

Thursday, 30 April 2015

4th anniversary of Variable Variability


The previous birthdays I have simply forgotten, my apologies, but today is the 4th anniversary of Variable Variability.

More than with usual birthdays, the day is a bit arbitrary. For many years I had already written the occasional essay for my homepage: ideas that I wanted to share but that were not big enough for an article, or thoughts that I was afraid I would forget again with my terrible memory.

The first post of Variable Variability on the 30th of April 2011 was thus a republication of an essay from 2004 on the fractal nature of clouds and the limits of the fractal approximation.

The first two posts specially written for Variable Variability are: Is obesity bias evolutionary? and Good ideas, motivation and economics, both on the 28th of May 2011 somehow. Must have been a writing-fever day.

They are still good posts, but not much read because of the time they were published. To reduce your weight it may help to consciously eat less and move more, but that is an empirical question; it is not as obvious as many claim. Science is a creative profession and knowing where good ideas come from can thus be helpful. The second skill would be a good intuition about which of these problems to tackle. You also need technical skills, but they are overrated.

Another contender for the beginning of this blog would be January 2012, when I posted a press release on the famous homogenization validation study of the COST Action HOME. From a meeting several years before, I vaguely knew that Roger Pielke Sr. was interested in homogenization. Thus I asked him if he was interested in a repost. Roger referred me to someone called Anthony Watts of Watts Up With That. Since the referral came from a good source, Watts asked me whether he could repost. Maybe he noticed that I was not too enthusiastic (I had read some WUWT posts by that time), or maybe he read the post (and did not like the message that homogenization methods improve temperature trend estimates), because in the end he did not post my press release.

To promote the press release I started to look a bit more at other blogs and comment there. Now I see this as part of blogging, thus you could see this as the start of blogging.

Mitigation sceptics seem to have a tendency to think that scientific articles are written for the purpose of pissing them off. But they are not that important. Most scientists hardly know they exist. I did know that there was some trouble on the other side of the Atlantic in the USA, that a larger group there was sceptical about climate change. Being sceptical is sympathetic and there are so many real uncertainties. So why not? I was completely unprepared for the industrial production of deceit and misinformation and the rock bottom quality of the nonsense these people made up. I still wonder why mitigation sceptics do not orient themselves by the real uncertainties and problems; they are spelled out in detail in every IPCC report. It sometimes looks like winning a debate is less important to the most vocal mitigation sceptics than angering greenies, which is best done by stubbornly repeating the most stupid arguments you have. (In the light of the discussion we are just having at ATTP, let me add that this is not an argument; I do not think this paragraph will convince anyone of climate change, I am just describing how I personally see the situation.)

Like the blogger of And Then There's Physics, I naively thought that it may be helpful to explain the mistakes in the posts of WUWT. Now keeping more of an eye on other blogs, I noticed a WUWT post about an erroneous conference abstract by Koutsoyiannis, which was called a "new peer reviewed paper". I wrote: "As a goal-oriented guy, Anthony Watts found the two most erroneous statements". That was the first time I got some readers and at least Watts admitted that there was no peer review. The other errors remained.

Earlier this year I wrote a short update:
Now two and a half year later, there is still no new activity on this [Koutsoyiannis] study. I guess that that means that Koutsoyiannis admits his conference abstract did not show what the State of the Climate and WUWT wanted you to believe. Together with all the other WUWT fails, it is clear that someone interested in climate change should never get his information from WUWT.
But well, who is listening to me, I do not have the right party book.

A little later Anthony Watts put WUWT on hold for the weekend because he was working on something big. He had written a manuscript and a press release and asked for blog reviews. Thus I wrote a blog review about this still unpublished manuscript. This was the first time this blog got a decent number of comments and the time people informed each other in comments elsewhere that there was this new blog on the block. That was fun.

I have somewhat given up on debunking misinformation and aim to write more about what is really important. Sometimes the misinformation is a nice hook to get people interested. I had wanted to announce the birth of the new Task Team on Homogenization (TT-HOM) anyway. By connecting this to the review of the UK Policy Foundation with its silly loaded questions, more people now know about TT-HOM than otherwise would have; normally setting up a new Task Team would have been a boring bureaucratic act not many people would be interested in.

Debunking does not seem to have much impact, even in cases of clear misquotations and when Watts deceives his readers about his number of readers, there is no bulge. The number of WUWT readers is a small matter, but it does not require any science skills to check, everyone can see this was deception. However, no one complained about being lied to.

America's problems with mitigation sceptics are not because of the science, mitigation sceptics seem to make this claim to make adult political debate about (free market) solutions impossible. By not complaining about deception they make clear that their issue is not the science. Their response to a new scientific paper is almost completely determined by the question: can it be spun into a case against mitigation? If they were interested in science, their response would be determined by the quality of the study. America's problems with mitigation sceptics can thus also not be solved by science or science communication. The American society will have to find a political solution for their toxic political atmosphere. Maybe that simply starts by talking to your neighbour, even if he has the wrong bumper sticker. Go on the street and make it clear to everyone this is important and something to think more about and make it clear to politicians that it is costly to peddle nonsense.

Wishes

Blogging has been a lot of fun. I like writing, playing with ideas and debating.

The first two posts have received 50 to 100 pageviews so far. They were purely written for fun.

If you write a scientific article you can also be happy to get 100 readers. But if it inspires someone else to continue the thought, to build on it, it is worth it.

Compared to the number of readers a scientist is used to, this blog is a huge megaphone. Thank you all for reading, your interest is very rewarding and nowadays another reason to write posts. With that megaphone also comes responsibility. I guess that mitigation sceptics will not like much of what I write, but when colleagues do not like something I hope for their honest feedback. That would be valuable and is much appreciated.

Blogging has given me a broader perspective on climatology and science in general. Where else would I have heard of the beautiful ideas of John Ziman on the social dimension of science? (Thanks Mark!)

A birthday is a good day to wish something. Do you have anything you would like me to write about? I will not promise anything, I still have about 100 drafts in the queue, but welcome new ideas. I am always surprised how much people read my posts about how science works and the intricacies of the scientific culture. It is my daily life, almost like writing about washing the dishes. It is easy for me to have a blind spot there, thus if you have any questions on science, just shoot.

Please do not wish a post why homogenization increased the trend for some station in Kyrgyzstan. Not on a birthday.




* Photo of fireworks by AndreasToerl (Own work) [CC BY-SA 3.0], via Wikimedia Commons

Monday, 27 April 2015

Two new reviews of the homogenization methods used to remove non-climatic changes

By coincidence this week two initiatives have been launched to review the methods used to remove non-climatic changes from temperature data. One initiative was launched by the Global Warming Policy Foundation (GWPF), a UK free-market think tank. The other by the Task Team on Homogenization (TT-HOM) of the Commission for Climatology (CCl) of the World Meteorological Organization (WMO). Disclosure: I chair the TT-HOM.

The WMO is one of the oldest international organizations and has meteorological and hydrological services in almost all countries of the world as its members. The international exchange of weather data has always been important for understanding the weather and to make weather predictions. The main role of the WMO is to provide guidance and to define standards that make collaboration easier. The CCl coordinates climate research, especially when it comes to data measured by national weather services.

The review on homogenization, which the TT-HOM will write, is thus mainly aimed at helping national weather services produce better quality datasets to study climate change. This will allow weather services to provide better climate services to help their nations adapt to climate change.

Homogenization

Homogenization is necessary because much has happened in the world since the beginning of the instrumental record: the French and industrial revolutions, two world wars, the rise and fall of communism, and the start of the internet age. Inevitably many changes have occurred in climate monitoring practices. Many global datasets start in 1880, the year toilet paper was invented in the USA and three decades before the Ford Model T.

As a consequence, the instruments used to measure temperature have changed, the screens that protect the sensors from the weather have changed, and the surroundings of the stations have often changed, with stations being moved in response. These non-climatic changes in temperature have to be removed as well as possible to make more accurate assessments of how much the world has warmed.

Removing such non-climatic changes is called homogenization. For the land surface temperature measured at meteorological stations, homogenization is normally performed using relative statistical homogenization methods. Here a station is compared to its neighbours. If the neighbour is sufficiently nearby, both stations should show about the same climatic changes. Strong jumps or gradual increases happening at only one of the stations indicate a non-climatic change.
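To make the idea concrete, here is a toy relative detection sketch (operational methods such as SNHT or PRODIGE are considerably more sophisticated about significance testing and multiple breaks): it scans a candidate-minus-reference difference series for the split point that maximises the shift between the two segment means.

```python
import numpy as np

def largest_jump(difference_series, min_segment=12):
    """Toy detector: find the split of a candidate-minus-reference series
    that maximises the shift between the two segment means."""
    d = np.asarray(difference_series, dtype=float)
    best_index, best_shift = None, 0.0
    for i in range(min_segment, len(d) - min_segment):
        shift = d[i:].mean() - d[:i].mean()
        if abs(shift) > abs(best_shift):
            best_index, best_shift = i, shift
    return best_index, best_shift

# Invented difference series: white noise with a 0.5 °C break half way.
rng = np.random.default_rng(1)
series = rng.normal(0.0, 0.2, 240)
series[120:] += 0.5
print(largest_jump(series))   # should find a break near index 120
```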

If there is a bias in the trend, statistical homogenization can reduce it. How well trend biases can be removed depends on the density of the network. In industrialised countries a large part of the bias can be removed for the last century. In developing countries and in earlier times removing biases is more difficult and a large part may remain. Because many governments unfortunately limit the exchange of climate data, the global temperature collections can also remove only part of the trend biases.

Some differences

Some subtle differences. The Policy Foundation has six people from the UK, Canada and the USA, who do not work on homogenization. The WMO team has nine people who work on homogenization from Congo, Pakistan, Peru, Canada, the USA, Australia, Hungary, Germany, and Spain.

The TT-HOM team has simply started outlining its report. The Policy Foundation, by contrast, creates spin before it has any results, with publications in its newspapers and blogs, and it shows that it is biased to begin with when it writes [on its homepage]:
But only when the full picture is in will it be possible to see just how far the scare over global warming has been driven by manipulation of figures accepted as reliable by the politicians who shape our energy policy, and much else besides. If the panel’s findings eventually confirm what we have seen so far, this really will be the “smoking gun”, in a scandal the scale and significance of which for all of us can scarcely be exaggerated.
My emphasis. Talk about hyperbole by the click-whore journalists of the Policy Foundation. Why buy newspapers when their articles are worse than a random page on the internet? The Policy Foundation gave their team a very bad start.

Aims of the Policy Foundation

Hopefully, the six team members of the Policy Foundation will realise just how naive and loaded the questions they were supposed to answer are. The WMO has asked us whether we as TT-HOM would like to update our Terms of Reference; we are the experts after all. I hope the review team will update theirs, as that would help them to be seen as scientists seriously interested in improving science. Their current terms of reference are printed in italics below.

The panel is asked to examine the preparation of data for the main surface temperature records: HadCRUT, GISS, NOAA and BEST. For this reason the satellite records are beyond the scope of this inquiry.

I fail to see how an assertion by the Policy Foundation, made without any supporting arguments, constitutes a reason.

The satellite record is the most adjusted record of them all. The raw satellite data do not show much trend at all and initially even showed a cooling trend. All of the warming in this uncertain and short dataset is thus introduced the moment the researchers remove the non-climatic changes (differences between satellites, drifts in their orbits and in the height of the satellites, for example). A relatively small error in these adjustments thus quickly leads to large trend errors.

While independent studies for the satellite record are sorely missing, a blind validation study for station data showed that homogenization methods work. They reduce any temperature trend biases a dataset may have for reasonable scenarios. For this blind validation study we produced a dataset that mimics a real climate network with known non-climatic changes, so that we knew what the answer should be. We have a similar blind validation of the method used by NOAA to homogenize its global land surface data.
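The logic of such a blind validation can be sketched in a few lines of Python (purely illustrative; the real benchmarks use far more realistic networks and complete homogenization algorithms): create data with a known trend, insert a known break, "homogenize", and compare the recovered trends with the truth.

```python
import numpy as np

rng = np.random.default_rng(2)
years = np.arange(1900, 2000)
true_trend = 0.01                      # °C per year, known by construction

# Synthetic station: climate signal plus weather noise plus one inserted break.
truth = true_trend * (years - years[0])
raw = truth + rng.normal(0.0, 0.1, years.size)
raw[60:] -= 0.4                        # known non-climatic jump in 1960

# Here the "homogenization" simply removes the break we inserted; in the
# benchmark the algorithms have to find the breaks blind, without the truth.
homogenized = raw.copy()
homogenized[60:] += 0.4

for name, series in [("raw", raw), ("homogenized", homogenized)]:
    trend = np.polyfit(years, series, 1)[0]
    print(f"{name:12s}: {trend:.4f} °C/yr  (truth {true_trend:.4f})")
```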

The following questions will be addressed.

1. Are there aspects of surface temperature measurement procedures that potentially impair data quality or introduce bias and need to be critically re-examined?

Yes. A well-known aspect is the warming bias due to urbanization. This has been much studied and was found to produce only a small warming bias. A likely reason is that urban stations are regularly relocated to less urban locations.

On the other hand, the reasons for a cooling bias in land temperatures have been studied much too little. In a recent series, I mention several reasons why current measurements are cooler than those in the past: changes in thermometer screens, relocations and irrigation. At this time we cannot tell how important each of these individual reasons is. Any of these reasons is potentially important enough to explain the 0.2°C per century cooling trend bias found in the GHCNv3 land temperatures. The reasons mentioned above could together explain a much larger cooling trend bias, which could dramatically change our assessment of the progress of global warming.

2. How widespread is the practice of adjusting original temperature records? What fraction of modern temperature data, as presented by HadCRUT/GISS/NOAA/BEST, are actual original measurements, and what fraction are subject to adjustments?

Or as Nick Stokes put it "How widespread is the practice of doing arithmetic?" (hat tip HotWhopper.)

Almost all longer station measurement series contain non-climatic changes. There is about one abrupt non-climatic change every 15 to 20 years. I know of two long series that are thought to be homogeneous: Potsdam in Germany and Mohonk Lake, New York, USA. There may be a few more. If you know more please write a comment below.

It is pretty amazing that the Policy Foundation knows so little about climate data that it asked its team to answer such a question. A question everyone working on the topic could have answered. A question that makes most sense when seen as an attempt to deceive the public and insinuate that there are problems.

3. Are warming and cooling adjustments equally prevalent?

Naturally not.

If we were sure that warming and cooling adjustments were of the same size, there would be no need to remove non-climatic changes from climate data before computing a global mean temperature signal.

It is known in the scientific literature that the land temperatures are adjusted upwards and the ocean temperatures are adjusted downwards.

It is pretty amazing that the Policy Foundation knows so little about climate data that it asked its team to answer such a question. A question everyone working on the topic could have answered. A question that makes most sense when seen as an attempt to deceive the public and insinuate that there are problems.

4. Are there any regions of the world where modifications appear to account for most or all of the apparent warming of recent decades?

The adjustments necessary for the USA land temperatures happen to be large, about 0.4°C.

That is explained by two major transitions: a change in the time of observation from afternoons to mornings (about 0.2°C) and the introduction of automatic weather stations (AWS), which in the USA happens to have produced a cooling bias of 0.2°C. (The bias due to the introduction of AWS depends on the design of the AWS and the local climate and thus differs a lot from network to network.)
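Biases from such transitions are often quantified with parallel (side-by-side) measurements of the old and new practice; a minimal sketch with invented numbers, just to show the principle:

```python
import numpy as np

# Invented monthly means from an overlap period in which the old practice
# (afternoon readings) and the new practice (morning readings) ran in parallel.
old_practice = np.array([21.3, 22.1, 19.8, 18.4, 20.9, 21.7, 22.4, 20.2])
new_practice = np.array([21.1, 21.9, 19.5, 18.3, 20.7, 21.4, 22.1, 20.0])

# The mean difference is the adjustment needed to put the two periods on a
# common footing; the standard error gives a feel for its uncertainty.
diff = new_practice - old_practice
bias = diff.mean()
stderr = diff.std(ddof=1) / np.sqrt(diff.size)
print(f"estimated transition bias: {bias:+.2f} ± {stderr:.2f} °C")
```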

The smaller the meteorological network or region you consider, the larger the biases you can find. Many of them average out on a global scale.

5. Are the adjustment procedures clearly documented, objective, reproducible and scientifically defensible? How much statistical uncertainty is introduced with each step in homogeneity adjustments and smoothing?

The adjustments to the global datasets are objective and reproducible. These datasets are so large that there is no option other than processing them automatically.

The GHCN raw land temperatures are published, the processing software is published, and everyone can repeat it. The same goes for BEST and GISS. "Clearly documented" and "defensible" are matters of opinion and this can always be improved. But if the Policy Foundation is not willing to read the scientific literature, clear documentation does not help much.

Statistical homogenization reduces the uncertainty of large-scale trend errors. Another loaded question.

Announcement full of bias and errors

Also the article [by Christopher Booker in the Telegraph and reposted by the Policy Foundation] announcing the review by the Policy Foundation is full of errors.

Booker: The figures from the US National Oceanic and Atmospheric Administration (NOAA) were based, like all the other three official surface temperature records on which the world’s scientists and politicians rely, on data compiled from a network of weather stations by NOAA’s Global Historical Climate Network (GHCN).

No, the Climate Research Unit and BEST gather data themselves. They do also use GHCN land surface data, but would certainly notice if that data showed more or less global warming than their other data sources.

Also the data published by national weather services show warming. If someone assumes a conspiracy, it would be a very large one. Real conspiracies tend to be small and short.

Booker: But here there is a puzzle. These temperature records are not the only ones with official status. The other two, Remote Sensing Systems (RSS) and the University of Alabama (UAH), are based on a quite different method of measuring temperature data, by satellites. And these, as they have increasingly done in recent years, give a strikingly different picture.

The long-term trend is basically the same. The satellites see much stronger variability due to El Niño, which makes them better suited for cherry picking short periods, if one is so inclined, or single months, if one is a Policy Foundation.

Booker: In particular, they will be wanting to establish a full and accurate picture of just how much of the published record has been adjusted in a way which gives the impression that temperatures have been rising faster and further than was indicated by the raw measured data.

None of the studies using the global mean temperature will match this criterion because contrary to WUWT wisdom the adjustments reduce the temperature trend, which gives the "impression" that temperatures have been rising more slowly and less than was indicated by the raw measured data.

The homepage of the Policy Foundation team shows a graph for the USA (in Fahrenheit), reprinted below. This is an enormous cherry pick. The adjustments necessary for the USA land temperatures happen to be large and warming, about 0.4°C. The reasons for this were explained above in the answer to GWPF question 4.



That the US non-climatic changes are large relative to other regions should be known to somewhat knowledgeable people. Presented without context on the homepage of the Policy Foundation and The Telegraph, it will fool the casual reader by suggesting that this is typical.

[UPDATE. I have missed one rookie mistake. Independent expert Zeke Hausfather says: Its a bad sign that this new effort features one graph on their website: USHCN version 1 adjusted minus raw. Unfortunately, USHCN v1 was replaced by USHCN v2 (with the automated PHA rather than manual adjustments) about 8 years ago. The fact that they are highlighting an old out-of-date adjustment graph is, shall we say, not a good sign.]

For the global mean temperature, the net effect of all adjustments is a reduction in the warming. The raw records show a stronger warming due to non-climatic changes, which climatologists reduce by homogenization.

Thus what really happens is the opposite of what happens to the USA land temperatures shown by the Policy Foundation. They do not show this because it does not fit their narrative of activist scientists, but this is the relevant temperature record with which to assess the magnitude of global warming and thus the relevant adjustment.



Previous reviews

I am not expecting serious journalists to write about this. [UPDATE. Okay, I was wrong about that.] Maybe later, when the Policy Foundation shows their results and journalists can ask independent experts for feedback. However, just in case, here is an overview of real work to ascertain the quality of the station temperature trend.

In a blind validation study we showed that homogenization methods reduce any temperature trend biases for reasonable scenarios. For this blind validation study we produced a dataset that mimics a real climate network. Into this data we inserted known non-climatic changes, so that we knew what the answer should be and could judge how well the algorithms work. It is certainly possible to make a scenario in which the algorithms would not work, but to the best of our understanding such scenarios would be very unrealistic.

We have a similar blind validation of the method used by NOAA to homogenize its global land surface data.

The International Surface Temperature Initiative (ISTI) has collected a large dataset with temperature observations. It is now working on a global blind validation dataset, with which we will not only be able to say that homogenization methods improve trend estimates, but also to get a better numerical estimate of by how much. (In more data sparse regions in developing countries, the methods probably cannot improve the trend estimate much, the previous studies were for Europe and the USA).

Then we have BEST by physicist Richard Muller and his group of non-climatologists who started working on the quality of station data. They basically found the same result as the mainstream climatologists. This group actually put in work and developed an independent method to estimate the climatic trends, rather than just do a review. The homogenization method from this group was also applied to the NOAA blind validation dataset and produced similar results.

We have the review and guidance of the World Meteorological Organization on homogenization from 2003. The review of the Task Team on Homogenization will be an update of this classic report.

Research priorities

The TT-HOM has decided to focus on monthly mean data used to establish global warming. Being a volunteer effort we do not have the resources to tackle the more difficult topic of changes to extremes in detail. If someone has some money to spare, that is where I would do a review. That is a seriously difficult topic where we do not know well how accurately we can remove non-climatic problems.

And as mentioned above, a good review of the satellite microwave temperature data would be very valuable. Satellite data is affected by strong non-climatic changes and almost its entire trend is due to homogenization adjustments; a relatively small error in the adjustments thus quickly leads to large changes in their trend estimates. At the same time I do not know of a (blind) validation study nor of an estimate of the uncertainty in satellite temperature trends.

If someone has some money to spare, I hope it is someone interested in science, no matter the outcome, and not a Policy Foundation with an obvious stealth agenda, clearly interested in a certain outcome. It is good that we have science foundations and universities to fund most of the research; funders who are interested in the quality of the research rather than the outcome.

The interest is appreciated. Homogenization is too much of a blind spot in climate science. As Neville Nicholls, one of the heroes of the homogenization community, writes:
When this work began 25 years or more ago, not even our scientist colleagues were very interested. At the first seminar I presented about our attempts to identify the biases in Australian weather data, one colleague told me I was wasting my time. He reckoned that the raw weather data were sufficiently accurate for any possible use people might make of them.
One wonders how this colleague knew this without studying it.

In theory it is nice that some people find homogenization so important as to do another review. It would be better if those people were scientifically interested. The launch party of the Policy Foundation suggests that they are interested in spin, not science. The Policy Foundation review team will have to do a lot of work to recover from this launch party. I would have resigned.


Related reading

Just the facts, homogenization adjustments reduce global warming

HotWhopper must have a liberal billionaire and a science team behind her. A great, detailed post: Denier Weirdness: A mock delegation from the Heartland Institute and a fake enquiry from the GWPF

William M. Connolley gives his candid take at Stoat: Two new reviews of the homogenization methods used to remove non-climatic changes

Nick Stokes: GWPF inquiring into temperature adjustments

And Then There's physics: How many times do we have to do this?

The Independent: Leading group of climate change deniers accused of creating 'fake controversy' over claims global temperature data may be inaccurate

Phil Plait at Bad Astronomy comment on the Telegraph piece: No, Adjusting Temperature Measurements Is Not a Scandal

John Timmer at Ars Technica is also fed up with being served the same story about some upward adjusted stations every year: Temperature data is not “the biggest scientific scandal ever” Do we have to go through this every year?

The astronomer behind the blog "And Then There's Physics" writes about why the removal of non-climatic effects makes sense. In the comments he talks about adjustments made to astronomical data. Probably every numerical observational discipline of science performs data processing to improve the accuracy of its analysis.

Steven Mosher, a climate "sceptic" who has studied the temperature record in detail and is no longer sceptical about that, reminds us of all the adjustments demanded by the "sceptics".

Nick Stokes, an Australian scientist, has a beautiful post that explains the small adjustments to the land surface temperature in more detail.

Statistical homogenisation for dummies

A short introduction to the time of observation bias and its correction

New article: Benchmarking homogenisation algorithms for monthly data

Bob Ward at the Guardian: Scepticism over rising temperatures? Lord Lawson peddles a fake controversy

Friday, 24 April 2015

I set a WMO standard and all I got was this lousy Hirsch index - measuring clouds and rain

Photo of lidar ceilometer in front of WMO building

This week we had the first meeting of the new Task Team on Homogenization of the Commission for Climatology. More on this later. This meeting was at the headquarters of the World Meteorological Organization (WMO) in Geneva, Switzerland. I naturally went by train (only 8 hours), so that I could write about scientists flying to meetings without having to justify my own behaviour.

The WMO naturally had to display meteorological instruments in front of the entrance. They are not exactly ideally sited, but before someone starts screaming: the real observations are made at the airport of Geneva.

What was fun for me to see was that they had tilted their ceilometer at a small angle. In the above photo, the ceilometer is the big white instrument at the front right of the lodge. A ceilometer works on the same principle as a radar, but it works with light and is used to measure the height of the cloud base. It sends out a short pulse of light and measures how long (or rather how short a time) it takes until light scattered by the cloud base returns to the instrument. The term radar stands for RAdio Detection And Ranging. A ceilometer is a simple type of lidar: LIght Detection And Ranging.
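The ranging itself is simple arithmetic: the height of the cloud base is half the round-trip travel time of the pulse times the speed of light. A minimal sketch; the 12-microsecond round-trip time below is just an example value:

```python
# Time-of-flight ranging, the principle behind a ceilometer (and a radar).
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def range_from_round_trip(round_trip_seconds):
    """Distance to the scattering target: the pulse has to travel there and back."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# Example: light scattered back after 12 microseconds comes from about 1.8 km.
print(f"cloud base at {range_from_round_trip(12e-6) / 1000:.2f} km")
```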

For my PhD and first postdoc I worked mostly on cloud measurements, and we used the same type of ceilometer, alongside many other instruments. Clouds are very hard to measure and you need a range of instruments to get a reasonable idea of what a cloud looks like. The light pulse of the ceilometer is extinguished very quickly in a water cloud. Thus, just like we cannot see into a cloud with our eyes, the ceilometer cannot do much more than detect the cloud base.

We also used radars; the radiowaves transmitted by a radar are only weakly scattered by clouds. This means that the radio pulses can penetrate the cloud and you can measure the cloud top height. Radiowaves are, however, scattered much more strongly by large droplets than by small ones. The small, freshly developed cloud droplets that are typically found at the cloud base are thus often not detected by the radar. Combining radar and lidar, you can measure the extent of the lowest cloud layer reasonably accurately.
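A rough way to see why the radar misses the small droplets: in the Rayleigh regime (drops much smaller than the radar wavelength) the backscatter of a single drop scales with roughly the sixth power of its diameter. That scaling is not stated above, but it is the standard approximation, and it means that a handful of large drops dominates the signal while a million small droplets contribute almost nothing:

```python
# Rayleigh approximation: the radar backscatter of a single drop scales with diameter**6.
def radar_contribution(diameter_mm, count):
    return count * diameter_mm ** 6

small_droplets = radar_contribution(diameter_mm=0.01, count=1_000_000)  # many 10 um cloud droplets
large_drops = radar_contribution(diameter_mm=1.0, count=10)             # a few 1 mm rain drops

print(f"a million 10 um droplets: {small_droplets:.1e}")
print(f"ten 1 mm drops:           {large_drops:.1e}")
print(f"ratio (large / small):    {large_drops / small_droplets:.0e}")
```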

You can also measure the radiowaves emitted by the atmosphere with a so-called radiometer. If you do so at multiple wavelengths, that gives you an idea of the total amount of cloud water in the atmosphere, but not at which height the clouds are; that we know from the lidar and radar. If you combine radar, ceilometer and radiometer, you can measure the clouds quite accurately.

To measure very thin clouds, which the radiowave radiometer does not see well, you can add an infra-red (heat radiation) radiometer. Like the lidar, the infra-red radiometer cannot look into thick clouds, for which the radiowave radiometer is thus important. And so on.

Cheery tear drops illustrate the water cycle for kids. You may think that every drop of rain that falls from the sky, or each glass of water that you drink, is brand new, but it has always been here and is part of The Water Cycle.

Why is the lidar tilted? That is because of the rain. People who know rain from cartoons may think that a rain drop is elongated like a tear drop or like a drop running down a window. Free-falling rain drops are, however, actually wider than they are high. Small ones are still quite round due to the surface tension of the droplet, but larger ones deform more easily. Larger drops fall faster and thus experience more friction from the air. This friction is strongest in the middle and makes the droplet broader than high. If a rain drop gets really big, its base can become flat and even develop a dip in the middle. The next step would be that the friction breaks the big drop up.

If a lidar is pointed vertically, it will measure the light reflected back by the flattened bases of the rain drops. When their base is flat, drops reflect almost like a mirror. If you point the lidar at an angle, the part of the drop's surface facing the beam is rounder and the drop reflects the light into a larger range of directions. A tilted lidar will thus measure less reflected light coming back from rain drops. Because the aim of the ceilometer is to measure the base of the cloud, it helps not to see the rain too much; that improves the contrast.

I do not know whether anyone uses lidar to estimate the rain rate (there are better instruments for that), but even in that case the small tilt is likely beneficial. It makes the relationship between the rain rate and the amount of backscattered light more predictable, because it then depends less on the drop size.

The large influence of the tilt angle of the lidar can be seen in the measurement below. What you see is the height profile of the amount of backscattered light over a period of about an hour. During this time, I changed the tilt angle of the lidar every few minutes to see whether it makes a difference. The angle away from the vertical, in degrees, is written near the bottom of the measurement. In the rain, below 1.8 km, you can see the effect of the tilt angle explained above.


The lidar backscatter (Vaisala CT-75K) in the rain as a function of the pointing angle (left). The angle in degrees is indicated by the big number at the bottom (zenith = 0). The right panel shows the profiles of the lidar backscatter, radar reflectivity (dBZ), and radar velocity (m/s) from the beginning (till 8.2 hrs) of the measurement. For more information see this conference contribution.

At the beginning of the above measurement (until 8.2 h), you can see a layer with only weak reflections at 1.8 km. This is the melting layer, where snow and ice melt into rain drops. The weak reflections you see between 2.5 and 2 km are thus snow falling from the cloud, which itself shows up as a strong reflection at 2.5 km.

An even more dramatic example of a melting layer can be seen below at 2.2 km. The radar sees the melting layer as a strongly reflecting layer, whereas the melting layer is a dark band for the lidar.


Graph with the radar reflection for 23rd April.

Graph with the lidar reflection for 23rd April.

The snow reflects the light of the lidar more strongly than the melting particles do. When snow or ice particles melt into rain drops, they become more transparent; just watch a snowflake or hailstone melt in your hand. Snowflakes, furthermore, collapse and become smaller, and the number of particles per volume decreases because the melted particles fall faster. These effects reduce the reflectivity in the top of the melting layer, where the snow melts.

What is still not understood is why the reflectivity of the particles increases again below the melting layer. I was thinking of specular reflections from the flat bottoms of the rain drops, which develop once the particles are mostly melted and fall fast. However, you can also see this increase in reflections below the melting layer in the tilted lidar measurements, so specular reflections cannot explain it fully.

Another possible explanation would be that when the snowflake is very large, the drop it produces is too large to be stable and explodes into many small drops. This would increase the total surface area of the drops a lot, and the amount of light that is scattered back depends mainly on this surface area. This probably does not happen as explosively in nature as in the laboratory example below, but maybe it contributes something.
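The surface-area argument is easy to check with a little arithmetic (a sketch of mine, not from the measurement itself): splitting one drop into N equal droplets keeps the volume constant but multiplies the total surface area by the cube root of N. The drop size below is just an example value.

```python
import math

def total_surface_area(total_volume, n_droplets):
    """Total surface area when a water volume is split into n equal spherical droplets."""
    radius = (3.0 * total_volume / (4.0 * math.pi * n_droplets)) ** (1.0 / 3.0)
    return n_droplets * 4.0 * math.pi * radius ** 2

big_drop_volume = 4.0 / 3.0 * math.pi * (2.5e-3) ** 3   # one 5 mm diameter drop
for n in (1, 10, 100, 1000):
    ratio = total_surface_area(big_drop_volume, n) / total_surface_area(big_drop_volume, 1)
    print(f"{n:5d} droplets -> {ratio:4.1f} times the surface area")  # ratio equals n**(1/3)
```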



To be honest, I am not sure whether we were the first ones to tilt the lidar to see the cloud base better. It is very well possible that the instrument is designed to be tilted for this purpose. But if we were, and the custom spread all the way to the WMO headquarters, it would be one of the many ideas and tasks academics work on that do not lead to more citations or a better Hirsch index. These citations are unfortunately the main way in which managers and bureaucrats nowadays measure scientific output.
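For readers who have not met the Hirsch index: it is the largest number h such that h of your papers each have at least h citations. A minimal sketch of the computation, with made-up citation counts:

```python
def h_index(citation_counts):
    """Largest h such that h of the papers have at least h citations each."""
    h = 0
    for rank, citations in enumerate(sorted(citation_counts, reverse=True), start=1):
        if citations >= rank:
            h = rank
        else:
            break
    return h

# Made-up citation counts for illustration only.
print(h_index([120, 45, 30, 12, 7, 5, 3, 1, 0]))  # -> 5
```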

For my own publications, which I know best, I can clearly say that if I rank them by my own estimate of how important they are, you get a completely different list than when you rank them by the number of citations. The two ranked lists are related, but only weakly.

The German Science Foundation (DFG) thus also rightly rejects, in its guidelines on scientific ethics, the assessment of individuals or small groups by their citation metrics (page 22). When you send a research proposal to the DFG, you have to confirm that you have read these guidelines. I am not sure whether all people involved with the DFG have read them, though.


Further information

A collection of beautiful remote sensing measurements.

On cloud structure. Essay on the fractal beauty of clouds and the limits of the fractal approximation.

Wired: Should We Change the Way NSF Funds Projects? Trust scientists more. Science is wasteful; if we knew the outcome in advance, it would not be science.

On consensus and dissent in science - consensus signals credibility.

Peer review helps fringe ideas gain credibility.

Are debatable scientific questions debatable?


* The cartoon of a tear-shaped rain drop is by the USGS. The diagram of raindrop shapes is from NASA's Precipitation Measurement Missions. Both can thus be considered to be in the U.S. public domain.

Wednesday, 15 April 2015

Why raw temperatures show too little global warming

In the last few months I have written several posts on why raw temperature observations may show too little global warming. Let's put it all in perspective.

People who have followed the climate "debate" have probably heard of two potential reasons why raw data would show too much global warming: urbanization and the quality of the siting. These are the two non-climatic changes that mitigation sceptics promote, claiming that they are responsible for a large part of the observed warming in the global mean temperature records.

If you only know of biases producing a trend that is artificially too strong, it may come as a surprise that the raw measurements actually have too small a trend and that removing non-climatic changes increases the trend. For example, in the Global Historical Climate Network (GHCNv3) of NOAA, the land temperature change since 1880 is increased by about 0.2°C by the homogenization method that removes non-climatic changes. See figure below.

(If you also consider the adjustments made to ocean temperatures, the net effect of the adjustments is that they make the global temperature increase smaller.)


The global mean temperature estimates from the Global Historical Climate Network (GHCNv3) of NOAA, USA. The red curve shows the global average temperature in the raw data. The blue curve is the global mean temperature after removing non-climatic changes. (Figure by Zeke Hausfather.)

The adjustments are not always that "large". The Berkeley Earth group makes much smaller adjustments. The global mean temperature of Berkeley Earth is shown below. However, as noted by Zeke Hausfather in the comments below, even the curve for which the method did not explicitly detect breakpoints partially homogenizes the data, because the method penalises stations that have a very different trend than their neighbours. After removal of non-climatic changes, BEST comes to a similar climatic trend as seen in GHCNv3.


The global mean temperature estimates from the Berkeley Earth project (previously known as BEST), USA. The blue curve is computed without using their method to detect breakpoints, the red curve the temperature after adjusting for non-climatic changes. (Figure by Steven Mosher.)
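As a toy illustration of that down-weighting idea (this is not the actual Berkeley Earth algorithm; the trends and the weighting function below are invented for the example): compute each station's trend, compare it with the median of the group, and give strongly deviating stations little weight in the average.

```python
import numpy as np

def regional_trend(station_trends, scale=0.5):
    """Weighted mean trend that down-weights stations far from the median of the group.
    A toy only; Berkeley Earth uses a far more sophisticated iterative procedure."""
    trends = np.asarray(station_trends, dtype=float)
    weights = 1.0 / (1.0 + ((trends - np.median(trends)) / scale) ** 2)  # invented weighting
    return float(np.average(trends, weights=weights))

# Trends in degrees C per century (made up); the last station has a non-climatic jump.
trends = [0.9, 1.0, 1.1, 0.95, 3.0]
print(f"plain mean:    {np.mean(trends):.2f}")
print(f"weighted mean: {regional_trend(trends):.2f}")
```

The outlier station barely moves the weighted mean, which is the point of penalising deviating trends.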

Let's go over the reasons why the temperature trend may show too little warming.
Urbanization and siting
Urbanization warms the location of a station, but these stations also tend to move away from the centre to better locations. What matters is where the stations were at the beginning of the observations and where they are now: how much too warm the original location was and how much too warm the current one is. This effect has been studied a lot, and urban stations seem to have about the same trend as their surrounding (more) rural stations.
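A toy calculation of this start-versus-end point (my sketch, with invented bias values; a warm bias of half a degree is of the order found in the village study mentioned below):

```python
def spurious_trend(bias_start, bias_end, years):
    """Non-climatic trend (degrees C per century) caused purely by a change in local bias."""
    return (bias_end - bias_start) / years * 100.0

# Invented example: a station starts in a village centre (about +0.5 C too warm) and
# ends at a well-sited spot outside the village (+0.0 C) over a 100-year record.
print(f"{spurious_trend(bias_start=0.5, bias_end=0.0, years=100):+.2f} C per century")
# -> -0.50 C per century: an artificial cooling that homogenization should remove.
```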
 
A recent study for two villages showed that the current location of the weather station is half a degree centigrade cooler than the centre of the village. Many stations started in villages (or cities): thermometers used to be expensive scientific instruments, they were operated by highly educated people, and they had to be read daily. Thus the siting of many stations may have improved, which would lead to a cooling bias.
 
When a city station moves to an airport, which happened a lot around WWII, this takes the station (largely) out of the urban heat island. Furthermore, cities are often located near the coast and in valleys. Airports may thus often be located at a higher altitude. Both reasons could lead to a considerable cooling for the fraction of stations that moved to airports.
 
Changes in thermometer screens
During the 20th century the Stevenson screen was established as the dominant thermometer screen. This screen protected the thermometer much better against radiation (solar and heat) than earlier designs. Deficiencies of the earlier measurement methods artificially warmed the temperatures measured in the 19th century.
 
Some claim that early Stevenson screens were painted with inferior paints. The sun consequently heats up the screen more, which in turn warms the incoming air. The introduction of modern, durable white paints may thus have produced a cooling bias.
 
Currently we are in a transition to Automatic Weather Stations. This can produce large changes in either direction in the networks where they are introduced. What the net global effect is, is not clear at the moment.
 
Irrigation
Irrigation on average decreases the 2m-temperature by about 1 degree centigrade. At the same time, irrigation has spread enormously during the last century. People preferentially live in irrigated areas and weather stations serve agriculture. Thus it is possible that weather stations are more likely to be erected in irrigated areas than elsewhere. In this case irrigation could lead to a spurious cooling trend. For suburban stations, an increase in the watering of gardens could also produce a spurious cooling trend.

It is understandable that in the past the focus was on urbanization as a non-climatic change that could make the warming in the climate records too strong. Back then the focus was on whether climate change was happening at all (detection). To make a strong case, science had to show that even the minimum climatic trend was too large to be due to chance.

Now that we know that the Earth is warming, we no longer just need a minimum estimate of the temperature trend, but the best estimate of the trend. For a realistic assessment of models and impacts we need the best estimate of the trend, not just the minimum possible trend. Thus we need to understand the reasons why raw records may show too little warming and quantify these effects.

Just because the mitigation sceptics are talking nonsense about the temperature record does not mean that there are no real issues with the data, nor that statistical homogenization can remove trend errors sufficiently well. This is a strange blind spot in climate science. As Neville Nicholls, one of the heroes of the homogenization community, writes:
When this work began 25 years or more ago, not even our scientist colleagues were very interested. At the first seminar I presented about our attempts to identify the biases in Australian weather data, one colleague told me I was wasting my time. He reckoned that the raw weather data were sufficiently accurate for any possible use people might make of them.
One wonders how this colleague knew this without studying it.

The reasons for a cooling bias have been studied much too little. At this time we cannot tell which reason is how important. Any of these reasons is potentially important enough to explain the 0.2°C per century trend bias found in GHCNv3, especially in the light of the large range of possible values, a range that we often cannot even estimate at the moment. In fact, all the above-mentioned reasons together could explain a much larger trend bias, which could dramatically change our assessment of the progress of global warming.

The fact is that we cannot quantify the various cooling biases at the moment and it is a travesty that we can't.


Other posts in this series

Irrigation and paint as reasons for a cooling bias

Temperature trend biases due to urbanization and siting quality changes

Changes in screen design leading to temperature trend biases

Temperature bias from the village heat island