Wednesday, 27 August 2014

A database with parallel climate measurements

By Renate Auchmann and Victor Venema


A parallel measurement with a Wild screen and a Stevenson screen in Basel, Switzerland. The double-louvre Stevenson screen protects the thermometer well against solar and heat radiation. The half-open Wild screen provides more ventilation, but was found to be affected too much by radiation errors. In Switzerland, Wild screens were replaced by Stevenson screens in the 1960s.

We are building a database with parallel measurements to study non-climatic changes in the climate record. In a parallel measurement, two or more measurement set-ups are compared to each other at one location. Such data is analyzed to see how much a change from one set-up to another affects the climate record.

This post first gives a short overview of the problem and our first achievements, and then describes our proposal for a database structure. The main aim of this post is to get some feedback on this structure.

Parallel measurements

Quite a lot of parallel measurements have been performed (see this list for a first selection of the datasets we found), but they have often only been analyzed for a change in the mean. This is a pity, because parallel measurements are especially important for studies on non-climatic changes in weather extremes and weather variability.

Studies on parallel measurements typically analyze single pairs of measurements, in the best cases a regional network is studied. However, the instruments used are often somewhat different in different networks and the influence of a certain change depends on the local weather and climate. Thus to draw solid conclusions about the influence of a specific change on large-scale (global) trends, we need large datasets with parallel measurements from many locations.

Studies on changes in the mean can be relatively easily compared with each other to get a big picture. But changes in the distribution can be analyzed in many different ways. To be able to compare changes found at different locations, the analysis needs to be performed in the same way. To facilitate this, gathering the parallel data in a large dataset is also beneficial.

Organization

Quite a number of people stand behind this initiative. The International Surface Temperature Initiative and the European Climate Assessment & Dataset have offered to host a copy of the parallel dataset. This ensures the long term storage of the dataset. The World Meteorological Organization (WMO) has requested its members to help build this databank and provide parallel datasets.

However, we do not have any funding. Last July, at the SAMSI meeting on the homogenization of the ISTI benchmark, people felt we could no longer wait for funding and that it was really time to get going. Furthermore, Renate Auchmann offered to invest some of her time in the dataset; that doubles the manpower. Thus we have decided to simply start and see how far we can get this way.

The first activity was a one-page information leaflet with some background information on the dataset, which we will send to people when requesting data. The second activity is this blog post: a proposal for the structure of the dataset.

Upcoming tasks are the documentation of the directory and file formats, so that everyone can work with it. The data processing from level to level needs to be coded. The largest task is probably the handling of the metadata (data about the data). We will have to complete a specification for the metadata needed. A webform where people can enter this information would be great. (Does anyone have ideas for a good tool for such a webform?) And finally the dataset will have to be filled and analyzed.

Design considerations

Given the limited manpower, we would like to keep it as simple as possible at this stage. Thus data will be stored in text files and the hierarchical database will simply use a directory tree. Later on, a real database may be useful, especially to make it easier to select the parallel measurements one is interested in.

In addition to the parallel measurements, related measurements should be stored. For example, to understand the differences between two temperature measurements, additional measurements (co-variates) on, for example, insolation, wind or cloud cover are important. Metadata also needs to be stored and should be machine readable as much as possible. Without meta-information on how the parallel measurement was performed, the data is not useful.

We are interested in parallel data from any source, variable and temporal resolution. High resolution (sub-daily) data is very important for understanding the reasons for any differences. There is probably more data, especially historical data, available for coarser resolutions and this data is important for studying non-climatic changes in the means.

However, we will scientifically focus on changes in the distribution of daily temperature and precipitation data in the climate record. Thus, we will compute daily averages from sub-daily data and will use these to compute the indices of the Expert Team on Climate Change Detection and Indices (ETCCDI), which are often used in studies on changes in “extreme” weather. In actively searching for data, we will prioritize instruments that were widely used to perform climate measurements, as well as early historical measurements, which are rarer and are expected to show larger changes.

Following the principles of the ISTI, we aim to be an open dataset with good provenance, that is, it should be possible to tell where the data comes from. For this reason, the dataset will have levels with increasing degrees of processing, so that one can go back to a more primitive level if one finds something interesting/suspicious.

For this same reason, the processing software will also be made available and we will try to use open software (especially the free programming language R, which is widely used in statistical climatology) as much as possible.

It will be an open dataset in the end, but as an incentive to contribute to the dataset, initially only contributors will be able to access the data. After joint publications, the dataset will be opened for academic research as a common resource for the climate sciences. In any case, people using data from a small number of sources are requested to cite them explicitly, so that contributing to the dataset also makes the value of performing parallel measurements visible.

Database structure

The basic structure has 5 levels.

0: Original, raw data (e.g. images)
1: Native format data (as received)
2: Data in a standard format at original resolution
3: Daily data
4: ETCCDI indices

In levels 2, 3 & 4 we will provide information on outliers and inhomogeneities.

Especially for the study of extremes, the removal of outliers is important. Suggestions for good software that would work for all climate regions are welcome.
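To illustrate the kind of screening that could be applied to a parallel pair, here is a minimal sketch using a median/MAD rule on the difference series. The rule and the threshold are illustrative assumptions only; a production method would need tuning per climate region.

```python
import numpy as np

def flag_outliers(series_a, series_b, k=5.0):
    """Flag days where the difference between two parallel series
    deviates more than k robust standard deviations from its median.
    (Illustrative rule only; the threshold k is an assumption.)"""
    diff = np.asarray(series_a) - np.asarray(series_b)
    med = np.median(diff)
    mad = np.median(np.abs(diff - med))
    robust_sd = 1.4826 * mad  # MAD -> standard deviation for normal data
    return np.abs(diff - med) > k * robust_sd

# Example: one gross transcription error in otherwise similar series
a = np.array([10.1, 11.3, 9.8, 10.5, 25.0, 10.9])
b = np.array([10.0, 11.1, 9.9, 10.2, 10.7, 10.8])
print(flag_outliers(a, b))  # only the fifth value is flagged
```

Working on the difference of the parallel pair has the advantage that the weather itself largely cancels, so even modest outliers stand out.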

Longer parallel measurements may, furthermore, also contain inhomogeneities. We will not homogenize the data, because we want to study the raw data, but we will detect breaks and provide their date and size as metadata, so that the user can work on homogeneous subperiods if interested. This detection will probably be performed at monthly or annual scales with one of the HOME recommended methods.

Because parallel measurements will tend to be well correlated, it is possible that statistically significant inhomogeneities are very small and climatologically irrelevant. Thus we will also provide information on the size of the inhomogeneity so that the user can decide whether such a break is problematic for this specific application or whether having longer time series is more important.

Level 0 - images

If possible, we will also store the images of the raw data records. This enables the user to see if an outlier may be caused by unclear handwriting or whether the observer explicitly wrote that the weather was severe that day.

If the normal measurements are already digitized, only the parallel ones need to be transcribed. In that case the number of values is limited and we may be able to do the transcription ourselves. Both Bern and Bonn have facilities to digitize climate data.

Level 1 – native format

Even if it means more work for us, we would like to receive the data in its native format and convert it ourselves to a common standard format. This allows users to check whether mistakes were made in the conversion and to correct them.

Level 2 – standard format

In the beginning our standard format will be an ASCII format. Later on we may also use a scientific data format such as NetCDF. The format will be similar to the one of the COST Action HOME. Some changes to the filenames will be needed to account for multiple measurements of the same variable at one station and for multiple indices computed from the same variable.

Level 3 - daily data

We expect that an important use of the dataset will be the study of non-climatic changes in daily data. At this level we will thus gather the daily datasets and convert the sub-daily datasets to daily.
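As a minimal sketch of this conversion step, daily means could be computed with pandas. The 3-hourly sampling, the made-up values and the completeness rule are illustrative assumptions, not the project's final choices.

```python
import numpy as np
import pandas as pd

# Hypothetical 3-hourly temperatures for two days (values are made up)
times = pd.date_range("1900-01-01", periods=16, freq="3h")
temps = pd.Series(np.arange(16, dtype=float), index=times)

# Aggregate to daily means, but only for complete days:
# missing sub-daily observations would bias the daily average
counts = temps.resample("D").count()
daily_mean = temps.resample("D").mean().where(counts == 8)
print(daily_mean)
```

Note that historical daily means were often not simple averages, but computed from fixed observation hours or as (Tmax+Tmin)/2; the aggregation rule used is thus itself metadata worth storing with the level-3 data.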

Level 4 – ETCCDI indices

Many people use the indices of the ETCCDI to study changes in extreme weather. Thus we will precompute these indices. Also, where government policies do not allow sharing the daily data itself, it may still be possible to obtain the indices. The same strategy is used by the ETCCDI in regions where data availability is scarce and/or data accessibility is difficult.
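As one concrete example, the ETCCDI list contains the index FD, the annual count of frost days (days with a daily minimum temperature below 0°C). A sketch of this level-3-to-level-4 computation, with invented data for illustration:

```python
import numpy as np
import pandas as pd

# Hypothetical daily minimum temperatures (°C): below zero for the
# first 60 days of the year, mild afterwards (invented data)
dates = pd.date_range("1950-01-01", "1950-12-31", freq="D")
tmin = pd.Series(np.where(dates.dayofyear <= 60, -2.0, 5.0), index=dates)

# ETCCDI "frost days" (FD): annual count of days with Tmin < 0 °C
fd = (tmin < 0.0).groupby(tmin.index.year).sum()
print(fd)  # 1950: 60 frost days
```

Percentile-based indices such as TX90p additionally require a base period for the thresholds, which is one more reason to precompute the indices centrally in a consistent way.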

Directory structure

In the main directory there are the sub-directories: data, documentation, software and articles.

In the sub-directory data there are sub-directories for the data sources with names d###; with d for data source and ### is a running number of arbitrary length.

In these directories there are up to 5 sub-directories with the levels and one directory with “additional” metadata such as photos and maps that cannot be copied in every level.

In the level 0 and level 1 directories, the climate data, the flag files and the machine-readable metadata are stored directly in the directory itself.

Because one data source can contain more than one station, in the levels 2 and higher there are sub-directories for the various stations. These sub-directories will be called s###; with s for station.

Once we have more data and until we have a real database, we may also provide a directory structure first ordered by the 5 levels.

The filenames will contain information on the station and variable. In the root directory we will provide machine-readable tables detailing which variables can be found in which directories, so that people interested in a certain variable know which directories to read.
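To illustrate the idea, the lookup table could be as simple as a CSV file in the root directory. All names here (d001, s001, level2, variables.csv) are placeholders following the proposal above, not a final convention.

```python
from pathlib import Path
import csv
import tempfile

# Build a toy version of the proposed tree: data/d###/level#/s###
root = Path(tempfile.mkdtemp())
for source, station in [("d001", "s001"), ("d001", "s002"), ("d002", "s001")]:
    (root / "data" / source / "level2" / station).mkdir(parents=True, exist_ok=True)

# Machine-readable table in the root mapping variables to directories
with open(root / "variables.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["variable", "directory"])
    writer.writerow(["temperature", "data/d001"])
    writer.writerow(["precipitation", "data/d002"])

# A user interested in one variable reads the table, not the whole tree
with open(root / "variables.csv") as f:
    dirs = [row["directory"] for row in csv.DictReader(f)
            if row["variable"] == "temperature"]
print(dirs)  # ['data/d001']
```

Such a flat table is trivial to regenerate from the tree, so it can be kept consistent automatically until a real database takes over this role.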

For the metadata we are currently considering using XML, which can be read into R. (Are there similar packages for Matlab and FORTRAN?) Suggestions for other options are welcome.
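A sketch of what reading such metadata could look like, here in Python with the standard library. The tags and values are invented for illustration; the actual metadata specification is still to be written.

```python
import xml.etree.ElementTree as ET

# Hypothetical metadata record for one parallel measurement
xml_text = """
<parallel_measurement>
  <station id="s001">Basel</station>
  <variable>temperature</variable>
  <screen_old>Wild</screen_old>
  <screen_new>Stevenson</screen_new>
  <start>1961-01-01</start>
</parallel_measurement>
"""

record = ET.fromstring(xml_text)
meta = {child.tag: child.text for child in record}
print(meta["screen_old"], "->", meta["screen_new"])  # Wild -> Stevenson
```

Because XML parsers exist for essentially every language used in climatology, the choice would not tie users to any particular analysis environment.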

What do you think? Is this a workable structure for such a dataset? Suggestions are welcome in the comments or by mail (Victor Venema & Renate Auchmann).

Related reading

A database with daily climate data for more reliable studies of changes in extreme weather
The previous post provides more background on this project.
CHARMe: Sharing knowledge about climate data
An EU project to improve the meta information and therewith make climate data more easily usable.
List of Parallel climate measurements
Our Wiki page listing a large number of resources with parallel data.
Future research in homogenisation of climate data – EMS 2012 in Poland
A discussion on homogenisation at a Side Meeting at EMS2012
What is a change in extreme weather?
Two possible definitions, one for impact studies, one for understanding.
HUME: Homogenisation, Uncertainty Measures and Extreme weather
Proposal for future research in homogenisation of climate network data.
Homogenization of monthly and annual data from surface stations
A short description of the causes of inhomogeneities in climate data (non-climatic variability) and how to remove it using the relative homogenization approach.
New article: Benchmarking homogenization algorithms for monthly data
Raw climate records contain changes due to non-climatic factors, such as relocations of stations or changes in instrumentation. This post introduces an article that tested how well such non-climatic factors can be removed.

Sunday, 24 August 2014

The Tea Party consensus on man-made global warming

Dan Kahan, Professor of Law and Psychology at Yale, produced a remarkable plot about the attitude towards global warming of Tea Party supporters.

Kahan of the Cultural Cognition Project is best known for his thesis that climate "sceptics" should be protected from the truth and that no one should mention the fact that there is a broad agreement (consensus) among climate scientists that we are changing the climate.

Without having the scientific papers to back it up, reading WUWT and Co. leaves one with the impression that there are many more scientific claims on climate change that would make these "sceptics" more defensive. They may actually be willing to pay not to hear them. We could use the money to stimulate renewable energy; to reduce air pollution in the West naturally, not for mitigation of global warming that would help everyone.

Tea Party

Maybe I should explain for the non-American readers that the Tea Party is a libertarian, populist and conservative political movement against taxes that gained prominence when the first "black" US president was elected.

It is well known that members of the Tea Party are more dismissive of global warming than the rest of the Republicans or the Democrats in the USA. It could have been that Tea Party members are simply "more Republican" than other people calling themselves Republican. The plot below by Dan Kahan suggests, however, that identifying with the Tea Party is an important additional dimension.

In fact, normal Republicans and Democrats are not even that different. The polarization in the USA is in large part due to the Tea Party. Especially when you consider that the non-Tea-Party Republicans at the right end of the scale may still have a more tax-libertarian disposition than the ones more in the middle.

For me the most striking part is how sure Tea Party members claim to be that global warming is no problem. On average they see global warming as a very low risk; the average is a one on a scale from zero to seven. Given how close that average is to the extreme of the scale, there cannot be much variability. There thus probably is a consensus among Tea Party members that global warming is a low risk. That was something Kahan did not explicitly write in his post.

That is quite a consensus for a position without scientific evidence. I guess we are allowed to call this groupthink, given that many climate "sceptics" even call a consensus with evidence groupthink.



Related reading

Real conservatives are conservationists by Barry Bickmore (a conservative).
"The radical libertarians’ knee-jerk rejection of the scientific consensus on climate change isn’t just anti-Conservative. It borders on sociopathy in its extreme anti-intellectualism and recklessness."
The conservative family values of Christian man Anthony Watts
A post on the extremist and anti-intellectual atmosphere at WUWT and Co.
Planning for the next Sandy: no relative suffering would be socialist
Some people seem to be willing to suffer losses as long as others suffer more. This leads to the question: "Do dissenters like climate change?"

Monday, 28 July 2014

Is the US historical network temperature trend too strong?

Climate dissenters often claim that the observed temperature trend is not only due to global warming, but in large part due to local effects: increases in urbanization around the stations or somehow bad micro-siting.

A few days ago I had a twitter discussion with Ronan Connolly. He and his father claim that 0.2°C per century of the temperature increase in the USA is due to urbanization and 0.1°C per century is due to micro-siting. That is quite a lot. Together it would be almost half of the temperature trend seen in the main global datasets.


One of the great things about America is that it has a climate reference network (USCRN). The normal observations are made by meteorologists and contain non-climatic effects that are not relevant for meteorology, but are for climatology. Thus, to track accurately what is happening to the climate, NOAA has set up a climate reference network that follows high climatological standards. The main thing for this post is that these stations are located in pristine locations, without any problems with urbanization or micro-siting.

We only have data from this network starting in 2005. That is only a decade of data, but if the problems with the normal data are as large as Connolly claims, I thought we might be able to see some differences between the reference network and the normal US Historical Climatology Network (USHCN). In the USHCN, non-climatic effects have been removed as well as possible with the pairwise homogenization algorithm (PHA) of NOAA.

The figure from NOAA below (in Fahrenheit) shows that the USHCN (normal) and USCRN (reference) track each other quite closely. If you look at the details, you can see that the USCRN is actually a little below the USHCN in the beginning and a little above at the end. In other words, the temperatures of the reference network are warming faster than those of the normal network. The opposite of what the climate dissenters would expect.



Let's have a more detailed look at the difference between the two networks in the following graph. It shows that the warming in the reference network was 0.09°C stronger per decade. For comparison with the trend due to global warming, you could also say that it is 0.9°C stronger per century. That is just as much as the observed global warming trend.



That the trend in the normal data is an underestimate of the true warming is no surprise to me. The trend of the raw American data has a strong cooling bias. Removing non-climatic effects (homogenization) increases the temperature trend since 1880 by 0.4°C. We also know that homogenization can make trend estimates more reliable, but cannot fully remove the bias. Thus it was likely that there was a remaining cooling bias.

The cooling bias could be due to a number of effects. An important cooling bias in the USA is the transition from conventional observations with a cotton region shelter to automatic weather stations (maximum-minimum temperature systems). This transition is almost complete and was most intense in the previous century. Another bias could be the relocation of city stations to airports, which mainly took place before and during the Second World War. The increased interest in climate change may also have increased attention to urbanization and micro-siting, which may thus have improved over time due to relocations. (Does anyone know any articles on that? I only know one for Austria.) There has also been a marked increase in the irrigation of gardens and cropland over the last century.

That the effect is this strong is something we should probably not take seriously (yet). We only have nine values and thus a large uncertainty. In addition, homogenization is less powerful near the edges of the data: to detect changes in the mean, you need to be able to compute the mean with sufficient accuracy on both sides of a candidate break. As a consequence, NOAA does not adjust the last 18 months of the data, while half of the trend is due to the last two values. Still, an artificial warming of the USHCN, as the climate dissenters claim, seems highly unlikely.

This cooling bias is an interesting finding. Even if we should not take the magnitude too seriously, it shows that we should study cooling biases in climate observations with much more urgency. The past focus on detecting climate change has led to a focus on warming biases, especially urbanization. Now that that problem is settled, we need the best estimate of the climatic changes and not just the minimum estimate.

Maybe even more importantly, it shows that we need climate reference networks in every country. Especially to study climatic changes in extreme weather in daily station data, data that is much harder to homogenize than the annual means. We are performing a unique experiment with our global climate system. Future scientists will never forgive us if we do not measure what is happening as accurately as we can.

[UPDATE. In case anyone wants to analyse the "dataset", here it is:
diff = [0.017 0.050 0.022 0.017 0.022 0.017 -0.006 -0.039 -0.033]; % Difference USHCN-USCRN in °C
year = 2005:2013;
]
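The trend quoted in the post can be reproduced from these nine values; a minimal sketch with numpy, using a simple least-squares fit and ignoring the large uncertainty discussed above:

```python
import numpy as np

# Difference USHCN minus USCRN in °C, copied from the update above
diff = np.array([0.017, 0.050, 0.022, 0.017, 0.022,
                 0.017, -0.006, -0.039, -0.033])
year = np.arange(2005, 2014)

# Slope of the difference series; a negative slope means the reference
# network warms faster than the homogenized historical network
slope = np.polyfit(year, diff, 1)[0]
print(round(10 * slope, 3), "°C per decade")  # about -0.087
```

With nine points the fitted slope comes with a wide confidence interval, which is exactly the caveat made in the post.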

Tuesday, 22 July 2014

Six sleep and jetlag tips

Bed time at the Hohenzollernbrücke Kölner Dom

Blogging has been light lately; I was at a workshop on statistics and homogenization in the USA. For a European like me, that is another continent, 8 hours away. Thus I thought I'd share some jetlag tips, most of which are good sleeping tips in general as well. The timing is good: many people have trouble sleeping during the warm summer nights.

As far as I know, science does not really understand why we sleep. My guess would be: variability. Which is always my answer to stuff we do not understand. Most problems involving only the mean have been solved by now.

By doing the repairs and maintenance of your body at night, when there was not much else to do in the times before electrical light, you can allocate more energy to other stuff during the day and, for example, outrun someone who is repairing his cells all the time. Creating some variability in tasks between day and night thus seems to make evolutionary sense.

(The part I do not understand is why you have to lie down and close your eyes. Isn't it enough to simply rest? That seems to be so much less dangerous. But maybe the danger was not that large in bands where everyone had their own sleeping rhythm and someone was awake at most times.)

To differentiate between day and night, you need internal clocks to coordinate the action. Clocks that tell you to increase your cortisol in the hour before waking up, to get your body ready for action again. Clocks that tell you to reduce urine production during the night. Clocks that reduce the motility of your intestines while sleeping. Clocks that tell you to wind down and get ready for sleep in the evening. And so on.

These chemical clocks need to be synchronized. They do so mainly by light, but I have heard claims that movement is also a signal these clocks use to keep track of the time. Without synchronization, most people have an internal clock that runs one or more hours late and produces days that are longer than 24 hours. This natural period varies considerably. Night owls, most scientists for example, have longer internal days than early birds. I seem to be an extreme owl and can stay awake and concentrated all night. The rising sun is sometimes my last reminder that I really need to get to bed, because otherwise my time becomes too far out of sync with the rest of society.

1. Take your natural rhythm into account

Which brings me to tip 1. Or maybe experiment 1. For me as an owl, flying East is hard. It makes the day shorter and the days are already much too short for me anyway. In case of this last flight home, it made the day 8 hours shorter, not 24, but only 16 hours. Horror. Thus you have to go to bed well before you are tired and consequently cannot sleep.

My experiment was to stay awake during the flight. That made my day not 8 hours shorter, but 16 hours longer. Such a 40 hour day is probably too much for most, but given my natural long day, this seems to have worked perfectly for me. I hardly had any jetlag this time, almost like flying West, which also comes easy for me. I am curious what the experiences of others are. And can this trick be used by early birds flying West as well?

2. Light exposure

Light is vital for setting your internal clocks. Try to get as much sun as possible after your flight. Walk to work, take breaks outside, eat your meals outside, whatever is feasible. Conferences are often held in darkened rooms, which mess up your clocks even without jetlag. Consider arriving early and spending your days before the conference outside.

On normal days too, night owls should make sure that they get as much light exposure as possible and get outdoors early in the day, to quickly tell their internal clock that it is day. It may help early birds to stay awake if they seek the sun later in the day.

3. Artificial light

Artificial light, especially blue light, fools your internal clocks into thinking it is still day. If you do not become sleepy and have trouble getting to sleep, try to limit your exposure to artificial light in the evening. There are large differences in the color of the light between light bulbs; select one that gives a nice warm glow and do not make the room too bright. The availability of artificial light is thought to have increased the variability in sleeping times by making it easier for night owls to stay awake.

4. Blue glowing screens

Monitors and smartphones also give off a lot of blue light. I have f.lux installed on all my computers; it removes the blue light component from your monitor. I am not sure it helped me, but it cannot hurt in any way as long as the work you do is not color-sensitive. (If it sometimes is, you can easily turn f.lux off.)

5. Pitch dark

Different Blindfolds for sleeping and resting
Make sure that your sleeping room is completely dark. This signals to your clocks that it is night. Doing so improved the quality of my sleep a lot. They say this becomes more important as you age. Before putting blinds on your windows or hanging up light-blocking curtains, you can test whether this is important for you by putting on a sleeping mask or simply laying a dark t-shirt over your eyes. (As an aside, sleeping on a firm surface rather than a mattress also improved the quality of my sleep. I am curious whether other people have similar experiences.)

6. Sleep rhythm

The ideal nowadays is to sleep in one long period. This may be a quite recent invention, made possible by artificial light, to be able to use the evening productively. Before that, people are thought to have slept for a period after sunset, woken for a few hours to do some of the stuff humans do, and then slept for another period. Even if this turns out not to be true, there is nothing wrong with sleeping in a few periods or with taking a nap. If you are awake, just get up, do something and try again later. I am writing this post in such a phase. Uncommonly for me, probably due to the jetlag, I was tired at 8pm and slept two hours. When this post is finished, I will sleep the other six hours.

Related to this: try not to use an alarm clock. I realize this is difficult for most people due to social pressures. In that case, you can set your alarm clock to a late time, so that you will often wake up before it goes off. Many people report that waking up with gradually increasing light intensity is more pleasant, but these devices are still alarm clocks.

What do you think? Do you have any experience with this? Any more tips that may be useful?


Tuesday, 8 July 2014

Understanding adjustments to temperature data

by Zeke Hausfather

There has been much discussion of temperature adjustment of late in both climate blogs and in the media, but not much background on what specific adjustments are being made, why they are being made, and what effects they have. Adjustments have a big effect on temperature trends in the U.S., and a modest effect on global land trends. The large contribution of adjustments to century-scale U.S. temperature trends lends itself to an unfortunate narrative that “government bureaucrats are cooking the books”.


Figure 1. Global (left) and CONUS (right) homogenized and raw data from NCDC and Berkeley Earth. Series are aligned relative to 1990-2013 means. NCDC data is from GHCN v3.2 and USHCN v2.5 respectively.

Having worked with many of the scientists in question, I can say with certainty that there is no grand conspiracy to artificially warm the earth; rather, scientists are doing their best to interpret large datasets with numerous biases such as station moves, instrument changes, time of observation changes, urban heat island biases, and other so-called inhomogeneities that have occurred over the last 150 years. Their methods may not be perfect, and are certainly not immune from critical analysis, but that critical analysis should start out from a position of assuming good faith and with an understanding of what exactly has been done.

This will be the first post in a three-part series examining adjustments in temperature data, with a specific focus on the U.S. land temperatures. This post will provide an overview of the adjustments done and their relative effect on temperatures. ...


Read more at Climate Etc.


(ht Hotwhopper)

Friday, 27 June 2014

Self-review of problems with the HOME validation study for homogenization methods

In my last post, I argued that post-publication review is no substitute for pre-publication review, but it could be a nice addition.

This post is a post-publication self-review: a review of our paper on the validation of statistical homogenization methods, also called benchmarking when it is a community effort. Since writing this benchmarking article we have come to understand the problem better and have found some weaknesses. I have explained these problems at conferences, but for the people who did not hear them, please find them below after a short introduction. We have a new paper in open review that explains how we want to do better in the next benchmarking study.

Benchmarking homogenization methods

In our benchmarking paper we generated a dataset that mimicked real temperature or precipitation data. To this data we added non-climatic changes (inhomogeneities). We requested the climatologists to homogenize this data, to remove the inhomogeneities we had inserted. How good the homogenization algorithms are can be seen by comparing the homogenized data to the original homogeneous data.

This is straightforward science, but the realism of the dataset was the best to date and because this project was part of a large research program (the COST Action HOME) we had a large number of contributions. Mathematical understanding of the algorithms is also important, but homogenization algorithms are complicated methods and it is also possible to make errors in the implementation, thus such numerical validations are also valuable. Both approaches complement each other.


Group photo at a meeting of the COST Action HOME with most of the European homogenization community present. These are those people working in ivory towers, eating caviar from silver plates, drinking 1985 Romanee-Conti Grand Cru from crystal glasses and living in mansions. Enjoying the good life on the public teat, while conspiring against humanity.

The main conclusions were that homogenization improves the homogeneity of temperature data. Precipitation is more difficult and only the best algorithms were able to improve it. We found that modern methods improved the quality of temperature data about twice as much as traditional methods. It is thus important that people switch to one of these modern methods. My impression from the recent Homogenisation seminar and the upcoming European Meteorological Society (EMS) meeting is that this seems to be happening.

1. Missing homogenization methods

An impressive number of methods participated in HOME. Also many manual methods were applied, which are validated less because this is more work. All the state-of-the-art methods participated and most of the much used methods. However, we forgot to test a two- or multi-phase regression method, which is popular in North America.

Also not validated is HOMER, the algorithm that was designed afterwards using the best parts of the tested algorithms. We are working on this. Many people have started using HOMER. Its validation should thus be a high priority for the community.

2. Size of the breaks (random walk or noise)

Next to the benchmark data with the inserted inhomogeneities, we also asked people to homogenize some real datasets. This turned out to be very important because it allowed us to check how realistic the benchmark data is, information we need to make future studies more realistic. In this validation we found that the benchmark inhomogeneities were larger than those in the real data: expressed as the standard deviation of the break size distribution, the benchmark breaks were typically 0.8°C, while the real breaks were only 0.6°C.

This was already reported in the paper, but we now understand why. In the benchmark, the inhomogeneities were implemented by drawing a random number for every homogeneous period and perturbing the original data by this amount. In other words, we added noise to the homogeneous data. However, the homogenizers who requested breaks with a size of about 0.8°C were thinking of the difference from one homogeneous period to the next. The size of such a break is determined by two random numbers, and because variances are additive, the jumps implemented as noise were the square root of two (about 1.4) times too large.
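The square root of two factor is easy to check with a short simulation. This is only an illustrative sketch, not the HOME code; the 0.6°C perturbation scale is simply the value mentioned above.

```python
import numpy as np

rng = np.random.default_rng(42)

sigma = 0.6          # standard deviation of each period's perturbation (degrees C)
n_periods = 100_000

# Noise model: every homogeneous period gets an independent offset.
offsets = rng.normal(0.0, sigma, n_periods)

# A break is the jump from one period's offset to the next,
# i.e. the difference of two independent random numbers.
jumps = np.diff(offsets)

# Variances add, so the break sizes are sqrt(2) times larger than the offsets.
print(jumps.std() / offsets.std())  # close to sqrt(2), about 1.41
```

In other words, if the breaks between periods should have a standard deviation of 0.6°C, the per-period perturbations must be drawn with a standard deviation of 0.6/√2 ≈ 0.42°C.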

The validation showed that, except for the size, the idea of implementing the inhomogeneities as noise was a good approximation. The alternative would be to draw a random number and use it to perturb the data relative to the previously perturbed period; in that case the inhomogeneities are implemented as a random walk. Nobody thought of reporting it, but it seems that most validation studies have implemented their inhomogeneities as random walks. This makes the influence of the inhomogeneities on the trend much larger. Because of the larger error, it is probably easier to achieve relative improvements, but because the initial errors were larger in absolute terms, the absolute errors after homogenization may well have been too large in previous studies.

You can see the difference between a noise perturbation and a random walk by comparing the signs (up or down) of consecutive breaks. For example, in the case of noise, after a large upward jump the next change is likely to make the perturbation smaller again. In the case of a random walk, the size and sign of the previous break are irrelevant; the likelihood of either sign is one half.

In other words, in the case of a random walk there are just as many up-down and down-up pairs as there are up-up and down-down pairs; every combination has a chance of one in four. In the case of noise perturbations, up-down and down-up pairs (platform-like break pairs) are more likely than up-up and down-down pairs. The latter is what we found in the real datasets. There is a small deviation that suggests a small random walk contribution, but that may also be because the inhomogeneities cause a trend bias.
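This sign test can also be verified numerically. The sketch below is purely illustrative (unit break sizes, Gaussian noise, no trend bias): under the noise model consecutive breaks are anti-correlated, so roughly two thirds of the break pairs are platform-like, while under the random walk model it is exactly one half.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

def platform_fraction(breaks):
    """Fraction of consecutive break pairs with opposite signs (up-down or down-up)."""
    signs = np.sign(breaks)
    return np.mean(signs[:-1] != signs[1:])

# Noise model: independent offset per homogeneous period; a break is a difference.
offsets = rng.normal(size=n)
noise_breaks = np.diff(offsets)

# Random walk model: every break is itself an independent random number.
walk_breaks = rng.normal(size=n)

print(platform_fraction(noise_breaks))  # about 2/3: platform pairs dominate
print(platform_fraction(walk_breaks))   # about 1/2: previous break is irrelevant
```

For Gaussian noise the 2/3 can also be derived analytically: consecutive differences have correlation -0.5, which gives an opposite-sign probability of 1/2 + arcsin(1/2)/π = 2/3.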

3. Signal to noise ratio varies regionally

The HOME benchmark reproduced a typical situation in Europe (the USA is similar). However, the station density in much of the world is lower. Inhomogeneities are detected and corrected by comparing a candidate station to neighbouring ones. When the station density is lower, this difference signal is noisier, which makes homogenization more difficult. One would thus expect the performance of homogenization methods to be lower in other regions, although the break frequency and break size may also differ.

Thus to estimate how large the influence of the remaining inhomogeneities on the global mean temperature can be, we need to study the performance of homogenization algorithms in a wider range of situations. The signal (break size) to noise ratio is also important for the intercomparison of homogenization methods (the more limited aim of HOME). Domonkos (2013) showed that the ranking of various algorithms depends on the signal to noise ratio. Ralf Lindau and I have just submitted a manuscript showing that for low signal to noise ratios the multiple breakpoint method PRODIGE is not much better at detecting breaks than a method that would "detect" random breaks, while it works fine for higher signal to noise ratios. Other methods may also be affected, though possibly not to the same degree. More on that later.

4. Regional trends (absolute homogenization)

The initially simulated data did not have a trend, so we explicitly added a trend to all stations to give the data a regional climate change signal. This trend could be either upward or downward, to check whether homogenization methods have problems with downward trends, which are not typical of daily operations. They do not.

Had we inserted a simple linear trend in the HOME benchmark data, the operators of the manual homogenization methods could in theory have used this information to improve their performance: if the trend is not linear, there are apparently still inhomogeneities in the data. We wanted to keep the operators in the dark. Consequently, we inserted a rather complicated and variable nonlinear trend in the dataset.

As already noted in the paper, this may have handicapped the participating absolute homogenization method. Homogenization methods used in climate are normally relative ones: they compare a station to its neighbours, and because both have the same regional climate signal, that signal is removed and does not matter. Absolute methods do not use the information from the neighbours; they have to make assumptions about the variability of the real regional climate signal. Absolute methods have problems with gradual inhomogeneities, are less sensitive, and are therefore not used much.

If absolute methods participate in future studies, the trend should be modelled more realistically. When benchmarking only automatic homogenization methods (no operator), an easier trend should be no problem.

5. Length of the series

The station networks simulated in HOME were all one century long; some of the station series were shorter because we also simulated the build-up of the network during the first 25 years. We recently found that the criterion for the optimal number of break inhomogeneities used by one of the best homogenization methods (PRODIGE) does not have the right dependence on the number of data points (Lindau and Venema, 2013). For climate datasets that are about a century long the criterion is quite good, but for much longer or shorter datasets there are deviations. This illustrates that the length of the datasets matters and that for benchmarking the data availability should be the same as in real datasets.

Another reason why it is important that the benchmark data availability is the same as in the real dataset is that this makes the comparison of the inhomogeneities found in the real data and in the benchmark more straightforward. This comparison is important to make future validation studies more accurate.

6. Non-climatic trend bias

The inhomogeneities we inserted in HOME were on average zero. For individual stations this still results in clear non-climatic trend errors, because you average over only a small number of inhomogeneities. For full networks the number of inhomogeneities is larger and the non-climatic trend error thus very small. It was consequently very hard for the homogenization methods to improve these small errors. In real raw datasets a larger non-climatic error is expected. Globally the non-climatic trend will be relatively small, but within one network, where the stations experienced similar (technological and organisational) changes, it can be appreciable. Thus we should model such a non-climatic trend bias explicitly in future studies.
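This averaging effect can be illustrated with a toy simulation. The numbers are hypothetical (five breaks per station per century with a 0.6°C perturbation scale, implemented as noise as discussed above), not HOME's actual settings; the point is only that the spurious trend of a network mean shrinks roughly with the square root of the number of stations.

```python
import numpy as np

rng = np.random.default_rng(1)
n_years = 100
n_breaks = 5     # assumed breaks per station per century
sigma = 0.6      # assumed perturbation standard deviation (degrees C)
n_trials = 500

def station_perturbation():
    """Zero-mean noise inhomogeneities: an independent offset per homogeneous period."""
    positions = np.sort(rng.choice(np.arange(1, n_years), n_breaks, replace=False))
    bounds = np.concatenate(([0], positions, [n_years]))
    series = np.empty(n_years)
    for a, b in zip(bounds[:-1], bounds[1:]):
        series[a:b] = rng.normal(0.0, sigma)
    return series

def rms_trend_error(n_stations):
    """RMS spurious trend (degrees C per century) of an n-station network mean."""
    t = np.arange(n_years)
    slopes = []
    for _ in range(n_trials):
        network = np.mean([station_perturbation() for _ in range(n_stations)], axis=0)
        slopes.append(np.polyfit(t, network, 1)[0] * n_years)
    return float(np.sqrt(np.mean(np.square(slopes))))

print(rms_trend_error(1))    # appreciable for a single station
print(rms_trend_error(100))  # roughly ten times smaller for a 100-station network
```

Because the inserted inhomogeneities average to zero, the network error here is purely a sampling effect; a real network with a shared technological change would instead show a systematic bias that does not average away.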

International Surface Temperature Initiative

The last five problems will be solved in the International Surface Temperature Initiative (ISTI) benchmark. Whether a two-phase homogenization method will participate is beyond our control. We expect fewer participants than in HOME because, for such a huge global dataset, the homogenization methods will need to run automatically and unsupervised.

The standard break sizes will be made smaller. We will make ten benchmarking "worlds" with different kinds of inserted inhomogeneities and will also vary the size and number of the inhomogeneities. Because the ISTI benchmarks will mirror the real data holdings of the ISTI, the station density and the length of the data will be the same. The regional climate signal will be derived from a global circulation model, so absolute methods can also participate. Finally, we will introduce a clear non-climatic trend bias in several of the benchmark "worlds".

The paper on the ISTI benchmark is open for discussions at the journal Geoscientific Instrumentation, Methods and Data Systems. Please find the abstract below.

Abstract.
The International Surface Temperature Initiative (ISTI) is striving towards substantively improving our ability to robustly understand historical land surface air temperature change at all scales. A key recently completed first step has been collating all available records into a comprehensive open access, traceable and version-controlled databank. The crucial next step is to maximise the value of the collated data through a robust international framework of benchmarking and assessment for product intercomparison and uncertainty estimation. We focus on uncertainties arising from the presence of inhomogeneities in monthly surface temperature data and the varied methodological choices made by various groups in building homogeneous temperature products. The central facet of the benchmarking process is the creation of global scale synthetic analogs to the real-world database where both the "true" series and inhomogeneities are known (a luxury the real world data do not afford us). Hence algorithmic strengths and weaknesses can be meaningfully quantified and conditional inferences made about the real-world climate system. Here we discuss the necessary framework for developing an international homogenisation benchmarking system on the global scale for monthly mean temperatures. The value of this framework is critically dependent upon the number of groups taking part and so we strongly advocate involvement in the benchmarking exercise from as many data analyst groups as possible to make the best use of this substantial effort.


Related reading

Nick Stokes made a beautiful visualization of the raw temperature data in the ISTI database. Homogenized data, in which non-climatic trends have been removed, is unfortunately not yet available; it will be released together with the results of the benchmark.

New article: Benchmarking homogenisation algorithms for monthly data. The post describing the HOME benchmarking article.

New article on the multiple breakpoint problem in homogenization. Most work in statistics is about data with just one break inhomogeneity (change point). In climate there are typically more breaks. Methods designed for multiple breakpoints are more accurate.

Part 1 of a series on Five statistically interesting problems in homogenization.


References

Domonkos, P., 2013: Efficiencies of Inhomogeneity-Detection Algorithms: Comparison of Different Detection Methods and Efficiency Measures. Journal of Climatology, Art. ID 390945, doi: 10.1155/2013/390945.

Lindau and Venema, 2013: On the multiple breakpoint problem and the number of significant breaks in homogenization of climate records. Idojaras, Quarterly Journal of the Hungarian Meteorological Service, 117, No. 1, pp. 1-34. See also my post: New article on the multiple breakpoint problem in homogenization.

Lindau and Venema, to be submitted, 2014: The joint influence of break and noise variance on the break detection capability in time series homogenization.

Willett, K., Williams, C., Jolliffe, I., Lund, R., Alexander, L., Brönniman, S., Vincent, L. A., Easterbrook, S., Venema, V., Berry, D., Warren, R., Lopardo, G., Auchmann, R., Aguilar, E., Menne, M., Gallagher, C., Hausfather, Z., Thorarinsdottir, T., and Thorne, P. W.: Concepts for benchmarking of homogenisation algorithm performance on the global scale, Geosci. Instrum. Method. Data Syst. Discuss., 4, 235-270, doi: 10.5194/gid-4-235-2014, 2014.

Thursday, 26 June 2014

Open post-publication review is no substitute for pre-publication review

We have submitted a new paper. It describes how we are planning to validate the performance of homogenization methods that remove non-climatic effects from station climate data.

As scientists write, the paper neutrally describes the new plans. For people who know the relevant scientific literature it is naturally also a critique of how we did it before; for them there is no need to spell this out and rub salt in the wounds. However, people who do not know the literature may get the impression that there are no disputes in science. Being an author on both papers, and first author of the old one, I hope I am allowed to break a little with scientific culture and plainly describe the problems in my next post.

You could call this post-publication peer review of a scientific article; climate dissenters may call it blog review. Post-publication review seems to be on many people's minds lately. My guess would be that this is stimulated by the increasing importance of digital publishing and social media, which make new procedures thinkable.

The most common procedure in science is that subsequent, improved articles take care of problems found in published articles. It is also possible to write a so-called comment on an article: a short article that focuses only on the problems of the published paper. Being rather explicit, this is not a great way to make friends and is not used much. The authors themselves can also publish a correction or retract their articles. These procedures are all quite heavy; the texts typically also involve peer review and are printed in the journal. Because of this it is quite hard to get a comment published. So they say; I have never tried.

It may be possible to do post-publication review more loosely in the digital age. While the limitation is no longer the cost of printing and shipping paper around the world, an important limitation is still the reader's time. The problem is not getting published, but getting read (by the right people). Thus maintaining a certain quality level is still important. If it weren't, we could dump the journals and all just read blogs. That does not sound like a good idea to me.

The post-publication review could be similar to the pre-publication open peer review that the European Geosciences Union (EGU) uses for some of its journals. Unfortunately, these journals do not keep the discussion open after publication. Furthermore, except for the official reviewers, people have to sign their comments. I understand why the editors prefer this, as it reduces the number of low-quality comments they have to read and moderate, but I feel that anonymous comments should also be possible. Not every paper author deals with criticism professionally.

Facilitate review after publication

Another nice example is the journal PLOS ONE. The Public Library of Science, PLOS, is a pioneer in open-access publishing in the medical sciences. In PLOS ONE everyone can publish; the review checks only the technical correctness of the manuscript, not its importance or impact. As far as I can judge, this type of review would not make a big difference in the atmospheric sciences. I can only remember one or two manuscripts where I wrote the editor that it was a rather small incremental improvement to the literature. In almost all cases manuscripts are rejected for technical problems.

How important the expected impact of a paper is in the review may differ in other fields; economists often talk about how hard it is to get into certain journals, and naturally getting published in Science or Nature is hard. In the atmospheric sciences the differences in Impact Factor between the journals are modest.

PLOS ONE performs a kind of post-publication review by providing facilities to add comments and by linking to (news) articles and blog posts that mention the PLOS ONE article. A paper that is, unsurprisingly, shared a lot on Twitter and Facebook is: Facebook Use Predicts Declines in Subjective Well-Being in Young Adults.

Review only after publication

The next level of escalation would be no peer review in advance: publish anything and only comment on the articles afterwards. This is advocated in the essay Open Peer Review to Save the World by Philip Gibbs. I think he is serious, but this surely overestimates the importance of peer review. Gibbs had manuscripts rejected because he has no academic affiliation. That is something that must not happen. Period. Clearly there are many problems with peer review. However, this alternative model is very similar to the blogosphere, and we see what kind of quality that produces.

The limitation for scientific progress is not the number of potentially interesting ideas; it is the build-up of reliable knowledge. Not having peer review before publication could backfire for speciality topics and for unknown authors; such papers need the review to obtain the initial credibility to get people to take the idea seriously. You already see in the EGU open review that mostly only the assigned reviewers give their opinion and that reviews by others are rare. With the large number of scientists today, people working on projects and often changing topic, and the importance of interdisciplinary research, I do not think that a return to personal credibility as the criterion for whether a paper is worth reading would be beneficial for science. Only the papers written or recommended by a handful of well-known people would be taken seriously, and the rest would struggle harder to get people to invest their time in reading them.

I would argue that some selection for manuscripts and comments is important to keep quality standards. However, for authors wanting to warn their readers of shortcomings of their papers, peer review does not seem that important. I will do so in my next post.



Related reading

Open Scholar wants to separate the two powers of journals: peer review (evaluation) and publishing. Could be interesting. Publishing is a near monopoly (as seen in monopoly profits of 30 to 40%). Professional review organisations may do the job better and compete for a good reputation. The open review journals suggest that the openness improves the quality of both the first draft manuscript and the reviews.

Related posts

Peer review helps fringe ideas gain credibility

Three cheers for gatekeeping

The value of peer review for science and the press

Against review - Against anonymous peer review of scientific articles

Global Warming Solved in Open Peer Review Journal

Some blog reviews

Reviews of the IPCC review

Blog review of the Watts et al. (2012) manuscript on surface temperature trends

Investigation of methods for hydroclimatic data homogenization

Sunday, 22 June 2014

Five reasons scientists do not like the consensus on climate change

There is a consensus among climate scientists that the Earth is warming, that this is mainly because of us and that it will thus continue if we do nothing. While any mainstream scientist will be able to confirm the existence of this consensus from experience, explicitly communicating it is uncomfortable for some of them, especially in the clear way The Consensus Project does. I also feel this unease, so let me try to explain why.

1. Fuzzy definition

One reason is that the consensus is hard to define. To the above informal statement I could have added that greenhouse gases warm the Earth's surface, that CO2 is a greenhouse gas, that the increase in the atmospheric CO2 concentration is mainly due to human activity, and so on. That would not have changed much, and the fraction of scientists supporting this new definition would be about the same.

You could probably also add some consequences, such as sea level rise or stronger precipitation, without much change. However, if you started to quantify and asked about a certain range for the climate sensitivity, or added consequences that are harder to predict, such as more drought or stronger extreme precipitation, the consensus would likely become smaller, especially as more and more scientists would feel unable to answer with confidence.

Whether there is a consensus on X or not is a question about humans. Such social science questions will always be fuzzier than questions in the natural sciences. I guess we will just have to live with that. Just because concepts are a bit fuzzy does not mean that it does not make sense to talk about them. If you think some aspect of this fuzziness creates problems, you can do the research to show this.

2. Scientific culture

By defining a consensus and by quantifying its support, you create two groups of scientists, mainstream and fringe. This does not fit the culture in the scientific community of keeping communication channels open to all scientists and not excluding anyone.

Naturally, science, as a human enterprise, also has coalitions, but we do our best to defuse them, and even in the worst case there are normally people on speaking terms with multiple coalitions.

The consensus exists even without its quantification; communicating it thus does not make that much difference. The best antidote is for scientists to do their best to keep the lines of communication open. A colleague of mine who does great work on homogenization thinks global warming is a NATO conspiracy. My previous boss was a climate "sceptic". Both are nice people, and being scientists they are able to talk about their dissent in a friendlier tone than WUWT and Co.

3. Evidence

Many people, and maybe also some scientists, may confuse consensus with evidence. For a scientist, referring to a consensus is not an option in his own area of expertise: saying "everyone believes this" is not a scientific argument.

Consensus does provide some guidance and signals credibility, especially on topics where an idea can easily be tested. If I had a new idea that required an exceptionally high or low amount of future sea level rise, I would probably not worry too much, as there is not much consensus yet on these predictions; I would read the literature and see if it is possible to make matters fit somehow. If my new idea required the greenhouse effect to be wrong, I would first try to find the error in my idea: given the strong consensus, the straightforward physics and the clear experimental confirmation, it would be very surprising if the greenhouse theory were wrong.

For scientists or interested people, knowing there is a consensus is not enough. Fortunately, in the climate sciences the evidence is summarised very well in the IPCC reports.

The weight of the evidence clearly matters. The consensus in the nutritional sciences seems to be that you need to move more and eat less, especially less fat, to lose weight. As far as I can judge, this is based on rather weak evidence. Finding hard evidence on nutrition is difficult: human bodies are highly complex, so finding physical mechanisms is nearly impossible. The bodies of polar bears (eating lots of fat), lions (eating lots of protein) and gazelles (eating lots of carbs) are very similar. They all have arteries, and the polar bear's arteries do not get clogged by fat; they all have kidneys, and the lion's kidneys can process the protein and its bones do not melt away; they all have insulin, but gazelles do not get diabetes or obesity from all those carbs. Traditional humans ate a similarly broad range of diets without the chronic diseases we have seen in the last generations. Experiments with humans are also difficult, especially for chronic diseases, where experiments would have to run over generations. Most findings on diet are thus based on observational studies, which can generate interesting hypotheses but little hard evidence. It would be great if the nutritional sciences also wrote an IPCC-like report.

For a normal person, I find it completely acceptable to say: I hold this view because most of the world's scientists agree. I did so for a long time on diet; while I have now found that the standard approach does not work for me, I feel it was rational to listen to the experts as long as I had not studied the topic myself. It is impossible to be an expert on every topic. In such cases the scientific consensus is a good guiding light, and communicating it is valuable, especially if a large part of the population claims not to be aware of it.

4. Contrarians

The concept of "consensus" is in itself uncomfortable for many scientists. Most of us are natural contrarians, and our job is to make the next consensus, not to defend the old one. Even if our studies end up validating a theory, the hope and aim of a validation study is to find an interesting deviation that may be the beginning of a new understanding.

Given this mindset and these aims, many scientists may not notice the value of consensus theories and methods. They are what we learn during our studies. When we read scientific articles, we notice on which topics there is consensus and on which there is not. When you do something new, you cannot change everything at once; ideally, new work can be woven into the network of other consensus ideas to become the new consensus. If this is not yet possible, there will likely be a period without consensus on that topic. If there is no consensus on a certain topic, that is a clear indication that there is work to do (if the topic is important).

5. Scientific literature

A final aspect that could be troubling is that the consensus studies were published in the scientific literature. It is a good principle to keep the political climate "debate" out of science and thus out of the scientific literature as well as possible. It is hard enough to do so. Climate dissenters regularly game the system and try to get their stuff published in the scientific literature. Peer review is not perfect and some bad manuscripts can unfortunately slip through.

One could see the publication of a consensus study as a similar attempt to exploit the scientific literature. Given that all climate scientists are already aware of the consensus, such a study does not seem to be a scientific urgency. Furthermore, Dana Nuccitelli acknowledged that one of the many aims was to make "the public more aware of the consensus".

However, many social scientists do not seem to be aware of the consensus and feel justified in seeing blogs such as WUWT as a contribution to a scientific debate, rather than as the political blog it is, one that only pretends to be about science. One of the first consensus studies was even published in the prestigious, broadly read journal Science. Replications of such a study, especially if done in another or better way, seem worth publishing. The large difference between the public's and climate scientists' perception of the consensus on climate change is worth studying, and these consensus studies provide an important data point for estimating this difference.

Just because the result sounds like a no-brainer is no reason not to study and confirm the idea. Not too long ago a German newspaper reported on a study of whether eating breakfast is good for weight loss. A large fraction of the comments were furious that such an obvious result had been studied with public money. I must admit that I no longer know whether the obvious result was that if you do not eat breakfast (like Italians) you eat less and thus lose weight, or that people who eat breakfast (like Germans) are less hungry and compensate by eating less during the rest of the day. I think they did find an effect, so the obvious result was not that it does not matter when you eat.

As a natural scientist, it is hard for me to judge how much these studies contribute to the social sciences. That should be the criterion; whether an additional aim is to educate the public seems irrelevant to me. The papers were published in journals with a broad range of topics. If there were no interest from the social sciences, I would prefer to write up these studies in a normal report, just like a Gallup poll. However, my estimate as an outsider would be that these papers are scientifically interesting for the social sciences.

Outside of science

An important political strategy to delay action on climate is to claim that the science is not settled, that there is no consensus yet. The infamous Luntz memo from 2002 to the US Republicans stated:

Voters believe that there is no consensus about global warming within the scientific community. Should the public come to believe that the scientific issues are settled, their views about global warming will change accordingly. Therefore, you need to continue to make the lack of scientific certainty a primary issue in the debate
This is important because the population places much trust in science. Holding both that trust and the view that there is no climate change must produce considerable cognitive dissonance.

There is a consensus within the Tea Party conservatives that human-caused climate change does not exist. It is naturally inconvenient for them that this is wrong. However, I did not make up this escapist ideology. Thus for me as a scientist this is no reason to lie about the existence of a clear consensus on, and strong evidence for, the basics of climate change. Even if that were a bad communication strategy, which I do not believe, my role as a scientist is to speak the truth.

What do you think? Did I miss any reason why a scientist might not like the consensus concept? Or an argument why these reasons are weak if you think about them a bit longer? I will not post comments with flimsy evidence against The Consensus Project. You can do that elsewhere, where people are more tolerant and already know the counterarguments by heart.



Related reading

In case you do not like people judging abstracts, there are also surveys of the opinions of climate scientists, for example this survey by the people behind the Klimazwiebel.

Andy Skuce responds to critique of consensus study in his post: Consensus, Criticism, Communication and gives a nice overview of the various possible critiques and why they do not hold water.

On consensus and dissent in science - consensus signals credibility


Photo: „Paris 2010 - Le Penseur“ by Daniel Stockman - Flickr: Paris 2010 Day 3 - 9. Licensed with CC BY-SA 2.0 via Wikimedia Commons.

Monday, 9 June 2014

My immature and neurotic fixation on WUWT

More neutral titles for this post could have been: "why do I blog about pseudosceptics?" or "how to play climateball(TM) for scientists".

Last week I wrote about the unchristian, indecent, ugly language at WUWT, in my post: The conservative family values of Christian man Anthony Watts. Just a sample: Pathetic whining, creature, sub-human, odious toads, evil, Hitler, Stalin and diseased narcissism (archive).

I just noticed that I had missed an insult by Anthony Watts himself, in his response to my request to remove a comment with the usual pun on my last name.
Looking at how often your cite WUWT in negative connotations, I’d say you have a fixation.
To fully appreciate the insult, you need to click on the link to Wikipedia. (Let's ignore the irony of Watts linking to Wikipedia in a post about how unreliable Wikipedia is and how the evil William M. Connolley single-handedly turned Wikipedia into an alarmist CAGW propaganda tool. In other words, how Connolley, as one of the editors and backed by the scientific literature, kept their nonsense to a minimum.) Wikipedia writes about fixation (psychology) (archive):
Fixation is a concept originated by Sigmund Freud (1905) to denote the persistence of anachronistic sexual traits. ... More generally, it is the state in which an individual becomes obsessed with an attachment to another person, being, or object (in human psychology): "A strong attachment to a person or thing, especially such an attachment formed in childhood or infancy and manifested in immature or neurotic behavior that persists throughout life". ... Fixation to intangibles (i.e., ideas, ideologies, etc.) can also occur.
While minor compared to the language directed at Connolley and Mann, that is not a very nice thing to say. I would see it as an indication of a rather modest willingness to engage in a constructive dialogue to improve mutual understanding.

I guess a fitting reply would be: projection. Also part of Wikipedia's coverage of psychology.
Psychological projection is the act or technique of defending oneself against unpleasant impulses by denying their existence in oneself, while attributing them to others. ... Although rooted in early developmental stages, and classed by George Eman Vaillant as an immature defence, the projection of one's negative qualities onto others on a small scale is nevertheless a common process in everyday life.
Climateball is hard to sustain if you are not having fun. Are we even now? #kindergarten

Had Mr. Watts chosen a nicer term, he would have been partially right. Let me try to explain in this post why I blog and comment on pseudo-sceptics and especially on WUWT. This post will finish with some ideas on how to do so effectively.

I like using the term WUWT & Co. for blogs that spread misinformation about climate science. It is a neutral and clear alternative to "denier blogs", which the pseudo-sceptics claim points to holocaust deniers. It does not, but one should not give them too much opportunity to change the topic.

A reason to find WUWT somewhat interesting is that the pet topic of Mr. Watts is the quality of weather stations. That is how I got introduced to the man. After writing a paper on the homogenization methods to remove non-climatic changes from historical instrumental data, I wrote a blog post about this. Knowing that Roger Pielke Sr. was also interested in that topic, I asked if Pielke was willing to repost it. He referred me to Watts, who asked me for permission to repost it. He probably thought I was okay, because of Pielke. After reading that homogenization improves temperature trend estimates, he never published it. You have to set priorities.

The main reason for my interest, however, is probably my personality. I like to understand how things work. I like civilised debate. I like reason. Hearing or inventing a new strong argument is a joy, similar to the joy of listening or making music. When I hear a claim, no matter how much I like the person or claim, my brain automatically starts producing counter arguments. This is a very effective way to annoy people, my apologies to all my friends for that.

That is who I am, that is why I became a scientist. I also believe that reason, civilised debate and the power of arguments are what have given us the rule of law, democracy, human rights and prosperity. They are the foundation of our open societies and they are what WUWT and Co. are destroying in their political battle against science.

Knowing a little about climate and knowing how science works, it is obvious to me how wrong most of the WUWT posts are. It would be hard for me not to refute this nonsense. As WUWT is the biggest blog of this community and can be seen as its mainstream, it seems to make sense to give it more attention than the even more extremist blogs that not even pseudo-sceptics take seriously.

Creationism is even more irrational. The evidence for evolution is even stronger than that for climate change, by orders of magnitude. However, that is a local problem the Americans have to deal with. The misinformation of WUWT and Co. affects us all. Climate change is certainly not the only important global problem, but it is a solvable one.

If people decide that it is better to suffer the consequences than to solve the problem, so be it, that is democracy. Or as Jac. commented at AndThenTheresPhysics:
In a democracy, you have to respect if the people’s consent is that they will accept climate change with all its consequences to happen; but at least let scientists make sure it is an ‘informed consent’ then.
That quote is the end of an interesting discussion at AndThenTheresPhysics. A discussion about the value of refuting climate "sceptics" and how scientists can contribute. As many people do not read comments, I would like to summarise this discussion below.

I have to admit I like reading comments (and call-in radio), especially at AndThenTheresPhysics. Most comments are not informative, but you have the chance of reading or hearing something you might otherwise not hear in the mainstream media.

Others disagree and like comments less.

Click on the time stamp of a quote to see the original comment.

The not yet very interesting opening gambit at AndThenTheresPhysics (ATTP) was by Mike Fayette.
So why not find common ground with the skeptics and actually try to get something useful done? ... Mock the folks that exaggerate the threat the same way you mock the folks who deny basic physics.
The simple answer to the first part is: you can try to find common ground about political problems. Climate "sceptics" hinder this political discussion by refusing to talk about politics and instead claiming problems with solid science. Unfortunately, you cannot negotiate with nature. Reality simply is.

The answer to the second part is that it is rare that people who worry about climate change make claims that are clearly untenable. Reality is sufficiently scary and is for them more than enough reason to act. Furthermore, climate change is a wicked problem with a lot of uncertainty, and especially on the warm side it is hard to exclude much. Still, if people get too warm and fuzzy about climate change, I naturally do correct them.

My favorite future blogger, Mark Ryan, replies:
Mike Fayette’s comments, and his experience, are very interesting, [If someone at a scientific conference starts a reply this friendly, expect a nasty comment or question.] and make me think about the problem of how scientific communities relate to evidence, compared with how the public –and particularly the political communities in the public- relate to it.

There is a lot of confusion about the fact that knowledge is fundamentally a social property –no individual can claim decisive knowledge across a domain (actually, some individuals obviously do, but they’re invariably wrong). What happens instead is that individuals [scientists] build on what they understand to be established knowledge, do their work, and add it to the constellation of previous and contemporary contributions.

Some elements within the knowledge constellation are much better established than others, and are therefore more likely to be true –with the well-worn caveat that no question is ever 100% closed. But this caveat is actually much more trivial than those who misunderstand it would have us believe; this point was never made better than in this short essay by Isaac Asimov. This is the best response I can think of to Mike’s earlier remark about competing theories in science.

There must be a hierarchy of knowledge for scientific knowledge to be possible. Core ideas support the contingent or peripheral ideas, otherwise every researcher’s work would arbitrarily re-establish first principles. ... Almost all research deals with anomalies or minor controversies, but based on established foundations; if someone wants to remake the foundations, they quite rightly find it hard going. ...

We have had over four decades now of a constant conflation of politics and science, a response to the culture of scientistic authority promoted in mid 20th century, and the new kinds of health and environmental risks that modern life has created. The net result is that complex and specialised knowledge is counterposed to commonsense and intuitive, easy to relate to, (but incorrect) alternatives. This is the “better story” that Mike mentioned, and large percentages of the public just buy into this without a second thought, because they are now conditioned to look straight past specialist scientific knowledge to project political motives onto the people making it. For people who buy into this politicisation of science, there is no need to educate themselves to understand the complex theories and jargon of the scientists, because in any case, they imagine scientists use facts the way we all see lawyers use facts in various media. It does not occur to them that the simple explanation has already been considered and improved on by the people who study the topic. ...
A clear example of such a simple explanation is the meme going round that CO2 is heavier than air, will thus stay close to the ground and cannot act like a greenhouse gas. (This ignores turbulent mixing.) Do people really think that no scientist in all these decades has ever tried to measure up to which height CO2 is a well mixed gas? It is fine to ask such a question. It is insane to immediately claim to have refuted the greenhouse effect.

Mark Ryan continues to explain that blogging about science makes sense:
ATTP, you said in your post that you were going through a phase of wondering what the whole point is.

It is a few years back now, but I started reading blogs like this one as a skeptic. My training is in politics and the philosophy of science, which at least gave me some basis for spotting patterns in the literature I was reading. My interest in climate came from my interest in the philosophy of statistics, but I had read social theorists like Thomas Kuhn, Harry Collins and Michel Foucault, and was predisposed to a very political take on the production of scientific knowledge.

Not having the appropriate scientific background, I needed to visit sites like this – at the time it was Tamino, Skeptical Science, Real Climate[,] etc, just so I could understand what I was looking at when I tackled even things like IPCC papers. Eventually, the most striking pattern I found in the so-called ‘mainstream’ climate literature was a constellation of arguments converging towards consilience [consilience refers to the principle that evidence from independent, unrelated sources can "converge" to strong conclusions], and a rigorous commitment to explaining the science. I didn’t find sites like Joe Romm’s very helpful, by contrast.

On the so-called “skeptical” sites, and in the small amount of scholarly literature, the pattern is negative -mutually contradictory arguments. This body of literature was not converging to an alternative, but was fixated on driving wedges into any cracks of uncertainty they could find. It was the comparative ‘shapes’ of the two different bodies of argument that convinced me.

I want to say I think what you do is tremendously valuable; it is clear, articulate, and sets a tone that encourages skeptical people, like the one I used to be, to stay with you. In this intensely polarised environment, that is a delicate act to pull off, but if I was running a blog, you would be one of my models. It is one of the unfortunate things about blogging, that you send your missives out into the void and never quite know whether you’re making any difference -it’s a kind of alienated form of social being, in a way. But you create a rare environment here, so well done.
Jac. had a similar experience to Mark Ryan's and added:
... I am not a scientist. I am working in the legal/judicial system. The number of climate change cases that are brought to the courts is growing, and so is the body of literature about climate change liability that I am especially interested in. I think I am quite well informed on the legal liability aspects of climate change and the potential role of the judiciary. I started reading off and on some blogs about climate change some months ago, because I wanted to try to understand some of the science as well, and I also wanted to learn and understand about the way scientists and skeptics interact and discuss about their arguments and what these arguments are.

So my background and reason to start reading this blog seems to be somewhat similar to Mark Ryan. I completely agree with what he wrote (23/5, 1.30 pm) about the ‘shape of the two bodies of arguments’. I made the same observations, and arrived at the same conclusion.

I also noticed that generally speaking there is a difference in ‘tone’ and ‘style’ in the way the scientists argue and the way ‘skeptics’ argue.

Typically, the question of the scientist is one out of curiosity, whereas the questions of the skeptic are typically more like an aggressive cross-examination. Also typically the skeptic is not satisfied with the answers he gets; there is always another question following, never mind if it is coming from quite a different perspective, or he just changes the subject or disappears. Therefore, in my perception the typical skeptic is not interested in finding common ground with the scientist; he is on a ‘fishing expedition’ to see if there are any contrarian arguments that cannot easily be discarded by scientists, so he can claim that the science is far from settled and too uncertain for political decisions.

So my conclusion is that the skeptics in the blogosphere are not genuinely interested in (the advancement of) climate science.

If that analysis is correct, scientists have little if anything to win in engaging in discussions with skeptics on scientific issues because the skeptic has nothing to offer there and has a different agenda altogether. I am not at all surprised then that for scientists, discussions with skeptics can be irritating and tiresome. I assume that is what ATTP meant when he started this post.

For me these discussions are not pointless. For me, seeing how the arguments flow was helpful in understanding the climate debate. Like Tucholsky said: the understanding that the people have is usually wrong, but in their sensing the people are usually right. This blog has been guiding my ‘sensing’ of the climate debate and who is right probably just as much as it has been guiding my understanding of the arguments. ...

I wonder what it would be like if scientists would not engage in discussions with skeptics with the intention of convincing them – they won’t allow you to – but with the intention to demonstrate to other lurking readers (like me) that science has better (and more polite) answers and deeper understanding to offer than the skeptics have. It might turn out to be a whole other kind of ballgame, one that is far less frustrating for scientists.

And if you don’t feel like playing anymore, I think it would be perfectly OK to say ‘we have tried to explain you the science more than once, but either you seem not able to understand the science which is regrettable, or you just do not want to understand which is fine, but either way and with all due respect you have offered nothing to this discussion that has any merits and you and your repetitive comments are becoming a bit of a boring noise, so thank you for participating, but we will block you from this post / this blog.’ I think scientists could be a little more assertive about sticking to the rules of their discussions.
Our climate philosopher, Willard, got scared:
No more ClimateBall ™ ?
http://www.khaaan.com/ [Warning sound]
Jac. could reassure him:
Still Climate Ball I suppose, but how scientists want to play it. If scientists start perceiving (and thus expecting) that the game is not about trying to find common scientific ground with the skeptics or about advancement of science, but about proving how wrong/mistaken the skeptics are (while still maintaining the cool, rational, unbiased and open-minded, fact-based balanced way of truly scientific reasoning that, in my view, really is the stronghold of scientists that earns them credibility), scientists might find it less frustrating to be playing the game. In this other version of Climate Ball moving the goalposts is considered as acknowledging you have lost the previous argument. Don’t complain about moving the goalposts, but instead explicitly claim it as victory and as soliciting for another beating on another subject.

My selfish reason for suggesting this is that I would not want the scientists getting so frustrated that they are pulling out of the debate.
AndThenTheresPhysics regular BBD followed the same route and wrote previously:
FoxGoose, it’s an open secret that I used to be a fake sceptic. At one time, it was something of a USP [Unique Selling Point], even. Quote mining my past is an old, tired tactic. It also reveals something rather unpleasant about those doing it.

But to answer the question:

- I’ve learned more than you in the last three years.

- I’ve demonstrated that I am intellectually honest enough to overcome my denial.

- I’ve got the balls to keep the same screen name and own every statement I’ve ever made in public using it.

- Once I discover that I have been lied to and manipulated, I never forgive and I never forget.
Rachel naturally asked: "Wow, BBD. What made you change your mind?"
I discovered that I was being lied to. This simply by comparing the “sceptic” narrative with the standard version. Unlike my fellow “sceptics” I was still just barely sceptical enough (though sunk in denial) to check both versions. Once I realised what was going on, that was the end of BBD the lukewarmer (NB: I was never so far gone as to deny the basic physics, only to pretend that S [the climate sensitivity] was very low). All horribly embarrassing now, of course, but you live and learn. Or at least, some of us do. ...

Always check. Fail to do this in business and you will end up bankrupt and in the courts. I failed to check, at least initially, and made a colossal prat out of myself. Oh, and never underestimate the power of denial (aka ‘wishful thinking’). It’s brought down better people than me. ...

There wasn’t a single, defining eureka moment, just a growing sense of unease because nothing seemed to add up. ... Once I eventually started to compare WUWT [Watts Up With That] with RC [RealClimate] and SkS [Skeptical Science], that was it, really.
Thus maybe the information deficit model is not that bad. At least when people have the time to gather all the information, hear all sides and think it over. Thinking deficit model might be a better name. How do we get people to start thinking? One way would be to reduce the vitriol in posts about science; vitriol reduces critical thinking and strengthens tribal thinking. (Hard to do; the dramatic opening helped to get you to read until here.)

It also points to the importance of trust. Being lied to is not nice. In that respect I would not expect BBD to ever go back to the climate "sceptics". If BBD detects an inconsistency, I would expect that he would simply point it out to scientists. The way scientists do. If the evidence changes, you will hear it first from scientists.

Building up trust again will be hard for the pseudo-sceptics after having displayed how untrustworthy they are. But it would help if they stopped their disinformation campaign against science, stopped repeating their completely idiotic talking points, and started to make scientifically valid points about real uncertainties and weaknesses. They would be welcomed back home. Unfortunately, that is a huge if and I do not expect it ever to happen; I rather expect to see the group get smaller and smaller, laughed at by their neighbours, until it dies out.

Steve Bloom wondered:
Jac, as I’m sure you know, most scientists, even climate scientists, choose not to play [Climateball] at all. But is it helpful to imagine the response to this blog (and the climate science blogosphere generally) of such a non-player who is considering starting to play? Is the lesson that other forms of engagement and outreach (e.g. reaching out to their local media and giving community talks) are a better use of their time? Or maybe it’s most effective to instead focus their research efforts onto things that will inform a better policy direction?
Many scientists are introverted or otherwise not interested in a public debate. That is fine. As a community we should be present, but people should do what they do best. Most "challenges" by pseudo-sceptics are so basic and repetitive that many lay people following the climate "debate" may well be better suited to reply.

Outreach will not help you avoid the pseudo-sceptics. They will be the ones motivated to ask the questions. Expect some creative and weird ones. But eye to eye even pseudo-sceptics know how to behave. Youtube suggests that the main exception to that rule is Lord Monckton. (Highly recommended: a funny video about the comedian behind Monckton.)

To close, Willard summarised the climate "debate" as:
ClimateBall™ can be fun! Stripped down to its bare essentials, ClimateBall ™ is just a conversation disguised as a scientific discussion.
More seriously, Jac. closed with the main purpose of the "debate":
In a democracy, you have to respect if the people’s consent is that they will accept climate change with all its consequences to happen; but at least let scientists make sure it is an ‘informed consent’ then.
What did I learn about the climate "debate" from the above comments?

1. Do not expect to be able to convince the people who have made being a climate "sceptic" part of their identity. Explain the science, and explain for the lurkers why the climate "sceptics" are wrong. Explain why science is fun and why it produces reliable knowledge. Show you are open and interested in a better understanding.

2. Stay on topic to be able to go in depth, like scientists would if they have a dispute. Pseudo-sceptics like to change topics before acknowledging that they were wrong on the first one. Make this strategy clear to the lurkers and explain that it suggests that the pseudo-sceptics are not really interested in understanding the problem. If necessary, "answer" new topics with links (e.g. to the Skeptical Science list of Global Warming & Climate Change Myths).

3. Be friendly to people you do not know and might be honestly interested in the answer. There is no need to accept any kind of abuse, but try to make sure that there is a clear difference in tone between science and non-science.

4. Search for the name of your discussion partner and the topic. Very often he has discussed the topic before somewhere else and already knows all the answers. In this case, point out to the lurkers that your discussion partner is not interested in the answer, but just wants to create doubt.

5. Be fairly strict with moderation on your own blog, if you have one. The ugly language at WUWT we started this post with is great for stoking tribal feelings and very effective in reducing people's ability to think rationally. Not something you would like to see at a science blog. Personally, I also remove a large part of the comments that contain no arguments; they waste the time of readers looking for a real discussion. If you do this, you will only have to remove a few comments, because people will adjust their tone.

6. Another function of the ugly language may be to discourage scientists from taking part. Try to ignore the misconduct and not to take it personally. These people do not know you and are only demonstrating their own problems. This is clearly illustrated by the hilarious puns on my last name. If those people knew me even a little, they would at least write something about homogenization or about my fixation on WUWT.

Try to have fun playing ClimateBall and keep your eye on the WUWT ball.

[UPDATE. Maybe this and the previous post worked. Anthony Watts created an Open thread and asked what could we do better? He gives some suggestions himself.
3. I’d like less name calling. The temptation is great, and I myself sometimes fall victim to that temptation. I’ll do better to lead by example in any comments I make.
4. I’d like to see less trolling and more constructive commentary. One way to achieve that is to pay attention.
Let's see how this works out. I am somewhat sceptical because they need the tribal atmosphere to suppress critical thinking. ]

[UPDATE: Just found an old post on Skeptical Science, Understanding climate denial, that mentions three more people that changed their mind based on the evidence, D.R. Tucker, Craig Good & Nathan McKaskle. It is not impossible, but probably still rare. Ironically Nathan McKaskle used to be a blogger, but gave up not knowing whether he had changed a single mind. That is a pity, changing people's minds does not happen immediately after reading an article, it takes time, if only to be able to say that one has always held the new position.]




Related reading

The “Nasty Effect:” Online Incivility and Risk Perceptions of Emerging Technologies by Anderson et al., in Journal of Computer-Mediated Communication

Andy Skuce tells how finding climate sceptics to be chronically wrong turned him from a lukewarmer who did not expect much to happen into an active member of the Skeptical Science crew.

The conservative family values of Christian man Anthony Watts

NoFollow: Do not give WUWT & Co. unintentional link love

Anthony Watts calls inhomogeneity in his web traffic a success

No trend in global water vapor, another WUWT fail

Blog review of the Watts et al. (2012) manuscript on surface temperature trends

Investigation of methods for hydroclimatic data homogenization


* Photo, Anthony Watts giving presentation in Australia, from Wikimedia commons. CC BY-SA 3.0 License.