Friday, 20 November 2015

Sad that for Lamar Smith "hiatus" has far-reaching policy implications

Earlier this year, NOAA made a new assessment of the surface temperature increase since 1880. Republican Congressman Lamar Smith, Chair of the House science committee, did not like the adjustments NOAA made and started a harassment campaign. In the Washington Post he wrote about his conspiracy theory (my emphasis):
In June, NOAA employees altered temperature data to get politically correct results and then widely publicized their conclusions as refuting the nearly two-decade pause in climate change we have experienced. The agency refuses to reveal how those decisions were made. Congress has a constitutional responsibility to review actions by the executive branch that have far-reaching policy implications.
I guess everyone reading this blog knows that all the data and code are available online.

The debate is about this minor difference you see at the top right. Take your time. Look carefully. See it? The US mitigation sceptical movement has made the trend since the super El Niño year 1998 a major part of its argumentation that climate change is no problem. If such minute changes have "far-reaching policy implications" for Lamar Smith, then maybe he is not a particularly good policy maker. The people he represents in the Texas TX-21 district deserve better.

I have explained to the mitigation sceptics so many times that they should drop their "hiatus" fetish, that it would come back to haunt them. Such extremely short-term trends have huge uncertainties, and interpreting such changes as climatic changes assumes a data quality that I see as unrealistic. With their constant wailing about data quality, they should theoretically see it that way too. But well, they did not listen.

Some political activists like to claim that the "hiatus" means that global warming has stopped. It seems like Lamar Smith is in this group; at least I see no other reason why he would think that it is policy relevant. But only 2 percent of global warming warms the atmosphere (most warms the oceans) and this "hiatus" is about 2% of the warming we have seen since 1880. It is thus a peculiar debate about 2% of 2% of the warming and not about global warming.
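That back-of-the-envelope arithmetic can be made explicit. A minimal sketch, using only the rough 2% figures quoted above (both are approximations, not precise values):

```python
# Rough shares quoted in the text (approximations, not exact values).
atmosphere_share = 0.02  # ~2% of the extra heat ends up in the atmosphere
hiatus_share = 0.02      # the "hiatus" concerns ~2% of the warming since 1880

# The disputed signal as a fraction of the total heat accumulation:
debated_fraction = atmosphere_share * hiatus_share
print(f"{debated_fraction:.2%}")  # 0.04%
```

So the entire political fight is about roughly four parts in ten thousand of the warming.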

This peculiar political debate is the reason this NOAA study became a Science paper (Science magazine prefers articles of general interest) and why NOAA's Karl et al. (2015) paper was heavily attacked by the mitigation sceptical movement.

Before this reassessment NOAA's trend since 1998 was rather low compared to the other datasets. The right panel of the figure below, made by Zeke Hausfather, shows the trends since 1998. In the figure the old NOAA assessment is shown as green dots and the new assessment as black dots.

The new assessment solved problems in the NOAA dataset that were already solved in the HadCRUT4 dataset from the UK (red dots). The trends in HadCRUT4 are somewhat lower because it does not take the Arctic fully into account, where a lot of the warming in the last decade occurred. The version of HadCRUT4 where this problem is fixed is indicated as "Cowtan & Way" (brownish dots).

The privately funded Berkeley Earth also takes the Arctic into account and already had somewhat larger recent trends.

Thus the new assessment of NOAA is in line with our current understanding. Given how minute this feature is, it is actually pretty amazing how similar the various assessments are.

"Karl raw" (open black circle) is the raw data of NOAA before any adjustments, the green curve in the graph at the top of this post. "Karl adj" (black dot) is the new assessment, the thick black line in the graph at the top. The previous assessment is "NCDA old" (green dot). The other dots, four well-known global temperature datasets.

Whether new assessments are seen as having "far-reaching policy implications" by Lamar Smith may also depend on the direction in which the trends change. Around the same time as the NOAA article, Roy Spencer and John Christy published a new dataset with satellite estimates of tropospheric temperatures. As David Appell reports, they made considerable changes to their dataset. Somehow I have not heard anything about a subpoena against them yet.

More important adjustments to the surface temperatures are made for data before 1940. Looking at the figure below, most would probably guess that Lamar Smith would not like the strong adjustments that made global warming a lot smaller. Maybe he liked the direction better.

The adjustments before 1940 are necessary because in that period the dominant way to measure sea surface temperature was by taking a bucket of water out of the sea. During the measurement the water would cool due to evaporation. How large this adjustment should be is uncertain, anything between 0 and 0.4°C is possible. That makes a huge difference for the scientific assessment of how much warming we have seen up to now.

Also the size of the peak during the Second World War is highly uncertain; the merchant ships were replaced by war ships, which made the measurements differently.

This is outside of my current expertise, but the first article I read about this, a small study for the Baltic Sea, suggested that the cooling bias due to evaporation is small, but that there is a warming bias of 0.5°C because the thermometer was stored in the warm cabin and the sailors did not wait long enough for the thermometer to equilibrate. Such uncertainties are important, and only a handful of scientists are working on sea surface temperature. And now a political witch hunt keeps some of them from their work.

Whether the adjustments for buckets are 0 or 0.4°C may well be policy relevant, at least if we were already close to an optimal policy response. This adjustment affects the data over a long period and can thus influence estimates of climate sensitivity. What counts for the climate sensitivity is basically the area under the temperature graph, and a change of 0.4°C over 60 years is a lot more than 0.2°C over 15 years. Nic Lewis and Judith Curry (2014), whom I hope Lamar Smith will trust, also do not see the "hiatus" as important for the climate sensitivity.
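The "area under the temperature graph" comparison can be sketched numerically. This is only an illustration with the round numbers from the paragraph above, treating each change as a simple rectangle:

```python
# Round numbers from the text; the rectangles are a deliberate simplification.
bucket_adjustment = 0.4   # deg C, upper end of the possible bucket correction
bucket_years = 60         # rough length of the affected early period

hiatus_change = 0.2       # deg C, rough size of the disputed recent signal
hiatus_years = 15         # trend period since about 1998

area_bucket = bucket_adjustment * bucket_years   # 24 deg C * years
area_hiatus = hiatus_change * hiatus_years       #  3 deg C * years
print(round(area_bucket / area_hiatus, 2))       # 8.0
```

By this crude measure, the bucket uncertainty weighs about eight times more heavily on climate sensitivity estimates than the entire "hiatus".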

For those who still think that global warming has stopped, climatologist John Nielsen-Gammon (a friend of Anthony Watts of WUWT) made the wonderful plot below, which immediately helps you see that most of the deviations from the trend line can be explained by variations in El Niño.

It is somewhat ironic that Lamar Smith claims that NOAA rushed the publication of their dataset. It would be more logical for him to hasten his own campaign. It is now shortly before the Paris climate conference and the strong El Niño does not bode well for his favourite policy justification, as the plot below shows. You no longer need statistics to be completely sure that there was no change in the trend in 1998.

Related reading

WOLF-PAC has a good plan to get money out of US politics. Let's first get rid of this weight vest before we run the century-long climate change marathon.

Margaret Leinen, president of the American Geophysical Union (AGU): A Growing Threat to Academic Freedom

Keith Seitter, Executive Director of the American Meteorological Society (AMS): "The advancement of science depends on investigators having the freedom to carry out research objectively and without the fear of threats or intimidation whether or not their results are expedient or popular."

The article of Chris Mooney in the Washington Post is very similar to mine, but naturally better written and with more quotes: Even as Congress investigates the global warming ‘pause,’ actual temperatures are surging

Letters to the Editor of the Washington Post: Eroding trust in scientific research. The writer, a Republican, is chairman of the House Committee on Science, Space and Technology and represents Texas’s 21st District in the House.

House science panel demands more NOAA documents on climate paper

Michael Halpern of the Union of Concerned Scientists in The Guardian: The House Science Committee Chair is harassing US climate scientists

And Then There's Physics on the hypocrisy of Judith Curry: NOAA vs Lamar Smith.

Michael Tobis: House Science, Space, and Technology Committee vs National Oceanic and Atmospheric Administration

Ars Technica: US Congressman subpoenas NOAA climate scientists over study. Unhappy with temperature data, he wants to see the e-mails of those who analyze it.

Ars Technica: Congressman continues pressuring NOAA for scientists’ e-mails. Rep. Lamar Smith seeks closed-door interviews, in the meantime.

Guardian: Lamar Smith, climate scientist witch hunter. Smith got more money from fossil fuels than he did from any other industry.

Wired: Congress’ Chief Climate Denier Lamar Smith and NOAA Are at War. It’s Benghazi, but for nerds. I hope the importance of independent science is also clear to people who do not consume it on a daily basis.

Mother Jones: The Disgrace of Lamar Smith and the House Science Committee.

Eddie Bernice Johnson, Democratic member of the Committee on Science from Texas, reveals temporal inconsistencies in the explanations offered by Lamar Smith for his harassment campaign.

Raymond S. Bradley in Huffington Post: Tweet This and Risk a Subpoena. "OMG! [NOAA] tweeted the results! They actually tried to communicate with the taxpayers who funded the research!"

David Roberts at Vox: The House science committee is worse than the Benghazi committee

Union of Concerned Scientists: The House Science Committee’s Witch Hunt Against NOAA Scientists


Karl, T.R., A. Arguez, B. Huang, J.H. Lawrimore, J.R. McMahon, M.J. Menne, T.C. Peterson, R.S. Vose, and H. Zhang, “Possible artifacts of data biases in the recent global surface warming hiatus”, Science, vol. 348, pp. 1469–1472, 2015. doi: 10.1126/science.aaa5632

Thursday, 15 October 2015

Invitation to participate in a PhD research project on climate blogging

My name is Giorgos Zoukas and I am a second-year PhD student in Science, Technology and Innovation Studies (STIS) at the University of Edinburgh. This guest post is an invitation to the readers and commenters of this blog to participate in my project.

This is a self-funded PhD research project that focuses on a small selection of scientist-produced climate blogs, exploring the way these blogs connect into, and form part of, broader climate science communication. The research method involves analysis of the blogs’ content, as well as semi-structured in-depth interviewing of both bloggers and readers/commenters.

Anyone who comments on this blog, on a regular basis or occasionally, or anyone who just reads this blog without posting any comments, is invited to participate as an interviewee. The interview will focus on the person’s experience as a climate blog reader/commenter.*

The participation of readers/commenters is very important to this study, one of the main purposes of which is to increase our understanding of climate blogs as online spaces of climate science communication.

If you are interested in getting involved, or if you have any questions, please contact me at: G.Zoukas -at- (Replace the -at- with the @ sign)

(Those who have already participated through my invitation on another climate blog do not need to contact me again.)

*The research complies with the University of Edinburgh’s School of Social and Political Sciences Ethics Policy and Procedures, and an informed consent form will have to be signed by both the potential participants (interviewees) and me.

VV: I have participated as a blogger. For science.

I was a little sceptical at first, with all the bad experiences with the everything-is-a-social-construct fundamentalists in the climate “debate”. But Giorgos Zoukas seems to be a good guy and gets science.

I even had to try to convince him that science is very social; science is hard to do on your own.

A good social surrounding, a working scientific community, increases the speed of scientific progress. That science is social does not mean that imperfections lead to completely wrong results for social reasons, or that the results are just a social construct.

Sunday, 4 October 2015

Measuring extreme temperatures in Uccle, Belgium

Open thermometer shelter with a single set of louvres.

That changes in the measurement conditions can lead to changes in the mean temperature is hopefully known by most people interested in climate change by now. That such changes are likely even more important when it comes to weather variability and extremes is unfortunately less known. The topic is studied much too little given its importance for the study of climatic changes in extremes, which are expected to be responsible for a large part of the impacts from climate change.

Thus I was enthusiastic when a Dutch colleague sent me a news article on the topic from the homepage of the Belgian weather service, the Koninklijk Meteorologisch Instituut (KMI). It describes a comparison of two different measurement set-ups, old and new, made side by side in [[Uccle]], the main office of the KMI. The main difference is the screen used to protect the thermometer from the sun. In the past these screens were often more open, which makes ventilation better; nowadays they are more closed to reduce (solar and infrared) radiation errors.

The more closed screen is a [[Stevenson screen]], invented in the last decades of the 19th century. I had assumed that most countries had switched to Stevenson screens before the 1920s. But I recently learned that Switzerland changed in the 1960s and that Uccle changed in 1983. Making any change to the measurements is a difficult trade-off between improving the system and breaking the homogeneity of the climate record. It would be great to have an overview of such historical transitions in the way climate is measured for all countries.

I am grateful to the KMI for their permission to republish the story here. The translation, clarifications between square brackets and the related reading section are mine.

Closed thermometer screen with double-louvred walls [Stevenson screen].
In the [Belgian] media one reads regularly that the highest temperature in Belgium is 38.8°C and that it was recorded in Uccle on June 27, 1947. Sometimes, one also mentions that the measurement was conducted in an "open" thermometer screen. On warm days the question typically arises whether this record could be broken. In order to be able to respond to this, it is necessary to take some facts into account that we will summarize below.

It is important to know that temperature measurements are affected by various factors, the most important being the type of thermometer screen in which the observations are carried out. One wants to measure the air temperature and must therefore prevent a warming of the measuring equipment by protecting the instruments from the distorting effects of solar radiation. The type of thermometer screen is particularly important on sunny days and this is reflected in the observations.

Since 1983, the reference measurements of the weather station Uccle have been made in a completely "closed" thermometer screen [a Stevenson screen] with double-louvred walls. Until May 2006, the reference thermometers were mercury thermometers for the daily maximums and alcohol thermometers for the daily minimums. [A typical combination nowadays because mercury freezes at -38.8°C.] Since June 2006, the temperature measurements have been carried out continuously by means of an automatic sensor in the same type of closed screen.

Before 1983, the measurements were carried out in an "open" thermometer screen with only a single set of louvres, which on top of that offered no protection on the north side. For the reasons mentioned above, the maximum temperatures in this type of shelter were too high, especially during the summer period with intense sunshine. On July 19, 2006, one of the hottest days in Uccle, for example, the reference [Stevenson] screen measured a maximum temperature of 36.2°C compared to 38.2°C in the "open" shelter on the same day.

As the air temperature measurements in the closed screen are more relevant, it is advisable to study the temperature records that would be or have been measured in this type of reference screen. Recently we have therefore adjusted the temperature measurements of the open shelter from before 1983, to make them comparable with the values from the closed screen. These adjustments were derived from the comparison between the simultaneous [parallel] observations measured in the two types of screens during a period of 20 years (1986-2005). Today we therefore have two long series of daily temperature extremes (minimum and maximum), beginning in 1901, corresponding to measurements from a closed screen.
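The core of such a parallel-measurement adjustment can be sketched with made-up numbers; the KMI derivation is of course more sophisticated (it uses 20 years of daily data), and all values below are hypothetical:

```python
# Hypothetical parallel daily maximum temperatures (deg C) for the same days.
open_screen   = [36.2, 30.0, 25.4, 38.2]  # old open screen, single set of louvres
closed_screen = [35.0, 29.2, 24.6, 37.0]  # reference Stevenson screen

# Mean warm bias of the open screen over the parallel period:
diffs = [o - c for o, c in zip(open_screen, closed_screen)]
bias = sum(diffs) / len(diffs)            # 1.0 deg C in this toy example

# Apply the correction to a pre-1983 open-screen value:
old_record = 38.8                         # the 1947 open-screen record
adjusted = old_record - bias
print(f"bias {bias:.1f} C, adjusted record {adjusted:.1f} C")
```

In reality the correction also depends on the weather conditions, which is one reason the actual adjustment for a sunny record day can be larger than a simple long-term mean difference.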

When one uses the adjustment method described above, the estimated value of the maximum temperature in a closed screen on June 27, 1947, is 36.6°C (while a maximum value of 38.8°C was measured in an open screen, as mentioned in the introduction). This value of 36.6°C should therefore be recognized as the record value for Uccle, in accordance with the current measurement procedures. [For comparison, David Parker (1994) estimated that the cooling from the introduction of Stevenson screens was less than 0.2°C in the annual means in North-West Europe.]

For the specialists, we note that the daily maximum temperatures shown in the synoptic reports of Uccle are usually up to a few tenths of a degree higher than the reference climatological observations that were mentioned previously. This difference can be explained by the time intervals over which the temperature is averaged in order to reduce the influence of atmospheric turbulence. The climatic extremes are calculated over a period of ten minutes, while the synoptic extremes are calculated from values that were averaged over a time span of one minute. In the future, we will make these calculation methods the same by always applying the climatological procedure.
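The effect of the averaging interval on recorded maxima can be demonstrated with a toy simulation (purely illustrative numbers, not KMI data):

```python
import random

random.seed(42)

# One hour of hypothetical 1-second temperatures: a constant 30 C plus
# turbulent noise (made-up values, purely for illustration).
seconds = [30.0 + random.gauss(0.0, 0.5) for _ in range(3600)]

def block_means(values, block):
    """Non-overlapping block averages of the given length."""
    return [sum(values[i:i + block]) / block for i in range(0, len(values), block)]

one_minute_means = block_means(seconds, 60)    # 60 one-minute averages
ten_minute_means = block_means(seconds, 600)   # 6 ten-minute averages

# Each ten-minute mean is an average of ten one-minute means, so it can never
# exceed the largest one-minute mean: longer averaging lowers the maximum.
print(max(one_minute_means) >= max(ten_minute_means))  # True
```

This is why maxima based on one-minute averages tend to come out a few tenths of a degree higher than the climatological ten-minute values.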

Related reading

KMI: Het meten van de extreme temperaturen te Ukkel

To study the influence of such transitions in the way the climate is measured using parallel data, we have started the Parallel Observations Science Team (ISTI-POST). One of the POST studies is on the transition to Stevenson screens, which is headed by Theo Brandsma. If you have such data please contact us. If you know someone who might, please tell them about POST.

Another parallel measurement showing huge changes in the extremes is discussed in my post: Be careful with the new daily temperature dataset from Berkeley

More on POST: A database with daily climate data for more reliable studies of changes in extreme weather

Introduction to series on weather variability and extreme events

On the importance of changes in weather variability for changes in extremes

A research program on daily data: HUME: Homogenisation, Uncertainty Measures and Extreme weather


Parker, David E., 1994: Effect of changing exposure of thermometers at land stations. International journal of climatology, 14, pp. 1-31, doi: 10.1002/joc.3370140102.

Wednesday, 30 September 2015

UK global warming policy foundation (GWPF) not interested in my slanted questions

Mitigation sceptics like to complain that climate scientists do not want to debate them, but actually I do not get many informed questions about station data quality at my blog, and when I come to them my comments are regularly snipped. Watts Up With That (WUWT) is a prominent blog of the mitigation sceptical movement in the US and the hobby of its host, Anthony Watts, is the quality of station measurements. He even set up a project to make pictures of weather stations. One might expect him to be thrilled to talk to me, but Watts hardly ever answers. In fact last year he tweeted: "To be honest, I forgot Victor Venema even existed." I already had the impression that Watts does not read science blogs that often, not even about his own main topic.

Two years ago Matt Ridley, adviser to the Global Warming Policy Foundation (GWPF), published an erroneous post on WUWT about the work of two Greek colleagues, Steirou and Koutsoyiannis. I had already explained the errors in a three-year-old blog post and thus wanted to point the WUWT readers to this mistake in a polite comment. This comment got snipped and replaced with:
[sorry, but we aren't interested in your slanted opinion - mod]
Interesting. I think such a response tells you a lot about a political movement and whether they believe themselves that they are a scientific movement.

Now the same happened on the homepage of the Global Warming Policy Foundation (GWPF).

To make accurate estimates of how much the climate has changed, scientists need to remove other changes from the observations. For example, a century ago thermometers were not protected as well against (solar) radiation as they are nowadays, and the observed land station temperatures were thus a little too high. In the same period the sea surface temperature was measured by taking a bucket of water out of the sea. While the measurement was going on, the water cooled by evaporation and the measured temperature was a little too low. Removing such changes makes the land temperature trend 0.2°C per century stronger in the NOAA dataset, while removing such changes from the sea surface temperature makes this trend smaller by about the same amount. Because the oceans are larger, the global mean trend is thus made smaller by climatologists.
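The net effect can be sketched as an area-weighted average. A minimal sketch, assuming the roughly 29%/71% land/ocean split of the Earth's surface and the symmetric ±0.2°C per century adjustments mentioned above:

```python
# Approximate surface fractions (assumptions for illustration).
land_fraction  = 0.29
ocean_fraction = 0.71

# Adjustments from the text: land trend up, sea surface trend down (deg C/century).
land_adjustment  = +0.2
ocean_adjustment = -0.2

net = land_fraction * land_adjustment + ocean_fraction * ocean_adjustment
print(f"net effect on the global trend: {net:+.3f} C per century")
```

Because the ocean weight dominates, the net effect of the adjustments is a slightly smaller global trend, the opposite of what the "fiddling" accusation suggests.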

Selecting two regions where upward land surface temperature adjustments were relatively large, Christopher Booker accused scientists of fiddling with the data. In these two Telegraph articles he naturally did not explain to his readers how large the effect is globally, nor why it is necessary, nor how it is done. That would have made his conspiracy theory less convincing.

That was the start for the review of the Global Warming Policy Foundation (GWPF). Christopher Booker wrote:
Paul [Homewood], I thought you were far too self-effacing in your post on the launching of this high-powered GWPF inquiry into surface temperature adjustments. It was entirely prompted by the two articles I wrote in the Sunday Telegraph on 24 January and 7 February, which as I made clear at the time were directly inspired by your own spectacular work on South America and the Arctic.
Not a good birth and Stoat is less impressed by the higher powers of the GWPF.

This failed birth resulted in a troubled childhood by giving the review team a list of silly loaded questions.

This troubled childhood was followed by an adolescence in disarray. The Policy Foundation asked everyone to send them responses to the silly loaded questions. I have no idea why. A review team should know the scientific literature themselves. It is a good custom to ask colleagues for advice on the manuscript, but a review team normally has the expertise to write a first draft themselves.

I was surprised that there were people willing to submit something to this organization. Stoat found two submissions. If Earth First! were to review the environmental impact of coal power plants, I would not expect many submissions from respected sources either.

When you ask people to help you and they invest their precious life time into writing responses for you, the least you can do is read the submissions carefully, give them your time, publish them and give a serious response. The Policy Foundation promised: "After review by the panel, all submissions will be published and can be examined and commented upon by anyone who is interested."

Nick Stokes submitted a report in June and recently found out that the Policy Foundation had wimped out and changed its plans in July:
"The team has decided that its principal output will be peer-reviewed papers rather than a report.
Further announcements will follow in due course."
To which Stokes replied on his blog:
" report! So what happens to the terms of reference? The submissions? How do they interact with "peer-reviewed papers"?"
The review team of the Policy Foundation has now walked this back. Its chairman, Terence Kealey, a British biochemist, wrote this Tuesday:
"The panel has decided that its primary output should be in the form of peer-reviewed papers rather than a non-peer reviewed report. Work is ongoing on a number of subprojects, each of which the panel hopes will result in a peer reviewed paper.
One of our projects is an analysis of the numerous submissions made to the panel by members of the public. We anticipate that the submissions themselves will be published as an appendix to that analysis when it is published."
That sounded good. The review panel focussing on doing something useful, rather than answering their ordained silly loaded questions. And they would still take the submissions somewhat seriously. Right? The text is a bit vague, so I asked in the comments:
"How many are "numerous submissions"?
Any timeline for when these submissions will be published?"
I thought that was reasonably politely formulated. But these questions were removed within minutes. Nick Stokes happened to have seen them. Expecting this kind of behaviour by now, after a few years in this childish climate "debate", I naturally made the screen shot below.

Interesting. I think such a response tells you a lot about a political movement and whether they believe themselves that they are a scientific movement.

[UPDATE. Reminder to self: next time look in spam folder before publishing a blog post.

Yesterday evening, I got a friendly mail by the administrator of the GWPF review homepage, Andrew Montford, better known to most as the administrator of the UK mitigation sceptical blog Bishop Hill. A blog where people think it is hilarious to remove the V from my last name.

He wrote that the GWPF news page was not supposed to have comments and that my comment was therefore (?) removed. Montford was also kind enough to answer my questions:
1. Thirty-five.
2. This depends on the progress on the paper in question. Work is currently at an early stage.

Still a pity that the people interested in this review cannot read this answer on their homepage. No timeline.

Related reading

Moyhu: GWPF wimps out

And Then There's Physics: Some advice for the Global Warming Policy Foundation

Stoat: What if you gave a review and nobody came?

Sunday, 27 September 2015

AP, how about the term "mitigation sceptic"?

The Associated Press has added an entry to their stylebook on how to address those who reject mainstream climate science. The stylebook provides guidance to the journalists of the news agency, but is also used by many other newspapers. No one has to follow such rules, but journalists and many other writers often follow such style guides for accurate and consistent language. It probably also has an entry on whether you should write stylebook or style book.

The new entry advises steering clear of the terms "climate sceptic" and "climate change denier", and using instead the long form "those who reject mainstream climate science" or, if that is too long, "climate doubter".

Peter Sinclair just published an interview by US national public radio (NPR) with the Associated Press' Seth Borenstein, who wrote the entry. Peter writes: the sparks are flying. It also sounds as if those sparks are read from paper.

What do you call John Christy, a scientist who rejects mainstream science? What do you call his colleague Roy Spencer, who wrote a book titled "The Great Global Warming Blunder: How Mother Nature Fooled the World’s Top Climate Scientists"? What do you call the Republican US Senator with the snowball, [[James Inhofe]], who wrote a book claiming that climate science is a hoax? What do you call Catholic Republican Congressman [[Paul Gosar]], who did not want to listen to the Pope talking about climate change? What do you call Anthony Watts, a blogger who is willing to publish everything he can spin into a story against mitigation and science? What do you call Tim Ball, a retired geography professor who likes to call himself a climatology professor, and who cites from Mein Kampf to explain that climate science is similar to the Big Lie of the Jewish World Conspiracy?

I would suggest: by their name. If you talk about a specific person, it is best to simply use their name. Most labels are inaccurate for a specific person. If the label is positive, we may be happy to accept an inaccurate one; a negative label will naturally be disputed, and, being inaccurate, normally can be.

We are thus not looking for a word for a specific person. We are looking for a term for the political movement that rejects mainstream climate science. I feel it was an enormous strawman of AP's Seth Borenstein to talk about John Christy. He is an enormous outlier: he may reject mainstream science, but as far as I know he talks like a scientist. He is not representative of the political movement of Inhofe, Rush Limbaugh and Fox News. Have a look at the main blogs of this political movement: Watts Up With That, Climate Etc., Bishop Hill, Jo Nova. And please do not have a look at the even more disgusting smaller active blogs.

That is the political movement we need a name for. Doubters? That would not be the first term I would think of after some years of participating in this weird climate "debate". If there is one problem with this political movement, it is a lack of doubt. These people are convinced they see obvious mistakes in dozens of scientific problems, which the experts of those fields are unable to see, while they just need to read a few blog posts to get it. If you claim obvious mistakes you have two options: either all scientists are incompetent or they are all in a conspiracy. These are the non-scientists who know better than scientists how science is done. These are the people who understand the teachings of Jesus better than the Pope. Without any doubt.

It would be an enormous step forward in the climate "debate" if these people had some doubts. Then you would be able to talk to them. Then they might also search for information themselves to understand the problems better. Instead they like to call every source of information on mainstream science an activist resource, so as to have an excuse not to try to understand the problem they are supposedly doubting.

I do think that the guidance of the AP is a big step forward. It stops the defamation of the term that stands for people who advocate sceptical scientific thinking in every aspect of life. The sceptic organisation the Center for Inquiry has lobbied news organisations for a long time to stop the inappropriate use of the word sceptic. The problems the word "doubter" has are even more true for the term "sceptic". These people are not sceptical at all; in particular, they do not question their own ideas.

The style guide of The Guardian and the Observer states:
climate change denier
The [Oxford English Dictionary] defines a sceptic as "a seeker of the truth; an inquirer who has not yet arrived at definite conclusions".

Most so-called "climate change sceptics", in the face of overwhelming scientific evidence, deny that climate change is happening, or is caused by human activity, so denier is a more accurate term
I fully agree with The Guardian and NPR that "climate change denier" is the most accurate term for this group. They will complain about it because it does not put them in a good light. Which is rather ironic because this is the same demographic that normally complains about Political Correctness when asked to use an accurate term rather than a derogatory term.

The typical complaint is that the term climate change denier associates them with holocaust deniers. I never had that association before they brought it up; they are the group that promotes this association most actively. A denier is naturally simply someone who denies something, typically something that is generally accepted. The word existed long before the holocaust. The Oxford English Dictionary defines a denier as:
A person who denies something, especially someone who refuses to admit the truth of a concept or proposition that is supported by the majority of scientific or historical evidence:
a prominent denier of global warming
a climate change denier
In one-way communication, I see no problem with simply using the most accurate term. When you are talking with someone about climate science, however, I would say it is best to avoid the term. It will be used to talk about semantics rather than about the science and the science is our strong point in this weird climate "debate".

When you talk about climate change in public, you do so for the people who are listening in. Those are many more people and they may have an open mind. The best way to show you have science on your side is to stick to one topic and go in depth, define your terms, ask for evidence and try to understand why you disagree. That is also what scientists would do when they disagree. Staying on topic is the best way to demonstrate their ignorance. You will notice that they will try everything to change the topic. Draw your listeners' attention to this behaviour and keep asking questions about the initial topic. Using the term "denier" would only make it easier for them to change the topic.

An elegant alternative is the term "climate ostrich", with apologies to this wonderful bird, which does not actually put its head in the sand when trouble is in sight. Everyone immediately gets the connection: a climate ostrich is someone who does not see climate change as a problem. When climate ostriches venture out into the real world, they sometimes wrongly claim that no one has ever denied the greenhouse effect, but they are very sure it is not really a problem.

However, I am no longer convinced that everyone in this political movement fails to see the problem. Part of this movement may accept the framing of the environmental movement and of development groups that climate change will hit poor and vulnerable people hardest, and like that a lot. Not everyone has the same values. Wanting to see people of other groups suffer is not a nice thing to say in public. What is socially acceptable in the US is to claim to reject mainstream science.

To also include this fraction, I have switched to the term "mitigation sceptic". If you listen carefully, you will hear that adaptation is no problem for many. The problem is mitigation. Mitigation is a political response to climate change. This term thus automatically makes clear that we are not talking about scientific scepticism, but about political scepticism. The rejection of mainstream science stems from a rejection of the solutions.

I have used "mitigation sceptic" for some time now and it seems to work. They cannot complain about the "sceptic" part. They will not claim to be a fan of mitigation. Only once did someone answer that he was in favour of some mitigation policies, but for other reasons than climate change. But then these are policies to reduce American dependence on the Saudi Arabian torture dictatorship, or policies to reduce air pollution, or policies to reduce unemployment by shifting the tax burden from labour to energy. These may happen to be the same policies, but then they are not policies to mitigate the impacts of climate change.

Post Scriptum. I will not publish any comments claiming that denier is a reference to the holocaust. No, that is not an infringement of your freedom of speech. You can start your own blog for anyone who wants to read that kind of stuff. That does not include me.

[UPDATE. John Mashey suggests the term "dismissives": Global Warming’s Six Americas 2009 carefully characterized the belief patterns of Americans, which they survey regularly. The two groups Doubtful and Dismissive are different enough to have distinct labels.

Ceist, in a comment suggested: "science rejecters".

Many options, no need for the very inaccurate term "doubter" for people who display no doubt. ]

Related reading

Newsweek: The Real Skeptics Behind the AP Decision to Put an End to the Term 'Climate Skeptics'.

Eli has a post on the topic, not for the faint of heart: Eli Explains It All.

Greg Laden defends Seth Borenstein as an excellent journalist, but also sees no "doubt": Analysis of a recent interview with Seth Borenstein about Doubt cf Denial.

My immature and neurotic fixation on WUWT or how to talk to mitigation sceptics in public.

How to talk to uncle Bob, the climate ostrich or how to talk to mitigation sceptics in your social circles.

Do dissenters like climate change?

Planning for the next Sandy: no relative suffering would be socialist.

Thursday, 24 September 2015

Model spread is not uncertainty #NWP #ClimatePrediction

Comparison of a large set of climate model runs (CMIP5) with several observational temperature estimates. The thick black line is the mean of all model runs. The grey region is its model spread. The dotted lines show the model mean and spread with new estimates of the climate forcings. The coloured lines are 5 different estimates of the global mean annual temperature from weather stations and sea surface temperature observations. Figures: Gavin Schmidt.

It seems as if 2015 and likely also 2016 will become very hot years. So hot that you no longer need statistics to see that there was no decrease in the rate of warming; you can easily see it by eye now. Maybe the graph also looks less deceptive now that the very prominent super El Nino year 1998 is clearly no longer the hottest.

The "debate" is therefore now shifting to the claim that "the models are running hot". This claim ignores the other main option: that the observations are running cold. Even assuming the observations to be perfect, it is not that relevant that in some years the observed annual mean temperatures were close to the lower edge of the spread of all the climate model runs (the ensemble spread). See the comparison shown at the top.

Now that this has not been the case for some years, it may be a neutral occasion to explain that the spread of all the climate model runs does not equal the uncertainty of these model runs. Because some scientists also seem to make this mistake, I thought it was worthy of a post. One hint is naturally that the words are different. That is for a reason.

Long ago, at a debate at the scientific conference EGU, there was an older scientist who was really upset by the distributed computing project in which the public donates computer resources to produce a very large dataset of climate model runs with a range of settings for parameters we are uncertain about. He worried that the modelled distribution would be used as a statistical probability distribution. He was assured that everyone was well aware that the model spread was not the uncertainty. But it seems he was right and this awareness has faded.

Ensemble weather prediction

It is easiest to explain this difference in the framework of ensemble weather prediction, rather than going to climate directly. Much more work has been done in this field (meteorology is bigger and decadal climate prediction has just started). Furthermore, daily weather predictions offer much more data to study how good a prediction was and how well the ensemble spread matches the uncertainty.

While it is popular to complain about weather predictions, they are quite good and continually improving. The prediction for 3 days ahead is now as good as the prediction for the next day was when I was young. If people really thought the weather prediction was bad, you have to wonder why they pay attention to it. I guess complaining about the weather and its predictions is just a safe conversation topic. Except when you stumble upon a meteorologist.

Part of the recent improvement of weather predictions is that not just one, but a large number of predictions are computed, which scientists call ensemble weather prediction. Not only is the mean of such an ensemble more accurate than the single realization we used to have, the ensemble spread also gives you an idea of the uncertainty of the prediction.
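Why the ensemble mean is more accurate can be illustrated with a toy calculation (entirely my own sketch, not a real forecast system): if each member has an independent random error, the error of the mean of n members shrinks by roughly a factor of the square root of n.

```python
import numpy as np

rng = np.random.default_rng(42)

truth = np.sin(np.linspace(0, 10, 500))  # stand-in for the "true" weather
n_members = 20

# Each toy ensemble member is the truth plus independent forecast error.
members = truth + rng.normal(0.0, 0.5, size=(n_members, truth.size))

single_rmse = np.sqrt(np.mean((members[0] - truth) ** 2))
mean_rmse = np.sqrt(np.mean((members.mean(axis=0) - truth) ** 2))

print(f"single member RMSE: {single_rmse:.2f}")  # ~0.5
print(f"ensemble mean RMSE: {mean_rmse:.2f}")    # ~0.5 / sqrt(20), about 0.11
```

In reality the errors of ensemble members are correlated, so the gain is smaller than the square root of n, but the ensemble mean still beats a single run.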

Somewhere in the sunny middle of a large high-pressure system you can be quite confident that the prediction is right; errors in the position of the high are then not that important. If this is combined with a blocking situation, where the highs and lows do not move eastwards much, it may be possible to make very confident predictions many days in advance. If a front is approaching it becomes harder to tell well in advance whether it will pass your region or miss it. If the weather will be showery, it is very hard to tell where exactly the showers will hit.

Ensembles give information on how predictable the weather is, but they do not provide reliable quantitative information on the uncertainties. Typically the ensemble is overconfident: the ensemble spread is smaller than the real uncertainty. You can test this by comparing predictions with many observations. In the figure below you can read that when the raw model ensemble (black line) was 100% certain (forecast probability) that it would rain more than 1 mm/hr, it should only have been 50% sure. Or when 50% of the model ensemble showed rain, the observations showed rain in only 30% of such cases.

The "reliability diagram" for an ensemble of the regional weather prediction system of the German weather service for the probability of more than 1 mm of rain per hour. On the x-axis is the probability of the model, on the y-axis the observed frequency. The thick black line is the raw model ensemble. Thus when all ensemble members (100% probability) showed more than 1 mm/hr, it rained that hard only half the time. The light lines show the results of two methods to reduce the overconfidence of the model ensemble. Figure 7a from Ben Bouallègue et al. (2013).
To generate this "raw" regional model ensemble, four different global models were used for the state of the weather at the borders of this regional weather prediction model, the initial conditions of the regional atmosphere were varied and different model configurations were used.
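Such a reliability curve is simple to compute yourself; here is a minimal sketch (my own illustration with made-up forecast/observation pairs, not the COSMO-DE data): bin the forecast probabilities and count how often the event actually occurred in each bin.

```python
import numpy as np

def reliability_curve(forecast_prob, observed, n_bins=10):
    """Observed event frequency per forecast-probability bin.

    forecast_prob: fraction of ensemble members predicting the event (0..1)
    observed:      1 if the event (e.g. rain > 1 mm/hr) occurred, else 0
    """
    forecast_prob = np.asarray(forecast_prob, dtype=float)
    observed = np.asarray(observed, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    idx = np.clip(np.digitize(forecast_prob, bins, right=True) - 1, 0, n_bins - 1)
    freq = np.full(n_bins, np.nan)
    for b in range(n_bins):
        mask = idx == b
        if mask.any():
            freq[b] = observed[mask].mean()
    return bins[:-1] + 0.5 / n_bins, freq

# A made-up overconfident ensemble, mimicking the figure: when it said
# 100% it only rained half the time; when it said 50% it rained 25%.
probs = np.array([1.0, 1.0, 1.0, 1.0, 0.5, 0.5, 0.5, 0.5])
obs = np.array([1, 0, 1, 0, 1, 0, 0, 0])
centers, freq = reliability_curve(probs, obs, n_bins=2)
```

A perfectly reliable ensemble would lie on the diagonal of the diagram: events forecast with probability p occur a fraction p of the time.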

The raw ensemble is still overconfident because the initial conditions are given by the best estimate of the state of the atmosphere, which has less variability than the actual state. The atmospheric circulation varies on spatial scales from millimeters to the size of the planet. Weather prediction models cannot model this completely, as the computers are not big enough; rather, they compute the circulation using a large number of grid boxes, which are typically 1 to 25 km in size. The flows on smaller scales do influence the larger-scale flow; this influence is computed with a strongly simplified model for turbulence: so-called parameterizations. These parameterizations are based on measurements or more detailed models. Typically, they aim to predict the mean influence of the turbulence, but the small-scale flow is not always the same and would have varied had it been possible to compute it explicitly. This variability is missing.

The same goes for the parameterizations for clouds, their water content and cloud cover. The cloud cover is a function of the relative humidity. If you look at the data, this relationship is very noisy, but the parameterization only takes the best guess. The parameterization for solar radiation takes these clouds in the various model layers and makes assumptions about how they overlap from layer to layer. In the model this is always the same; in reality it varies. The same goes for precipitation, for the influence of the vegetation, for the roughness of the surface and so on. Scientists have started working on parameterizations that also simulate these variations, but this field is still in its infancy.

Also the data for the boundary conditions, such as the height and roughness of the vegetation, the brightness of the vegetation and soil, the ozone concentration and the amount of dust particles in the air (aerosols), are normally taken to be constant.

For the raw data fetishists out there: part of this improvement in weather predictions is due to the statistical post-processing of the raw model output. From simple to complicated: it may be seen in the observations that a model is on average 1 degree too cold; it may be known that this is 2 degrees for a certain region; and this may be due especially to biases during sunny high-pressure conditions. The statistical processing of weather predictions to reduce such known biases is known as model output statistics (MOS). (This is methodologically very similar to the homogenization of daily climate data.)
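In its simplest form, MOS is just a regression of past observations on past forecasts, which is then applied to new forecasts. A minimal sketch with synthetic data (my own toy example, not an operational MOS system):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data: the raw model runs about 1 degree too cold.
obs_train = 10.0 + 5.0 * rng.standard_normal(200)
fcst_train = obs_train - 1.0 + 0.8 * rng.standard_normal(200)

# Fit obs ~ a * forecast + b on the historical forecast/observation pairs.
a, b = np.polyfit(fcst_train, obs_train, deg=1)

raw_forecast = 12.0
corrected = a * raw_forecast + b
print(f"raw {raw_forecast:.1f} -> corrected {corrected:.1f}")  # roughly 1 degree warmer
```

An operational MOS uses many more predictors (humidity, cloud cover, weather regime and so on), which is how it can, for example, correct more strongly during sunny high-pressure conditions.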

The same statistical post-processing for the average can also be used to correct the overconfidence of the model spread of the weather prediction ensembles. Again from the simple to the complicated. When the above model ensemble is 100% sure it will rain, this can be corrected to 50%. The next step is to make this correction dependent on the rain rate; when all ensemble members show strong precipitation, the probability of precipitation is larger than when most only show drizzle.

Climate projection and prediction

There is no reason whatsoever to think that the model spread of an ensemble of climate projections is an accurate estimate of the uncertainty. My inexpert opinion would be that for temperature the spread is likely again too small, I would guess by up to a factor of two. The better informed authors of the last IPCC report seem to agree with me when they write:
The CMIP3 and CMIP5 projections are ensembles of opportunity, and it is explicitly recognized that there are sources of uncertainty not simulated by the models. Evidence of this can be seen by comparing the Rowlands et al. (2012) projections for the A1B scenario, which were obtained using a very large ensemble in which the physics parameterizations were perturbed in a single climate model, with the corresponding raw multi-model CMIP3 projections. The former exhibit a substantially larger likely range than the latter. A pragmatic approach to addressing this issue, which was used in the AR4 and is also used in Chapter 12, is to consider the 5 to 95% CMIP3/5 range as a ‘likely’ rather than ‘very likely’ range.
The confidence interval of the "very likely" range is normally about twice as large as the "likely" range.
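That factor can be checked with a quick back-of-the-envelope calculation, assuming (my assumption, purely for illustration) that the distribution is Gaussian: the central 90% ("very likely") interval comes out about 1.7 times as wide as the central 66% ("likely") interval, i.e. roughly a factor of two.

```python
from statistics import NormalDist

z = NormalDist().inv_cdf  # quantile function of the standard normal

# Half-widths of central intervals, in standard deviations:
likely = z(0.5 + 0.66 / 2)       # central 66% -> ~0.95 sigma
very_likely = z(0.5 + 0.90 / 2)  # central 90% -> ~1.64 sigma

print(f"ratio: {very_likely / likely:.2f}")  # ~1.7
```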

The ensemble of climate projections is intended to estimate the long-term changes in the climate. It was never intended to be used on the short term. Scientists have just started doing that under the header of "decadal climate prediction" and that is hard. It is hard because then we need to model the influence of internal variability of the climate system: variations in the oceans, ice cover, vegetation and hydrology. Many of these influences are local. Local and short-term variations that are not important for long-term projections of global means thus need to be accurate for decadal predictions. The variations in the global mean temperature to be predicted are small; that we can do this at all is probably because regionally the variations are larger. Peru and Australia see a clear influence of El Nino, which makes it easier to study. While El Nino is the biggest climate mode, globally its effect is just a (few) tenths of a degree Celsius.

Another interesting climate mode is the [[Quasi-Biennial Oscillation]] (QBO), an oscillation in the wind direction in the stratosphere. If you do not know it, no problem, that is one for the climate mode connoisseur. To model it with a global climate model, you need a model with a very high top (about 100 km) and many model layers in the stratosphere. That takes a lot of computational resources and there is no indication that the QBO is important for long-term warming. Thus naturally most, if not all, global climate model projections ignore it.

Ed Hawkins has a post showing the internal variability of a large number of climate models. I love the name of the post: Variable Variability. It shows the figure below. How much the variability differs between the models shows how much effort modellers have put into modelling internal variability so far. For that reason alone, I see no reason to simply equate the model ensemble spread with the uncertainty.

Natural variability

Next to the internal variability there is also natural variability due to volcanoes and solar variations. Natural variability has always been an important part of climate research. The CLIVAR (climate variability and predictability) program is a component of the World Climate Research Programme and its predecessor started in 1985. Even if in 2015 and 2016 the journal Nature will probably publish fewer "hiatus" papers, natural variability will certainly stay an important topic for climate journals.

The studies that sought to explain the "hiatus" are still useful to understand why the temperatures were lower in some years than they otherwise would have been. At least the studies that hold up; I am not fully convinced yet that the data is good enough to study such minute details. In the Karl et al. (2015) study we have seen that small updates and reasonable data processing differences can produce small changes in the short-term temperature trends that are, however, large relative to something as minute as this "hiatus" thingy.

One reason the study of natural variability will continue is that we need this for decadal climate prediction. This new field aims to predict how the climate will change in the coming years, which is important for impact studies and prioritizing adaptation measures. It is hoped that by starting climate models with the current state of the ocean, ice cover, vegetation, chemistry and hydrology, we will be able to make regional predictions of natural variability for the coming years. The confidence intervals will be large, but given the large costs of the impacts and adaptation measures, any skill has large economic benefits. In some regions such predictions work reasonably well. For Europe they seem to be very challenging.

This is not only challenging from a modelling perspective, but also puts much higher demands on the quality and regional detail of the climate data. Researchers in our German decadal climate prediction project, MiKlip, showed that the differences between the different model systems could only be assessed well using a well homogenized radiosonde dataset over Germany.

Hopefully, the research on decadal climate prediction will give scientists a better idea of the relationship between model spread and uncertainty. The figure below shows a prediction from the last IPCC report, the hatched red shape. While this is not visually obvious, this uncertainty is much larger than the model spread. The likelihood of staying within the shape is 66%, while the model spread shown covers 95% of the model runs. Had the red shape also shown the 95% level, it would have been about twice as high. How much larger the uncertainty is than the model spread is currently largely expert judgement. If we can formally compute this, we will have understood the climate system a little bit better again.

Related reading

In a blind test, economists reject the notion of a global warming pause

Are climate models running hot or observations running cold?


Ben Bouallègue, Zied, Theis, Susanne E., Gebhardt, Christoph, 2013: Enhancing COSMO-DE ensemble forecasts by inexpensive techniques. Meteorologische Zeitschrift, 22, p. 49 - 59, doi: 10.1127/0941-2948/2013/0374.

Rowlands, Daniel J., David J. Frame, Duncan Ackerley, Tolu Aina, Ben B. B. Booth, Carl Christensen, Matthew Collins, Nicholas Faull, Chris E. Forest, Benjamin S. Grandey, Edward Gryspeerdt, Eleanor J. Highwood, William J. Ingram, Sylvia Knight, Ana Lopez, Neil Massey, Frances McNamara, Nicolai Meinshausen, Claudio Piani, Suzanne M. Rosier, Benjamin M. Sanderson, Leonard A. Smith, Dáithí A. Stone, Milo Thurston, Kuniko Yamazaki, Y. Hiro Yamazaki & Myles R. Allen, 2012: Broad range of 2050 warming from an observationally constrained large climate model ensemble. Nature Geoscience, 5, pp. 256–260, doi: 10.1038/ngeo1430.

Thursday, 17 September 2015

Are climate models running hot or observations running cold?

“About thirty years ago there was much talk that geologists ought only to observe and not theorise; and I well remember some one saying that at this rate a man might as well go into a gravel-pit and count the pebbles and describe the colours. How odd it is that anyone should not see that all observation must be for or against some view if it is to be of any service!”
Charles Darwin

“If we had observations of the future, we obviously would trust them more than models, but unfortunately…"
Gavin Schmidt

"What is the use of having developed a science well enough to make predictions if, in the end, all we're willing to do is stand around and wait for them to come true?"
Sherwood Rowland

This is a post in a new series on whether we have underestimated global warming; this installment is inspired by a recent article on climate sensitivity discussed at And Then There's Physics.

The quirky Gavin Schmidt quote was naturally meant to say something similar to Sherwood Rowland's, but taken at face value it contrasts with Darwin's, and I have to agree with Darwin and disagree with Schmidt. Schmidt got the quote from Knutson & Tuleya (thank you ATTP in the comments).

The point is that you cannot look at data without a model, at least a model in your head. Some people may not be aware of their model, but models and observations always go hand in hand. Either without the other is nothing. The naivete so often displayed at WUWT & Co., that you only need to look at the data, is completely unscientific, especially when "the data" is their cherry-picked miniature part of it.

Philosophers of science, please skip this paragraph. You could say that initially, in ancient Greece, philosophers only trusted logic and heavily distrusted the senses. That was natural at the time: if you put a stick in the water it looks bent, but if you feel it with your hand it is still straight. In the 17th century British empiricism went to the other extreme and claimed that knowledge mainly comes from sensory experience. However, for science you need both. You cannot make sense of the senses without theory, and theory helps you to ask the right questions of nature, without which you could observe whatever you'd like for eternity without making any real scientific progress. How many red Darwinian pebbles are there on Earth? Does that question help science? What do you even mean by red pebbles?

In the hypothetical case of observations from the future, we would do the same. We would not prefer the observations, but use both observations and theory to understand what is going on. I am sure Gavin Schmidt would agree; I took his beautiful quote out of context.

Why am I writing this? What is left of "global warming has stopped" or "don't you know warming has paused?" is the claim that models predicted more warming than we see in the observations. Or as a mitigation sceptic would say, "the models are running hot". This difference is not big; this year we will probably get a temperature that fits the mean of the projections. But we also have an El Nino year, thus we would expect the temperature to be on the high side, which it is not.

Figure from Cowtan et al. (2015). Caption by Ed Hawkins: Comparison of 84 RCP8.5 simulations against HadCRUT4 observations (black), using either air temperatures (red line and shading) or blended temperatures using the HadCRUT4 method (blue line and shading). The shaded regions represent the 90% range (i.e. from 5-95%) of the model simulations, with the corresponding lines representing the multi-model mean. The upper panel shows anomalies derived from the unmodified RCP8.5 results, the lower shows the results adjusted to include the effect of updated forcings from Schmidt et al. [2014]. Temperature anomalies are relative to 1961-1990.

If there is such a discrepancy, the naive British empiricist might say:
  • "the models are running hot", 
but the other two options are:
  • "the observations are running cold", and
  • the comparison itself is flawed, for example because model air temperatures are compared with blended land-air and sea surface observations, or because the projections were run with forcings that differ from what actually happened.
And each of these three options covers an infinity of possibilities. As this series will show, there are many observations that suggest that the station temperature "observations are running cold". This is just one of them. Then one has to weigh the evidence.

If there is any discrepancy a naive falsificationist may say that the theory is wrong. However, discrepancies always exist; most are stupid measurement errors. If a leaf does not fall to the ground, we do not immediately conclude that the theory of gravity is wrong. We start investigating. There is always the hope that a discrepancy can help to understand the problem better. It is from this better understanding that scientists conclude that the old theory was wrong.

Estimates of equilibrium climate sensitivity from the recent IPCC report. The dots indicate the mean estimates, the horizontal lines the confidence intervals. Only studies new to this IPCC report are labelled.

Looking at projections covers "only" the last few decades; how does it look for the entire instrumental record? People have estimated the climate sensitivity from the global warming observed until now. The equilibrium climate sensitivity indicates how much warming is expected in the long term for a doubling of the CO2 concentration. The figure to the right shows that several lines of evidence suggest that the equilibrium climate sensitivity is about 3. This value is not only estimated from the climate models, but also from climatological constraints (such as the Earth having escaped from [[snowball Earth]]), from the response to volcanoes and from a diverse range of paleo reconstructions of past changes in the climate. And recently Andrew Dessler estimated the climate sensitivity to be 3 based on decadal variability.
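For readers who want numbers: to a good approximation, the long-term warming follows a logarithmic relation in the CO2 concentration. A back-of-the-envelope sketch of my own (assuming an ECS of 3 and the usual 280 ppm preindustrial concentration; note this gives the eventual equilibrium warming, which is larger than the transient warming observed so far):

```python
from math import log2

def equilibrium_warming(c_ratio, ecs=3.0):
    """Long-term warming for a CO2 ratio C/C0, using the standard
    logarithmic forcing approximation and an assumed ECS."""
    return ecs * log2(c_ratio)

print(equilibrium_warming(2.0))                  # 3.0: a doubling, by definition of ECS
print(round(equilibrium_warming(400 / 280), 2))  # ~1.5 for today's ~400 ppm
```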

The outliers are the "instrumental" estimates. They scatter a lot and have large confidence intervals; that is to be expected because global warming has increased the temperature by only about 1°C up to now. However, these estimates are on average also below 3. This is a reason to critically assess the climate models, climatological constraints and paleo reconstructions, but the most likely resolution would be that the outlier category, the "instrumental" estimates, is not accurate.

The term "instrumental" estimate refers to highly simplified climate models that are tuned to the observed warming. They need additional information on the change in CO2 (quite reliable) and on changes in atmospheric dust particles (so-called aerosols) and their influence on clouds (highly uncertain). The large spread suggests that these methods are not (yet) robust and some of the simplifications also seem to produce biases towards too low sensitivity estimates. That these estimates are on average below 3 is likely mostly due to such problems with the method, but it could also suggest that "the observations are running cold".

In this light, the paper discussed over at And Then There's Physics is interesting. The paper reviews the scientific literature on the relationship between how well climate models simulate a change in the climate for which we have good observations and which is important for the climate sensitivity (water vapour, clouds, tropical thunderstorms and ice) and the climate sensitivity these models have. It argues that:
the collective guidance of this literature [shows] that model error has more likely resulted in ECS underestimation.
Given that these "emergent constraint" studies find that the climate sensitivity from dynamic climate models may well be too low rather than too high, it makes sense to investigate whether the estimates from the "instrumental" category, the highly simplified climate models, are too low. One reason could be because we have underestimated the amount of surface warming.

The top panel (A) shows a measure for the mixing between the lower and middle troposphere (LTMI) over warm tropical oceans. The observed range is between the two vertical dashed lines. Every coloured dot is a climate model. Only the models with a high equilibrium climate sensitivity are able to reproduce the observed lower tropospheric mixing.
The lower panel (B) shows a qualitative summary of the studies in this field. The vertical line is the climate sensitivity averaged over all climate models. For the models that reproduce water vapour well this average is about the same. For the models that reproduce ice (cryosphere), clouds, and tropical thunderstorms (ITCZ) well, the climate sensitivity is higher.

Concluding, climate models and further estimates of the climate sensitivity suggest that we may underestimate the warming of the surface temperature. This is certainly not conclusive, but there are many lines of evidence that climate change is going faster than expected, as we will see in further posts in this series: Arctic sea ice and snow cover, precipitation, sea level rise predictions, lake and river warming, etc. In combination the [[consilience of evidence]] suggests at least that "the observations are running cold" is something we need to investigate.

Looking at the way station measurements are made there are also several reasons why the raw observations may show too little warming. The station temperature record is rightly seen as a reliable information source, but in the end it is just one piece of evidence and we should consider all of the evidence.

There are so many lines of evidence for underestimating global warming that science historian Naomi Oreskes wondered if climate scientists had a tendency to "err on the side of least drama" (Brysse et al., 2013). Rather than such a bias, all these underestimates of the speed of climate change could also have a common cause: an underestimate of global warming.

I did my best to give a fair view of the scientific literature, but like for most posts in this series this topic goes beyond my expertise (station data). Thus a main reason to write these posts is to get qualified feedback. Please use the comments for this or write to me.

Related information

Gavin Schmidt wrote the same 2 years ago from a modeller's perspective: On mismatches between models and observations.

Gavin Schmidt's TED talk: The emergent patterns of climate change and corresponding article.

Climate Scientists Erring on the Side of Least Drama

Why raw temperatures show too little global warming

First post in this series wondering about a cooling bias: Lakes are warming at a surprisingly fast rate


Cowtan, Kevin, Zeke Hausfather, Ed Hawkins, Peter Jacobs, Michael E. Mann, Sonya K. Miller, Byron A. Steinman, Martin B. Stolpe, and Robert G. Way, 2015: Robust comparison of climate models with observations using blended land air and ocean sea surface temperatures. Geophysical Research Letters, 42, 6526–6534, doi: 10.1002/2015GL064888.

Fasullo, John T., Benjamin M. Sanderson and Kevin E. Trenberth, 2015: Recent Progress in Constraining Climate Sensitivity With Model Ensembles. Current Climate Change Reports, first online: 16 August 2015, doi: 10.1007/s40641-015-0021-7.

Schmidt, Gavin A. and Steven Sherwood, 2015: A practical philosophy of complex climate modelling. European Journal for Philosophy of Science, 5, no. 2, 149-169, doi: 10.1007/s13194-014-0102-9.

Brysse, Keynyn, Naomi Oreskes, Jessica O’Reilly and Michael Oppenheimer, 2013: Climate change prediction: Erring on the side of least drama? Global Environmental Change, 23, Issue 1, February 2013, Pages 327–337, doi: 10.1016/j.gloenvcha.2012.10.008.

Sunday, 30 August 2015

Democracy is more important than climate change #WOLFPAC

I know, I know, this is comparing apples to oranges. This is a political post. I am thinking of a specific action I am enthusiastic about to make America a great democracy again: WOLFPAC is working to get a constitutional amendment to get money out of politics. If I had to choose between a mitigation sceptical WOLFPAC candidate and someone who accepts climate change but is against this amendment, I would choose the mitigation sceptic.

Money is destroying American politics. Politicians need money for their campaigns, and the politician with the most money nearly always wins. This goes both ways: bribing the winner is more effective, but money for headquarters and advertisements sure helps a lot to win. For the companies this is a good investment; the bribe is normally much smaller than the additional profit they make from getting contracts and law changes. Pure crony capitalism.

This is a cross-partisan issue. Republican presidential candidate Donald Trump boasted:
[W]hen you give, they do whatever the hell you want them to do. ... I will tell you that our system is broken. I gave to many people. Before this, before two months ago, I was a businessman. I give to everybody. When they call, I give. And you know what? When I need something from them, two years later, three years later, I call them. They are there for me. And that's a broken system.
For Democrat presidential candidate Bernie Sanders getting money out of politics is a priority issue. He will introduce "the Democracy Is for People constitutional amendment" and promises "that any Sanders Administration Supreme Court nominee will commit to overturning the disastrous Citizens United decision."

Bribery will not stop with an appeal to decency. It should be forbidden.

The WOLFPAC plan to get bribery outlawed sounds strong. They want a constitutional amendment that forbids companies to bribe politicians, and they want this amendment passed by the states rather than by Washington, because federal politicians depend the most on corporate funding. They believe state legislators hold their political ideals more strongly. That matches my impression of local politics from my student days: even the politicians I did not agree with mostly seemed to believe what they said. Once I even overheard a local politician passionately discussing a reorganization to improve services and employee morale with his girlfriend, in a train on a Saturday afternoon.

In Washington it is harder to win against lobbies that have much more money. At the state level election campaigns are cheaper; this makes the voice of the people stronger, and a little money makes more impact. That makes it easier for WOLFPAC to influence elections: try to get rid of politicians who oppose the amendment and reward the ones who work for it.

Even at the federal level there may actually be some possibilities. Corporations also compete with each other, so they are more willing to fund campaigns that help themselves than campaigns that help all companies. In the most extreme case, if only one company had to cough up all the money to keep money in politics, this company would be a lot less profitable than all the others benefiting from its "altruism". In other words, even if companies have a lot of money, you are not fighting against their entire war chest.

Almost all people are in favour of getting money out of politics, so a campaign in favour of it is much cheaper than one against. WOLFPAC was founded by the owner of The Young Turks internet news company, which has a reach comparable to the cable news channels. This guarantees that the topic will not go away and that time is on our side. Some politicians may like to ignore the amendment as long as they can, but they will not dare to openly oppose such a popular proposal. With more and more states signing on, the movement becomes harder to ignore.

Wealthy individuals may well bribe politicians now and still be in favour of no one being able to do so, just as someone can fly or drive a car while being in favour of changing the transport system so that this is no longer necessary.

It takes two thirds of the states (34) to call for a constitutional convention on a certain topic. The amendment that comes out of it then has to be approved by three quarters of the states. The beginning is the hardest part, but at the moment I am writing this, the first hurdle has already been cleared: four states—Vermont, California, New Jersey and Illinois—have already called for a constitutional convention; see the map at the top. In Connecticut, Delaware, Hawaii, Maryland, Missouri and New Hampshire, the resolution has already passed one of the houses. In many more it has been introduced or approved in committees.
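As a quick sanity check on those thresholds, here is a minimal sketch in Python of the Article V arithmetic (assuming the current 50 states; the rounding up is because a fraction of a state does not count):

```python
from math import ceil

STATES = 50

# Article V thresholds: two thirds of the state legislatures to call
# a convention, three quarters of the states to ratify the amendment.
call_threshold = ceil(2 / 3 * STATES)    # states needed to call: 34
ratify_threshold = ceil(3 / 4 * STATES)  # states needed to ratify: 38

print(call_threshold, ratify_threshold)
```

So with four states on board, 30 more calls are needed before a convention, and ratification afterwards needs 38 states.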

I would say this has a good chance of winning, and it would feel so good to get it working. For America and for the rest of the world: given how dominant America is, a functioning US political system is important for everyone. It would probably also do a lot to heal the culture war in America, which is fuelled by negative campaigning. As such it could calm down the climate "debate", which is clearly motivated by politics and only pretends to worry about the integrity of science. The nasty climate "debate" is a social problem in the USA that should be solved politically in the USA; no amount of science communication can do that.

A recent survey across 14 industrialised nations found that Australia and Norway are the most mitigation-sceptical countries. This does not hurt Norway, because it has a working political system. A Norwegian politician could not point to a small percentage of political radicals to satisfy his donors. In a working political system, playing the fool seriously hurts your reputation; it would probably even work better to say honestly that you do this because you support fossil fuel companies. The political radicals at WUWT & Co. will not go away, but it is not a law of nature that politicians use them as an excuse.

[UPDATE. Politics in Australia also works, just a little more slowly: after two years, mitigation-sceptical prime minister Tony Abbott was toppled by the science-accepting Malcolm Turnbull.]

Please have a look at the plan of WOLFPAC. I think it could work, and that would be fabulous.