Environment/Nature

How much context is in the Climategate emails? (updated)

Given the release of a second batch of hacked emails yesterday, S&R decided to pull this analysis from 2010 back to the front. The conclusions reached here apply as much to the emails published in 2011 as they do to the original emails from 2009.

It is impossible to draw firm conclusions from the hacked documents and emails. They do not represent the complete record, and they are not a random selection from the complete record.
– Dr. Timothy Osborn, Climatic Research Unit (source)

After several hundred hours of studying the emails and looking at their references, I have no hesitation in stating that, to my satisfaction, the system is rotten to the core and has been from the start.
– Geoff Sherrington, former corporate geologist (source)

According to Osborn, there is not sufficient context to understand the “true” story behind the published Climatic Research Unit emails and documents. However, according to Sherrington, the emails and references contained therein provide all the context needed in order to conclude that climate change research is complete hogwash. Reality lies somewhere on a continuum between these two extremes – the question is where.

S&R set out to determine whether the published CRU emails provided enough context for the public to condemn or vindicate the scientists involved. After investigating three primary options and reading a key study, S&R has concluded that the emails do not themselves contain sufficient context to understand what really happened in climate science over the last 13 years.

Many people have claimed that the emails contain all the context needed to draw wide-ranging conclusions about climate scientists and climate research in general. For example, critics like Sherrington and Steve McIntyre of Climate Audit claim that the emails contain overwhelming evidence of scientific misconduct and/or a conspiracy among scientists, even though three separate investigations have ruled to the contrary. Some scientists, on the other hand, have concluded that the emails clearly show that nothing serious happened.

S&R contacted Steve McIntyre for his views on the CRU emails. While he was terse in his email communications, he suggested that his writings at Climate Audit were a good place to start. In December of 2009, shortly after the emails were published, McIntyre wrote that

climate scientists say that the “trick” is now being taken out of context. The Climategate Letters show clearly that the relevant context is the IPCC Lead Authors’ meeting in Tanzania in September 1999 at which the decline in the Briffa reconstruction [of tree ring data] was perceived by IPCC as “diluting the message”, as a “problem”, as a “potential distraction/detraction”.(source)

In addition, McIntyre wrote the following in March of 2010 in another post at Climate Audit:

Once again, the fact that the decline is discussed in a Nature paper does not justify the deletion of the inconvenient data in the IPCC spaghetti graph [of temperature proxies, including tree rings] in order to provide the false rhetorical consistency that IPCC was seeking. (source)

These quotes illustrate that McIntyre feels that there is sufficient context within the emails themselves to prove that several climate scientists had deleted “inconvenient data” regarding tree rings in service of a political end, namely the removal of a “potential distraction.” This is a charge that, if true, would constitute a serious breach of scientific ethics.

Other critics have claimed that there is abundant context to prove conspiracy by CRU climate researchers and their US associates. Tom Fuller, co-author of Climategate: The CRUtape Letters (Volume 1) with Steven Mosher, posted excerpts from the book at his website. Two key excerpts are quoted below:

The scientists known as ‘The Team’ [Phil Jones, Michael Mann, Keith Briffa, et al] hid evidence that their presentation for politicians and policy makers was not as strong as they wanted to make it appear, downplaying the very real uncertainties present in climate reconstruction….

But the leaked files showed that The Team had done this by hiding how they presented data, and ruthlessly suppressing dissent by insuring that contrary papers were never published and that editors who didn’t follow their party line were forced out of their position.(source)

These quotes demonstrate that Fuller believes the emails reveal a conspiracy by “The Team” to overstate the certainty of climate disruption, conceal evidence to the contrary, and manipulate the peer-review system. This is very much in line with something else Sherrington said:

Yes, there WAS a conspiracy and if you cannot find it [in the emails] then you do not have the innate ability to interpret data. (emphasis original, source)

It’s not just critics who believe the emails contain sufficient context to know what really happened; some climate researchers make this claim as well. In an interview with S&R, Martin Vermeer, first author of a recent PNAS paper on sea level rise, claimed that “I have plenty context to recognise that none of the allegations hold water, even without seeing the balance of the emails.”

[6/3/10 correction: As per the comments below, Vermeer specifically meant “‘the allegations’ refers specifically to the charges of scientific misconduct / breach of ethics. And ‘plenty context’ includes the scientific literature, and knowing first hand how research is done.”]

At this point, the claims of conspiracy and misconduct made by critics have nearly all been rejected by the first Penn State inquiry, the UK House of Commons inquiry, and the Oxburgh panel. In fact, only a single serious claim levied by critics against climate scientists has been substantiated by any of the investigations – that Phil Jones and the University of East Anglia were not sufficiently open to granting Freedom of Information requests. Given that none of the three inquiries has found scientific misconduct and only one found a possible ethical breach (the FOI issue), it’s reasonable to conclude that the CRU emails alone lack sufficient context for broad claims. This is contrary to what Fuller, McIntyre, and Sherrington have said.

Clearly, reality does not lie close to this end of the continuum. But do the emails contain enough context to make even some limited claims? S&R asked this question of Steven Mosher, and his response was essentially “yes:”

Just as missing data in some areas of climate science doesn’t prevent us from making rational statements about global warming, so too the fact of missing mails does not prevent us from describing clearly what we do know about the mails.

Mosher also said that we have enough context to prove that there was a widespread breakdown in scientific ethics among climate researchers. In addition, Mosher claims that both he and his co-author Tom Fuller feel that the emails revealed nothing that alters the conclusions of climate disruption research to date, saying

[t]he charge that we made in our book was not directed at the science. As we argued, the mails do not and cannot change the science. (emphasis mine)

This claim is inconsistent with the excerpts from their book that Fuller quotes at his blog, and it is squarely in conflict with what McIntyre says.

However, Mosher did feel that there was enough evidence to cast significant doubt upon the ethics of the researchers themselves. For example, Mosher said that it was clear from the emails that CRU’s Phil Jones lied to Parliament when he said it was standard practice to not share data. Two other examples that Mosher used in his S&R interview are quoted below:

For example, the issue of “hiding the decline.” That issue is not about Jones hiding data or manipulating data or committing fraud. That particular instance is about the crafting of a message for politicians….

The mails clearly demonstrate that the scientists were concerned about “diluting” the message. They were not concerned with telling the whole truth, but rather a version of the truth that was packaged according to their agenda.

When S&R asked NASA climate scientist Gavin Schmidt for his opinion on the supposed breaches of ethics, he replied

[t]he only issue that can be classed as serious lapse of judgment is Phil Jones dealings with the FOI requests for the IPCC-related emails.

He pointed out that there was a great deal more context within the emails than is being reported on by media. Specifically, he said

The public/blogosphere discussion is so focused on a tiny bit of the issue (basically MBH98/99 and the CRU surface temperature record to the exclusion of everything else) that most of the contextual things I brought up [at the blog Real Climate] were the fact that most of the discussions had nothing to do with either of these things.

Instead, Schmidt said that the emails show scientists having substantive discussions and disagreements about “uncertainties in the data, the impacts of different techniques.” To Schmidt, the emails show “the primacy of scientific issues over personalities” and reveal that the scientists mentioned in the emails are “more than willing to have out all the issues.”

Since the House of Commons investigation did find some problems with how the University of East Anglia and CRU handled FOI requests, it’s apparent that the emails did contain enough context to draw at least one strictly limited conclusion. The House of Commons did not, however, claim that Jones lied in his testimony as Mosher alleged; in fact, it accepted Jones’ explanation that widespread sharing of computer code and raw data is a recent (Internet age) development that the CRU scientists have not kept up with. In addition, Fuller and McIntyre both claimed that the emails contained enough context to prove scientific misconduct, while Fuller’s co-author Mosher claimed that they did not. Given the apparent inconsistencies even between co-authors of the same book, it’s apparent that the emails alone lack enough context to conclude unambiguously one way or the other on the broad issues of ethics and misconduct.

The other end of the continuum claims, as Osborn does, that it is not possible to draw any conclusions from the emails alone. As Mike Hulme, climate scientist from CRU, said in an S&R interview, “No-one ever (ever) has the full story.” If that’s true, then the only way to get to the bottom of what really happened is to talk to the people involved, review information beyond just the emails, and so on. A simple analysis of the number of emails published vs. the number of emails sent and received by CRU scientists supports this. However, both McIntyre and Mosher feel that a simple numerical analysis is pointless speculation given what we know about the context of the CRU emails. And for his part, Schmidt says that the inquiries completed to date have already investigated the wider context of the emails and found the critics’ points untenable.

S&R surveyed its own members as well as Tom Wigley to estimate how many emails were sent per year by different occupations. We found that

  • approximately 1,500 emails per year sent by the electrical engineer
  • approximately 1,100 emails per year sent by the home manager
  • between 2,500 and 3,500 emails per year sent by the marketing professional
  • about 1,500 emails per year sent by the university English professor
  • and about 5,500 emails per year sent by climate scientist Wigley (with another 33,000 received).

If we estimate that the S&R writers surveyed each receive three emails for every email sent, then we get yearly totals of 6,000, 4,400, 10,000, and 6,000 emails respectively for the S&R writers, plus a total of about 39,000 emails per year for Wigley. Over the course of 13 years and for a 15-member workgroup (the period of the CRU emails and the size of the CRU), the total for both the electrical engineer and the English professor is 1.17 million emails, 858,000 emails for the home manager, a minimum of 1.95 million emails for the marketing professional, and 7.51 million emails for Wigley. This compares to about 1,100 emails published from CRU’s servers. If we treated the emails as data, then we’d be drawing conclusions based on between 0.01% (climatology) and 0.13% (home management) of data that has, moreover, been selected using unclear criteria for unclear reasons.
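As a rough check, the arithmetic above can be reproduced in a few lines of Python. The figures are the survey estimates quoted in the text; the 3-to-1 received-to-sent ratio is the assumption stated above, and Wigley's received count was reported directly.

```python
# Rough check of the email-volume arithmetic above, using the survey
# estimates quoted in the text. Assumes 3 emails received per email sent
# (except for Wigley, whose received count was reported directly).
YEARS = 13          # span covered by the published CRU emails
GROUP_SIZE = 15     # approximate size of the CRU workgroup
PUBLISHED = 1100    # approximate number of emails published from CRU's servers

sent_per_year = {
    "electrical engineer": 1500,
    "home manager": 1100,
    "marketing professional": 2500,  # low end of the 2,500-3,500 range
    "English professor": 1500,
}

for role, sent in sent_per_year.items():
    yearly_total = sent * 4                          # sent + 3x received
    group_total = yearly_total * YEARS * GROUP_SIZE  # whole workgroup, 13 years
    pct = 100 * PUBLISHED / group_total
    print(f"{role}: {group_total:,} emails, published = {pct:.2f}%")

# Wigley reported his counts directly: ~5,500 sent plus ~33,000 received
wigley_total = (5500 + 33000) * YEARS * GROUP_SIZE
print(f"Wigley: {wigley_total:,} emails, "
      f"published = {100 * PUBLISHED / wigley_total:.2f}%")
```

Running this recovers the totals in the text (1.17 million for the engineer and professor, 858,000 for the home manager, 1.95 million minimum for the marketer, and roughly 7.5 million at Wigley's rate) and the 0.01%–0.13% published fractions.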

Mosher rejects this data-like approach, however, believing that it ignores smoking guns.

As a defense the appeal to missing context is laughable. Imagine the account[ant] who authorizes the issuance of millions of checks over a lifetime of service. Imagine finding one which he writes to himself embezzling a million dollars. Can he appeal to the millions of good checks he wrote to divert attention from the bogus one?

In addition, both Mosher and McIntyre believe that this approach is inconsistent or hypocritical as it relies upon a defense of “we don’t know, so reality must be the way we want it to be.” For example, McIntyre said

If Osborn wishes to argue that the emails are mitigated by context, then he should provide the other emails that demonstrate the mitigation. Until they do so, I don’t see any reason to take seriously the idea that additional emails would show mitigation or why it is unreasonable to proceed on the record that is available.

Similarly, Mosher wrote

Again, if Osborn and Briffa and Jones want to supply a context, either by answering questions or producing mails, they are free [to] remediate their reputations. They choose not to supply additional evidence. They can’t. Because there is no context that makes what they did right….

There isn’t any context which can make it better. If there was, they would produce it.

This is inconsistent with Mosher’s earlier statement that “the appeal to missing context is laughable.” He is engaging in the very same speculation about context that he dismissed as “below serious discussion” and “intellectual buffoonery” in his S&R interview, except he’s appealing to missing data that, in his opinion, would only strengthen, not weaken, support for his criticisms.

The numerical analysis suggests that, as Osborn said above, there are not enough emails to understand their context. It would take an inquiry or three to truly understand what happened and why, a point that Schmidt made as well:

Tim Osborne is absolutely correct that there is much more context than is in the emails – and much of this has been brought up in the various inquiries – and that strongly supports the contention that no misconduct or wrongdoing occurred. (emphasis original)

This investigation has largely rested upon logic rather than on data. But there is some research data upon which we can base stronger conclusions. Specifically, Jorge Aranda and Gina Venolia wrote a paper titled “The Secret Life of Bugs: Going Past the Errors and Omissions in Software Repositories,” published in the Proceedings of the 31st International Conference on Software Engineering. It reports on research the authors did on the reliability of electronic records like software bug databases. However, their methods and conclusions have a much broader application to the question of the reliability of all research that is based exclusively on electronic records like the published CRU emails.

Aranda and Venolia started by looking at randomly chosen records in an electronic bug tracking database and extracting as much information as they could from the information stored in the database. The authors then contacted all the people mentioned and interviewed them to get a better understanding of the status of the bugs, what occurred, who was responsible, etc. The authors reviewed email records, documentation, and any other artifacts related to the bug that they could find, and they always tracked the bugs to their origination and completion. In the process, the authors described four different levels of analysis:

  1. “automated analysis of bug record data”
  2. “automated analysis of electronic conversations and other repositories”
  3. “human sense-making”
  4. “direct accounts of the history by its participants”

The authors performed their analysis at Level 4, the most detailed level. For comparison, the CRU emails themselves represent a Level 2 analysis, the work of McIntyre, Mosher, Fuller, most bloggers, and many journalists are examples of a Level 3 analysis, while the major inquiries represent a Level 4 analysis.


[Tables 2 and 3 from the Aranda/Venolia paper, showing how many new events and new participants were discovered at each different level of analysis.]

What the authors found in the course of their analysis was that “the differences between levels were stark, quantitatively and qualitatively.”

In fact, even considering all of the electronic traces of a bug that we could find (repositories, email conversations, meeting requests, specifications, document revisions, and organizational structure records), in every case but one the histories omitted important details about the bug. (emphasis mine)

In more specific terms, the paper found that the electronic records, including the email records, were missing or had incorrect data, failed to include events that were critical to solving the bug, didn’t describe structural issues and problems related to group dynamics and internal company politics, and had very little explanation of why things were done. For example, the authors found that the steps required to reproduce a bug, the list of corrective actions taken to try and fix the problem, and the root cause were often missing from the electronic records. The bugs often had lifespans that started in advance of the official record or ended either far before or well after the bug was actually declared “fixed.” The authors found that the officially responsible person (i.e., the bug’s “owner”) was not the person actually responsible for fixing the bug 34% of the time and was totally unrelated to it 11% of the time. Furthermore, in 7% of the bugs, all of the people listed in the electronic record had no relationship at all to the bug.

One of the key results of the paper was that, in all cases, Aranda and Venolia found that they couldn’t understand the rationale behind the activity described in the artifacts without actually talking to the people involved. For example, they couldn’t understand why a bug languished unchanged for months but was then suddenly fixed after a period of furious activity, why some bugs weren’t fixed at all, or why someone else filed a bug report even though the bug was suspected to be a false alarm.

The results all point to a few key conclusions. First, “electronic repositories hold incomplete or incorrect data more often than not.” Second, “the bugs in our case pool had far richer and more complex stories than would appear by automatically collecting and analyzing their electronic traces.” Third, Level 4 analyses do not just produce longer stories – the stories “change qualitatively in ways that are deeply relevant to the study of coordination.” And finally,

It is unrealistic to expect all events related to a bug to be found in its record or through its electronic traces. Naturally, most face-to-face events left no trace in any repository. But in some occasions, the key events in the story of a bug had left no electronic trace; the only way to discover them was through interviews with the participants.

So what does this mean for the context of the published CRU emails? Can we trust analyses of the purely electronic record of the published CRU emails alone to provide us the context we need to understand whether there was scientific misconduct or ethical breaches? The answer has to be “no,” and Aranda and Venolia state this directly:

The most striking lesson from our cases is the deep unreliability of electronic traces, and particularly of the bug records, which were erroneous or misleading in seven of our ten cases, and incomplete and insufficient in every case. (emphasis mine)

This is indirectly supported by Mosher’s own analysis of the emails. He pointed out that

[t]here are enough of these mails, mails which have nothing of import, to suggest that the mails were collected and filtered by a harvesting program. A program that looked for certain authors, and certain key words. (emphasis mine)

Automated filtering and processing of emails is, at best, a Level 2 analysis, but what is needed to truly understand the context of the emails is a Level 4 analysis.

Mike Hulme also agrees with Aranda’s and Venolia’s conclusions, saying

[t]he released emails are only a fraction of all the correspondence between the relevant scientists, not to mention telephone calls, breakfast conversations, texts, etc., etc., etc.

If, as the paper’s authors found, in every one of their cases the electronic records that had been searched by automated means were “incomplete and insufficient,” then reality lies somewhere between “we have enough context to draw limited conclusions” and “we don’t have enough context to draw any conclusions.”

This leaves us in the position of having to rely on the results of the five different inquiries and investigations that have been completed or will be completed soon. McIntyre pointed out that “it’s a matter of record that the Oxburgh and Penn State inquiries didn’t take any submissions or testimony from Climategate targets or critics,” and by the guidelines of the paper, this could in fact be a problem. However, Schmidt pointed out that

The House of Commons inquiry did take submissions – and the critics have dismissed that too. Muir-Russell has as well, and I guarantee the same complaints will be heard when that reports too.

As with scientific research, when multiple lines of investigation all come to the same conclusions, the likelihood that the conclusion is correct increases significantly. This is also true of the Climategate inquiries – having one or two fail to ask McIntyre his opinion or accept outside submissions doesn’t automatically negate their conclusions, especially when the conclusions are similar to those of the inquiries that did take his submissions. There are multiple possible reasons why anyone’s input might not be sought out.

In his interview, Mosher appeared to agree with Aranda and Venolia that a Level 4 analysis was warranted.

To put the emails in a full context in every case would require one thing. Somebody with knowledge of the mails sitting down with Jones, Briffa, Osborn and others to ask them a few simple questions.

However, the inquiries have already done what Mosher suggests and yet he, McIntyre, and other critics remain unwilling to accept the conclusions of the inquiries.

Given the demonstrated unreliability of electronic records that have been sorted or analyzed using automated tools, it’s unreasonable to make firm claims of scientific misconduct, ethical lapses, or illegality based only on the published CRU emails. It takes full inquiries and investigations, where the investigators talk with the involved parties, to truly understand the details and the context surrounding claims like those made against the climate scientists mentioned in the published CRU emails. To date, three such inquiries have been completed, and while there may be some areas where the inquiries can be fairly criticized, the fact that the results of all three agree with each other strongly suggests that Tim Osborn’s claim, rather than Geoff Sherrington’s, is closer to correct in this case – “It is impossible to draw firm conclusions from the hacked documents and emails.”

Image Credits
University of East Anglia
IPCC TAR WG1 Figure 2-21
IPCC AR4 WG1 cover
UK Parliament House of Commons
Proceedings of the 31st International Conference on Software Engineering

Special thanks to Prof. Steve Easterbrook, who pointed me toward the Aranda and Venolia paper.

112 replies

  1. whoa, call me impressed. You should send this link to that wacko Attorney General in Virginia. Not that it will matter–as you demonstrate here, some people just will not be convinced.

  2. Nice work. I was dubious of the computer-bug reference at first, but you clearly used it for a very important reason. I’m sure Mr. Mosher and Mr. McIntyre will have something to say about how this doesn’t change anything, and how some emails still prove misconduct, but the emails are such a small part of the picture without context.

    • Wuf – I’ve got a companion piece to this coming out next that will build upon your point some.

      Morgan – I wasn’t sure when Easterbrook pointed it out to me that it was going to be applicable, because software bug databases are not equivalent to email records. But when I saw just how great a quantitative difference there was between email records (Level 2) and records generated using Levels 3 and 4 in the two tables shown above, it was clear that the results of this study did apply.

      It’s also important to note, although it didn’t really fit in this piece, that the same unreliability issues apply to conclusions drawn from FOIA-obtained records – documentation is usually out-of-date, automated keyword searches don’t provide enough context, participants will be missed or included who shouldn’t be, and so on. FOIA-obtained information is a key investigative tool, but you have to talk to the people involved to truly understand what’s going on.

  3. There is certainly enough context in the emails to falsify most of the worst accusations, even if we don’t have a complete context. There is clear evidence of quote mining by McIntyre and others. Are you going to cover that?

  4. For clarity, in my comment ‘the allegations’ refers specifically to the charges of scientific misconduct / breach of ethics. And ‘plenty context’ includes the scientific literature, and knowing first hand how research is done (hmmm, does this qualify as Level 3.5? 🙂 )

    Nice ref to Aranda and Venolia!

    • I’m sorry, Martin – I thought about adding a note in the quote to make that clear, but when I re-read our conversation for ITS context, I wasn’t sure which you had actually meant. So I decided to leave it unchanged. I’ll add a correction.

      My personal opinion is that a literature review qualifies as a Level 3, but I don’t know what Aranda and Venolia would classify such a thing as. In this case, however, we might be trying to pigeonhole too much into their relatively simple description of levels of analysis.

  5. I am pleased that you contacted Steve McIntyre while writing this, and sad that you didn’t try and contact me.

    I stand unequivocally by what I wrote in collaboration with Steve Mosher. Some of the emails are so egregious as to need no context. (Would you please delete all emails regarding AR4?) For many others, as we wrote in our book, context makes their actions look worse, not better. We did provide context in our book, drawing from a variety of sources, including Climategate emails. They present a damning picture.

    Given the remits of the varying investigating panels, their length in session (1 day?) and length and breadth of their reports (5 pages?), and the wide net they did not cast in gathering evidence, relying on them will not convince anyone who has followed this subject that your point is valid. Nor will extrapolated totals of email correspondence.

    As both Mosher and McIntyre have noted, the principals involved had and still have the opportunity to provide context that mitigates their statements. They haven’t. The reason they haven’t is that they have behaved despicably, and not just towards skeptics. They did real disservice to the institutions they work for, their colleagues, those in the scientific community relying on the even-handedness of scientific process regarding publication of findings, and to the policy-makers who were led to believe that AR4 and the summary for policy makers was free of bias.

    • Tom – I interviewed your co-author, Steven Mosher, at length. He said almost exactly in our exchanges what you said. However, that doesn’t change the fact that there are inconsistencies between the quotes from your book that you posted at your website and what Mosher said and what you are saying now. It also doesn’t change the fact that basing your work largely on an electronic record makes its conclusions questionable.

  6. AGW is real, the CRU folks are innocent, McIntyre et. al. are liars … but this article is repetitive drivel and (from my perspective as a software engineer of 45 years) your rhetorical use of the Aranda and Venolia paper is completely bogus. Of course there are facts about bugs (like why they were fixed when they were fixed) that cannot be determined from the electronic record, but that doesn’t mean that nothing of import can be determined. What is going on with the CRU emails is that they were stolen and filtered for political purposes and people with political agendas are using them to draw conclusions that are flatly inconsistent with well known fact. Martin Vermeer is exactly right to say “I have plenty context to recognise that none of the allegations hold water, even without seeing the balance of the emails.” That context includes facts not contained in the emails themselves, but readily available to anyone without holding their own hearing or interview.

    • Ianam – I think you’ve misread the piece.

      How much of the context that you used to know that there was nothing untoward going on in the emails was gleaned because you have been following climate science for an extended period, or because you read blogs, or you read the Factcheck analysis of the emails, or you read the AP’s analysis? That would be a “human sense-making” or Level 3 analysis. The public have been asked to accept that the emails alone show sufficient evidence to condemn or vindicate, but the Aranda/Venolia paper doesn’t support that.

      Looking at the tables from Aranda and Venolia pictured above, most of the quantitative changes occurred going from a Level 2 to a Level 3 analysis. But the authors pointed out that there were still major qualitative changes between Levels 3 and 4.

      Apparently my main point wasn’t sufficiently clear, so here it is again – any analysis that rests purely on emails that have been filtered according to keywords or some other automated method is fundamentally flawed because there usually isn’t enough context to draw firm conclusions. The only sure-fire way to ensure that your conclusions are accurate is to investigate the context outside the electronic record, and the best way to do that is to conduct full investigations like those in progress or already complete. That those investigations all agree with each other to date adds strength to their overall conclusions (that there has been no identified scientific misconduct and only one example of a possible breach of ethics, the FOI issue) just as independent lines of evidence strengthen scientific conclusions.

  7. Huh. I wrote “That context includes facts not contained in the emails themselves, but readily available to anyone without holding their own hearing or interview.” without having seen Vermeer’s post above.

    “I wasn’t sure which you had actually meant”

    Which shows how clueless you are.

  8. Brian, if by inconsistencies you mean minor variations in comments on weblogs, you might find one or two. However, we used various sources for our book in addition to the Climategate emails in creating a timeline for analysis and a historical record of what had been said and written.

    The larger point is the correctness of what we wrote and are willing to say again now, six months later. We maintain that we have seen as much context as is publicly available for the emails, and would be happy to examine more. Where there is sufficient context to say plainly that the scientists involved were wrong, we have said so.

    Not one person has come to us challenging anything written in our book. Since it has sold well (much better than expected), if there were factual errors I have no doubt we would have heard. If there were more charitable explanations than the ones we offered, I have no doubt we would have heard them, too.

    Although it appears that no charges will be filed, there is no doubt in my mind that provisions of the UK Freedom of Information Act were knowingly and intentionally violated.

    Although it appears there will be no disciplinary action, I have no doubt that graphic figures presented to policy makers were intentionally doctored to make it unnecessary to explain uncertainties in the paleoclimatic temperature reconstructions used.

    Although it appears that there will be no collegial pushback, I have no doubt that the scientists involved in the Climategate correspondence acted with no sense of shame or ethical responsibility regarding the publication, editing and refereeing of their own and competing papers.

    The fact that people are trying to defend this behaviour, explain it away, or say we have no proof is just more evidence of how low this argument has sunk. But then, it never really reached a high level, did it?

    • Tom – Did you fact-check all your claims with the scientists involved before you made them? Given that you published your book a little over a month after the emails were published, I suspect not. And if not, then how can you say that you got the full story? Based on the research I did for this piece, you can’t.

      As for inconsistencies, allow me to point you to two.

      First, in your comment above, you referred only to the Oxburgh review, not to the first part of the Penn State inquiry or the House of Commons inquiry. In all three cases, the panels asked unnamed CRU scientists, Michael Mann, and Phil Jones to explain the context. In all three cases, at least some of the panelists were familiar with the contents of the emails, especially the ones that McIntyre and others have been trumpeting. In all three cases, the panelists ruled that there were no significant issues of scientific misconduct or ethical behavior, with the notable exception of Jones/UEA and FOI requests. In other words, three times people have done exactly what Mosher asked them to do in order to establish the context surrounding the emails – yet you, Mosher, McIntyre, and a whole slew of other people reject the conclusions. That’s inconsistent.

      Second, at your website, you posted excerpts of your book. One of them says “We note that discussion of Climategate, as the scandal has been dubbed, is full of defenders of these scientists who characterize the emails as a ‘tempest in a teapot,’ saying that ‘boys will be boys’ and that the science isn’t affected. We don’t think any of those statements are true. (emphasis mine)” Above, I quote Mosher as saying that you both state in the same book that “the mails do not and cannot change the science.” Either the science is affected or it is not – only one can be true. And this is not a minor point on a blog. I suppose that there’s a chance that you disagree with each other on this point, but if so, that’s a pretty significant disagreement.

      What will you do if the Muir Russell review did all the fact-checking you very likely didn’t, took submissions like McIntyre demands, looked at all your pet issues, and yet still finds that the only problem revealed in the emails was the FOI issue (which is a real problem, as I said above)? Will you take the line that McIntyre has already shown he’ll use and say that the review is compromised by the members of the panel, or will you accept the review’s results?

  9. Brian, if I may jump in on your reply to Ian, FOI is not an ethical standard. It is a law. None of the investigations have done what you claimed they have, and they have been very explicit in saying they have not done so.

    The Penn State inquiry had no allegations of wrongdoing to investigate and did not look for any. They did not interview witnesses or look at outside records.

    The UK Parliamentary Inquiry held one day of sessions, did not look at outside context, and avoided the science.

    The Oxburgh Inquiry read 11 papers proffered by the University of East Anglia, none of which were involved in any controversy, did not interview any outside witnesses, did not look at outside material for context, and wrote a 5-page paper saying they were basically okay but had not used statistical methods properly.

    You are claiming exoneration where none exists. Unless you are writing only for the converted, it’s not a good thing for you to do.

    • Actually, Tom, you’re wrong on all three points to some extent.

      The Penn State inquiry looked into 4 charges:

      Did you engage in, or participate in, directly or indirectly, any actions with the intent to suppress or falsify data?
      Did you engage in, or participate in, directly or indirectly, any actions with the intent to delete, conceal or otherwise destroy emails, information and/or data, related to AR4, as suggested by Phil Jones?
      Did you engage in, or participate in, directly or indirectly, any misuse of privileged or confidential information available to you in your capacity as an academic scholar?
      Did you engage in, or participate in, directly or indirectly, any actions that seriously deviated from accepted practices within the academic community for proposing, conducting, or reporting research or other scholarly activities?
      (source, pages 2&3)

      Those are all allegations of wrongdoing even though they’re not formal and specific accusations. The inquiry panel found that in three of four cases, there was no evidence to support them. The fourth (#4) is still being investigated and we’ll have the results of that one by early July. Furthermore, the panel reviewed outside records including the 377 emails that mentioned Michael Mann (page 3), they reviewed “the relevant information, including the above mentioned e-mails, journal articles, OP-ED columns, newspaper and magazine articles, the National Academy of Sciences report entitled “Surface Temperature Reconstructions for the Last 2,000 Years,” ISBN: 0-309-66144-7 and various blogs on the internet.” (page 3), they interviewed Mann himself on January 12 (page 4), contacted Gerald North of the NAS report on Mann’s MBH98 paper (page 4), and talked to Donald Kennedy of Stanford University (page 4).

      The House of Commons report held one day of sessions but took hundreds of pages of submissions from interested parties around the world, quoting many of them in their final report. That’s where I got Tim Osborn’s quote, for example. You are correct, however, that they didn’t cover the science.

      The Oxburgh Inquiry interviewed the CRU scientists twice for a total of 15 person-days (intro paragraph 2), asked for and received additional information (intro paragraph 3), and reviewed criticism of the dendrochronological record (dendro paragraph 9). Given the brevity of the report, I am of the opinion that this inquiry could have been handled better, but that doesn’t give me reason to believe that the results are false.

  10. Tom Fuller writes:

    “We did provide context in our book, drawing from a variety of sources, including Climategate emails. ”

    …except of course the scientists involved or a complete look at the relevant studies.

    Fuller/Mosher started out with a narrative, then fixed the facts to fit that narrative. Lawyers might approve, but that isn’t journalism. Fuller/Mosher did not care to examine the relevant published studies (i.e., on the divergence problem) that put “hide the decline” in context, or even seek input from the scientists involved, who would be most familiar with the discussions. Doing so would not have helped their narrative.

    There is another level of missing context that’s detailed by DeepClimate’s blog. Sometimes relevant portions of the email text have been deliberately removed. McIntyre and others have engaged in this behavior.

  11. Tom Fuller states …. “The Penn State inquiry had no allegations of wrongdoing to investigate and did not look for any. ”

    What the report actually says: “At the time of initiation of the inquiry, and in the ensuing days during the inquiry, no formal allegations accusing Dr. Mann of research misconduct were submitted to any University official.”

    So we are left to wonder why, if there is such cast-iron evidence of malpractice by Mann, Fuller and Mosher published a ‘quickie’ book without following the basic journalistic standard of first contacting those you are about to traduce for their side of the story, why they have expended thousands of words in cyberspace [containing many factual errors, e.g. characterising a joke about a misprint as an attempted cover-up], but apparently lacked the cojones to make a formal complaint…. choosing instead to whine after the fact?

    Has Mr Fuller forgotten his words to Tim Lambert, after Tim leaked part of one of HIS mails:

    “I actually don’t believe men of honour publish correspondence without permission. Nor do I believe men of honour would select portions of the email that don’t correspond to the entire message”

    http://scienceblogs.com/deltoid/2009/11/tom_fuller_and_senator_inhofe.php#comment-2044149

    Apparently profiting from a book reliant on correspondence published without permission – well, that’s perfectly honourable…..

  12. “I actually don’t believe men of honour publish correspondence without permission. Nor do I believe men of honour would select portions of the email that don’t correspond to the entire message”

    Since Tom Fuller is actually commenting here, I too would like to hear his explanation for how his execrable commentary on the stolen emails squares with this statement. Seeing as, on the face of it, the evidence is that he is less a ‘man of honour’ and more a blatant and shameless hypocrite.

  13. “The public have been asked to accept that the emails alone show sufficient evidence to condemn or vindicate”

    Of course they have, but only a grossly dishonest person would ask that anyone do that — we don’t need Aranda and Venolia to tell us that statements can be ambiguous due to lack of context.

    And I did not misread the piece; you misrepresented people like Martin Vermeer as saying that one can determine from the emails alone, with no other knowledge whatsoever, that the allegations are false. He of course did not say that; no one has said that.

    Here’s my basic point: we do not need a technical analysis in order to recognize or demonstrate the immense intellectual dishonesty of the denial crowd, and such an analysis creates the false impression that there is something subtle or difficult about this.

    • ianam said you misrepresented people like Martin Vermeer as saying that one can determine from the emails alone, with no other knowledge whatsoever, that the allegations are false. He of course did not say that; no one has said that.

      Unless you’re Martin, you have no way to know that. You aren’t currently privy to the question I asked Martin nor the entirety of the answer he gave me. As such, you just committed the same sin of drawing too much of a conclusion from too little data that I wrote this piece about. Given that everything he and I said was on-the-record (something I cleared with him before interviewing him – the same is true of all the people I’ve quoted here), I can provide you with the entire context. My question is in italics, his answer is not.

      Tim Osborn was quoted in the House of Commons investigation report that cleared CRU and Phil Jones of wrong-doing as saying that it was “impossible to draw firm conclusions from the hacked documents and emails. They do not represent the complete record, and they are not a random selection from the complete record.” However, some people I’ve interviewed so far have claimed that Osborn’s claim doesn’t matter, that it’s trivial because there’s enough context contained within the hacked emails to know enough of the story to draw conclusions. What do you think?

      Of course Tim Osborn is right in noting the highly selected nature of the stolen mails. As an outsider, I do differ a bit with Osborn though in that I find that I have plenty context to recognise that none of the allegations hold water, even without seeing the balance of the emails.

      Both the “hide the decline” and “travesty” memes that were the first to break free after the hack, could do so only by looking away from the published literature. The pattern repeated itself over and over again over a string of allegations — a ‘Gish Gallop’ game: when one is refuted, the accusers don’t ever acknowledge this, they just move on to the next one unfazed.

      The literature is one part of the relevant context; another part is the tacit knowledge that scientists have but these critics don’t, about how science as a process functions; e.g., that peer review is about quality control not censorship, and the responsibility of the peer community: it is the community’s proper job to make sure it works as intended, and to act collectively against attempts at subverting it.

      After reading and re-reading this several times, I made a judgment call that the context was sufficiently clear that an editor’s note saying “[about scientific misconduct]” following the word “allegations” in Martin’s answer was not justified. As I was already much later in posting this piece than I wanted to be, I decided that I would run this without the note rather than take the time to ask Martin for a clarification. I made a conscious decision that I would run a correction – which I added immediately after Martin’s clarification – if one was needed.

      I made a judgment call that you don’t like. OK, that’s fine – won’t be the first time or the last. But the fact I made one doesn’t negate the point of the post or conclusions in any way.

      As an electrical engineer in my day job, I agree that a technical analysis shouldn’t have been necessary. However, a technical analysis like this reaches a certain group of people on a certain level. Consider that maybe my target audience wasn’t necessarily someone like you.

  14. Tom Fuller:

    “But the leaked files showed that The Team had done this by hiding how they presented data, and ruthlessly suppressing dissent by insuring that contrary papers were never published and that editors who didn’t follow their party line were forced out of their position.”

    Really? By what superpowers did they “insure” (sic) this? And exactly how can you determine, from the emails alone, that they achieved this?

    “I have no doubt that the scientists involved in the Climategate correspondence acted with no sense of shame or ethical responsibility regarding the publication, editing and refereeing of their own and competing papers.”

    You have no doubt about other people’s inner feelings? That’s not a very objective or scientific attitude.

    I don’t know whether you feel any shame for your actions and statements, but you should. You should particularly feel shame about misleading people about the state of climate science, the reality of AGW, and the consequences we face because of it.

  15. To anyone who has gone through the scientific publication process, the idea that you can “force out” an editor you don’t like is ridiculous. It turns the power relationship on its head.

    I suppose you could complain to the editorial board or organization that owns the journal and hope they heed your call (which is unlikely), but this isn’t forcing out someone. It’s hoping that someone else forces out someone.

    Is there anything to back up this bizarre claim?

    • Area-man: It depends on what the claim is based on. The only reference I’m aware of is to the editors of the journal Climate Research, and if that is the source of Mosher and Fuller’s claim, it is demonstrably false and has been shown so repeatedly.

  16. Unless you’re Martin, you have no way to know that.

    No, actually, I do: it’s called inference from external information. Sheesh. And your comment about Martin is particularly silly when I said that no one has done that — surely being Martin wouldn’t give me any additional way to know that.

    I made a judgment call that you don’t like.

    It’s not that I don’t like it, it’s that it was foolish and obviously wrong … and Vermeer has confirmed that it was wrong.

  17. As such, you just committed the same sin of drawing too much of a conclusion from too little data that I wrote this piece about.

    I want to further emphasize the utter wrongness of this. I did not draw my conclusion solely from the words of Vermeer that you quoted, any more than Vermeer drew his conclusion solely from the emails. Failing to understand how someone can conclude, as he did, that the allegations don’t hold water, or how I can conclude that he did not draw that conclusion solely from the emails, is extraordinarily inept. Also, you wrote “you have no way to know that” — but as a scientist, I don’t “know” things, I make the best inferences possible from all the available evidence. As these posts are not formal papers, I do not include error bars.

  18. P.S. Since you’re blathering about making conclusions from too little evidence, how did you reach the conclusion that “It’s not just critics that believe the emails contain sufficient context to know what really happened, but some climate researchers make this claim as well.” referring to Martin Vermeer? We “know” now, from Martin’s clarification, that this claim is false. I assert that it was obviously false, but even if I’m full of beans, you are the one who made the positive claim. You chalk this up to “a judgment call”, but for some reason when I assert something I must “know” it.

  19. However, a technical analysis like this reaches a certain group of people on a certain level.

    Really? Who?

    Consider that maybe my target audience wasn’t necessarily someone like you.

    Consider that maybe I’m not so stupid as to need that condescending advice, and consider that maybe it’s a non sequitur. Again, we all know that statements extracted from emails might be ambiguous without further context — even people like Tom Fuller know that, even if it suits their purposes to deny it. As I said, your article obscures that clear reality by making out that it is subtle or difficult.

    • This kind of analysis reaches people like the libertarian engineers I work with every day. It reaches smart people who might not otherwise be inclined to listen by putting the emails into a frame that they understand almost instinctively. This is a group of people who tend to be skeptical (and I mean that in the genuine sense, not the “AGW skeptic” sense) and who are ideologically inclined to ignore the overwhelming science underlying anthropogenic climate disruption, but for whom data matters even more than ideology. For this target audience, referencing a study with actual data that is closely related to their professional expertise gets them to stop and reconsider what they think they know with respect to the CRU emails.

      You think we all know this, ianam, but you’re wrong. If we did, then we wouldn’t be having this disagreement in the first place because the world would already be far down the road to decarbonizing the energy supply.

      As for my judgment call, I explained my rationale and put the quote in context for you and everyone. You disagree that the context justified my use of the quote. OK. You’ve pointed out that I should probably make another correction given that Martin clarified his point and that another sentence of mine doesn’t make sense any more given the clarification. Good point, and I’ll fix that too.

  20. “The only reference I’m aware of is to the editors of the journal Climate Research…”

    That did occur to me, but a case of editors who resign in protest of their own volition is of course not even remotely close to editors being “forced out” by people with no authority over them. If that’s what Fuller is referring to, it’s an egregious lie.

    Since I’m sure Fuller would never do such a thing, I was hoping he’d explain where in the emails the evidence for this exists. Since I’ve read most of the emails that have been specifically flagged by skeptics and haven’t seen one that supports this claim (or for that matter, any of their claims), I’m going to have to assume that it’s made-up until proven otherwise.

  21. “Is there anything to back up this bizarre claim?”

    It depends on what the claim is based on.

    Well duh.

    Fuller asserted that “the leaked files showed” it; I asked him above to tell us exactly how he determined it from that source. Let’s see if he does.

  22. More P.S.

    After reading and re-reading this several times, I made a judgment call that the context was not sufficiently clear to justify inserting an editor’s note saying “[about scientific misconduct]” following the word “allegations” in Martin’s answer.

    This is completely off the wall. I never referred to Martin’s clarification that he was talking specifically about allegations of scientific misconduct — I was commenting on your absurd charge that “It’s not just critics that believe the emails contain sufficient context to know what really happened, but some climate researchers make this claim as well” — a charge you supported by quoting Martin, which it is now clear was ripped out of context. Again, I wrote

    you misrepresented people like Martin Vermeer as saying that one can determine from the emails alone, with no other knowledge whatsoever, that the allegations are false. He of course did not say that; no one has said that.

    You say I can’t know that, but I did know it in re Martin at the time I wrote it because he had said so: “‘plenty context’ includes the scientific literature, and knowing first hand how research is done”. And now we know that you knew your charge was false at the time you made it because Martin had said to you:

    The literature is one part of the relevant context; another part is the tacit knowledge that scientists have but these critics don’t, about how science as a process functions; e.g., that peer review is about quality control not censorship, and the responsibility of the peer community: it is the community’s proper job to make sure it works as intended, and to act collectively against attempts at subverting it.

    That is not him saying “the emails contain sufficient context to know what really happened” as you falsely claimed.

    • I disagree – I felt at the time that the context of his quote was sufficiently vague given the content, but I can see how you would disagree.

      Thank you for providing such a rigorous example of a Level 4 analysis, and illustrating how constraining ourselves to lower levels of analysis can produce unsatisfactory conclusions, for all of us to witness.

  23. Well, glad to see I started a bit of a food fight here. It’s late enough that I doubt if I’ll be able to respond to everything above, but let’s start with the obvious. I invited all of the scientists to be interviewed, they all declined. As for publishing others’ emails, I received the emails first–I could have had the scoop of the year. I refused to publish and wrote a column on why. Then Real Climate published them so I figured they were fair game. If Gavin Schmidt could publish them, why not?

  24. you’re wrong

    No, You’re wrong!

    If we did, then we wouldn’t be having this disagreement in the first place because the world would already be far down the road to decarbonizing the energy supply.

    Regardless of whether I am wrong or not, this is an absurd inference — so absurd that I’m convinced that it is a waste of time talking to you further. Bye.

  25. Just to be clear, Bud, Phil and ianam, Tim the Deltoid posted part of one of my emails after a long exchange. I thought it was crappy behaviour then and I think it’s crappy behaviour now. But typical.

    If you don’t see the difference between that and me commenting on emails that some of the authors have published on a weblog, that’s fine. But I didn’t publish any of the emails until Real Climate started doing so.

  26. Fuller writes:

    “If you don’t see the difference between that and me commenting on emails that some of the authors have published on a weblog, that’s fine. “

    Tom, didn’t you publish a book? You weren’t just commenting on the emails.

    Your comment at Deltoid was “Men of honour don’t publish emails without permission” – is that correct?

    Did you get that permission?

  27. Poor Tom Fuller believes he was taken out of context. I’m not interested in Tom Fuller’s explanation for:

    “I actually don’t believe men of honour publish correspondence without permission. Nor do I believe men of honour would select portions of the email that don’t correspond to the entire message.”

    After all, he has taught us that it’s not important to have context from the authors of email correspondence in order to accurately determine misconduct. The evidence by itself is already damning and is all the information I need to support my narrative. No further investigation needed.

  28. MarkB, thank you. Fuller’s second sentence is also relevant:

    “Nor do I believe men of honour would select portions of the email that don’t correspond to the entire message.”

    Thanks for providing that context.

  29. So which is the more glaring tell (in the poker sense) that you’re dealing with someone with an, uh, flexible attitude towards the truth: someone who loudly asserts that they are an “engineer” or someone who tries to assert instant ersatz rules for what “men of honor” do on the Internet?

  30. Also, the real problem with any current analysis of the emails and what they “mean” is that the full story is fundamentally incomplete until we know exactly who stole them, and why.

    Please let us not forget that the only actual victim of a prosecutable crime to have yet emerged from this brouhaha is… well, Phil Jones et al.

    I suppose that someone who hasn’t followed the relevant topics over the years may not have the context to understand the exchanges in the Climategate E-mails. Still, many aspects do stand on their own, such as the Phil Jones E-mail asking that correspondence be deleted to prevent release via the UK FOIA. There is no way to claim that is out of context: the subject line was FOIA and the instructions were specific. But put in context it becomes worse. Jones was asking that E-mails between IPCC Chapter 6 lead author Briffa and Wahl/Ammann be deleted. They were working outside the IPCC process in order to get a rebuttal to McIntyre & McKitrick (GRL, 2005) into the IPCC AR4. There is evidence that Gene Wahl reviewed and provided suggested revisions to the second-order draft based on the papers Wahl and Ammann were working to have published prior to the IPCC deadline. It was this issue that David Holland tried to uncover with his FOIA request, and it was in response to this issue that Phil Jones asked people to delete E-mails so they would not be provided via Holland’s FOIA request. Context does matter, and the more one is aware of the context the worse things look.

    I have to say that the “electronic record makes its conclusions questionable” argument may be the weakest defense of Climategate I’ve seen in the past 7 months. It’s an exercise in logical fallacy. The suggestion seems to be that the only way to obtain context is to ask the authors… which is rather silly. Why not see what the critics have to say, if they can actually put these E-mails in context? These are the people who sent FOIA requests that were denied and who then discovered the astounding behavior shown in the Climategate E-mails. Much has been written about these E-mails (which include many E-mail chains providing context themselves), and the context is explained. That some electronic records are unreliable doesn’t mean all electronic records are, and it’s quite a leap to apply that fallacy to E-mails and E-mail strings. In this case there were 1,000-plus E-mails and documents released, many of which did contain context.

    • JimR – Thank you for raising the point about the Jones/Wahl/Briffa thing (you forgot Overpeck), as I’ll be covering that in detail on Monday or Tuesday next week. Suffice it to say the emails you’re referencing do not mean what you think they mean.

      You’ve missed several key points in the post, however. First, you missed that the approximately 1100 emails represent no more than 0.1% of the entire email record, and as Mosher pointed out, they were likely collected using automated key word filters. Second, you missed that Aranda and Venolia state specifically that automated data collection produces records that are “deeply unreliable.” Third, you missed that the only way you can make heads or tails out of records like this is to talk to the people involved, and that means more than just the critics like McIntyre – but Fuller admitted that he was unable to talk to them. Fourth, if you look at Tables 2 and 3 from Aranda and Venolia, you’ll find that in all cases there were quantitative changes in number of events and/or agents for all the cases that were not detected until Level 3 or 4 analyses were conducted.

      Where’s the logical fallacy in saying that the more people you talk to, the more context you get? Where’s the logical fallacy in saying that, if you only talk to critics, you’re not getting the full story?

  32. Brian, thanks. This nicely illustrates your point, doesn’t it 😉

    ianam, if everybody were a scientist and familiar with the relevant literature, Brian’s piece would indeed be superfluous. But they are not, and it is not.

    Tom, there is a reason folks refuse to be interviewed. Hint: it has to do with trust.

  33. Tom Fuller wrote: “The Penn State inquiry had no allegations of wrongdoing to investigate”.

    The title of the Penn State inquiry report is “Concerning the Allegations of Research Misconduct Against Dr. Michael E. Mann”.

  34. Thanks for the discussion on our paper, Brian.

    Just a small clarification on our levels of analysis: in our study, we began from bug databases, so the bug record is our Level 1, and the other electronic information traceable to the record (emails that mention it, IM conversations, etc.) are Level 2. In the CRU case, the released emails are the starting point; no other electronic records are used in the analysis, and even the emails are, as you point out, a tiny fraction of the total. So conclusions drawn from these emails are at or below our Level 1, even if the analysis is impartial.

    In any case, as you point out, the inquiries delve much deeper into the context of these emails, and they’re the ones to be paying attention to.

    • Jorge – Thank you for clearing up my error! If you don’t mind a question (I read that you’re defending your PhD soon, so I totally understand if you don’t have the time), why wouldn’t emails = emails for this case, as I incorrectly concluded? Is it because the published emails equal Level 1, a full analysis of ALL the emails in the CRU, PSU, UVa, etc. archives would equal a Level 2, and then Levels 3 and 4 are unchanged?

    Well, in the context of our paper, some researchers take only a single electronic source to produce their findings, and some link several electronic sources. The latter is significantly more time-consuming, and more valuable. So we wanted to differentiate between the two—hence the first two levels. In the context of the CRU case, we would need to determine which electronic sources are available to a researcher. Your most recent classification (published emails = Level 1, etc.) seems appropriate. But this is a minor quibble, and I don’t think it warrants a correction in your post.

    Thanks for your good wishes!

  36. JimR writes:

    “Why not see what the critics have to say, if they can actually put these E-mails in context? ”

    For the most part, the critics are not getting close to establishing the correct context. Part of that is ignorance of the scientific issues being discussed; in other cases, emails are interpreted through conspiracy-theorist glasses (every action NASA scientists take is “evidence” of a faked Moon landing cover-up); and in many cases the misrepresentation is deliberate. Some need to take a critical look at the claims from “critics” rather than accepting what they say at face value. The inquiries are doing that.

    Tom Fuller writes:

    “I invited all of the scientists to be interviewed, they all declined.”

    I’ll bet supermarket tabloids use the same cover before publishing dubious interpretations of evidence.

  37. Brian, thanks for your reply. I look forward to your next installment.

    I understand your position that the Climategate E-mails represent a small fraction of the total E-mails over 13 years. I simply reject the premise that this negates the clear context in the E-mails we do have available. The same goes for the Aranda and Venolia statement that automated data collection produces records that are “deeply unreliable.” While that may be true in some circumstances, in this case we don’t know how these were collected, and we do have E-mail chains that provide ample context to determine what is being said.

    I didn’t miss that the only way you can make heads or tails out of records like this is to talk to the people involved; I simply reject as absurd the idea that the only way to understand is to hear from the authors themselves. There has been ample time for these authors to explain the context and they have no interest in this… which is understandable in light of the E-mails released. But this doesn’t mean we don’t have context, as you assert. This is a fallacy; we have the context to understand the E-mails.

    In your fourth point you again reference Aranda and Venolia, which really is a poor analogy for this case. The premise is still that we don’t have context to understand, which is false.

    The logical fallacy is in claiming that the Aranda and Venolia paper is an appropriate analog to the Climategate E-mails and, based on this, claiming there is no context to understand the Climategate E-mails. We’ve been hearing for months that there is not enough context, but this is the most absurd reasoning for that position to date. While these E-mails may be a small percentage of the total, we see little personal or routine communication. Rather than assuming the small percentage makes context impossible, a case could be made that the small percentage may be the most relevant portion of the E-mails, providing a large amount of context.

  38. How about we just stick to those issues that are played out rather fully in the e-mails? I think there is a pretty good case to be made that there is at least one instance of scientific ethics breaking, and perhaps fraud. You can read it here, at Millard Fillmore’s Bathtub, a real smoking gun:

    http://timpanogos.wordpress.com/2009/11/22/smoking-guns-in-the-clr-stolen-e-mails-a-real-tale-of-real-ethics-in-science/

    You’ll notice that the fraud is on the side of warming deniers. Pay careful attention to the “revenge” the scientists pursue: They redoubled research efforts, and wrote papers contrary to the errors, to be published in science journals, for all to see.

    Scientists are a notch above their critics in ethics and virtue, as we can plainly see.

  39. There has been ample time for
    these authors to explain the context and they have no interest in this…

    JimR, that is just not true. Tim Osborn, Tom Wigley, Ben Santer and many others (thanks Gavin!) have gone to great lengths doing precisely that, using publicly available context. You haven’t been paying attention.

    Perhaps you should forgive them for not making the balance of their emails (i.e., the as yet un-stolen ones) available to you. There is more than enough raw material for malicious misrepresentation out there already.
    Rest assured that the various investigators (who don’t “leak”) have access to these mails as well.

  40. Martin Vermeer said:

    “Rest assured that the various investigators (who don’t “leak”) have access to these mails as well.”

    It is telling, then, that the scientists get almost glowing reviews from these investigators, isn’t it?

  41. Ed, I don’t quite get what you’re trying to say.
    No, I don’t expect the un-stolen emails made any difference one way or the other… I have a pretty good idea what they look like — just like my emails, mostly boring. But sure, there may be other heroic tales in there like the one you dug up…
    My guess is that what decided it for Oxburgh cs. was simply taking Phil Jones’ wise admonition to heart: ‘I wish people would read my papers rather than my mails’ 🙂

  42. Martin, I agree I must have missed this. Could you please point me to Osborn, Wigley and Santer providing context for their E-mails? And what about the other authors (Jones, Briffa, Mann, Wahl, etc.)? Do they also provide context?

    “Rest assured that the various investigators (who don’t “leak”) have access to these mails as well.”

    Really? Which group of investigators would this be?

  43. Really? Which group of investigators would this be?

    Those who investigated Mann at his university, for one.

    So, you’re ignorant of such things, which makes this:

    There has been ample time for these authors to explain the context and they have no interest in this…

    a statement of ignorance.

    And which also makes it clear as to which sources you depend on for “information”.

  44. JimR, about Santer, read, e.g.,
    http://www.realclimate.org/index.php/archives/2009/12/more-independent-views-myles-allen-and-ben-santer/
    http://www.realclimate.org/index.php/archives/2010/02/close-encounters-of-the-absurd-kind/
    Then, Gavin Schmidt wrote several articles to which also Tom Wigley (at least) contributed, e.g.
    http://www.realclimate.org/index.php/archives/2010/02/the-guardian-disappoints/
    In all of these, the context of the most prominent of the emails is addressed.
    There was also a very good series of interviews by an American lady with Mike Mann touching upon the emails, but I don’t find the link (not at my desktop right now).
    Look also at the official submission of CRU to the Muir Russell investigation. Muir Russell is one of the groups I referred to, another being the Penn State investigators. The latter saw at least Mann’s emails.

  45. Martin Vermeer said:

    “My guess is that what decided it for Oxburgh cs. was simply taking Phil Jones’ wise admonition to heart: ‘I wish people would read my papers rather than my mails’

    Isn’t it too bad that such a view doesn’t “go without saying?” He’s right, and you’re right, of course.

    Maybe the thing to do is to put a copy of the papers in e-mail, in hopes someone will hack the e-mail and pay attention.

  46. Martin, interesting that you disagree that the authors of the Climategate E-mails themselves have provided context and then post a bunch of Realclimate links and a link to the CRU submission to Muir Russell. Some of the Realclimate links are simply pages that link to other pages such as Santer’s rather angry letter. None of the Santer pieces seem to be Santer explaining the context of his few E-mails. That was the point, have the authors explained the context of their own E-mails?

    In one of the Realclimate pieces on the Guardian and long time environmental writer Fred Pearce’s series, RC focuses on Wigley providing context that he never agreed there was fraud regarding Jones 1990.

    Wigley wrote to Phil Jones:

    “Seems to me that Keenan has a valid point. The statements in the papers
    that he quotes seem to be incorrect statements, and that someone (WCW
    at the very least) must have known at the time that they were incorrect.

    Whether or not this makes a difference is not the issue here.”

    Did Wigley or Gavin provide context for Wigley’s statement that Keenan has a valid point? Of course not. Rather than providing any context for Wigley’s E-mail, Gavin never even addresses what Wigley did write, instead deflecting attention toward claims of fraud and then attacking Douglas Keenan. Pearce wrote about Wigley’s concerns, Gavin did what Gavin typically does, and no real context was gained.

    Along the same lines, Phil Jones did provide a bit of context in Nature just the week before the RC piece, where he acknowledges that the data in question was lost, that some stations probably did move, and that the paper may need a correction:

    http://blogs.nature.com/climatefeedback/2010/02/climatologist_phil_jones_fight.html

    So there are minor points of context from some of the authors available, along with spin from all sides in the blogosphere. But do you think adequate context has been provided by the authors themselves? Not Gavinesque RC spin but actual context? Brian said he will be writing on the Jones/Wahl/Briffa thing next week, which should be interesting. Do you feel that the principals in that have explained the context?

  47. JimR,
    this is getting silly. Yes, I hold that the various players here, especially Gavin, have laid out the context in which these emails should be seen. That you don’t believe Gavin’s account is not surprising, but doesn’t change the facts.
    Your insistence that individual scientists separately provide context for their individual emails is equally silly. As I pointed out this is public information, so Gavin is in as good a position as anyone to address what is in fact an attack on the community. And IMHO he did so in a more than satisfactory way.
    (And contrary to what you imply, Wigley did comment on his own email you quote, pointing out how his concerns were addressed and giving his current view on the matter. Don’t force me to dig it up.)

    But it is clear that we won’t agree on this… and I am happy that you are now aware that various inquiries are indeed looking at all of the emails.

  48. Martin,

    Don’t force me to dig it up.

    I wish you would. When I do a search for “Wigley +Keenan” pretty much 100% of it is commentary on the original emails (with obligatory nutty assumptions) and 0% is Wigley’s response. The closest I could find is this:

    http://www.abc.net.au/worldtoday/content/2009/s2766202.htm

    …which I only managed to find by clicking through an Andrew Bolt article.

  49. Martin, it wasn’t me who brought up the authors providing context. Personally I feel that the E-mails themselves provide ample context. But you jumped in and told me it was false that the authors haven’t provided context, yet you seem to point mainly to your favorite blogger’s spin and not the authors providing context. You seem to be without an actual point. I’m happy to agree to disagree here. Just keep in mind that if I were to reference some blog post (especially one with Gavin’s level of spin) you would likely not believe it. Likewise it doesn’t provide a compelling argument to point to Gavin’s spin where he doesn’t even address the context of Wigley’s E-mail in trying to claim that the authors provided context. Possibly you became so wrapped up in Gavin’s words you missed the fact he never even addressed the contents of Wigley’s E-mail.

    One interesting aspect of the inquiries we’ve seen is they have shown little focus on the E-mails and not provided the context that is the topic of this discussion.

  50. > I make the best inferences possible from all the available evidence.

    Impressive. Not plausible, but impressive.

  51. JimR, yes, we must agree to disagree here. What you are clearly missing is that RC is not just ‘some blog’ but the community blog of the research scientists under attack. Gavin is one of the few that found the energy, and did so big time, but it is clear that he speaks authoritatively for the other members of this research community, of which he is an active member himself (and had his own mails stolen).

    About Wigley, Gavin writes

    However, Tom Wigley has subsequently passed on later conversations to me showing very clearly that he did not support Keenan’s allegations of ‘fabrication’ and the implication that he does here are very misleading.

    Here, the difference between you and me is that you seem to think Gavin Schmidt is making things up, while I know that he doesn’t. Not a good basis for a fruitful discussion.

    Anyway this particular reference to a private correspondence is the exception. Mostly (and also here) Gavin is pointing out the obvious, things that any working scientist knows or can find out in the open literature (AKA “context”). It is sad that you fail to see that.

    Now that I am at my desktop again, I’ll give some further links to help the readers make up their own minds:

    http://www.realclimate.org/index.php/archives/2009/11/the-cru-hack/
    http://www.realclimate.org/index.php/archives/2009/11/the-cru-hack-context/
    http://www.realclimate.org/index.php/archives/2009/12/cru-hack-more-context/

    (Note the names on the latter two. The last one doesn’t offer much — most everything had already been said at that point.)

  52. Martin, I’m glad you have so much confidence in Gavin and Realclimate. I’m sure you realize that not everyone shares your feelings. Many have seen years of censorship and spin from RC. I suppose everyone has a blog they like and support but caution should be used to still have a critical eye for spin and bias.

    Regarding Gavin on the Wigley E-mail, it’s not that I feel Gavin is making anything up. Wigley expressed in an E-mail that Keenan had a valid point and someone must have known. If you take a step back you might see that Gavin doesn’t clear anything up in saying later communications with Wigley showed he didn’t support “Keenan’s allegations of ‘fabrication’ ”. That doesn’t negate that Wigley had expressed that Keenan had a valid point and someone must have known about the problem. We now know that Keenan did have a point, that station histories were not available and there were contradictions. So Gavin’s sleight of hand in saying Wigley later said he didn’t agree about fraud carefully avoids the issue that Douglas Keenan did have a point. Instead of addressing that issue Gavin goes on to take shots at Keenan. A blog is after all a blog, and no matter if it is run by a climate scientist, deflecting from the heart of the subject and attacking someone they don’t agree with is a very blog-like strategy.

  53. Yes, I agree. There probably was a mistake in the method description in the 1990 papers; sloppiness at the writing stage, who knows. Stuff happens, and did so two decades ago. Not anything that influences the results, which have also been well replicated by later work, but an error nevertheless.

    You are being more than a bit disingenuous though pretending that the argument was about there being or not being a mistake. It wasn’t and you must know it. The argument — Keenan’s point — was about the accusation of fraud. There’s a world of difference between an honest misformulation and intentional dishonesty. Dunno about Gavin spinning, but you’re no stranger to the concept 😉

    …and Keenan is a ‘piece of work’… unless you’re claiming Gavin made that one up too. What you call taking shots, I call context. Relevant such.

  54. Just to help JimR out here:

    mis·take   [mi-steyk]
    1. an error in action, calculation, opinion, or judgment caused by poor reasoning, carelessness, insufficient knowledge, etc.
    2. a misunderstanding or misconception.

    fraud (frôd) n.

    1. A deception deliberately practiced in order to secure unfair or unlawful gain.

    2. A piece of trickery; a trick.

    All of us make mistakes. Few of us commit fraud.

    Wigley’s belief that Keenan had a point about a mistake having been made does not lead to the conclusion that Wigley agreed with Keenan that fraud was committed. Wigley has made that clear.

    No different than Martin’s post above, actually, where he agrees that there was probably a boo-boo, but does not agree that fraud was committed.

    One thing JimR has made clear is that he trusts denialist sources and does not trust scientists, so further attempts to educate him are probably futile.

  55. Dhogaza, yes, and one of the perks of a science training and actual research experience is that it inoculates you against fooling yourself, e.g., about which ‘science blogs’ are the real thing and which are from la-la land… if only that were an easily transferable skill. JimR is projecting if he thinks that everybody picks a favourite blog because it says what he wants to hear. Dutch proverb: ‘the landlord trusts his tenants as he trusts himself’.
    …and RC’s censors are doing a lousy job of it, seeing all the muck that is getting through 😦

    So much nonsense, so little time…

  56. Martin,

    “You are being more than a bit disingenuous though pretending that the argument was about there being or not being a mistake. It wasn’t and you must know it.”

    Far from pretending the argument was about a mistake I keep quoting Tom Wigley’s E-mail with the closing line: “Whether or not this makes a difference is not the issue here.” I wonder if you even have curiosity about the actual context of Wigley’s E-mail. Gavin wrote:

    “However, Tom Wigley has subsequently passed on later conversations to me showing very clearly that he did not support Keenan’s allegations of ‘fabrication’ and the implication that he does here are very misleading.”

    And that was enough for you? From the context of the Climategate E-mails I wouldn’t have thought Wigley supported Keenan’s allegations of ‘fabrication’. But it was clear that Wigley was concerned about Wei-Chyung claiming he had exonerating data when other papers showed the station data didn’t exist. Wigley was very concerned in his May 2009 E-mail; he expressed concerns about Wei-Chyung being a “sloppy scientist” but said nothing about fraud or fabrication. RC shifts the focus from Wigley agreeing that Keenan had a point to fraud. There doesn’t have to have been fraud committed for Keenan to have had a point. There doesn’t have to have been fraud for Tom Wigley to have “had serious concerns about the affair,” as Pearce wrote, Gavin objected to, but Wigley clearly expressed in his E-mail.

    Again, this was about context, and RC focused only on fraud and then shots at Keenan, never even addressing what Wigley actually wrote. You were upset at my comment about the authors not providing context, but you seem to have mistaken your favorite blog for the authors providing context. The authors went to “great lengths” to explain the context, you said…

  57. Actually … on second thought … please do.

    I’ll let you ponder that one for awhile.

  58. Here’s an email that doesn’t need much of the context stuff:

    “I concede all of your points but add one other thought. It is my
    grandchildren I worry about and I suspect their grand children
    will find it exceedingly warm because sunspots will return and
    carbon abatement is only a game; It wont happen significantly
    in their lifetime AND IT WONT BE ENOUGH IN ANY CASE. HENCE _WE
    WILL NEED A GEOENGINEERING SOLUTION_ COME WHAT MAY!
    -gene”

    So, I often have these arguments with people who think AGW is a scam and it’s all about grant funding….

  59. As I’ve noted over at CA in response to the OP’s comment(s) there, this comparison to A&V’s paper is quite thin.

    their methods and conclusions have a much broader application to the question of the reliability of all research that is based exclusively on electronic records like the published CRU emails.

    One must be careful to note the limitations of their subject matter and methodology:
    * They used a random selection of bug database records. The data collection under comparison here (Climategate) was not randomly selected by any means.
    * The electronic bug database being tested was full of human-entered metadata (e.g. “owner”, “type”, etc.) that was often wrong and/or corrected over time. An email record is automagically tagged, correctly, with date, time, source and destination. I.e., automated, correct metadata.
    * The message content of an electronic bug database is peripheral to the actual work, in the sense that a variety of other communications, records, etc. are needed to complete the necessary bug-fix work. Any error in a bug database has no material impact on whether the bug itself is correctly fixed or not. However, the Climategate emails are an extract from the core communication work of the team involved. It is not plausible, nor has the OP suggested, that any additional communication would conceivably modify, let alone negate, the damaging facts conveyed by the messages in the dataset.

    A more apropos comparison: suppose the electronic data were a damaging set of recordings of telephone calls or face to face meetings, instead of email.

    While it is true that other meetings may have taken place, so such a set of recordings is woefully incomplete, that’s immaterial. If the recorded conversations are themselves damaging, no other recordings are needed.

    The analogy is quite obvious, and is why some parties were incredibly upset about the PR damage by the very name “Climategate.” Many of us are well aware of the previous incident where recorded conversations showed certain parties in a very bad light.

    Bottom line: the A&V paper doesn’t apply and provides no cover for the damaging facts revealed by these emails. Particularly since Jones has admitted that he really did hide the decline.

    Why is it so hard to just admit the truth? The proxy data doesn’t consistently reflect temperature, therefore we don’t have good paleoclimate proxies. We can’t be so certain about the uniqueness of recent warming, so we need more scientific work to produce good paleoclimate proxies. Big deal.

  60. Martin Vermeer, I think the best thing you’ve written on this thread is that bit about knowing which sites are valid and which are not–it’s way up there.

    For example, a blog that tries to adopt a dispassionate and objective tone about the subjects it covers and ends up telling people they don’t really need to read critical literature might be valid.

    But if regular readers like yourself adopt a dispassionate tone here and call opponents such as myself ‘vile’ on another weblog, people might get the idea that this blog and this post and this thread were deceptive attempts to sound ‘sciency’ while doing a hatchet job on Mosher, McIntyre and myself.

    [Portion of this comment moved to where the original comment appeared, namely here]

  61. Mr Pete (#78): Allow me to address your points.

    First, you are correct that A&V used a random selection of bugs from a large record. This means that their results are broadly applicable to bug records. That the CRU emails were selected isn’t relevant to the question of whether the A&V paper’s results apply or not. Random selection means you can apply conclusions to other related samples, while conclusions drawn from self-selected samples can only be applied to the selections themselves.

    This is essentially equivalent to public opinion polling – you can draw conclusions about the entire country based on a small sample (assuming you can correct for self-selection biases among the population you’re polling) if the sample is randomly drawn. However, self-selected polls (the American Association for Public Opinion Research calls these “SLOP”) only apply to the people who respond to the poll and cannot be used to draw any conclusions about the population at large.
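    The polling analogy can be made concrete with a toy simulation (the population model, numbers, and selection rule below are entirely hypothetical, chosen only to illustrate the statistics): a random sample’s mean tracks the population mean, while a sample whose members select themselves in proportion to how strongly they feel drifts well away from it.

```python
import random

random.seed(42)

# Hypothetical population: 100,000 "opinions" on a 0-100 scale, centered near 50.
population = [random.gauss(50, 15) for _ in range(100_000)]
pop_mean = sum(population) / len(population)

# Random sample: every member is equally likely to be chosen.
random_sample = random.sample(population, 1000)
random_mean = sum(random_sample) / len(random_sample)

# Self-selected sample: people with stronger opinions (higher values) are far
# more likely to "respond" -- response probability grows with the value.
self_selected = [x for x in population if random.random() < (x / 100) ** 3]
selected_mean = sum(self_selected) / len(self_selected)

print(f"population mean:    {pop_mean:.1f}")
print(f"random sample mean: {random_mean:.1f}")   # close to the population mean
print(f"self-selected mean: {selected_mean:.1f}") # biased well above it
```

    The random sample supports inferences about the whole population; the self-selected one only describes the people who chose to respond, which is the distinction being drawn about the CRU emails.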

    So, the question is whether or not the results described in A&V apply more broadly to other electronic records, specifically emails. I’ll address that shortly.

    Second, you’re only partly correct on the metadata. Bug repositories often enter ID numbers and time/date tags automatically, and most modern trackers I’m familiar with also automatically enter the “filed by” fields when a new bug is entered. This is equivalent to the from, date, and similar fields in an email. The other inputs into the bug tracker are manually entered, but so are the emails’ contents. The bulk of the useful information in both emails and bug trackers is human-entered and thus subject to human error.

    We can also look at the A&V paper to see if they cover the differences between a bug repository and emails discussing the contents of the bug repository, and the answer is clearly “yes.” Their “Level 2” analysis corresponds to an automated analysis of email records. In this case, they’re investigating whether an automated analysis of the emails produced a correct record as compared to the actual history of each bug. We can see this quantitatively by looking at Tables 2 and 3 (reproduced below) and seeing how many events and agents were identified when transitioning from a Level 2 to a Level 3 analysis (human sense-making). In every case, transitioning from looking at only the email record (and the complete email record, not a self-selected email record as in the case of CRU) to human-based analyses turned up additional events, and in every case but one, there were changes in who was involved in fixing the problems.

    This same effect was observed qualitatively as well. Here are some quotes from the paper that specifically mention errors in the email or general electronic records:

    People that took actions concerning the bug were often not mentioned in the record or in email communications (C2, C3, C7, C9, C10). [4.2.3]

    The extent of a participant’s contribution was easy to misjudge based on electronic traces: high frequency and intensity of interaction did not imply high level of contribution.[4.2.3]

    It is unrealistic to expect all events related to a bug to be found in its record or through its electronic traces. Naturally, most face-to-face events left no trace in any repository. But in some occasions, the key events in the story of a bug had left no electronic trace; the only way to discover them was through interviews with the participants. [4.2.5]

    Some of the concrete discrepancies we found might be acceptable for large-scale automated analyses of coordination. Others, such as most of the People issues in 4.2.3 and the missing links from bug records to source code change-sets, are far more serious. [6.2]

    The clearest finding in our study, the difference between the minable version and the true version of a bug’s history, should not be Microsoft-specific, as it depends not on corporate culture but on the amount and quality of the information that can be economically and efficiently captured electronically. [7]
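    The kind of gap those quotes describe is easy to picture with a toy comparison (every event name below is invented for illustration; this is not data from the A&V paper): treat the interview-reconstructed history of a bug as ground truth and measure how much of it the electronic trace alone recovers.

```python
# Hypothetical ground truth: all events in one bug's life, as recovered
# through interviews (roughly A&V's human sense-making levels).
true_events = {
    "bug filed", "triage meeting", "hallway discussion",
    "fix written", "code review", "fix checked in", "bug closed",
}

# Hypothetical electronic trace: what the bug record plus emails captured
# (roughly A&V's Levels 1-2). Face-to-face events leave no trace.
electronic_trace = {
    "bug filed", "fix checked in", "bug closed",
}

missed = true_events - electronic_trace
recall = len(electronic_trace & true_events) / len(true_events)

print(f"events missing from the electronic record: {sorted(missed)}")
print(f"fraction of the history recovered: {recall:.0%}")  # well under 100%
```

    Anyone analyzing only `electronic_trace` would reconstruct a materially incomplete story, which is the paper’s “minable version vs. true version” finding in miniature.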

    Third, you are correct that the content of the bug database is peripheral and serves as a storage location rather than a core communication link like email. However, for your overall point to hold, the email communications would also have to be peripheral rather than a core communication method. The tables above, however, quantitatively reject that assertion.

    Aranda pointed out above that my comparison to a Level 2 analysis is not quite right even though I think it more correctly captures the type of information included in the electronic record. Instead, the emails themselves represent the base level, or Level 1. He explains it a little better at his personal blog, where he says

    It seems as if the CRU case is more of an attempted level 3 analysis with very incomplete (and cherry-picked) level 1 data, whereas the East Anglia inquiries would be a level 4 analysis. There also seems to be some malice involved in the original analysis. So the link to our paper is not that straightforward.

    Not straightforward, but I think that I’ve explained the link reasonably well at this point. The fact that the original dataset is obviously incomplete (while A&V acquired a nearly complete dataset for each of their cases and still found the electronic records “deeply unreliable”) only strengthens my argument. As does the fact that the emails are clearly cherry-picked and not a representative sample (and thus self-selected). As to malice, that casts even more doubt on the emails, as we can’t know whether exculpatory emails were purged from the record or not.

    After all, if Jones sent an email that said “Please ignore that deletion email. Getting over the flu here and wasn’t thinking clearly.” and it wasn’t included in the record, that would radically change the nature of what we think we know about the emails. And the only way to know would be for the victims of the release to expose themselves even more.

    [NOTE: I attempted to post this comment at CA twice in response to MrPete’s concerns. It has yet to show up, presumably because the links meant it is in McIntyre’s spam trap. 8:07 AM June 19, 2010]

  62. I got to this point in the OP:

    Specifically, Jorge Aranda and Gina Venolia wrote a paper titled The Secret Life of Bugs: Going Past the Errors and Omissions in Software Repositories that was published in the Proceedings of the 31st International Conference on Software Engineering. It reports on research the authors did on the reliability of electronic records like software bug databases. However, their methods and conclusions have a much broader application to the question of the reliability of all research that is based exclusively on electronic records like the published CRU emails.

    Absolute, f****** rot.

    As a software engineer and manager of around 25 years experience I can assure you there is a world of difference between the recording of bugs, and email retention.

    It is the task of Sisyphus to get programmers (and support staff) to record bugs in databases. Email on the other hand is almost impossible to destroy.

    The entire basis of this “analysis” has no more substance to it than the fairy floss MBAs like to kid their managers with.

    Really. You should know better.

    • There is certainly a world of difference between bug recording databases and email, JM, and I haven’t claimed otherwise. However, if you move beyond that into the world of email communications (which I did explicitly in the post as well as in several comments), there’s not so much of a difference any more.

      I think you’ve made a few errors here. First, you’ve equated communication retention with database entry, and the two are not the same thing. Second, you’ve misunderstood what I’ve written and apparently not read or understood the many comments above where I further explain my rationale. Third, you’re using a logical fallacy (appeal to authority, namely your own) to argue that my use of the A&V paper is improper, but you have offered nothing based in evidence or logic to support your position.

      If you have specific examples of why I’m wrong, by all means bring them up and I’ll address them as best I can.

  63. Brian, (sorry for delays… I don’t actually live online much 🙂 )
    You wrote:

    The bulk of the useful information in both emails and bug trackers are human-entered and thus subject to human error.

    There’s a big difference.
    The question for a database used as evidence about bug efforts: was the information entered accurately?
    The question for emails used as evidence about communication: did they really say this?
    For the former, maybe yes, maybe no.
    For the latter, of course, because the email is a 100% accurate record of what the email says.

    A perfectly apropos analogy: recorded conversations, as with the original Watergate scandal. Just as recorded conversations are valid records of those conversations, so too emails are valid records of those emails.

    You also suggest

    In this case, they’re investigating whether an automated analysis of the emails produced a correct record

    Yet in the case of Climategate, we are human beings reading the emails. We are not automatons. There is no question about automation here.

    You don’t seem to understand this, Brian, and I have no idea why. Any person who can read is able to look at certain emails and see what was said. Getting the entire context is irrelevant. You could reduce the collection down to a VERY small number of messages and still see the same problem. Because the problem is in what was said, not in the Complete Context.

    If you want a bug database equivalent:

    Once upon a time I reported a bug to a major software company, complete with demonstrations of the issue, the solution, a patch that fully solved the problem, and more. The challenge: they refused the report for six (!) years because they were unable to replicate it in their lab for some reason. Ultimately, they did fix the problem. Now: did I or did I not produce the report as claimed? My emails prove that I did. Context unnecessary.

    However, for your overall point to hold, the email communications would also have to be peripheral rather than a core communication method.

    That makes no sense.

    For my point to hold, the emails simply have to be an accurate record of what the emails say. Which of course, they are.

    This is why you are wrong. This is why Watergate is an apropos analogy. This is why Climategate is called Climategate.

  64. Brian: If you have specific examples of why I’m wrong, by all means bring them up and I’ll address them as best I can.

    Ok. Specific example from just last week. We (a fairly large, multinational financial institution with major retail presence) attempted to go live with part of a 2-year project comprising an implementation on about 36 of the 52 weekends available in each year. Failure on any one of those weekends threatens our schedule and pushes out the completion date.

    These changes can only be installed during certain hours of the weekend (early in the morning generally) and must be backed out if they fail, because we will be unable to open for business the next day.

    We are very disciplined, and had fully tested our changes in this instance, however they failed and we backed them out.

    We consulted our experienced operational staff – there are many layers to this, and a lot of people know parts of the problem – and after 3 days managed to identify a possible reason why. There was a slightly peripheral process that occurred as part of a regular schedule just after our installation, and a couple of people knew that, under certain conditions, it carried a risk of failure. They hypothesized that it might be the cause of our problem. We’ll know this weekend when we try again.

    Was that reason documented in the various bug and fault recording databases? No. Is that reason known to the vendor of the package concerned? Possibly.

    How old is the package? Over 20 years. How reliable is the package? Very.

    Did we bother to make a detailed record in the various bug databases? No. Maybe we should, but the problem happens rarely and is reasonably well known as “folklore” to the people involved.

    Is the problem recorded in email exchanges? You bet it is. Several jobs were on the line (including mine) and there was a lot of arse covering.

    That’s the difference.

    Any statement that data available in bug tracking systems is remotely like that available in email is completely wrong-headed. Since your argument relies on elision of this point, you’re going to have to put serious effort into convincing me that my long experience in this subject is purely anecdotal and of no value in rejecting your conclusion.

    In fact, you’re going to have to do a lot more than simply make the sly assertion of equivalence that you have. You have less than anecdotal evidence; you have no evidence at all.

  65. MrPete (#83): I agree that emails in general are an accurate record of what the composing party says, but I disagree that email is 100% accurate (see my detailed example below). But that’s not the question, and it’s not what I’ve been trying to point out. The real question is whether the contents of an email are or are not an accurate record of reality and/or history. A&V found that the emails generally aren’t accurate even when the complete email record is available. And the CRU emails are not even remotely a complete record, something you yourself have pointed out above.

    When we look at the CRU emails, we want to know the usual six things: who, what, when, where, how, and why. Who said what and to whom? When was it said? What was it about? How was what done, and why was it done that way? Emails are a clear record of who (the “From” field), what was said (the content), to whom (the “To” and “CC” fields), and when (the time/date tag). But emails are not necessarily a clear record of much of anything else.

    Let’s take an actual CRU email to illustrate. Note that this is purely for illustration and my selection was random.

    From this email, we know several “Who” questions from the email’s header:

    From: Jonathan Overpeck <jto@u.arizona.edu>

    To: Keith Briffa <k.briffa@xxxxxxxxx.xxx>
    Subject: Re: First draft of FOD
    Date: Fri, 24 Jun 2005 11:52:25 -0600
    Cc: Eystein Jansen <eystein.jansen@xxxxxxxxx.xxx>, t.osborn@xxxxxxxxx.xxx, “Ricardo Villalba” <ricardo@xxxxxxxxx.xxx>

    We know that Overpeck wrote this email, we know he wrote it to Briffa, and that it was CCed to Eystein Jansen, Tim Osborn, and Ricardo Villalba. We also know that he sent it at 11:52:25 local time on June 24, 2005. So two “who” questions and one “when” question answered.
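    As an aside, the machine-readable nature of these header fields can be shown with a short sketch. This is purely illustrative: it runs Python’s standard-library email parser over the header text quoted above (domains redacted as published), and shows that “who” and “when” fall out of the headers mechanically while “why” and “how” have no field at all.

```python
from email import message_from_string

# Header text from the email quoted above (domains redacted as published).
raw = """From: Jonathan Overpeck <jto@u.arizona.edu>
To: Keith Briffa <k.briffa@xxxxxxxxx.xxx>
Subject: Re: First draft of FOD
Date: Fri, 24 Jun 2005 11:52:25 -0600
Cc: Eystein Jansen <eystein.jansen@xxxxxxxxx.xxx>, t.osborn@xxxxxxxxx.xxx, "Ricardo Villalba" <ricardo@xxxxxxxxx.xxx>

"""

msg = message_from_string(raw)
print(msg["From"])  # who wrote it
print(msg["To"])    # who it was sent to
print(msg["Date"])  # when it was sent
# There is no msg["Why"] or msg["How"] -- those live only in human context.
```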

    Moving on into the content:

    Hi gang – I still have to weigh in on the great
    figs/text that Keith and Tim have created, but
    here’s some feedback in the meantime.

    I agree that a mean recon isn’t the thing to do.
    Let me think more before I weigh in more on the
    fig. Working to get other LAs to get their stuff
    in.

    As for the Southern Hem temperature change fig
    (and caption and a little text), I agree that you
    (Ricardo in the lead) should do it as you’ve
    proposed. We need a clear S. Hem statement, and
    although it should stress that the data are too
    few to create a reliable S Hem recon, we should
    show the data that are available. Thus, PLEASE
    proceed Ricardo on this tack. Also, can we
    include the borehole recon series from S. Africa
    and Australia (e.g., Pollack and Huang, 98)? I’m
    sure Henry Pollack would provide fast – cc Huang
    too, since he might be even faster. Keith and
    Tim, does that make sense?

    Please note that I think we can find room for the
    above, regardless, if it is compelling enough.

    As for ENSO, we will need to address for sure –
    based mainly on the more direct coral data rather
    than teleconnected (e.g., tree-ring)
    relationships. The latter don’t seem to be
    definitive enough at this time – as I think we
    discussed in China. The same holds true for
    NAO/AO/PDO etc., and I think that we (Keith and
    Tim) will need to have this in their section – in
    a appropriately short manner. I’ll provide more
    feedback on this soon, so don’t sweat it for now.

    Main thing is to go ahead on the S Hem temp
    fig/caption/short text., independent of ENSO etc
    discussions.

    Thanks, Peck

    >Eystein and Peck
    >very quick initial response – as have not seen
    >Tim today. The Figure legends with very detailed
    >explanations is at the end of the text I sent
    >you already. The forcings ARE the ones that went
    >into the models , appropriately colour coded for
    >direct comparison – it was partly the difficulty
    >of getting all of these prescribed or diagnosed
    >forcings sorted out for each model that took Tim
    >so long.The uncertainty levels are a compromise
    >that chose came up with – see description in
    >caption , but we are considering other things .
    >Will get back to re the colours. Producing a
    >mean reconstruction is not in my opinion a
    >sensible thing to do so we will have to talk
    >about this. The question of space is crucial
    >regarding the Figure and reworking needed on
    >Regional stuff Ricardo and I need to know how
    >the space is panning out , and you opinions on
    >the reative importance of a SH regional Figure
    >versus an ENSO Figure.- and what about Monsoon
    >Peck? By the way, please clarify the space re
    >the Medieval Warm Period Box. Does this have to
    >come down , thought it was short enough?
    >Keith
    >
    > At 09:03 24/06/2005, Eystein Jansen wrote:
    >>Hi Keith and Tim,
    >>Lots of thanks for your hard work.
    >>I have gone through the FOD draft and the
    >>figures. Will send comments on text later today.
    >>Here some comments on the figures.
    >>I did not see the figure captions so it is not
    >>entirely transparent to me what went into the
    >>figures, hopefully all is material that is or
    >>will be published before the end of 2005. But
    >>anyhow, I think these figures are very good and
    >>in my view give the different reconstructions,
    >>the combined uncertainty as well as
    >>reconstructions and simulations brought
    >>together. I assume you have the Moberg et al
    >>reconstruction included, but not the Oerlemans,
    >>which will be treated in Ch. 4 (needs a x-ref).
    >>Concerning the way of displaying the
    >>uncertainties, it is not transparent to me how
    >>the white and grey areas are produced. Would it
    >>be viable to make a single curve of the mean of
    >>the reconstructions to accompany the
    >>simulations? The white area underlying the
    >>simulations seem a bit weak, in the sence that
    >>a superficial reader might wonder if it
    >>displays something without content, perhaps a
    >>different shade or colour would be better.
    >>Conserning the simulations, it needs to be
    >>clarified that the simulations did not
    >>necessarily use the forcings displayed above,
    >>hence it may be misleading to place the
    >>forcings and simulations into the same figure.
    >>Concerning the forcings, I am a bit surprised
    >>that the amplitude of these are so close to
    >>each other. Although I haven

    So let’s look further at the six questions:

    Who
    We can surmise by reading and analyzing the email that Overpeck probably intended all four recipients to get this email (Briffa plus the three CCed scientists) from his use of the phrase “Hi gang,” and we can also hypothesize that, at this point in the email thread, at least three of them are actually involved in the discussion – Jansen wrote specifically to Briffa and Osborn, Briffa responded to Overpeck and Jansen, and Overpeck responded. Nothing yet from Villalba, but that may show up in other emails on this thread. However, if we looked at those other emails and found that Villalba never contributed to the thread, then there’s not enough context to draw any but the most basic of conclusions – that he was copied on the email because of writing space (number of inches/pages of text) concerns.

    We know that Briffa said in his response to Jansen “The uncertainty levels are a compromise that chose came up with – see description in caption , but we are considering other things,” but we don’t know who came up with the compromise – “chose” isn’t a name, it’s not capitalized like an acronym, and it may be simply horrible grammar. But if the other CRU emails don’t contain the list of who developed the particular figure’s color scheme, the only way to know this would be to consult emails that are not included in the CRU record – meeting lists, phone records, the complete email record, IPCC comments, Briffa himself, etc.

    What
    We know from the email (and some additional external context about acronyms and what the email’s about) that Jansen is providing his editorial feedback on some portion of the first-order draft of the IPCC AR4 WG1, specifically figures. Unless the figure is identified in other emails, however, we don’t know what figures he’s commenting on, and we might not be able to answer the question even if we consult the WG1 report. We might have to ask Jansen which figure(s) he was specifically commenting on.

    We know that Jansen’s original post (at the bottom of the content) is cut off, but is that a result of hitting “send” too early, Briffa or Overpeck trimming unnecessary text, an error at eastangliaemails.com, or something else? If it was hitting “send” too early, then there’s no indication in the emails of this (such as an email starting with an “oops – hit send too soon. Back to my comments….” And yes, I looked.) – we’d have to ask Jansen or the people to whom he sent his email what happened in order to know. Furthermore, we don’t know that the email contains a complete copy of the originals that Jansen or Briffa sent – it’s pretty common to trim copied responses to the parts that seem to be most relevant to the discussion at hand. And neither the existence of trimming nor the judgment call that went into performing any trimming can be ascertained without access to sources outside the CRU emails.

    We know that Jansen was going to respond with comments on the FOD text later in the day on June 24, 2005 because he said “Will send comments on text later today.” However, there is nothing in the CRU emails on those comments – I read the next few days’ worth of emails and searched several variants of this email’s subject phrase and found nothing on the comments to the text. If Jansen’s comments on the text were potentially important, we’d need to search beyond the CRU emails to find answers.

    When
    While we know when this email was sent, we don’t know when Briffa’s response was sent except that it was sometime between the original sent by Jansen at 9:03 AM (local) on June 24, 2005 and Overpeck’s response at 11:52:25 local on the same day – this can be inferred from the email response structure. I checked the CRU emails, and no email matching Briffa’s response is in the record. If the precise response time were important, then the only way to get it would be to consult information outside the CRU emails.

    From these examples, I think I’ve illustrated clearly that emails don’t necessarily represent reality, that incomplete email records aren’t necessarily a perfect record of what was said, and that there is likely context and data missing from an incomplete email record.

    You’re correct that people are not automated, and that’s where A&V draw their Level 3 analysis – human sense-making. But as Mosher points out, the CRU emails were likely selected using automation via keyword analysis (probably Medieval Warm Period for the email above) and/or author filtering. You also agreed with this when you said “The data collection under comparison here (Climategate) was not randomly selected by any means (emphasis original).” That does bring in the issue of automation because the automation performed the first analysis – filtering the complete CRU email record of probably millions of emails down to ~1100 emails total. Filtering by keyword is automated analysis, and A&V found that email records created using automation failed to present a complete, accurate, or reliable record of the real history of the bug.

  66. JM (#84): Thank you for providing a concrete example. I probably should have done the same myself before my response to MrPete immediately prior to this one, as it might have short-circuited some of the (long) discussion above. I have a few rhetorical questions regarding your example that will show why your email record (and likely those you’ve produced in the past) is likely insufficient to understand the history of the bug in your example.

    How much discussion of your problem took place in a conference room, by the lunch fridge, or at some other location where it wasn’t documented? I’d bet that some of it was unless your discussions were with individuals scattered around the world. Furthermore, I couldn’t determine whether those discussions occurred or what they were about unless I talked to your team about them. And even if last week’s discussion was 100% in email, I’m certain that with your 25 years of experience, you can think of examples where it wasn’t the case.

    Did the key breakthrough occur via a phone call or meeting, or was it documented initially in an email exchange? Can you think of any prior experience of yours where the key breakthrough occurred while you or someone on your team was sitting in a lab or at their desk, had an epiphany, and then ran about the office telling people verbally instead of documenting it in a recorded medium of some kind? Would I be able to know what that epiphany was, without talking to you or your team, just by reviewing your emails?

    Was everyone on your email distribution list involved in the bug hunt and correction, or were there some people there for CYA purposes or because you thought he or she might have input, but ultimately didn’t? And would an outsider be able to determine, just from looking at the emails about this bug, which people were there for CYA purposes and which were there for technical reasons? I personally doubt it.

    Did you or anyone involved in your discussions trim away quoted text in a response because it wasn’t relevant anymore? This represents a change to the email record; there might be enough context to understand the change, but there might well not be. The best way to understand when things like this happen is to ask the person who did the trimming, not to trust the email record to explain it.

    Did the discussion topic morph enough that the subject line should have been changed, but wasn’t? If so, then there’s an excellent chance that a subject line filter looking for emails on the new topic wouldn’t catch the fact that the emails were related, missing an entire relevant email thread. And someone else doing a keyword filter on the new topic would find the latter portion of the thread but might not think to go back to the complete email thread in order to understand how the new topic came up in the first place.

    Finally, if I took 0.1% of your emails that I’d searched with a keyword and/or author filter, do you really think I’d be able to make heads or tails of what your team actually did and, most importantly, why you did it?
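    The subject-line question in particular is mechanical enough to sketch. Below is a hypothetical example (the thread and subjects are invented, loosely modeled on your weekend-install story): naive subject-based threading splits one discussion into two apparent threads the moment someone renames the subject, which is exactly what a subject-line filter would stumble over.

```python
from collections import defaultdict

# Hypothetical thread: the topic morphs, and partway through someone renames it.
thread = [
    {"subject": "Re: weekend install failure", "body": "backed out the change"},
    {"subject": "Re: weekend install failure", "body": "suspect the peripheral batch job"},
    {"subject": "peripheral batch job risk", "body": "renaming the thread; continuing the discussion"},
    {"subject": "Re: peripheral batch job risk", "body": "agreed, retrying this weekend"},
]

def group_by_subject(emails):
    """Naive threading: treat the normalized subject line as the thread key."""
    groups = defaultdict(list)
    for e in emails:
        key = e["subject"].removeprefix("Re: ")  # requires Python 3.9+
        groups[key].append(e)
    return groups

groups = group_by_subject(thread)
print(len(groups), "apparent threads for 1 actual discussion")
# Prints: 2 apparent threads for 1 actual discussion
```

A filter keyed on the new subject would return only the back half of the discussion, with no hint of how the topic arose.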

    I’m an EE with a master’s degree who has spent a little time in academia and has 12 years of experience after earning my degree. I’ve worked in three different industries and for companies with as few as five engineers and as many as 1000+. Over the course of my career, I’ve done everything I just asked you and more. I’ve had “eureka” moments in the lab and discussed them in person instead of via email. I’ve included people on distribution that I shouldn’t have, didn’t bother to identify why the person was on distribution, put the wrong “John Doe” on distribution, and failed to include people who needed to be on distribution. I’ve trimmed away excess quoted text in my replies because it was unnecessary, irrelevant, or just so long that I wasn’t willing to send a 500 line email just to add my 10 line response. I’ve failed to change the subject of my email because I was lazy or because I didn’t even think about it until someone else asked “Hey, we’ve wandered far afield – maybe we should rename this subject?” I’ve had status meetings in labs while we screamed toward the project drop-dead date and problem-solving sessions where the only documentation was a white board covered in equations, circuit diagrams, and the words “DO NOT ERASE!” written in bright red Expo ink. And there’s not a single company I’ve worked for that I’d say you could understand anything by taking 0.1% of my emails and attempting to read them.

    My point has never been that emails = bug databases, JM. Certain aspects of emails have direct correlation with certain aspects of bug databases, but in total, the two aren’t the same. That’s why I drew my A&V comparisons to their Level 2 analysis and why I stick to it even though Aranda’s comment above says that I’m actually looking at a Level 1 instead.

    There is a real and complete history to the bug in your example. But that real and complete history isn’t contained in just your team’s email records. There is a real and complete history to what’s going on in the CRU emails, but it’s also not contained in just the released emails or even in the total archive of all the CRU emails between 1996 and 2009. To truly understand the history of your bug, an outside reviewer would have to sit down with the complete email archive and then take input from your team. To truly understand the history contained in the CRU emails, an outside reviewer would have to sit down with the complete email archive and then get input from both the critics and the scientists, as well as anyone else involved.

  67. Brian, you’re still avoiding the question. Or perhaps there is such a fundamental disagreement that you are unable and unwilling to address the question.

    You’ve never responded to my repeated introduction of an apropos analogy: recorded conversations.

    You say,
    “The real question is whether the contents of an email are or are not an accurate record of reality and/or history. ”

    And you go on at length to demonstrate that an email is not a comprehensive record of the larger reality and/or history. Of course, I agree with that statement. HOWEVER, I have also demonstrated exactly why the larger reality and/or history doesn’t matter one bit.

    As long as the recorded conversation accurately records that conversation (and you agree that it does), then under a variety of circumstances, that’s all we need. We know they said it. Done.

    Just like with Watergate.

    To refute email as a valid record for Climategate participants, you need to refute the exactly equivalent analogy: audio-conversation recordings during Watergate as a valid record of what was said then.

    The recorded Watergate conversations were an incomplete record of some conversations that took place in a certain physical location, among certain parties. All kinds of other things were said at other times in other places, very pertinent to the subject at hand. In your view, why were those valid, court-admissible evidence while this should not be?

    This is a very specific example of why you are wrong. Please address it.

  68. MrPete (#): The disagreement isn’t so much with your analogy (although I have problems with that too), it’s that I reject your claim that the larger reality doesn’t matter in this case, or in any case for that matter. As I’ve illustrated repeatedly and at length, just because we know someone wrote something, that’s not enough except in the most unusual of circumstances. There’s a reason that criminal prosecutions rely on multiple lines of evidence – because one line of evidence usually isn’t enough to dispel “reasonable doubt.” In fact, the only time I know of that anyone has ever been convicted based on a single line of evidence was in the case of a confession, and given the problems with confessions, it is my understanding that prosecutors are required to be able to prove that the accused person actually committed the crime independently of the confession.

    In the case of the CRU emails, there are certainly some emails that look bad. But without multiple lines of evidence illustrating that the content of the emails actually IS bad, there remains the possibility that the emails are out of context. What the A&V study shows is that filtered/selected emails are “deeply unreliable” sources of information, and as a result the likelihood that the emails are out of context is pretty high. That doesn’t mean that they can’t be used, only that they cannot be relied upon as the only line of evidence – real people have to sit down with the emails and attempt to make sense of them. Not only that, but the A&V study showed that the only way to be sure you’ve got the complete story is to talk to all the people involved. That means talking to Mann, Briffa, Jones, et al. and to date, the only people who have actually done this are the inquiries.

    Therefore, the inquiries, though flawed in some respects, have the greatest chance of accurately representing what really happened, who was really involved, how it transpired, and most importantly why it happened at all.

  69. That doesn’t mean that they can’t be used, only that they cannot be relied upon as the only line of evidence – real people have to sit down with the emails and attempt to make sense of them….and given the problems with confessions, it is my understanding that prosecutors are required to be able to prove that the accused person actually committed the crime independently of the confession. … Therefore, the inquiries, though flawed in some respects, have the greatest chance of accurately representing what really happened, who was really involved, how it transpired, and most importantly why it happened at all.

    Your line of reasoning is inverted in several ways:

    1) Confessions generally need to be validated against objective facts, not the other way around. In this case, we have both objective facts (recorded conversation in email) and confession (Jones).

    2) You seem to think that real people have not sat down to evaluate the emails. The disputed issues are statistical issues. Steve McIntyre is generally acknowledged to have more expertise in the statistics of these areas of climate science than just about anyone else. So we’ve got expert independent testimony about the subject at hand. (This by the way is why McIntyre is so feared… he has proven endless times that even without detailed information about data and methods, he can decipher what was done. He’s even been able to look at a proxy graph and accurately guess most of the underlying data that was used. cf his older postings from 2006-7)

    3) You claim “the inquiries have the greatest chance of accurately representing what really happened”. Yet the inquiries are all abject failures: so far, they all have ignored the issues at hand, ignored the expert testimony of those who have the stats/math expertise to independently demonstrate what happened, and have mostly provided a non-critical “pass” to the claims of those who were involved. Can you think of any court of law that would allow this? Any competent lawyer, let alone scientist (not involved in this) would cringe.

    You’re moving the thimble.

    I know you don’t like my analogy, but you’ve failed utterly to demonstrate in any way that what is said in the emails fails any test of veracity. Failed utterly to demonstrate that there’s ANY evidence that there’s another interpretation.

    All you are doing is adding FUD to provide comfort to those who want to get away with it.

    If you actually cared about good science, you would be seeking the most stringent possible inquiry to discover how such shenanigans could be allowed in the halls of academia, journals, international projects, etc.

    Instead, you are allowing your own political/policy biases to enter in… which is exactly the fundamental problem here. Science should not be poisoned by politics or policy. Science is what it is… there should be zero pressure on scientists to display any more confidence than what is represented by their data and models. This whole push for “consensus” is about as anti-scientific as it gets.

    Honestly, I’m sorry for you, because you can’t even see how your representations here are damaging your long-term ability to understand and report on real science.

    The bottom line remains: these emails, in themselves, are stark evidence of a serious problem. No further interpretation necessary. You have failed to provide a shred of evidence they are “deeply unreliable” as you claim. You only hope they are deeply unreliable.

    The political or policy outcome of all this doesn’t matter a whit when it comes to the truth. The truth is what it is.

    In that sense, we’ll have to agree to disagree: I care about the truth, you care about ensuring the “greater context” prevails… as if 99 good days could outweigh one bad day. It doesn’t work that way for those who falsify results (as was done here) or do something even more serious such as a criminal act. If it’s wrong, it’s wrong. Period.

    Have a great weekend.

  70. MrPete – I had a pretty good weekend, thanks. I hope yours was decent as well.

    We agree with respect to confessions needing to be validated against objective facts, and I said as much (although admittedly not as clearly) in the very line you say I got wrong.

    However, I think you’re perilously close to constructing a straw man. Over the course of this long argument, I’ve never once claimed that real people hadn’t sat down to make sense of the emails. Doing so would have been preposterous given there are two books on the subject, hundreds if not thousands of blog posts about it, multiple groups that have read every single email in the archive, and thus far three inquiries on the subject (and two more that will be complete in the next few weeks). I pointed this out multiple times, starting in the post itself.

    The disputed issues are not purely statistical issues, but also issues of professional judgment. You can do almost anything mathematically – the question is whether what’s done is justifiable, and that’s a judgment call, not a statistics issue. In addition, McIntyre isn’t the only expert and other experts vociferously disagree with some of his conclusions. However, I’m not clear on how this is relevant to our debate.

    You say that the inquiries are abject failures, but you haven’t provided any rationale as to why. Could you provide examples of the issues that need to be addressed that haven’t been or proof that the inquiries ignored expert testimony? I realize that McIntyre claims that he was ignored, but given the number of other people who disagree with his conclusions, an alternative explanation could be that his claims were weighed and found lacking. In this case the lack of evidence does not provide proof either way. And one of the Oxburgh panelists (David Hand, Professor of Statistics in the Department of Mathematics at Imperial College, London) was a statistics expert, so if the main issue was statistics, then at least one panelist on the scientific assessment had the expertise to make judgments on those issues.

    You say that I’ve failed to demonstrate that the emails fail a test of veracity or that there can be any other interpretation. Given that I am talking about the CRU emails (and email records like these) somewhat in the abstract, there’s some truth to that statement. I haven’t indicated that any particular email was untrue. But let’s review what I have done so far, and what your response has been.

    In my original post and in comments #52, #85, and #86, I demonstrated that the emails represent between 0.01% and 0.13% of the total emails sent. I argued that emails are not sufficient in-and-of-themselves to establish enough context upon which to base firm conclusions. In response, you said in #78 that this was irrelevant for several reasons: because the emails are a non-random sample, because the comparison between bug databases and email was inappropriate, and because the CRU emails are an extract from core communications instead of being peripheral to the work accomplished.

    In rebuttal to your first claim, I pointed out that your randomness argument is not directly applicable (see #80) and equates to selection or “cherry-picking” of information (see #80 again, but also #85, #88, and a brief mention in the original post). I also presented an analogy of this kind of selection to self-selected opinion polls (“SLOP”) where conclusions based on the poll can only be applied to the respondents of the poll and not to a larger population. If conclusions based on SLOP data cannot be applied beyond the respondents, then by analogy, conclusions based on SLOP-like email selections cannot be applied beyond the emails. To date, you have not rebutted this argument.

    In rebuttal to your second claim, I demonstrated how Tables 2 & 3 from the A&V paper showed quantitative differences between automated analyses of electronic records including emails and human sense-making (see #80). I also used quotes from the A&V paper to describe how they specifically included email communications in their definition of “electronic records” and how they found significant qualitative differences in addition to the quantitative differences (see #80). I further pointed out that A&V described electronic traces as “deeply unreliable” (original post, #52, #85, #88) and “which were erroneous or misleading in seven of our ten cases, and incomplete and insufficient in every case” (original post).

    In rebuttal to your third claim, I pointed out that the A&V Tables illustrate that email is not necessarily the core communication method. However, I didn’t offer much by way of proof for this claim, referring you to the tables without giving my rationale for why they support it. If you look at Tables 2 & 3 (original post and #80), in at least six of the 10 cases, significant variation occurred in the numbers of agents and/or events from a Level 2 to a Level 4 analysis. This directly represents the difference between interviewing the participants and reviewing all available records, on the one hand, and merely reviewing the complete email record on the other. If email were the only core communication method, then there would be less variation, because the majority of communications would have been performed using email. Instead, this quantitative result suggests that email is only one of at least two “core” communication methods. In addition, the A&V paper pointed out specifically that email doesn’t capture face-to-face meetings (nor would it capture phone calls), and so A&V’s qualitative conclusions apply here as well. It’s my assertion, based on my own experiences as an EE (and one that I supported in #86), that even if email is the primary communication link between climate scientists, it is highly unlikely that the complete history of what’s going on is represented by the CRU emails.

    You countered my rebuttal to your second claim (the inappropriateness of the bug/email comparison) with a two-part argument (#83). The first part was that the email record is a perfectly accurate record of what the email says. Second, you said that my argument with respect to automation didn’t apply here because people have read the emails, because the problem is not what was done but rather what was said, and you provided a bug database example to support your claim.

    I concurred with a portion of part one of your counterargument, namely that the emails are perfectly correct with respect to what the composing party wrote. However, I disagreed with the rest of your claim and provided a long example using a randomly selected CRU email as a case study, describing the many ways that emails are not perfectly correct and do not contain sufficient context to understand the real history of what the email is discussing (#85, with additional examples in #86). I also asserted that the issue is not what the emails say, but rather what really happened (aka “an accurate record of reality and/or history”), although I didn’t provide proof of this assertion. Your statement in #83 “[b]ecause the problem is in what was said, not in the Complete Context” suggests that you disagree with my assertion, yet you conceded it in #87 when you said “Of course, I agree with that statement.” This is inconsistent, and I would appreciate knowing which you actually believe, or how it is you could believe both.

    As to part two of your counterargument, I pointed out that keyword or author filtering is a form of automation and automated analysis (#85), a point you have not yet rebutted. While I did not directly counter your particular bug example, I did address it in general with my response to JM (#86). In your bug email example, the question was not whether you did or did not tell the company about the bug, but rather what the bug was, how it was discovered, how to reproduce it, and why it took so long to be reproduced. This is the reality underlying the bug that matters, just as it’s the reality of what was actually done that matters with respect to the actions of CRU and related scientists. And as I demonstrated in #86, it’s that reality that the emails are insufficient to document. You have not rebutted this argument.

    I did not address one of your points, however, namely that the problem is not what was done but rather what was said. If you believe this (and given the inconsistency I identified above, you may or may not), then you believe something that runs counter to the basic principle of free speech, and also counter to basic principles of law as I understand them. If no crime is committed, then merely talking about committing a crime is not itself illegal (we don’t live in the world of Orwell’s 1984, after all). Only in certain specific cases is speech itself considered illegal, such as incitement of others to commit a crime. The Jones “delete all your emails” email itself may or may not qualify in this regard, depending on the exact details of the applicable law(s). But the email from Santer where he said he wanted to beat up McIntyre was an intemperate statement at worst, not an example of unethical or criminal behavior. Ultimately, unethical or criminal behavior is a matter of what is done, not what is said, and not only have I demonstrated that the emails are an unreliable record of what was done for multiple reasons, but you have conceded this point.

    In summary, I’ve now rebutted every point you’ve made, in some cases repeatedly. To date, you have not rebutted most of my arguments, have made inconsistent statements yourself, and continue to assert that the issue is one of email veracity even though I’ve specifically said it’s not and you’ve conceded that point.

    As for your claims about consensus, my lack of care about good science, and my ability (or lack thereof) to understand real science, they are all distractions from the fundamental point at hand – that there is not sufficient context within the cherry-picked CRU emails to determine most wrongdoing (as opposed to “wrongsaying”), that you have failed to argue your case otherwise, and that the three groups best equipped to answer questions of wrongdoing have all concluded that there was no wrongdoing except in one case (FOI).

  71. >MrPete, June 26, 2010 at 7:10 pm :
    >The bottom line remains: these emails, in themselves, are stark evidence of a serious problem.

    Could you point out these particular emails please? After reviewing the emails myself, by far the worst things I could find were an email asking to delete other emails and Professor Phil Jones not meeting some FOI requests. While inappropriate, his actions hardly reverberate around the halls of climate science. I find it interesting that my inexpert opinion on these emails largely matches the findings of all the investigations to date.
    How is this?

    So if we take the email asking to delete some other emails, it may shed light on the discussion above. We have the fact that this email exists, and we can presume that Professor Phil Jones typed it. The question is: were any emails deleted? The leaked set of emails does not answer this, and maybe the complete set does not either.

  72. Equally, inclusion of information from the full email database could vindicate the deniers.

    • It’s possible, yes, but not probable. The fullest accounting of the emails to date, the ICCER, found that the various FOI concerns were real and that Jones should have better explained how he generated his WMO graph. However, none of the other issues held up when placed in context. See this S&R post for more information.

  73. Brian Angliss, in listing your qualifications you left out that you can write and make yourself understood, and that you are extremely patient and keep your temper.

    RealClimate is a choice site because of the collective expertise of its lead authors, and because a lot of good people weigh in on the comments and if one reads carefully and checks out many of the links, one can learn a lot. It is not, however, the only place that thorough investigations of the investigations have taken place; there are many of these, and they are quite expert. Gavin Schmidt is another literate writer with magnificent patience.

    When polemicists claim equal weight with experts who have spent decades learning their fields and equal respect they miss the point. Dr. McIntyre and the various arguers above put their bias first and foremost. They claim kinship with Galileo, Einstein, Feynman, and the like, but in fact the rare exception is just that – rare. Disagreement with the substance of real data and real work does not qualify the critic until he or she can prove something. So far, they have only proven that they will grasp at any and all straws to claim authority. I think Drs. Lindzen and Spencer come closest to acting like real scientists with real skepticism, but they fall a long way from the oak as they continue to fail to identify serious bugs in the emerging picture.

    In the meanwhile, real world evidence, not subject to expertise, is piling up.

    I think the computer analogy is a useful supplement to other information about context and cherrypicking. Thanks.