Thursday, April 4, 2013

James Hansen Disappoints

I mentioned this study he did, but at the time I hadn't read it, because the conclusions seemed obviously true.

I have since actually read it, and it uses a risk of death per unit of power generated to reach its conclusions.
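To make the method concrete, here is a minimal sketch of that kind of calculation: deaths prevented equal the energy displaced times the difference in mortality factors. The factors below are rough, commonly cited orders of magnitude (deaths per TWh), not the paper's actual Table 1 values, and the function name is mine.

```python
# Illustrative back-of-the-envelope sketch, NOT the paper's exact numbers.
# Mortality factors in deaths per TWh generated (assumed rough magnitudes).
MORTALITY_PER_TWH = {
    "coal": 25.0,
    "gas": 3.0,
    "nuclear": 0.04,
}

def prevented_deaths(twh_displaced, replaced_fuel="coal"):
    """Deaths avoided by generating `twh_displaced` TWh with nuclear
    instead of `replaced_fuel`, under the assumed factors above."""
    delta = MORTALITY_PER_TWH[replaced_fuel] - MORTALITY_PER_TWH["nuclear"]
    return twh_displaced * delta

# Displacing 1000 TWh of coal with nuclear under these assumptions:
print(prevented_deaths(1000, "coal"))
```

Because the factors differ by orders of magnitude, the sign and rough size of the answer are insensitive to the exact values chosen, which is presumably why a back-of-the-envelope treatment was thought sufficient.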

In the case of the nuclear risk factor (the only one I explored), the factor is based on a particular reactor type, particular enrichment processes, etc. and particular populations surrounding particular facilities.

It isn't appropriate to generalize that nuclear risk factor.

He also thinks that LNT may have a threshold, which is a contradiction in terms: the "NT" in LNT stands for "no threshold".


  1. > In the case of the nuclear risk factor (the only one I explored), the factor is based on a particular reactor type, particular enrichment processes, etc. and particular populations surrounding particular facilities.

    Hmm, did you have to dig as deep as ExternE?

    Is that really a serious flaw in this kind of study, which deals in orders of magnitude rather than precise numbers?

    BTW clearly he is an LNT denier. I don't think he misunderstands LNT; he just doesn't believe in it. But he is clearly out of his depth here, the missing reference to BEIR (or any other review-type source) being the giveaway.

    Makes one wonder how many other things outside his comfort zone (nuclear technology?) he's got hold of the wrong end of the stick on...

  2. Yes.

    I wouldn't even call this a "study", more like a back-of-the-envelope approximation.

    I think he does misunderstand LNT. Even many health physicists misunderstand it. For example, he gives two references to suggest a threshold. The first is awful, discussing a "non-tumor dose"; the second actually supports LNT theory, though understanding why is a bit technical:

  3. Ah yes I see. Good explanation there.

    BTW wasn't there a recent paper showing that for high doses, the clustering of several DSBs into one repair center makes misrepair more likely? I have such a vague recollection.

  4. I'm not sure which particular paper you're referring to. What is generally accepted is the decondensation model: repair centers exist within chromatin holes, which give repair proteins access to the DNA (DNA is otherwise wound very tightly around chromatin).

    Since the DSBs are clustered in the holes, the chance of genetic translocations increases with the number of DSBs, and the number of DSBs increases with the dose.
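    One toy way to see why clustering matters: if the DSBs landing in a single repair center are Poisson-distributed with a mean proportional to dose, then the chance that a center contains two or more DSBs (the precondition for a translocation between them) grows roughly with the square of the dose at low doses. This is only an illustrative sketch, not the commenter's model, and the sensitivity constant is an assumed placeholder.

```python
import math

def p_two_or_more_dsbs(dose_gy, dsbs_per_gy_per_center=0.1):
    """Probability that a single repair center holds >= 2 DSBs,
    assuming Poisson-distributed DSBs with mean proportional to dose.
    `dsbs_per_gy_per_center` is an assumed illustrative constant."""
    lam = dsbs_per_gy_per_center * dose_gy  # expected DSBs in this center
    # P(N >= 2) = 1 - P(N = 0) - P(N = 1) for a Poisson(lam) count
    return 1.0 - math.exp(-lam) - lam * math.exp(-lam)

# At low dose this behaves like lam^2 / 2, so doubling the dose
# roughly quadruples the chance of a clustered pair:
for dose in (0.5, 1.0, 2.0):
    print(dose, p_two_or_more_dsbs(dose))
```

    Whether the per-DSB misrepair probability stays constant or rises with clustering is exactly the kind of detail the threshold-vs-LNT debate turns on.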

  5. > because the conclusions are obviously true

    Actually, reading more carefully and thinking it through makes this less than obvious. The historical part of the study, especially, is questionable.

    Looking at this graph shows the effect of the 1970s oil crisis: oil prices rising by a factor of 3-6. Being the old cynic that I am, I would assume that this was the reason the countries of the world (mainly the OECD countries) started their big expansion of nuclear power. The expansion continued until 1986, then started tapering off as oil prices came back down. Everywhere except in France, which is an interesting case.

    I don't think environmentalism had much to do with slowing nuclear growth after 1986 (Chernobyl may have had some impact), and, being old enough to remember, I know it wasn't the reason the expansion started in 1973: the real reason was energy independence, with the oil crisis as a painful reminder.

    The problem with Hansen's paper is that he compares what really happened with an unrealistic alternative scenario: the one in which the nuclear expansion didn't happen and was replaced by some mix of coal and gas, and nothing else changed. But this non-expansion of nuclear must have had a cause!

    The only cause I can think of that would be somewhat realistic (and that I'm pretty sure also Hansen was thinking of), would be 'environmentalists' having the political leverage to block the 1970s nuclear expansion. But, if these folks really had had such leverage at the time, they would have used it for a lot more than just that. At the very least they would have blocked the major expansion of coal burning that happened, e.g., in the United States. Looking at the paper's Figure 2a, even a partial success in this would have matched or exceeded the 1.8M lives 'saved' by nuclear power as claimed by the paper.

    A bit outside the scope of this blog, but thought you might be interested.

  6. My "conclusions are obviously true" was meant to refer to the mortality by energy produced (Table 1). He used back-of-the-envelope gross generalizations, but because the mortality per unit of energy differs so much between sources, a more careful assessment won't change the conclusions (and other such studies confirm this).

    Hansen was trying to show what would have happened had nuclear not happened (obviously it did) and how more nuclear/less fossil could affect the future (which might or might not happen). Why things happened the way they did seems secondary to the paper.

    My interpretation of history is that the nuclear expansion was halted by new-plant construction cost overruns, NIMBY opposition, the high-level waste (HLW) disposal question, and Three Mile Island (TMI).

  7. I don't quite agree. Yes, Table 1 (or its corrected version) just summarizes what everybody already knows about the public-health impacts of the different power-generation technologies. That is not in question.

    The problem that I have with the paper is that it does a sleight-of-hand by presenting what looks like a policy impact study, without ever saying so -- plausible deniability, I guess. You must have missed this. But in the public discourse, the paper has been perceived and widely presented as a policy impact study, so it seems fair to analyse it as one -- what I tried to do above.

    In a proper policy impact study, you analyse what will happen if certain policies are implemented, vs. not implemented. And the policies are stated, including what drives them. That is missing here. Surely you agree that public-health impacts are but one of the things to be considered: for fossil fuels, also the greenhouse footprint (Hansen does that, in a way) and for nuclear, the proliferation risk and that of terrorist abuse. The problem with these is that they are hard to quantify. That doesn't make them unimportant.

    Precisely how much weight they should receive in a policy impact study is a judgement call, and demonising those that make that judgement call differently -- as this paper implicitly does, or enables -- is shameful and, yes, disappointing from someone like Hansen. IMHO. You may not agree, but do you at least see my point?

  8. Well if it's a policy impact study, you are right. I see your point and I agree.

    I've seen it elsewhere referred to simply as a study. It lacks the depth of what I would call a study.

    The journal refers to it as an Article.

    Very disappointing indeed, for Hansen and the journal. But the message did make the headlines, and that may have been the real intent.