The Need for Measurable Claims
In a project such as the proposed DGR, many of our ordinary intuitions about feasibility don’t hold. In an ordinary situation, odds of failure of one in a hundred are excellent. And if the odds are one in a hundred per year, while your situation only lasts a couple of days, that’s better still. But if the situation is going to continue for hundreds of thousands of years, a one-in-a-hundred chance per year that something will go wrong becomes a virtual certainty that it will.
In addition, the long time frame makes it important to evaluate the risks accurately and conservatively: when a risk is active for hundreds of thousands of years, a difference of hundredths of a percent in the annual rate can make the difference between a substantial risk and an inconsequential one.
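The arithmetic behind both points can be sketched in a few lines of Python. This is an illustrative model that assumes independent, identical per-year failure probabilities; the rates used are hypothetical and are not drawn from OPG’s submission:

```python
def cumulative_risk(p_per_year, years):
    """Probability that at least one failure occurs over the given
    number of years, assuming an independent annual failure
    probability of p_per_year (a simplified illustrative model)."""
    return 1 - (1 - p_per_year) ** years

# A 1-in-100 annual chance is a virtual certainty over a long horizon:
print(cumulative_risk(0.01, 100_000))       # indistinguishable from 1.0

# Annual rates differing by less than a hundredth of a percent
# diverge dramatically over 100,000 years:
print(cumulative_risk(0.000001, 100_000))   # roughly 9.5% overall risk
print(cumulative_risk(0.00001, 100_000))    # roughly 63% overall risk
```

Under these assumptions, an annual rate of 0.0001% yields an overall risk of under ten percent, while 0.001% – a difference far below a hundredth of a percent per year – yields an overall risk of well over half.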
For these reasons, it is very troubling when OPG underestimates the radioactivity of a major waste component by over 90%. For the same reasons, it is entirely inappropriate for OPG simply to characterize a given facet of its operation as “safe” or “low risk” and to say that there need be no worries about it. Unless we know the numerical magnitude of a risk, it is impossible to evaluate it properly.
And it is essential that people reading OPG’s submission be able to evaluate that risk – after all, that is the whole point of the process of public review and response. But if the submission evaluates a given situation only in terms of its being “safe” or “low risk”, it prevents reviewers from doing their job and subverts the whole review process. We have the review process precisely because OPG is legally prohibited from saying “Oh, this project is just safe” and proceeding. By the same token, it cannot be allowed simply to label each component of its proposal “safe”, since the public would then have no way of using the review process to check OPG’s findings.
And yet OPG’s submission is riddled with language simply telling us that it has evaluated such and such a situation and determined it to be safe, with no data that would enable us to check its work. As Jim Preston stated in his testimony before the Joint Review Panel,
We continually read “not likely to”, “not expected to”, “therefore it is not expected that”, “based on this it can be estimated”, “OPG does not believe”. What are the certainties?
Jim Preston, testimony on September 26, 2013, p. 131 (http://friendsofbruce.ca/dgr/local-voices/dgr-hearing-testimony/james-and-brenda-preston/)
Even when OPG recognizes that there is an issue, it doesn’t necessarily use quantitative evaluations. For example, as the International Institute of Concern for Public Health pointed out in its submissions:
IR EIS-04-108: Request # 11 (IICPH): Table 2.8
This section refers to uncertainties in the packages for newer “hotter” pressure tube wastes that will arise from future refurbishment. Likewise, section 3.2 refers to the “hot” ends of end fittings. IICPH asked for clarification as to the use of the terms “hot” and “hotter”.
“Hot” refers to radioactivity…. “Hotter” in reference to newer pressure tube wastes indicates that these wastes will initially have higher radioactivity compared to current older refurbishment wastes since there will have been less time for radioactive decay. As such, additional shielding may be required for future station refurbishment wastes compared to the waste packages currently in use for Bruce A Unit 1 and 2 refurbishment wastes.
The terms “hot” and “hotter” to describe radiation levels in the waste are not scientific and are inappropriate. Quantitative descriptors are needed. In addition, a description of the nature of “additional shielding” is required.
IICPH submission 89441E, p. 2-3
Unless OPG says exactly how much “hotter” these new wastes are than its baseline radioactivity for waste, and how much shielding it plans to use, it’s impossible to evaluate its plans for handling these wastes. And yet the whole purpose of having it submit its plans in the first place is so that the public can make those evaluations.
Common sense in the future is not a substitute for planning
OPG and the regulatory agencies use a well-developed vocabulary for claiming that things are safe without giving any evidence to back up the claim. This can be seen, for instance, in OPG’s plans for a stormwater management pond. This pond is supposed to hold the poisonous, intensely mineral-laden groundwater from the level of the DGR, which would then be gradually released into the lake. Even the CNSC had problems with the original proposal, as its analysis indicated that this water would still contain unacceptably toxic levels of minerals. The JRP directed the CNSC and OPG to work this out in Undertaking #47, and they duly declared the problem fixed.
We discuss the inadequacy of their solution in detail in our page on meaningless procedures. Here, we just want to call attention to Environment Canada’s* summary of why they now feel the proposal is all right:
We also feel that the follow-up monitoring, adaptive management measures, contingency plans and the additional recommendation that we just made will ensure additional protection of aquatic biota.
Testimony before JRP, October 11, 2013, p. 12
Adaptive management is the practice of adjusting one’s policies and procedures in the face of new experience and evidence. We of course hope that OPG would do this; for example, we would not want it to blindly continue a policy that is clearly causing a disaster. However, inasmuch as adaptive management means “we’ll figure things out better as we go along”, it is no substitute for having a robust policy from the start. If “adaptive management” – the promise to take care of problems as they arise – is allowed to stand in as an assurance of safety without such a policy, that is simply an end run around the process of review.
A lot of adaptive management is common sense, of course. Who wouldn’t want to change what they’re doing if it is disastrous, or switch to a new way of doing things that works better? All of the other project features that Environment Canada mentioned are part of adaptive management and also common sense. Their “additional recommendation”, for example, is that
The water contained in the pond will be tested and compared against discharge criteria. In the event that water quality does not meet criteria, source reduction/elimination and treatment would be applied.
Response to Undertaking 47, p. 1
We would have hoped this would be the case in any event – that OPG would test the pond water for toxicity before releasing it, because (hopefully) approval of the project in general does not constitute a blanket license to discharge toxic effluent into Lake Huron. Likewise, in an emergency we would certainly hope that OPG would develop contingency plans; but the response to Undertaking 47 contains no contingency plans, so it adds no new safety information to the proposal under review. A promise to develop contingency plans in the future, as events arise, is simply another end run around the review process.
As for the “follow-up monitoring”, it is not clear, first of all, whether this is anything more than testing the effluent before releasing it from the pond. Adaptive management and common sense both require feedback to function properly, but a promise to conduct monitoring without a plan for acting on the results does not add up to a safe situation – indeed, monitoring without a plan for how to respond can be virtually meaningless.
We see this at the existing American geological repository, the Waste Isolation Pilot Plant (WIPP) in New Mexico: its air monitoring detected a radiation release on February 14, 2014, but as of a month and a half later, WIPP still has not publicly identified what the problem is, or even how it will locate and identify it. And this is before it can even begin figuring out how to fix things.
If their monitoring program had been designed from the start to support a plan of response – for instance, if their air monitors provided some information on where a radiation source is – they would be much further along in knowing what to do. But it was just generic monitoring, and so has provided no helpful information beyond the fact that there is a problem somewhere. No doubt the managers of the WIPP were committed to coming up with a “contingency plan” if something like this happened. But clearly it would have been to everyone’s benefit if they had planned things through a bit more beforehand. That is what review periods are for – not for saying “Don’t worry! If anything comes up, we’ll fix it just fine.”
Curiously, however, some of OPG’s decision processes appear to rule out monitoring of environmental effects:
Each decision tree for effects assessment on biophysical VECs [Valued Ecosystem Components] includes the possibility of a finding of “may not be significant” (the other two categories being “not significant” and “significant”). The EIS suggests that “an effect that ‘may not be significant’ is one that in the professional judgement of the specialists would not be significant; however, follow-up monitoring should be proposed (or rather ‘implemented’ in some instances) to confirm significant adverse effects do not occur”.
This is my first encounter with the concept of “may not be significant”. The explanation above leaves much to question. If an effect were to be assessed as “may not be significant”, it means not significant but monitoring is required. In other words, the confidence of the assessors is lower in making this call than when declaring an effect as “not significant” or “significant”. It suggests that monitoring is only required when a finding of “may not be significant” is made. As it turns out, no residual adverse effects were deemed to be either “significant” or “may not be significant”, with the obvious conclusion that nothing needs to be monitored.
Duinker report, p. 7
But without ongoing monitoring, how is OPG going to implement adaptive management?
“This won’t be a problem, so we needn’t mention it”
There is also the issue of OPG deciding that something isn’t a danger and therefore not including it in its documentation at all.
The Proponent chose to ignore possible CEs [cumulative effects] associated with malfunctions and accidents because they “are considered too ‘rare’ to be assessed together with those caused by normal operational activities”.
Duinker report, p. 9
Actually, as Chernobyl, Fukushima, the WIPP, and other problematic nuclear sites show, “malfunctions and accidents” are the main cause for concern. If OPG says they are too rare to plan for, that is a sign that its planning procedure has fundamental problems; it is certainly not a valid excuse for leaving the possibility of accidents out of account. The organizations responsible for Fukushima and the WIPP apparently did not consider accidents probable either, and their lack of planning for such incidents has contributed to their trouble in responding to them.
Leaving things out of the assessment process is especially problematic because of the arbitrariness with which OPG has arrived at many of the conclusions supporting such decisions. For instance, Peter Duinker was tasked by the JRP with evaluating OPG’s calculation of cumulative environmental effects. His finding was that OPG’s methods were
– not credible – the work does not adhere to what I consider to be a robust approach to determination of significance of residual adverse effects;
– not defensible – the methods include huge elements of arbitrary and indefensible professional judgements;
– unclear – the scientific basis for many professional judgements in setting category limits and decision-tree combinations was not described;
– not reliable – other expert assessors could easily come to different conclusions;
Duinker report, p. 7-8
He gives the parameters assigned to eastern white cedar as an example of arbitrariness, noting that although this cedar is part of the analysis because it is considered a VEC [Valued Ecosystem Component], it has somehow been assigned a value of “significance = not significant”, which contributes to OPG’s finding that environmental impacts on this cedar are not generally significant.
I had understood from the Consolidated Responses document (page 471) that all VECs were important – otherwise they would not be VECs and therefore not included in the assessment.
Duinker report, p. 7
Likewise, John Bredehoeft showed that the pressure data studies used by OPG gave results that were inconsistent with each other, and were unable to model the hydrogeological measurements at the DGR site while taking all the relevant features of the rock into account. We go into Duinker’s and Bredehoeft’s findings in more detail in our page on meaningless procedures. Here, we will simply note that the arbitrariness of OPG’s evaluation procedures means that the safety of important features of the DGR is never examined at all. For example, in his testimony on October 11, 2013, Alex Monem, counsel for the Saugeen Ojibway Nation, pointed out that as a result of OPG’s assumptions, no environmental impact evaluation had been carried out on a major portion of their fishery:
We have heard Dr. Duinker express a significant lack of confidence in the cumulative effects analysis carried out by OPG, and its methodology for predicting the significance of effects. He states that these shortcomings represent a failure of OPG to meet the requirements of CEAA and the guidelines.
One of his key concerns – I understand – is that unless a residual adverse effect from the project alone was identified, no subsequent cumulative effects analysis was performed to determine how that potential effect could combine with other stressors.
It was clarified for us that this approach was applied to potential impacts on the Lake Huron aquatic environment, and as OPG had identified no adverse effects on Lake Huron, no subsequent cumulative effects analysis was performed.
This is obviously a concern to SON, particularly when taken in conjunction with the uncertainties that have been raised about the adequacy of the stormwater management pond, and perhaps the response to the undertaking we’ve just heard clarifies some of these uncertainties but we haven’t had time to assess that, and it sounds like[…] uncertainty still remains.
It is also of concern to SON that OPG has assumed in its analysis that the quality of MacPherson Bay – the MacPherson Bay aquatic habitat is poor, and as a consequence we can assume not significant to the local ecosystem.
This conclusion appears to have been based on very old data, and one more recent sampling program conducted for Bruce Power in 2007. Our technical reviewers believe that this is likely an insufficient basis for coming to these conclusions.
Testimony before JRP, October 11, 2013, pp. 17-19
At the beginning of this essay, we mentioned that risk situations from everyday life are often not a good model for the DGR, but there is one respect in which thinking about everyday risks is highly relevant.
Presumably, in any situation where we would take a genuine risk in real life, the benefit would be substantial and the potential bad consequences manageable. In the case of the DGR, neither is true. The CNSC’s own report says that the current status quo for storing the nuclear waste is safe and in no danger of running out of capacity, so there is no substantial benefit. And the potential downside – contaminating Lake Huron and/or the groundwater of northern Ontario – would be catastrophic, with no mitigation possible.
It is impossible to know what the risks of this project are if OPG is permitted to hide them. The whole point of the review process is for OPG to present the data behind its judgments and decisions so that others can look at the data and give feedback. It circumvents the whole process if OPG is allowed to simply label some project features “safe”, or decide without presenting evidence that other aspects simply don’t need to be considered.