The classic risk management problem consists of making decisions under uncertainty. Any decision, no matter how carefully considered and well informed, can err in either of two ways:
- By failing to prevent a risk from occurring
- By preventing one risk at the cost of creating a different risk
Sandra Bell illuminates this problem in the case of Islamic terrorism in the UK. Bell is director of the Homeland Security & Resilience Department at the Royal United Services Institute in Whitehall.
In a Wall Street Journal commentary (temporarily available to nonsubscribers), Bell says MI5 faces a grim choice: target persons who turn out not to be terrorists or affiliated with them, or fail to prevent terrorist incidents. She sides with taking preventive action, knowing that this will inevitably burden the innocent and result in collateral damage to future intelligence gathering:
> [N]othing will change the underlying dilemma: the need for the police to act on intelligence rather than evidence to stop potentially devastating attacks. This will inevitably lead to arrests of people who may be innocent, or of suspects who cannot be convicted due to the lack of evidence. This will not only damage relations with the Muslim communities, thus making it harder to get the intelligence on which to act in the first place. It also ties up scarce resources. Likewise, failure to follow up on leads will continue to cause uproar.
In risk management parlance this is a debate over the relative weights to assign to Type I and Type II errors. A Type I error arises when MI5 arrests an innocent person. A Type II error arises when MI5 has intelligence but fails to take action against someone who subsequently commits a terrorist act. Type I errors can be driven to zero by refusing to ever make a preventive arrest. Type II errors can be driven to zero only in a perfectly effective police state. Because neither minimum is desirable, the optimum policy consists of balancing Type I and Type II errors, and this is a matter of policy judgment. Science can help illuminate the tradeoffs but it cannot produce a solution.
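The tradeoff can be made concrete with a toy simulation. The sketch below is purely illustrative: the suspicion scores, distributions, and population sizes are invented, and a single "act on intelligence" threshold stands in for the whole decision process. It shows only that raising the threshold lowers the Type I rate while raising the Type II rate, and vice versa; neither error can be driven to zero without maximizing the other.

```python
# Hypothetical illustration (invented numbers, not real intelligence data):
# trading off Type I and Type II errors by varying a single decision threshold.
import random

random.seed(0)

# Assumed suspicion scores in [0, 1]: innocents cluster low, genuine
# plotters cluster high, with overlap that makes perfect separation impossible.
innocents = [random.betavariate(2, 5) for _ in range(10_000)]
plotters = [random.betavariate(5, 2) for _ in range(100)]

def error_rates(threshold):
    """Type I rate: share of innocents acted against.
    Type II rate: share of genuine plotters missed."""
    type1 = sum(s >= threshold for s in innocents) / len(innocents)
    type2 = sum(s < threshold for s in plotters) / len(plotters)
    return type1, type2

for t in (0.3, 0.5, 0.7, 0.9):
    t1, t2 = error_rates(t)
    print(f"threshold={t:.1f}  Type I rate={t1:.3f}  Type II rate={t2:.3f}")
```

A threshold of 0 acts against everyone (Type I rate of 1, Type II rate of 0); a threshold above every score acts against no one (the reverse). Every real policy sits somewhere between those extremes.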
Bell offers a potentially useful suggestion — that MI5 “improve the public’s understanding of these problems by being more open and providing timely information.” But how, especially in cases where information disclosure itself compromises the effectiveness of anti-terrorism programs? She does not say.
More useful would be a program that does a better job of communicating to the public the inevitability of trading off Type I and Type II errors — of course, without resort to technical language. The concepts themselves are easily grasped intuitively. What MI5 needs is broad societal endorsement of an “acceptable tradeoff zone”. That is, decisions anywhere within the zone would be presumptively acceptable, and protected from after-the-fact criticism and second-guessing.
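One way to picture such a zone is as a pair of publicly endorsed caps on the two error rates. The sketch below is a hypothetical illustration only: the cap values are invented, and a real zone would be the product of political deliberation, not a config file.

```python
# Hypothetical "acceptable tradeoff zone": a policy whose realized error
# rates fall inside publicly endorsed bounds is presumptively acceptable.
# The cap values below are invented purely for illustration.
ACCEPTABLE_ZONE = {
    "max_type1_rate": 0.05,  # share of innocents wrongly acted against
    "max_type2_rate": 0.20,  # share of genuine threats missed
}

def within_zone(type1_rate, type2_rate, zone=ACCEPTABLE_ZONE):
    """Return True if a policy's error mix lies inside the endorsed zone."""
    return (type1_rate <= zone["max_type1_rate"]
            and type2_rate <= zone["max_type2_rate"])

print(within_zone(0.03, 0.15))  # inside the zone: presumptively acceptable
print(within_zone(0.10, 0.15))  # Type I rate exceeds the cap: open to criticism
```

The point of the construct is what happens after the fact: a decision whose error mix lands inside the zone would be protected from second-guessing even when a particular outcome is bad.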
Where Bell does not aid this discussion is in her concession that Type I and Type II errors constitute “mistakes.” She says the authorities “need to have the courage to acknowledge that mistakes will be made and the humility to apologize for them when they occur.” Under Bell’s reasoning, every decision must be a mistake, because every decision leads to either a Type I or a Type II error.
This contradicts the lessons of decades of decision-analytic research, which teaches that the propriety of a decision made under uncertainty cannot be judged solely by its outcome. A mistake occurs only if the resulting mix of Type I and Type II errors lies outside the acceptable tradeoff zone. Bell says “the British public needs to understand that we are moving into a new, grim normality.” Inviting every decision to be construed by someone as a mistake undermines this purpose: it enables every acceptable risk management decision to be converted into a grievance.