
Coronavirus: Are Decision Makers Guilty of Unpreparedness?

It was quite certain that, faced with the coronavirus epidemic, we would seek out the people who had predicted it, because the predictive paradigm obsesses us. And sure enough: some dug up a CIA report containing a double page on the risk of a pandemic; others, a video in which Bill Gates warns about the same risk. No doubt someone will soon unearth a grandmother from the Luberon who has also been talking about a pandemic for a while, and she will make the 8 p.m. headlines on TF1.

The commentators’ conclusion: the epidemic was predicted, governments were warned, and they did nothing! Unfortunately, the story does not hold, first because an epidemic is a matter of uncertainty and is therefore not predictable, and second because hunting for someone who successfully predicted an event reflects a retrospective bias. Above all, it ignores the difficulty of making decisions under uncertainty.

Retrospective bias

We spend our lives forecasting. Every day, a deluge of more or less serious forecasts is produced around the world on just about everything. Then one day an event occurs. We turn to the past and are amazed to find that someone, somewhere, predicted it! What a genius! What a premonition! Who is he or she? What are their networks? What is their secret? But this is, of course, a retrospective bias: we forget all the false forecasts made throughout the year, retrospectively filter out the one that corresponds to what happened, and convince ourselves it was an exact prediction. This is to forget that even a stopped clock shows the correct time twice a day.

Specialist bias

Bill Gates invests in public health through his foundation. It is natural that he warns about epidemics, because that is the subject he works on. Epidemics are nothing new; they have existed since time immemorial. Recall, once again, that the plague killed at least a third of the European population in 1348, and that the Asian flu of 1956 killed about two million people. It is therefore normal for someone who works on epidemics to be sensitive to this risk, just as it is normal for a firefighter to warn about fires or a police officer about crime. Is this a prediction? No.

A prediction would describe in advance what will happen and when, which Bill Gates did not do, for the simple reason that it is impossible. Evoking a possibility, the risk that something of this kind could happen, is not useless; quite the contrary. The expert’s important role, to use the terms of the researcher Paul Saffo, is precisely to define a “cone of uncertainty”, i.e. to delimit the field of the possible, however broad, and on that basis provide the decision maker with “what he needs to know to act sensibly in the present.”

Forecast production: a very political game

The production of forecasts is not only the work of experts worried about their area of specialty. It also serves very specific interests, including those of government bodies and public agencies. What a government agency fears most is being criticized for not having warned the decision maker of a possible risk. So, belt and braces, it makes very sure that every conceivable risk is passed on to the decision maker, who is consequently buried under a deluge of forecasts.

This is why analyzing the reports of organizations like the CIA is always interesting for the way they try to cover themselves, in particular with the well-known technique of the footnote: a scenario is laid out in the text, and a footnote notes that the contrary scenario is possible, so that the agency is covered whatever happens.

Likewise, a very popular exercise among economists, for example, is to predict “the next crisis”. Since there always ends up being a crisis, one way or another, there is a real advantage in positioning oneself this way, to benefit from the rent granted to those who “got it right”. It matters little that the crisis is rarely due to what the expert said.

But it goes further. The sociologist Gérald Bronner shows that Twitter accounts are used to mass-produce fake news for this specific purpose. Today one can automatically generate numerous variations around an attack in Paris, for example: “Shootout at Saint-Lazare station, 15 dead”, “Shootout at Saint-Lazare station, 135 dead”, “Bomb at Saint-Lazare station, 15 dead”, etc.

Every hour, ten thousand variations on the theme can be produced (varying the place, the nature of the attack, the number of dead, etc.). 99.99999% of these tweets will disappear into the void, because of course nothing of the kind will happen; but their number is such that one day an event will correspond to one of these millions of tweets, and someone can then say “Somebody predicted it!” or, worse, “Something is being hidden from us.” Thousands of arrows are shot in the hope that one will hit its target.
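The combinatorial mechanics behind this are simple to reproduce. As a minimal sketch (the attack types, place names, and casualty counts below are illustrative assumptions, not real events or real tweets), a few template slots multiply into dozens of variants:

```python
import itertools

# Hypothetical template slots; each list entry is an invented example.
attacks = ["Shootout at", "Bomb at", "Attack at"]
places = ["Saint-Lazare station", "Gare du Nord", "Montparnasse"]
deaths = [5, 15, 35, 135]

# Cartesian product of the slots: every combination becomes one message.
variants = [
    f"{attack} {place}, {n} dead"
    for attack, place, n in itertools.product(attacks, places, deaths)
]

# 3 attack types x 3 places x 4 casualty counts = 36 variants
print(len(variants))
```

Scale each list to a few dozen entries and the product runs into the thousands per hour, which is precisely the point: the goal is coverage of the space of possible events, not accuracy about any one of them.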

The decision maker dilemma

Still, one can always criticize the decision maker for not having acted despite the experts’ warnings. This is the classic “warner/warnee problem” invoked in situations of strategic surprise, i.e. cases where the expert claims to have given the right information but to have been ignored by the decision maker. From the decision maker’s point of view, however, the situation is complicated: his daily life is made up of warnings in every domain. The real difficulty for him (or her) is to choose which of all these announced catastrophes to address as a priority, because he obviously cannot address them all. He will do so according to what he judges important, for himself, for his institution, or for his country; in short, he will do so according to his mental model, i.e. his beliefs and values. He has no choice but to exercise judgment.

Faced with fifty announcements of possible, even imminent, disasters at any given time, there is no objective way to choose, because we are in the realm of uncertainty, that is, of the unprecedented, for which there is no data on which to calculate what to address first. Imagine a health adviser briefing the President in December on a virus that is killing some elderly Chinese in a little-known province of China. The country is in the middle of a transport strike and at a standstill, the yellow vests have been ransacking city centers for over a year, the police are exhausted, the opposition accuses the President of fascism or of laxity (depending on the day), not to mention the threat of terrorist attacks. With so many emergencies, you have to choose carefully. Where do paper masks fit into all of this? The President is not the only decision maker, but this type of situation is found at every level.

Faced with uncertainty: act without predicting

A major event like the coronavirus epidemic has a dual reality: it corresponds to something well known on many levels (epidemics have been with us since the beginning of time, and we know what a virus is and how it is transmitted), but its emergence is a matter of uncertainty: it is not possible to predict when the next epidemic will start, or what its scale will be. The question is therefore not how to predict epidemics, but how to put in place the means to detect and treat them quickly. In Saffo’s terms, the expert can give the decision maker enough information in advance to allow a decision, even if prediction is impossible.

Indeed, to act preventively, one does not need to know exactly what will happen and when. If you fear an epidemic, you can develop surveillance centers, encourage research into tests and vaccines, buy masks, train doctors, and so on. Forecasting is therefore not necessary in order to act preventively. The fact remains that the decision maker must make choices, and that he will be held accountable for those choices by the lesson givers, those eleventh-hour converts who, once the match is over, will comfortably say “I told you so.”
