How to make a picture of the truth | TheArticle


Mathematical modelling has been around for over a century, but until now it has been of little interest to laymen. It is the present pandemic that has brought it to public attention. I have written before about the powers of mathematics in TheArticle. I claimed that the calculations of William Thomson (elevated to the peerage as Lord Kelvin) concerning the laying of the submarine cable between Britain and the United States in the 1860s constituted the first shots in the Second Industrial Revolution. But that was a different sort of mathematics, with little to do with the type of maths used for modelling. That demanded genius: a perfect understanding of physics and the formulation of its tenets by cutting-edge mathematics. In contrast, the amount of mathematics needed for modelling is fairly modest. Anyone who has hopes of passing A-level mathematics could do it.

What kind of processes can be modelled? Practically anything. The first paper on modelling was published during the First World War. Its author was Frederick Lanchester, and he modelled battles, in particular battle casualties. Wars are of course always at the centre of attention, but modelling has many other facets. In fact, one can model anything. Once, I even modelled a government tax policy that would give the government the best chance of being re-elected.

And yet the modeller's freedom is limited. He must restrict the argument to a finite number of relationships. Suppose, for example, that a modeller wants to investigate how the number of married people living in a country depends on certain factors. That number will change month by month. What does this change depend on? It might depend on the time of the year: there might be more marriages in spring than in winter. It might depend on religion: the proportion of Catholic marriages might exceed the proportion of Protestant marriages. It might depend on age: maybe more people get married in their thirties than in their twenties or forties. There might be more marriages in the North than in the South. And quite definitely, we know that more marriages take place after a war than during one. So far we have mentioned only marriages, which, by definition, increase the number of married people. The number might actually move in the other direction due to divorces.
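
The month-by-month bookkeeping described above can be sketched as a toy difference-equation model. Every figure below (the monthly wedding and divorce counts, the spring peak, the starting total) is invented purely for illustration; the point is only the shape of the calculation.

```python
# Toy model: how the number of married people changes month by month.
# All figures (wedding and divorce counts, spring peak, starting value)
# are invented for illustration only.

def step(married: int, month: int) -> int:
    """Advance the count of married people by one month."""
    seasonal = 1.5 if month in (3, 4, 5) else 1.0   # assumed spring wedding peak
    weddings = int(2_000 * seasonal)                # weddings this month (invented)
    divorces = 1_500                                # divorces this month (invented)
    # Each wedding adds two married people; each divorce removes two.
    return married + 2 * weddings - 2 * divorces

married = 500_000
for month in range(1, 13):                          # one model year
    married = step(married, month)
print(married)  # 518000
```

A less crude model would, of course, make the number of weddings depend on the numbers of single men and women, and the numbers of divorces and deaths on the number of existing marriages.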


I remember that there was a year in Budapest when the number of divorces exceeded the number of marriages. If we know that marriages increase the number of married people and divorces reduce it, what other phenomena need to be considered? Death, for one thing, so we must include it in the model. What else should we include? So far we have had only variables like marriages, divorces and deaths. Then we must introduce two new variables: the number of unmarried women and the number of unmarried men. The higher these numbers, the more likely it is that a man and a woman will find each other and tie the knot. And so if you want an equation for all this, you might say:

_change in the number of married people = (number of single men) × (number of single women) − (number of divorces) − (number of deaths of at least one spouse)_

Models like this have a crucial role in the natural sciences. Occasionally they are wrong, as the famous case of cold fusion showed, but those are rare exceptions. But does it matter? Much of the public believes that modelling is a dark science that can reach any conclusion it wants. Perhaps it is worth recalling the words of Charles Babbage, the 19th-century computer pioneer, who said: “Errors using inadequate data are much less than those using no data at all.” As long as all the assumptions are given, nearly all models are useful. Provided no mathematical mistakes are made, there is no such thing as a _wrong_ model. One cannot criticise the results; one can only disagree with the assumptions the model uses and so, by implication, disagree with the conclusions — but the conclusions cannot be regarded as _wrong_. They are as good as the assumptions behind them.

When it comes to coronavirus, a pandemic of this sort sits somewhere between the science of epidemiology and the art of politics. We know that politics is a dark art, and epidemiology can offer only limited help. Data collected during past epidemics can be helpful, but we do not know how much of it is applicable to Covid-19. The difficulties of setting up a model for the spread of the virus are tremendous. Early in the crisis, the press reported details of models that predicted a very high number of deaths. These stories were met with incredulity and were regarded as scare-mongering. As I said before, the results depend on what is taken into account in a model. Once it is assumed that infection can spread, exponential increase is a possibility. I would distrust a model that claims that millions of deaths in a country are impossible. It all depends on what you put into your model. And there is the rub: so little is known about Covid-19.
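
That exponential possibility is easy to see in numbers. A minimal sketch, with an assumed reproduction number and an assumed number of generations (every figure here is hypothetical):

```python
# Unchecked exponential spread: each case causes R new cases per generation.
# R and the number of generations are assumed figures, for illustration only.
R = 2.5              # assumed reproduction number
cases = 1.0          # start from a single case
for generation in range(10):
    cases *= R       # each generation multiplies the case count by R
print(round(cases))  # 2.5**10 ≈ 9537 new cases by the tenth generation
```

With R held below 1, the same loop shrinks towards zero instead of exploding, which is the whole point of measures that reduce transmission.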


Marriage has a long and respectable history, and building a model of it is fairly straightforward. But coronavirus is a newcomer. To build a good model you need to know the data and the relationships between the variables. What should be known? I give a partial list below.

* How many people in the UK are immune to coronavirus infection?
* How many infected people had no symptoms?
* How long is the latent period?
* What proportion of those who get infected die?
* What is the average time between getting infected and dying?
* What difference does it make if the infected person is in hospital, in a care home or in the community?
* What are the effects of race, age and gender?
* Once a person recovers, can he/she be infected again?
* How does the infection rate depend on the physical distance between infector and infectee?
* How can the all-important R factor be determined from the data?

I believe that all these data that should enter the model are missing. So how can we trust the models? We can't. We have inadequate results based on inadequate data. That's bad, but it's all we have. So the government is forced to rely on scientists who produce inadequate advice, based on inadequate data. The main danger was that the NHS would be overwhelmed. The scientists' advice was to recommend isolation and social distancing to delay the peak. The government accepted the advice in spite of the harm it was bound to inflict upon the economy. The NHS has not been overwhelmed.

This is the point at which politics enters the fray. When a model predicts something deeply unpleasant, say a million deaths, what do politicians do? Suppress the conclusion in order to avoid panic, or emphasise that this is just one of the possible outcomes under certain circumstances? Authoritarian governments will choose the former option, democracies the latter.

The most important political decision is how to relax social distancing and save the economy. The government has no easy options. They should ask the scientists how many extra deaths will occur if some of the restrictions are lifted. Based on inadequate data, they will give the best answer available. Then it is up to the government to balance the destruction of the economy against further lives lost. That involves not just probabilities (statisticians, epidemiologists) but detailed models (mathematicians) and moral judgments (philosophers) and, at its crudest, putting values on lives. Having listened to the best available experts, they should act on what they believe to be the best compromise. Should they tell the people about the projected extra deaths? They should — but I strongly doubt that any government will do that.

Once this pandemic is over we can analyse the mistakes. Virologists are the only ones who can help in the longer run. We should treat the researchers as royalty, fund their groups profusely, and expect them to learn all they can about Covid-19. Perhaps then, when the next pandemic comes around, the models used to analyse the outbreak will contain enough of the necessary data, and more lives — and livelihoods — will be saved.