The long haul


Far too much of the debate in the UK about responding to SARS-CoV-2 has been about short-termist responses. So, for instance, just as each lockdown has begun to have a real effect, it has been lifted in the name of the economy.

The result is that we are now in the longest lockdown of all, we have the deepest economic setback of any major economy (though obviously other factors are contributing to that too) and we have one of the highest death rates in the world. Not good.

It ought to be becoming clearer to more people that, actually, even after a mass vaccination programme (the one area where the UK has, thankfully, done well), the virus will not be gone from our lives. I don’t think there will, in my lifetime, be a return to what was fully “normal” as recently as December 2019.

Over time we can expect, as a species, to see the threat from the virus diminish as, like the common cold, more children who catch it while young grow to adulthood with a fully primed immune system. For the rest of us there will be vaccines – and there will also be mutations that may threaten our vaccine-acquired immunity.

We cannot stop dangerous mutations arising – so long as the virus is in circulation it will mutate and, if a mutation improves the virus’s ability to evade vaccines, it will spread.

We can, though, slow the speed of the spread of any mutation through – you guessed it – social distancing, mask wearing and test-and-trace protocols. So these may eventually be relaxed as vaccination reaches more and more people, but it is hard to see them ever going away completely. I don’t expect you are going to be let into a hospital without wearing a mask for very many years to come, for instance.

Once we come to terms with the fact that we are here for the long haul we need to start reordering our society in that light. One of the things that surely must follow is some form of immunity/vaccination passport.

Until recently I thought this was a terrible idea – but since recognising the truly long-term nature of the threat I have come to see such passports as inevitable, and necessary; the key issue is how they are introduced and used.

My initial thoughts are that firstly they should be a citizen’s right – everyone should be able to get one and access shouldn’t depend on wealth.

Secondly they should be regulated to an international standard that, as far as is practical, protects privacy and avoids unnecessary state monitoring. Or to be more direct: if the Russian (or any other) state wants to insist its citizens carry the equivalent of an electronic tag with them everywhere there isn’t much we can do to stop it, but we could say such devices are not recognised for use here.

Thirdly – and related to the first point – with the obligation to have one should come the right to access services. Public bodies or other service providers might have legitimate reasons to restrict access to those who have been vaccinated or are otherwise certificated, but they should not be able to refuse access to anyone who meets the criteria either. In other words if your body or company requires access to the information the passport contains then it must also submit to the responsibilities that come with it.
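To make the privacy point concrete, here is a minimal sketch in Python of what a minimal-disclosure credential could look like. Everything in it – the issuer key, the field names, the HMAC construction – is hypothetical and chosen only to keep the example dependency-free; a real scheme would use public-key signatures so that verifiers never hold the issuer’s secret. The point it illustrates is that the payload can carry only the single fact a venue needs, and verification can happen entirely offline, so the state never learns where the passport was presented.

```python
import hashlib
import hmac
import json
import time

# Hypothetical issuer key, for illustration only. A real scheme would use
# public-key signatures (e.g. Ed25519) so that verifiers never hold a
# signing secret; HMAC just keeps this sketch standard-library-only.
ISSUER_KEY = b"hypothetical-national-issuer-key"

def issue(vaccinated: bool, valid_days: int) -> dict:
    """Issue a minimal-disclosure credential: no name, no address, no
    medical history - just the one fact a venue needs, plus an expiry."""
    payload = {
        "vaccinated": vaccinated,
        "expires": int(time.time()) + valid_days * 86400,
    }
    blob = json.dumps(payload, sort_keys=True).encode()
    sig = hmac.new(ISSUER_KEY, blob, hashlib.sha256).hexdigest()
    return {"payload": payload, "sig": sig}

def verify(cred: dict) -> bool:
    """Check the signature and expiry entirely offline - no call home,
    so the issuer never learns where the credential was presented."""
    blob = json.dumps(cred["payload"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, blob, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, cred["sig"])
            and cred["payload"]["expires"] > time.time())

print(verify(issue(vaccinated=True, valid_days=180)))  # True
```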

After neo-liberalism: back to Bell Labs?


In general I hate the term “neo-liberal” – as in the last decade it has become a fashionable way for some people on the left to say “things I don’t like” whilst giving their (often irrational) personal preferences a patina of intellectual credibility.

Glen O’Hara looks at the accusation that the last Labour government was “neoliberal” in some detail and I’m not going to reheat his arguments here, but as he says:

This rise in public spending was not only imagined in liberal terms—as a new contract between consumers and providers. For the emphasis on neoliberalism also misses the fact that the Blair agenda sought specifically to rebuild the public sphere around a new vision of a larger, more activist but more responsive and effective state. First through targets—and then, when they seemed not to deliver strong improvement, through decentralised commissioning and choice—the government sought to improve public-sector performance in a way that would be visible on the ground, and so maintain its relevance and political support.

But the term is not in itself meaningless – personally, I find it much more useful as a tool of analysis when applied to how governments across much of the western world have approached the private (and not the public) sector over the last forty years. For sure there has been privatisation too – expanding the role of the private sector – but certainly in the UK the long-term picture has not been one of a shrinking state, but of a state spending money in different ways. See the chart, which plots public spending as a share of GDP since the end of the 1950s – and notably this does not include the massive covid-19-driven spike in public spending in 2020 and 2021.

What has diminished is both state ownership and (much more importantly, I believe) state partnership with the key economic sectors that provide private goods and services – until, perhaps, today (as I discuss below).

As my example, let me look at AT&T in the United States (the case is American but the argument applies more widely). Today what was once the American Telephone and Telegraph Company is still the world’s largest telecommunications concern, but it’s a very different beast from the company of that name forty years ago. Now it competes in a cut-throat global market; then it was a highly regulated, privately owned, classical monopoly utility.

No doubt its break-up from 1984 onwards meant Americans got smaller phone bills (if they still use landlines at all), but what has the overall balance for society been?

Reading Brian Kernighan’s UNIX: A History and a Memoir and the earlier The Idea Factory, you get the impression that subjecting corporates to cut-throat competition has not been all wins for the consumer. The “Bell System” monopoly paid for a massive research operation that delivered the transistor that made the digital age possible, the Unix that now dominates, and an awful lot else besides.

AT&T’s shareholders never saw the massive windfalls enjoyed by people who invested in Amazon twenty years ago, but their stock paid a consistent dividend and the economy in which the company operated also grew steadily. Investors got a stable return, and AT&T had the capacity to risk capital on long-term research and development.

The neoliberal revolution in the private sector has indeed given us Amazon (and Apple), and with it massive disruption that is often beneficial to humanity as a whole (think of the evaporation of poverty in much of east and south-east Asia). But has it delivered fundamental advances in human knowledge of the scale and power that the older, regulated capitalism did? I feel less than fully convinced.

The counter-case is, of course, in the field of bio-medicine. The enormous productive power of globalised capitalism is, even as I write this, churning out the product of the most spectacular scientific effort in all human history – vaccines against covid-19. No previous human generation has been able to do what we now believe – for very good reasons – we can: meet a newly emerged global epidemic in open combat and win.

But the story of the vaccine is also a story of partnership between state and capital. Governments have shared risk with the pharmaceutical companies, but competition has also played its part – and to me it suggests a future beyond a neo-liberal approach to the private sector in key industrial areas. The state should not be trying to pick winners but sharing risks and building an economic ecosystem where the balance of risks and rewards means the aim is not the next 10,000% return on investment but an environment in which good research can thrive.

I know this is in danger of sounding very motherhood-and-apple-pie, and we should be wary of propping up existing market giants simply because they happen to be market giants. So let me also make a suggestion: imagine if the UK government decided that, instead of forever spending large amounts on office suites from large software houses – installed on millions upon millions of computers in schools, hospitals and police stations – it indicated it was willing to pay a premium price for a service contract of, say, 5–7 years for someone who could turn one of the existing free-software office suites into a world-class competitor, and, more than that, that it was willing to provide capital, as an active investor, to the two or three companies that came forward with the best initial proposals?

The private sector would be shouldering much of the risk but would be aiming for a good reward (while free software’s built-in escrow mechanism would also mean that the private contractor couldn’t just take the money and ‘steal’ the outcome). Ultimately citizens (globally) could expect to see real benefits and, of course, we would hope any current monopolist would see competition coming and be incentivised to innovate further.

The missing link and closing schools


London, where I am writing this, is now perhaps the global centre of the covid-19 pandemic, thanks to a mutation of the virus that has allowed it to spread more easily. This mutation may not have come into existence in the South East of England, but it has certainly taken hold here, and about 2% of London’s population currently have symptomatic covid-19.

In response, all primary and secondary schools, which were due to reopen tomorrow, will be effectively closed and teaching will move online.

Suddenly the availability of computing resources has become very important – because unlike the Spring lockdown, when online teaching was (generally) pretty limited, this time the clear intention is to deliver a full curriculum, and that means one terminal per pupil. But even now, how many homes have multiple computers capable of handling this? For a household with two children between the ages of 5 and 18 and two adults working from home, it is going to be a struggle.

This could have been the moment that low-cost diskless client devices came into their own – but (unless we classify mobile phones as such) they essentially don’t exist. The conditions for their use have never been better – wireless connections are the default means of connecting to the internet and connections are fast (those of us who used to run the X Window System over 28.8kbit/s dial-up think so, anyway).

Why did it not happen? Perhaps because of the fall in storage costs: if screen and processor costs haven’t fallen as fast as RAM and disk, thin clients get proportionally more expensive. Or perhaps it’s that even the fat clients are thin these days: a £115 Chromebook probably can’t realistically act as a server in the way a laptop costing six times as much might.
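A toy cost model makes the point – every figure below is invented purely for illustration, not real component pricing:

```python
# Toy cost model with invented figures, purely to illustrate the argument:
# a thin client still needs the screen, case, processor and network parts,
# and only saves on local storage and memory.
SHARED_PARTS = {"screen": 60, "case_and_psu": 20, "cpu_board": 30, "wifi": 5}
LOCAL_PARTS = {"ram": 15, "ssd": 20}

fat = sum(SHARED_PARTS.values()) + sum(LOCAL_PARTS.values())
thin = sum(SHARED_PARTS.values())
print(f"fat £{fat}, thin £{thin}, saving {100 * (fat - thin) / fat:.0f}%")
# -> fat £150, thin £115, saving 23%

# If RAM and disk prices halve while screens and processors do not,
# the thin client's relative advantage shrinks:
LOCAL_PARTS = {k: v / 2 for k, v in LOCAL_PARTS.items()}
fat = sum(SHARED_PARTS.values()) + sum(LOCAL_PARTS.values())
print(f"fat £{fat:.0f}, thin £{thin}, saving {100 * (fat - thin) / fat:.0f}%")
# -> fat £132, thin £115, saving 13%
```

Whatever the real numbers, the direction of travel is the same: cheap storage eats the thin client’s price advantage.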

But it’s also down to software choices and legacies. We live in the Unix age now – Android phones and Mac OS X machines, as well as Linux devices, all run some version of an operating system born of an explicit desire to share time and resources across multiple users. But we are still living in the Microsoft Windows era too – and although Windows has supported multiple simultaneous users for many years, few people recognise that, and even fewer know how to activate it (especially as it has been marketed as an add-on rather than the built-in feature it is in Unix). We (the public at large) just don’t think in terms of getting a single, more powerful device and running client machines on top of it – indeed most users run away at the very idea of invoking a command-line terminal, so encouraging experimentation is difficult.
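To show how little is technically missing, here is a sketch of the thin-client pattern using only stock Unix tools – the hostname and application are placeholders, and it assumes a home server with sshd and X11 forwarding enabled (and an X server on the client device):

```python
import subprocess

# Hypothetical names: a shared family server and the app to run on it.
SERVER = "family-server.local"
APP = "libreoffice"

# "ssh -X" asks the remote machine to forward X11, so the application's
# windows appear on this device while its CPU and RAM load stay on the
# server. Only keystrokes and pixels cross the network.
subprocess.run(["ssh", "-X", SERVER, APP], check=True)
```

In practice you would just type the ssh command, but the point stands: the multi-user machinery has been sitting in Unix all along; what’s missing is packaging that makes it usable without ever seeing a command line.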

Could this ever be fixed? The Chromebooks are, of course, thin clients of a sort, but they tie us to an external provider and don’t liberate us to use our own resources (well, not easily – there is a Linux under the covers, though). Given the low cost of the cheapest Chromebooks, it’s hard to see how a challenger could make a true thin-client model work – though maybe a few educational establishments could lead the way, giving pupils and students thin clients that connect to both local and central resources from the moment they are switched on.

Honesty about vaccination


I am not medically qualified or a statistician or epidemiologist so this is not expert opinion, but I cannot help but feel that there has been a lack of honesty in the public discourse about what vaccination against covid-19 will achieve.

First of all, I think the effort to find a vaccine will succeed, and in doing so it will be the greatest scientific achievement of humanity since Armstrong walked on the Moon. By next Spring I expect to see large numbers of people being vaccinated.

But I also think the following are quite likely to be true:

  • Vaccination – certainly with any vaccine available in 2021 – will not give guaranteed immunity to the disease, but will probably give some people protection and lower the severity of the disease for others.
  • The first vaccines in the field won’t be the best achievable, and the cycle of research and improvement will go on for years.
  • Vaccination may only be really effective for a short time – perhaps as short as six months.
  • All of this means we are going to have to live with many of the restrictions on our lives for years to come – hopefully there will be a rapid de-escalation throughout 2021, but there will still be a baseline that will quite likely require social distancing in many indoor venues.
  • Lowering the infection rate will mean “test and trace” finally starts to work.

The other point to note is that covid-19 is the third disease caused by a novel coronavirus in the 21st century – we were luckier with the other two, SARS and MERS, but unless we act to minimise the risks we should expect a fourth such disease within the next decade, if not sooner.

Beards and spandrels


One of the least important ways in which the current world-wide crisis over covid-19 is going to affect many of us is the state it is going to leave our hair in. Barbers and hairdressers are closed or closing – either under orders, because custom has dried up or because concerns about staff and customer safety are forcing the decision.

Working remotely – if you are lucky enough to have work – means that personal grooming isn’t quite as important as before (hygiene, of course, is more important than ever).

So this week I didn’t shave for five days – perhaps the longest I have gone as an adult. As a result I grew a decent amount of fur (most of it white, I’m afraid) and when I shaved it off I was set wondering why the bristles on different parts of the face, while generally of a similar length, were of different stiffness.

On the cheeks the hairs were softer (and all white too), while on the chin they were stiffer, and on the upper lip very stiff indeed (and also dark).

I mused publicly on what selection criteria had created this.

And sure enough a biologist – my friend (Dr.) Tim Waters – replied, questioning why I thought it might be an evolutionary adaptation at all, and referred me to this paper – The Spandrels of San Marco and the Panglossian Paradigm: A Critique of the Adaptationist Programme. If you have half an hour or a bit more to spare I really recommend it – there are a few terms in there with which I wasn’t familiar, but the core argument is very accessible and the paper is brilliantly written.

Its core metaphor is the spandrel – the triangular space created by placing an arch beneath a straight line (or an upside-down arch above a line). The authors (Gould and Lewontin) suggest that far too many evolutionary biologists would treat whatever fills the triangle as having been selected for evolutionary advantage when, actually, it’s just a by-product of a bigger design decision (e.g., to rest a dome upon arches).

The evolutionary-adaptation-above-all idea is firmly embedded in the public consciousness – in large part thanks to the brilliant popularisations of Richard Dawkins – but Gould and Lewontin cut through a lot of that like a knife through butter. I’m not qualified to make a judgement on who is right here, but it’s a fascinating debate.

Progress is not the only option


The global pandemic of covid-19 is, in its way, a triumph for the scientific method: scientists warned for a long time of the danger of a pandemic caused by a novel virus and so it has come to pass.

But in the crisis we shouldn’t forget all the other issues science warns us about – and here’s something else to cheer you up: even a ‘limited’ nuclear war in (for Europeans and Americans) far-off parts of the world could cause a decade of starvation.

The concept of a nuclear winter isn’t a new one – and if you’ve ever watched Threads you are unlikely to be under any illusions about just how devastating the climate collapse that would follow a full-scale nuclear exchange would be.

But even a ‘limited’ nuclear exchange between India and Pakistan – two countries which have fought three full-scale wars since independence in 1947 and where incidents of military conflict are frequent – would be devastating to global food supplies, according to a new study published in the Proceedings of the National Academy of Sciences in the US.

“A regional nuclear conflict would compromise global food security” is based on a scenario of 100 strikes of 15 kilotonnes each – each roughly Hiroshima-sized, and similar in number to the warheads two British Vanguard-class submarines could fire off between them (though the British warheads are higher-yield). The authors estimate that soot from the resulting fires would lower the global temperature by 1.8°C, and that this would do much more damage than a 1.8°C warming caused by carbon dioxide, because carbon dioxide also encourages plant growth.

Their abstract reads:

A limited nuclear war between India and Pakistan could ignite fires large enough to emit more than 5 Tg of soot into the stratosphere. Climate model simulations have shown severe resulting climate perturbations with declines in global mean temperature by 1.8 °C and precipitation by 8%, for at least 5 y. Here we evaluate impacts for the global food system. Six harmonized state-of-the-art crop models show that global caloric production from maize, wheat, rice, and soybean falls by 13 (±1)%, 11 (±8)%, 3 (±5)%, and 17 (±2)% over 5 y. Total single-year losses of 12 (±4)% quadruple the largest observed historical anomaly and exceed impacts caused by historic droughts and volcanic eruptions. Colder temperatures drive losses more than changes in precipitation and solar radiation, leading to strongest impacts in temperate regions poleward of 30°N, including the United States, Europe, and China for 10 to 15 y. Integrated food trade network analyses show that domestic reserves and global trade can largely buffer the production anomaly in the first year. Persistent multiyear losses, however, would constrain domestic food availability and propagate to the Global South, especially to food-insecure countries. By year 5, maize and wheat availability would decrease by 13% globally and by more than 20% in 71 countries with a cumulative population of 1.3 billion people. In view of increasing instability in South Asia, this study shows that a regional conflict using <1% of the worldwide nuclear arsenal could have adverse consequences for global food security unmatched in modern history.

The impact would be global:

[Figure: impacts on global maize production]

Why bring it up now, just as we are facing another crisis of deep and lasting significance? Because nothing breeds conflict more than internal stress in a state. The impact of covid-19 on India or Pakistan will certainly not be positive, and if it pushes either state towards conflict, that matters for all of us.

More than that, the pandemic should be an opportunity to drive home the point that we need to solve conflicts and problems, not just hope they will go away if we ignore them.