Updated: A day (wasted?) with Groovy

Update: After staying up late (past 2am, and that was after the clocks went forward), I found various bugs – I don’t know whether these reflect underlying changes in the language or the JVM in the interim, but it seems to work now (see the GIF of the Game of Life below). I’m also working on getting the jar built, so it becomes easier for others…

There has been a small revival of interest in the wonders of 8 bit computing lately and I thought now would be a good time to revivify the BINSIC project. But it has all proved to be a highly frustrating waste of time.

BINSIC – BINSIC Is Not Sinclair Instruction Code – was my 2012 project to build a DSL in Groovy that would allow users of modern computers to write and run ZX80/81 BASIC. I wrote it to explore whether it really was easier back then to write instructive programs exploring basic mathematical and scientific problems.

The first problem I had was that Groovy (and this behaviour wasn’t advertised as far as I could see) simply would not run code written in block capitals. So a DSL was essentially out and I had to write something that was more (much more) like an interpreter.

But it worked, mostly, and I had some fun with it, turned it into a jar file and forgot about it.

Unfortunately the machine hosting the jar file is gone and I never got around to trying to rebuild it – until today. And now I find the code from 2012 just won’t work.

If I try to run that code I get this:

Why code that ran a few years ago won’t run now is a mystery, but replacing the empty element in the list will get it to run – a bit – and this gets me something from the Game of Life I wrote in BASIC:

But again something that worked well 8 years ago now breaks the system:

On top of all that I seem unable to build a jar file on the command line, unable to install Eclipse, and unable to get NetBeans – which will let me install a Groovy plugin but not use it – to work properly.

Beards and spandrels

One of the least important ways in which the current world-wide crisis over covid-19 is going to affect many of us is the state it is going to leave our hair in. Barbers and hairdressers are closed or closing – either under orders, because custom has dried up or because concerns about staff and customer safety are forcing the decision.

Working remotely – if you are lucky enough to have work – means that personal grooming isn’t quite as important as before (hygiene, of course, is more important than ever).

So this week I didn’t shave for five days – perhaps the longest I have gone as an adult. As a result I grew a decent amount of fur (most of it white, I’m afraid) and when I shaved it off I was set to wondering why the bristles on different parts of the face, while generally of a similar length, were of different stiffness.

On the cheeks the hairs were softer (and all white too), while on the chin they were stiffer and on the upper lip very stiff indeed (and also dark).

I mused publicly on what selection criteria had created this:

And sure enough a biologist – my friend (Dr.) Tim Waters – replied and questioned why I thought it might be an evolutionary adaptation at all, and referred me to this paper – The Spandrels of San Marco and the Panglossian Paradigm: A Critique of the Adaptationist Programme. If you have half an hour or a bit more to spare I really recommend it – there are a few terms in there with which I wasn’t familiar but the core argument is very accessible and the paper is brilliantly written.

Its core metaphor is the spandrel – the triangle created by placing an arch below a straight line (or an upside-down arch above a line). The authors (Gould and Lewontin) suggest that far too many evolutionary biologists would treat whatever was used to fill the triangle as having been selected for evolutionary advantage when, actually, it’s just a by-product of a bigger selection decision (e.g., to have a dome resting upon arches).

The evolutionary-adaptation-above-all idea is firmly embedded in public consciousness – in large part thanks to the brilliant popularisations by Richard Dawkins – but Gould and Lewontin cut through a lot of that like a knife through butter. I’m not qualified to make a judgement on who is right here, but it’s a fascinating debate.

Progress is not the only option

The global pandemic of covid-19 is, in its way, a triumph for the scientific method: scientists warned for a long time of the danger of a pandemic caused by a novel virus and so it has come to pass.

But in the crisis we shouldn’t forget all the other issues science warns us about – and here’s something else to cheer you up: even a ‘limited’ nuclear war in (for Europeans and Americans) far off parts of the world could cause a decade of starvation.

The concept of a nuclear winter isn’t a new one – and if you’ve ever watched Threads you are unlikely to be under any illusions about just how devastating the climate collapse that would follow a full-scale nuclear exchange would be.

But even a ‘limited’ nuclear exchange between India and Pakistan – two countries which have engaged in full-scale war three times since independence in 1947 and where incidents of military conflict are frequent – would be devastating to global food supplies according to a new study published in the Proceedings of the National Academy of Sciences in the US.

“A regional nuclear conflict would compromise global food security” is based on a scenario of 100 strikes of 15 kilotonnes each (i.e., similar in yield and number to two British Vanguard-class submarines firing off all their missiles). The authors estimate that the soot from the fires created would lower the global temperature by 1.8°C, and that this would do much more damage than a 1.8 degree warming caused by carbon dioxide, because the carbon dioxide would also encourage plant growth.

Their abstract reads:

A limited nuclear war between India and Pakistan could ignite fires large enough to emit more than 5 Tg of soot into the stratosphere. Climate model simulations have shown severe resulting climate perturbations with declines in global mean temperature by 1.8 °C and precipitation by 8%, for at least 5 y. Here we evaluate impacts for the global food system. Six harmonized state-of-the-art crop models show that global caloric production from maize, wheat, rice, and soybean falls by 13 (±1)%, 11 (±8)%, 3 (±5)%, and 17 (±2)% over 5 y. Total single-year losses of 12 (±4)% quadruple the largest observed historical anomaly and exceed impacts caused by historic droughts and volcanic eruptions. Colder temperatures drive losses more than changes in precipitation and solar radiation, leading to strongest impacts in temperate regions poleward of 30°N, including the United States, Europe, and China for 10 to 15 y. Integrated food trade network analyses show that domestic reserves and global trade can largely buffer the production anomaly in the first year. Persistent multiyear losses, however, would constrain domestic food availability and propagate to the Global South, especially to food-insecure countries. By year 5, maize and wheat availability would decrease by 13% globally and by more than 20% in 71 countries with a cumulative population of 1.3 billion people. In view of increasing instability in South Asia, this study shows that a regional conflict using <1% of the worldwide nuclear arsenal could have adverse consequences for global food security unmatched in modern history.

The impact would be global:

Impacts on global maize production

Why bring it up now, just as we are facing another crisis of deep and lasting significance? Because nothing breeds conflict more than internal stress in a state. The impact of covid-19 on India or Pakistan will certainly not be positive and if it pushes either state towards conflict that matters for all of us.

More than that, the pandemic should be the opportunity to drum home the point that we need to solve conflicts and problems, not just hope they will go away if we ignore them.

Coin tossing conundrum

This is from The Mathematics of Various Entertaining Subjects: Research in Recreational Math.

It left me so puzzled that it took me a while to get my head around it.

It’s a game – Flipping Fun – and the idea is that participants each pick a sequence of three coin tosses (e.g. THH, HHH, THT and so on) and the winner is the person whose sequence comes up first.

The mind bending part of this is:

  • The coin is fair so the odds of it turning up as heads or tails on any toss are the same (1/2).
  • Thus any sequence of a given length is equally likely – i.e. if the coin is tossed ten times then HHHHHHHHHH is as likely as HTHTHTHTHT or any other sequence.
  • Despite both of the above facts some sequences of three tosses are much more likely to win than others.

To illustrate this point, think of HHT against TTT. Here HHT is much more likely to win – because once two heads have been tossed (odds 1/4) then it is only a question of waiting for a tail to come up, as tossing a further head keeps the HH sequence alive.

So, for instance, within five tosses the probability that HHT has already won is 12/32 (3/8), but the probability that TTT has won inside five tosses is only 7/32.
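The full-game odds can be computed exactly with Conway’s “leading numbers” algorithm for this kind of game (often called Penney’s game). A minimal Python sketch – the function names are mine, not from the book:

```python
def leading(x, y):
    """Conway leading number: sum of 2**(k-1) over every k where the
    last k tosses of x match the first k tosses of y."""
    return sum(2 ** (k - 1)
               for k in range(1, len(y) + 1)
               if x[-k:] == y[:k])

def p_beats(b, a):
    """Probability that sequence b appears before sequence a in a run
    of fair coin tosses, via Conway's odds formula:
    odds(b beats a) = (L(a,a) - L(a,b)) : (L(b,b) - L(b,a))."""
    num = leading(a, a) - leading(a, b)
    den = leading(b, b) - leading(b, a)
    return num / (num + den)

print(p_beats("HHT", "TTT"))  # 0.7   - HHT beats TTT with odds 7:3
print(p_beats("THH", "HHH"))  # 0.875 - the famous 7:1 result
```

This also exposes the game’s non-transitive twist: whatever sequence your opponent picks, there is always another sequence that beats it.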

What computer output is supposed to look like

Conv net attempting to classify chess pieces

This month is the 41st anniversary of me coming face-to-face with a “micro-computer” for the first time – in WH Smith’s in Brent Cross. I am not truly sure how I knew what I was looking at (beyond, I suppose, the shop’s own signage) – because at that time not even “The Mighty Micro” – ITV’s groundbreaking (and exceptionally far-sighted) TV series – had yet been broadcast, but I was instantly smitten.

If you remember the time, then you’ll recall computers were very basic and only ran BASIC (but you could still do a lot with that). Black and white (or green and white) graphics were the standard (unless you were a rich kid and owned an Apple II).

But that didn’t stop us – my brother and I got a Sinclair ZX80 in 1980 (even if you ordered early the wait was long) and started writing code straight away (there wasn’t much choice if you wanted to get some use from the device).

The best code was mathematical and computationally intensive (as far as 1KB of RAM on a board with an 8 bit 3.25MHz CPU would allow that is) yet managed to combine that with rapid screen updates – something that was difficult on a ZX80 because computation blanked the screen (a ROM update and an interrupt driver – we copied the machine code bytes into every program – later fixed that.)

So 41 years later the code I am now running – shown above – perfectly fits the bill for “proper computing”. It is a computationally intensive – essentially multiple matrix multiplications – convolutional neural network that is attempting to classify images of chess pieces of the sort commonly seen with published chess puzzles. But what I love most of all is the fast flickering digits (the nine classes) and small images (the output of the first two layers of the 50 filters that are at the heart of the network).
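The core operation of a conv net layer – sliding a small filter across an image and taking a dot product at each position – is simple to state. A minimal sketch (in Python for brevity, though the project itself is in C++; strictly speaking this is cross-correlation, which is what most conv net code actually computes):

```python
def conv2d_valid(image, kernel):
    """'Valid' 2D cross-correlation: slide the kernel over the image
    with no padding, taking a dot product at each position."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    return [[sum(image[r + i][c + j] * kernel[i][j]
                 for i in range(kh) for j in range(kw))
             for c in range(iw - kw + 1)]
            for r in range(ih - kh + 1)]

# A tiny 3x3 "image" and a 2x2 diagonal filter
img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
k = [[1, 0],
     [0, 1]]
print(conv2d_valid(img, k))  # [[6, 8], [12, 14]]
```

A real network applies many such filters (50 in this case) and learns their weights; the inner loop is why the whole thing reduces to lots of matrix multiplication.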

This is the second time I’ve had a go at this personal project and I’ve made some progress – but it’s been hard going. Most conv net users seem to have long since moved on from C++ (which I am using) to Python libraries like TensorFlow – so it’s not even that I feel I am part of a strong community here.

Lots of subtle (that’s my story and I’m sticking to it) programming traps – like the fact that the STL map class stores its entries in key order, however they were added (sounds obvious when you say it like that – why would it not keep such a lexical order?) – while I had simply assumed that the entries kept the order they were added in. (This was today’s discovery.)
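For anyone who hasn’t hit this: C++’s std::map is a sorted container, so iteration always follows key order regardless of insertion order. A quick illustration of the two orderings (in Python, where dicts preserve insertion order and key order has to be asked for explicitly – the entries here are made-up examples):

```python
# Entries deliberately inserted out of key order
entries = [("zebra", 3), ("apple", 1), ("mango", 2)]

d = dict(entries)   # Python dict: iteration keeps insertion order
print(list(d))      # ['zebra', 'apple', 'mango']

# std::map-style behaviour: iteration follows key order
print(sorted(d))    # ['apple', 'mango', 'zebra']
```

If insertion order matters in C++, the usual choices are a std::vector of pairs or an extra index alongside the map.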

But if it was easy to write these things then it would be no fun.