Frustrated by “Communication and Concurrency”


The author of Communication and Concurrency, Robin Milner, is sadly dead, so I cannot expect him to produce a new edition with the faults removed, but perhaps a passing computer scientist will read this and avoid making the same mistake in their forthcoming and soon-to-be-seminal textbook.

Copy agent (image caption)

Why frustrated? Because Milner’s book has many exercises but no answers to them. It cannot possibly be that he did not know the answers, so why not publish them, even if only in a brief annotated form at the back?

For those going through the book as part of a taught course in computer science, the lack of answers may not matter too much. But for those of us struggling to get it right on our own, it is deeply frustrating.


Situation normal, all fscked up


Why, oh why, oh why? … Hopefully someone will now tell me the rant that follows is unnecessary.

But, here goes: an update of my Raspberry Pi failed about a week ago. As I then went to Paris with the family for a few days, I only got round to trying to fix it this weekend.

I managed to get the thing booting again by copying some files from the distributed boot partition into the boot partition on my RasPi, but on booting the networking failed (possibly because kernel modules it needed were missing as a consequence of the failed update – I am not sure). In any case, one of the biggest weaknesses of the RasPi is the flakiness of the system when it comes to power: a small glitch there is likely to cause corruption on the SD card – and that is what happened to me.

But, why oh why oh why does fsck.ext4 insist on a manual check? After an hour (with a figure of Robbie Burns weighing down the enter key) I had only got a little way through it and had to give up.

I am a grown-up. If I want to risk fsck corrupting my partition, that should be up to me – there should be an automatic option.

Update: fsck -y is what I needed.
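
For the record, and in case it helps anyone arriving here via a search engine, the full incantation was something like fsck.ext4 -y /dev/mmcblk0p2 – the device name here is only illustrative (yours is whatever your root partition happens to be), but the -y flag genuinely does answer ‘yes’ to every question fsck would otherwise stop and ask.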

Another reason why exercise keeps you younger?


Reading through a copy of the New Scientist from a few weeks back (the 2 February edition), I was struck by a comment in an article on the effects of sleep on the human body, from Nancy Wesensten, a psychologist at the Walter Reed Army Institute of Research in Maryland:

Sleeping deteriorates like everything else does as you age… People have more difficulty falling asleep, and that could account for the cognitive decline we see in normal ageing.

Until I started a vigorous exercise regime about 16 months ago, I really did find it difficult to fall asleep. Since then, while I still cannot, like my partner, more or less doze off as soon as my head hits the pillow, I generally no longer have a problem.

I have often seen claims made for exercise as a means of maintaining mental acuity – perhaps there is some substance to those claims and this is the reason?

Further thoughts on “The Architecture of Complexity”


Hierarchy of digital distractions (Photo credit: Emilie Ogez)

In his 1962 paper “The Architecture of Complexity”, Herbert Simon writes that social systems are often what he describes as “nearly decomposable” – by which he means (the words that follow are mine, not his) that paths through social hierarchies are narrow:

This is most obvious in formal organizations, where the formal authority relation connects each member of the organization with one immediate superior and with a small number of subordinates.

Taking the argument further he compares social organisations (and society as a whole) to crystals and other physical hierarchies, remarking:

In social as in physical systems there are generally limits on the simultaneous interaction of large numbers of subsystems. In the social case, these limits are related to the fact that a human being is more nearly a serial than a parallel information-processing system. He can carry on only one conversation at a time, and although this does not limit the size of the audience to which a mass communication can be addressed, it does limit the number of people simultaneously involved in most other forms of direct interaction … one cannot, for example, enact the role of “friend” with large numbers of other people.

This means, he says, that society as a whole will display “lower frequency dynamics” than groups of friends do – his parallel here is again with natural hierarchies, where, he says, the length of the bond or chain of interaction determines the “frequency of vibration”. In other words, the pace at which society changes is limited by the very large number of small interactions that must propagate across large social distances.

By now I am sure most of you can see the point I am going to make: social media changes all this – suddenly it is possible to be friends with a large number of people more or less simultaneously. In this case maybe we should see the events of, say, the Arab Spring as driven by social media not only because it escaped censorship but also because it radically shortened the social distance between those who were previously kept apart: atomised, to use another physical parallel.

Not a particularly original observation these last 18 months – but worth thinking about as an alternative explanation of the power of social media to that offered by, for instance, Clay Shirky in Here Comes Everybody: How Change Happens when People Come Together.

Shirky says the power of social media is that it lowers the cost threshold – it is not, in one example he cites, that there were no lay Catholics concerned about sexual abuse before the internet, but that they could not afford to communicate.

But if we reinterpret Simon, the change may be more fundamental – the powerful bonds of friendship can spread as never before, and the result could be rapid and radical change in society.

Watchmakers for evolution


Watchmaker, from Wikipedia

The sheer complexity of natural organisms is often cited as a “scientific” reason why evolution must be false – as here for instance (though when the fallacy of this argument is pointed out, the poster resorts to attacking people for being ‘ungodly’, thereby giving the game away) – the claim being that it is simply too much to expect such intricate arrangements to come into being at random.

Reading Herbert A. Simon’s classic 1962 paper “The Architecture of Complexity”, we can see this argument taken down using, perhaps mischievously, an analogy long favoured by creationists: that of a watchmaker.

Simon’s argument runs as follows: imagine two watchmakers making watches of equal quality, each watch built from 1000 separate parts, but using two different methods. Watchmaker A works hierarchically: she builds subassemblies of ten parts each, combines ten subassemblies into a larger assembly, and finally joins ten of those larger assemblies to make the watch. Watchmaker B takes a different approach, assembling all 1000 pieces in one ‘flat’ system.

Now, when either watchmaker gets a telephone call from a potential customer, they must put down the piece they are working on – and start that piece again from scratch once the phone is back down.

This means that when B gets a call he has to start building the whole watch again from scratch, while when A gets a call she only has to restart the current piece of the hierarchy. What are the outcomes for their businesses?

Well, let us say they can add a piece every minute. Then, obviously B takes 1000 minutes to make a watch.

But A takes more time: she makes 100 subassemblies of 10 parts each – 1000 minutes – then takes a further 100 minutes to combine these into 10 larger assemblies, and another 10 minutes to put those 10 together – a total of 1110 minutes, 11% longer.

But, in fact, A is far more productive than B, because every time a call comes through A is only set back 5 minutes on average – i.e. half a subassembly – plus the call time, whereas B is likely to lose much more.

Let us say both watchmakers get a call, on average, every 100 minutes of manufacturing time – so the probability of a call in any given minute is 0.01. Then A will get an average of 11.1 calls during the manufacture of each watch, adding 55.5 minutes (for ease of exposition we ignore the fact that this extra time itself increases the chance of further calls). So A produces a new watch every 1166 minutes or so (not enough to keep up with demand, we note, but that is another issue).

What of B? The chances that B will ever complete a watch are very low indeed: the probability that no call comes through in a run of 1000 minutes is 0.99^{1000} – roughly 0.00004. So over every 8-hour working day (480 minutes), on average A will output \frac{480}{1166} watches and B will output \frac{480}{1000} \times 0.00004 – in other words A will be more than 21,000 times more productive than B.
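
For anyone who wants to check the arithmetic, here is a minimal Python sketch – mine, not Simon’s – using the same simplifications as above (A loses five minutes per call on average, and we ignore calls that arrive during time already lost to earlier calls):

```python
# Simon's two watchmakers: a back-of-envelope check of the figures above.
p_call = 0.01                        # probability of a call in any given minute

# Watchmaker A: 100 subassemblies of 10 parts, then 10 assemblies of
# 10 subassemblies, then 1 final assembly of 10 pieces.
work_a = 100 * 10 + 10 * 10 + 10     # 1110 minutes of assembly
calls_a = work_a * p_call            # about 11.1 calls per watch
time_a = work_a + calls_a * 5        # about 1165.5 minutes per watch

# Watchmaker B: needs 1000 uninterrupted minutes to finish a watch.
p_complete_b = (1 - p_call) ** 1000  # about 0.0000432

rate_a = 480 / time_a                # watches per 8-hour day
rate_b = (480 / 1000) * p_complete_b

print(f"A: {rate_a:.4f} watches/day, B: {rate_b:.8f} watches/day")
print(f"A is roughly {rate_a / rate_b:,.0f} times more productive")
```

(Using the exact value of 0.99^{1000} rather than the rounded 0.00004 brings the ratio out at a little under 20,000 rather than 21,000 – the moral is unchanged.)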

The relevance to evolution is that the creationists are arguing that natural selection follows the path of B, when obviously it follows A. Living things are hierarchies – and as they evolve they do not have to start from scratch every time.

Indeed, the example suggests that repeatable natural processes will in general tend to generate hierarchies – because otherwise they would not be repeatable.

The paper contains a lot of other fascinating insights into the nature of hierarchies – some of which have been confounded by recent experience of social media. I may blog about some of those too in the coming days.

Why I would not want to fly in a Dreamliner (yet)


A Faraday cage in operation: the woman inside is protected from the electric arc by the cage. Photograph taken at the Palais de la Découverte (Discovery Palace). (Photo credit: Wikipedia)

The world’s Dreamliners are currently grounded while regulators and the manufacturer aim to sort out problems with the plane’s batteries – which power a heavy-duty electrical system that replaces the more traditional (and heavier) hydraulic controls found in other planes.

I imagine, and hope, that the battery problems can be sorted out – though the lithium-ion system chosen is notorious for overheating and fire risk – or “unexpected rapid oxidisation”, as an earlier (non-aviation) Li-ion battery fire problem was called.

But what worries me about the planes is a different issue: their outer shell is made of plastic – again considerably lighter than traditional aircraft materials, but lacking the properties of a Faraday cage.

The Faraday cage effect is what makes traditional airliners (and motor cars) safe from lightning strikes – lightning represents a terrific concentration of energy but, actually, very little charge – so when lightning strikes a sheet of metal, like a car or an airliner, the charge is spread across the conductor and the strike rendered safe (in contrast, poor conductors like human flesh burn up, which is what makes us so vulnerable).

Now, the Dreamliner has a metal substructure designed to replicate the effect of a Faraday cage but, having read a critical piece on this in the current edition of the New Scientist, I am not convinced it has been tested thoroughly enough to be relied on. Anyone who has flown through the heart of an electrical storm – as I did a few years ago coming out of Tbilisi – will understand just how essential it is that the Dreamliner’s electrical properties are fully dependable.

Update: I am a hopeless speller and, as was pointed out to me I mis-spelled ‘lightning’ throughout this the first time round. Apologies.

Two different ways to calculate r-ary Huffman Code


Calculating a binary Huffman code is easy: repeatedly amalgamate the two least probable symbols in the source until one remains, and then unwind the merges to read off the code.

But what of an r-ary Huffman code, for arbitrary r ?

Textbooks give two “different” methods, which puzzled me at first – until a little thinking (not too much, I admit) convinced me they are the same.

In a normal Huffman ‘compression’ we might have a source S of q symbols; for a code of base r we reduce S to S^{\prime} with q - (r - 1) symbols by merging r of the symbols in S into one.

To do this properly we need exactly r symbols left for the final merge, which means that if we start with N symbols we need N \equiv 1 \pmod{r - 1} .
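
Why that congruence? Each merge replaces r symbols with a single one, reducing the count by r - 1 , so (my spelling-out, though the derivation is entirely standard) after k merges we have N - k(r - 1) symbols; for the process to finish on exactly one symbol we need N - k(r - 1) = 1 for some integer k , which is precisely N \equiv 1 \pmod{r - 1} .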

The difference between the two methods suggested in textbooks is either positing enough additional symbols of probability zero that the enlarged N satisfies N \equiv 1 \pmod{r - 1} , or ensuring the first merge combines only s symbols, where (N - s) \bmod (r - 1) = 0 .

Both are, of course, functionally the same. For instance, with N = 8 and r = 4 , N \bmod (r - 1) = 2 , so we can begin by adding two symbols of probability zero, or make the first reduction of size s such that (N - s) \bmod (r - 1) = 0 – and we can see that, in this case, s = 2 .
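
As a concrete check of the equivalence, here is a minimal Python sketch of the first method (the function name, the tree representation and the example probabilities are all mine, not from any textbook). Because the dummy symbols have probability zero they are always merged first, so the first ‘real’ merge shrinks the source by exactly the s computed above:

```python
import heapq
from itertools import count

def rary_huffman(probs, r):
    """Sketch of an r-ary Huffman code via the dummy-symbol method."""
    tie = count()  # tie-breaker so the heap never compares the trees themselves
    heap = [(p, next(tie), sym) for sym, p in probs.items()]
    # Pad with zero-probability dummies until N ≡ 1 (mod r - 1), so the
    # final merge combines exactly r nodes.
    while (len(heap) - 1) % (r - 1) != 0:
        heap.append((0.0, next(tie), None))          # None marks a dummy leaf
    heapq.heapify(heap)
    while len(heap) > 1:
        children = [heapq.heappop(heap) for _ in range(r)]
        total = sum(p for p, _, _ in children)
        heapq.heappush(heap, (total, next(tie), [t for _, _, t in children]))
    code = {}
    def walk(tree, prefix):
        if isinstance(tree, list):                   # internal node
            for digit, child in enumerate(tree):
                walk(child, prefix + str(digit))
        elif tree is not None:                       # real (non-dummy) leaf
            code[tree] = prefix
    walk(heap[0][2], "")
    return code

# The example above: N = 8 symbols, r = 4 (the probabilities are made up)
probs = {"a": 0.30, "b": 0.20, "c": 0.15, "d": 0.12,
         "e": 0.10, "f": 0.06, "g": 0.04, "h": 0.03}
print(rary_huffman(probs, 4))
```

With N = 8 and r = 4 the function adds exactly two dummies, and the codeword lengths it produces are the same as method two’s short first merge would give; with r = 2 it reduces to the ordinary binary construction, since no dummies are ever needed.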