From “The Foundations of Mathematics”


This is a fragment of a question from the first chapter of “The Foundations of Mathematics” (Amazon link):

Record whether you think the following statements are true or false:

(a) All of the numbers 2, 5, 17, 53, 97 are prime.

(b) Each of the numbers 2, 5, 17, 53, 97 is prime.

(c) Some of the numbers 2, 5, 17, 53, 97 are prime.

(d) Some of the numbers 2, 5, 17, 53, 97 are even.

 

For what it’s worth, I think all of these statements are true.
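If you want to check the quantifiers mechanically, here is a small Groovy sketch (Groovy because that is what I use elsewhere on this blog; the isPrime closure is just naive trial division, which is fine for numbers this small). Groovy’s every plays the role of “all/each”, and any plays the role of “some” – which in mathematical usage means “at least one”, so a single even number, 2, is enough to make statement (d) true:

    def numbers = [2, 5, 17, 53, 97]
    // naive trial division - plenty for numbers this small
    def isPrime = { int n -> n > 1 && !(2..<n).any { n % it == 0 } }

    assert numbers.every(isPrime)         // (a), (b): all/each of them are prime
    assert numbers.any(isPrime)           // (c): some of them are prime
    assert numbers.any { it % 2 == 0 }    // (d): some are even - 2 is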

Delays in a binary memory tree queue


This is a bit niche, but I spent a fair bit of this weekend working out an algorithm to calculate how long a memory request packet takes to traverse a binary tree (from a leaf to the root). I have now written a Groovy script (testable in your browser at https://groovyconsole.appspot.com/script/5109475228778496 – click on ‘edit’ and then ‘run’) to calculate the results.

(I am sure that this has been done before – many times – but I couldn’t find the algorithm on a quick search, and then the desire to solve this myself took over.)

The problem is this: memory request packets enter the tree at a leaf and then have to cross a series of multiplexors until they reach the root (which is where the tree interfaces with the memory supply). Each multiplexor (mux) has two inputs and one output, so taken together the muxes form a binary tree.

As request packets could arrive at both inputs of a mux at the same time, there has to be a decision procedure about which packet progresses and which is delayed. The simple choice is to favour packets on either the left or the right input (in my case I went with the right). The question is then: what are the average and maximum delays for a request packet?
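To give a flavour of the calculation, here is a minimal Groovy sketch of such a tree. This is my own reconstruction for this post, not the linked script, and it assumes that “full load” means one packet waiting at every leaf at cycle zero – so the numbers it produces need not match those quoted below:

    int leaves = 256                                 // leaves of the mux tree
    def queue = (0..2 * leaves - 1).collect { [] }   // heap layout: children of node i are 2i and 2i+1

    // assumed full load: one packet per leaf, tagged with its leaf index
    (leaves..<2 * leaves).each { queue[it] << it }

    def delivered = [:]                              // packet id -> cycle it left the root
    int cycle = 0
    while (delivered.size() < leaves) {
        cycle++
        if (queue[1]) delivered[queue[1].removeAt(0)] = cycle   // root hands one packet to memory
        (1..<leaves).each { i ->                                // internal muxes, top-down
            def src = queue[2 * i + 1] ?: queue[2 * i]          // right input wins the tie
            if (src) queue[i] << src.removeAt(0)
        }
    }
    println "last packet delivered at cycle ${delivered.values().max()}; mean delivery cycle ${delivered.values().sum() / leaves}"

Processing the internal nodes top-down means a packet moves at most one level per cycle, which is the behaviour being modelled.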

So, if you muck about with the script you’ll see that the absolute maximum delay on a 256-leaf tree (e.g., on a Bluetree memory tree connected to a 256-tile NoC) is 495 cycles, while the average maximum delay (i.e., for a leaf in the middle of the tree) is 248 cycles.

In contrast, if the load is just 1%, these figures fall to about 3 and 1.5 cycles.

There is a flaw here, though: this all assumes that no new packets enter the system while the original request packet is traversing it. In a dynamic system, even with a load as low as 1%, that is highly unlikely – packets that enter the tree “to the right” of the original request could add to the delay if they chain with packets already present.

Springer and Scala


I need to learn the Scala programming language. Unfortunately the University of York’s library does not seem to have an electronic copy of “Programming in Scala” (Amazon link), but it does have an electronic copy of Springer’s “A Beginner’s Guide to Scala” (Amazon link). This is a very poor book, though: it refers back to examples that don’t exist, offers a very confusing explanation of what object-orientation is and, to cap it all, illustrates examples with pseudocode rather than Scala itself.

Of course, it’s not the worst example you’ll find from Springer – “Unleash the System on Chip using FPGAs…” is easily the worst book I have ever seen in print (see my review on Amazon here) – and they also publish many useful books, conference proceedings and so on. But they appear to have close to a monopoly on many aspects of computer science publishing and use it ruthlessly.

If it weren’t for other factors I’d suggest the European Commission needs to take a close look at them. Hardly worth it in the UK’s case these days, though.

Bots for #Brexit


First thing: I think the UK’s vote to leave the European Union is a calamitous mistake – the worst in foreign policy since Suez in 1956, and quite possibly second only to Munich in the last century.

What I want to write about here, though, is the way in which Leave campaigners (in the broadest sense) leveraged Twitter bots during the campaign. A report now available on Arxiv (here) suggests that bots generated over three times as many pro-Brexit tweets (97,431) as pro-Remain messages (28,075) in a one-week period in June.

(The report also suggests a slightly higher proportion of pro-Remain tweets were bot-generated – 15.1% – than of pro-Leave ones – 14.7%.)

Did it matter? The paper suggests bots had “a small but strategic” impact. In a referendum of huge importance, decided by a narrow margin, even that could be very important.

My personal experience was that the online field was much more important in the Scottish referendum, where the “Yes” campaign (in favour of Scotland leaving the UK) were very effective in mobilising online resources for people seeking to “research” the question.

One respect in which the two referendum campaigns were similar: in both, the pro-change side accused its opponents of being “Project Fear” and used online resources to repeatedly reassure people that they need not fear the consequences of a Yes/Leave vote.

Happily, in Scotland, disaster was averted and so the accusation of Project Fear merely lingers. Over the EU it has now become Project I Bloody Well Told You So.

Is cosmology completely broken?


Thirty years ago, when I was taught cosmology as an undergraduate, it felt pretty much like a subject that was close to being fully described: indeed this was the era when Stephen Hawking could announce that we were close to a “theory of everything”.

In simplified form the cosmology was this: the universe (and there was just one) was created 13 billion years ago in a “Big Bang”, the echo of which we could see in the cosmic microwave background (whose redshift allowed us to put an age on the universe itself). Since then the universe had been expanding, and the key question was whether there was sufficient mass to halt this expansion (i.e., whether gravitational attraction would overcome the impulse of the Bang) or whether it would expand for ever. Contemporary observations suggested that the universe’s mass was close to the critical value separating these two outcomes, and the big issue seemed to be getting better observations to settle the question. Beyond that, cosmology was not very open…
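For reference, that dividing line comes from the standard Friedmann equation – written here, as we were taught it, for a matter-only universe with no cosmological constant, which is exactly the picture dark energy later upset:

    \[ H^2 = \left(\frac{\dot a}{a}\right)^2 = \frac{8\pi G}{3}\rho - \frac{kc^2}{a^2}, \qquad \rho_c = \frac{3H^2}{8\pi G} \]

If the density \(\rho\) exceeds the critical density \(\rho_c\) the universe eventually recollapses; if it falls short, expansion continues for ever; at exactly \(\rho_c\) it sits on the boundary between the two.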

Core to the cosmology we were taught was a very simple yet extremely powerful idea: the cosmological principle. Namely, that at a sufficiently large scale and at the same point in time, the universe looks the same in every direction when seen from any point. In fact this idea was treated as close to axiomatic.

(Of course, without some form of cosmological principle, cosmology itself becomes pretty metaphysical – if our observations and experiments have no general validity they cannot tell us much about the universe.)

Three decades later, though, and cosmology is something of a mess. Our observations suggest not only that visible matter is a minority of all matter, but that matter (including the unseen and so far undetected “dark matter”) is a minority of the universe’s matter-energy (the two being equivalent, as E=mc^2 famously tells us), and that a “dark energy” dominates and is actually accelerating the universe’s expansion.

Dark energy is little more than a term in a mathematical equation – something that reminds me all too much of phlogiston – but it’s fair to say that most cosmologists are satisfied that it exists.

But not all of them.

As an excellent and highly accessible article in the New Scientist makes clear, a number are arguing that the problem is the cosmological principle – or at least our rigid application of it to our observations – and that it is leading us astray. For if the universe were fundamentally lumpy, and not smooth at a large scale, it could create the “illusion” of dark energy. Put simply: if some bits of the universe had less matter in them, they would expand faster (or, in general relativistic terms, have greater negative curvature) because gravity would not hold them back – but if we did not account for that in interpreting our observations, we would instead assume it was a general effect that applied everywhere.

The advocates of standard cosmology do not deny that the curvature of the universe affects the passage of the light we see when we observe it – but they respond that the homogeneity of the universe at a large scale (i.e., the cosmological principle) means that the patches of negative curvature are cancelled out by the patches of positive curvature, so the overall impact on our observations is neutral.

The impact of this clumpiness/emptiness on our observations is called “backreaction”, and the key question for the dark energy sceptics is whether it leaves traces in observations that we misinterpret as pointers to dark energy.

The debate, as so often in scientific research, is quite brutal – if you say someone’s conclusion is “unphysical” it is pretty much like accusing them of being no good at their job…

The abstract of the paper “Is there proof that backreaction of inhomogeneities is irrelevant in cosmology?” reads:

No. In a number of papers Green and Wald argue that the standard FLRW model approximates our Universe extremely well on all scales, except close to strong field astrophysical objects. In particular, they argue that the effect of inhomogeneities on average properties of the Universe (backreaction) is irrelevant. We show that this latter claim is not valid. Specifically, we demonstrate, referring to their recent review paper, that (i) their two-dimensional example used to illustrate the fitting problem differs from the actual problem in important respects, and it assumes what is to be proven; (ii) the proof of the trace-free property of backreaction is unphysical and the theorem about it fails to be a mathematically general statement; (iii) the scheme that underlies the trace-free theorem does not involve averaging and therefore does not capture crucial non-local effects; (iv) their arguments are to a large extent coordinate-dependent, and (v) many of their criticisms of backreaction frameworks do not apply to the published definitions of these frameworks. It is therefore incorrect to infer that Green and Wald have proven a general result that addresses the essential physical questions of backreaction in cosmology.

 

Is Groovy dying?


[Image: Logo of the Groovy project (photo credit: Wikipedia)]

A few years ago, on my Computer Science MSc, there was something of a mini-revolt as some of the students – already working as developers – complained that the object-orientated design course was being taught in Groovy, a JVM-based language that is, in effect, a dynamic extension of static Java. There were no jobs in Groovy, they said, so why were we being taught it?

I wasn’t one of them. I wasn’t (and I am not) working as a developer, and so Groovy – which crosses the boundary between Java’s imperative and Scala’s functional approaches – was interesting and powerfully expressive. But, yes, it was a bit of a niche.

I have come back to Groovy now because, for my PhD, I want to write a more powerful and dynamic NoC simulation than has proved possible in C++. Groovy has the tools – especially closures – that make it easy to write good DSLs, and so it was a natural choice.
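To show the sort of thing I mean – this is purely illustrative, with made-up names (NocBuilder, NodeSpec, connectsTo) rather than anything from my actual simulator – closures with delegates make a little configuration DSL almost free:

    // a hypothetical builder for wiring up a NoC topology
    class NodeSpec {
        String name
        List links
        void connectsTo(String target) { links << [name, target] }
    }

    class NocBuilder {
        List links = []
        void node(String name, Closure cfg) {
            cfg.delegate = new NodeSpec(name: name, links: links)
            cfg.resolveStrategy = Closure.DELEGATE_FIRST
            cfg()
        }
    }

    // the DSL in use: bare method calls inside each closure hit the delegate
    def noc = new NocBuilder()
    noc.node('tile0') { connectsTo 'mux0' }
    noc.node('tile1') { connectsTo 'mux0' }
    noc.node('mux0')  { connectsTo 'root' }
    assert noc.links == [['tile0', 'mux0'], ['tile1', 'mux0'], ['mux0', 'root']]

The delegate trick is what lets the configuration blocks read as declarations rather than method calls on an object.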

But the Groovy landscape seems barren. As I write, I haven’t been able to make any progress on my code because it seems a Java update broke Groovy support in Eclipse, and the infrastructure for Groovy support through http://groovy-lang.org appears to have collapsed.

I have asked a question on Stack Overflow (http://stackoverflow.com/questions/37363077/groovy-eclipse-reporting-bug-following-java-update) but traffic is light.

Will affluence replace Christianity?


[Image: Map of the distribution of Christians of the world (photo credit: Wikipedia)]

The current edition of the New Scientist contains a fascinating, if (I think) ultimately flawed, article on the rise – and putative fall – of moralising religions such as Christianity.

Nicolas Baumard, an evolutionary psychologist at the École Normale Supérieure, begins by asking how Jesus went from dying on the cross with a few dozen followers to being celebrated, inside four centuries, as the central figure in the official religion of the Roman Empire – and reasons that this is because the moralising religion of Christianity suited the evolutionary/ideological needs of the Empire’s elite at a moment of profound societal transition.

Christianity, reasons Baumard, was very different from the religions it replaced because it emphasised rewards in the afterlife for morally correct behaviour, as opposed to the here-and-now material focus of the ancients’ sacrificial approach. This, he argues, reflects a process seen in evolutionary psychology: when resources are scarce, organisms pursue a “fast” psychology, seeking immediate rewards, both sexual and material, and eschewing longer-term approaches even if those might bring bigger rewards – if you risk dying young, you sow your wild oats quickly.

In environments where resources are more plentiful, a “slow” psychology dominates – Baumard gives the example of falling birth rates and older parents in affluent societies: babies get more care and attention in wealthier homes.

The key here, he argues, is that around 2,500 years ago humans in the Eastern Mediterranean began to enjoy better, more affluent lifestyles – as measured by the proxy of energy use, per capita consumption rose from around 15,000 kcal per day to something over 20,000.

Such affluence was not evenly spread, of course, and those who could afford to practise a “slow” lifestyle were threatened materially and sexually by the continuing “fast” livers – so it suited rulers to promote an ideology and religion that encouraged “slow” living.

Christianity, argues Baumard, was not the only sign of this – the Augustan turn towards morality was another symptom.

So what do I think are the flaws in this? Well, firstly, it does not really explain the first 350 years of Christian growth. Christianity is estimated to have grown by 40% a decade in its first two centuries, and there seems to be good evidence that the new religion had a wide appeal across all social strata, not simply among the well-off. (I am discounting the idea that the religion grew because of divine providence – after all, Islam could make exactly the same, inherently unfalsifiable, claim.)
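It is worth pausing on what 40% a decade compounds to – a quick back-of-the-envelope check (in Groovy, for consistency with the rest of this blog) shows that two centuries of it is roughly an 840-fold multiplication, comfortably enough to turn thousands of adherents into millions:

    // 20 decades of 40%-a-decade growth
    println Math.pow(1.4, 20)   // ~836.7: roughly 840-fold over two centuries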

As Baumard makes clear, exhortations to morality were nothing new – Augustus’s claim to be founding a new golden age of morality and honour, strongly supported by his propagandist poets, is the most obvious example. But that ideology seems to have been about suppressing revolutionary agitation after a long period of civil war and upheaval – Rome’s need for stability around the turn of the millennium was much more immediate than any change in long-term economics.

Then we have the present day. Baumard suggests that as affluence spreads, the need to condemn the remaining “fast” livers will decline and moralising religion will fade too. Yet the world’s most prosperous country, the United States, is significantly more religious than almost anywhere in Europe.

To borrow a term from the Marxists, the argument seems to ignore the relative autonomy of ideology: in other words, this religion spread because people liked what it said as much as because it reflected a material change in circumstances.