Saving a computer with Xfce


Regular readers will know of my contempt for Ubuntu Linux's standard "Unity" interface. Sadly, I could find no simple way to transition to the Mint distro (which keeps Ubuntu's simplicity but ditches the abomination that is Unity), and so I thought I had no choice but to live with it.

But, bluntly, Unity was making the computer I am typing this on all but unusable – it was like a trip back 15 or more years in computing performance – thrashing, long delays, the whole "run Windows 3.1 on a 640KB box" experience. Unity had to go, or the laptop (a vintage machine a long way from the top of the range, but with 2GB of RAM and a dual-core Athlon TK-57 not quite ready for the scrap yard) had to go.

In desperation this morning I installed Xfce (Xubuntu-desktop) – I wish I had done that years ago. The computer is usable again and I get to work with a clean and entirely functional desktop.


Some questions about group theory


I am trying to read too many books at the moment, and one of them is Keith Devlin’s fascinating, but occasionally infuriating The Language of Mathematics: Making the Invisible Visible.

The book skates through mathematical concepts at a dizzying speed – often pausing only briefly to explain them. It is allowed to do this: it is not a textbook. I wish I had read it earlier, because it does at least explain the bare background to some concepts I have come across in the last year while reading other maths books – such as Fermat's method of infinite descent.

But it is sometimes a little imprecise – at least I think so – in its language. And so it sometimes confuses almost as much as it explains – and this is where I come to group theory.

Now, before I read Chapter 5 of the book I knew that groups were like sets, except they were not the same. And I had – while reading Higgs (another of the too many books right now) – come across the (essentially unexplained in that book) concept of the "symmetry group" when discussing sub-atomic particles. Devlin's book brilliantly and effortlessly explains what groups are, and does so, handily enough, through the question of transformational symmetry.

But this is where the questions begin.

Let us examine the case of a (unmarked) circle. As the book states:

The transformations that leave the circle invariant are rotations round the center (through any angle, in either direction)

Thus there are surely an infinite number of these.

The book then defines three conditions for a group (and later a fourth condition for an abelian group):

G1: For all x, y, z in G, (x * y) * z = x * (y * z) (where * is an operation)

G2: There is an element e in G such that x * e = e * x = x for all x

G3: For each element x in G, there is an element y in G such that x * y = y * x = e (where e is as in G2)

We can see that G1 is a stipulation of associativity, G2 of the existence of an identity element, and G3 of an inverse for each element.

(The condition for an abelian group is that of commutativity, i.e., that x * y = y * x for all x, y in G – but that is not particularly relevant here.)
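To make the axioms concrete, here is my own worked sketch (not the book's notation): write R_θ for the rotation of the circle about its centre through angle θ, and let * be composition, so that R_θ * R_φ = R_{θ+φ}. Then:

```latex
% The rotation group of the circle, checked against G1-G3.
% My own notation: R_theta is rotation through angle theta, * is composition.
\[
\text{G1:}\quad (R_{\theta} * R_{\phi}) * R_{\psi}
  \;=\; R_{\theta+\phi+\psi}
  \;=\; R_{\theta} * (R_{\phi} * R_{\psi})
\]
\[
\text{G2:}\quad e = R_{0}, \qquad R_{\theta} * R_{0} = R_{0} * R_{\theta} = R_{\theta}
\]
\[
\text{G3:}\quad R_{\theta} * R_{-\theta} \;=\; R_{-\theta} * R_{\theta} \;=\; R_{0} \;=\; e
\]
```

Note that the inverse demanded by G3 is R_{−θ} – a different element for each rotation (other than R_0 itself) – even though every one of these rotations leaves the unmarked circle looking identical.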

So back to our circle. The book states that there is precisely one inverse transformation for each element – i.e., that y in G3 is unique for each x. But in our circle, how can this be so? Is not each and every transformation its own inverse as well as the inverse of every other transformation? Because the unmarked circle is, by definition, exactly the same – indeed, doesn't this lead us to conclude that there is only one transformation of the circle, namely the identity transformation (i.e., the one where we do nothing)? At least it feels like a contradiction to me…

Which then takes me on to the case of the completely irregular shape. This has a symmetry group with a membership of one, namely the identity transformation, but then the book (and this is, I suspect, a sloppy piece of wording that fails to take account of degenerate cases, rather than a genuine contradiction) states:

Condition G2 asserts the existence of an identity element. Such an element must be unique, for if e and i have the property expressed by G2 then … e = e * i = i

Which is surely precisely what we do have!

Now, I don't for an instant think I have ripped a hole in the fabric of mathematics – I just think I need this explained to me a bit more clearly: how many rotational (or reflectional) transformations are in the symmetry group of an unmarked circle, and presumably there is no problem with e = i in single-member groups? Can anyone help?

Update: Ian Peacock (@iancpeacock) makes the point to me that, of course, all the symmetry transformations of the unmarked circle look the same – that is the point. So I guess that knocks down the contradiction between infinite and one – and presumably we can put down the book's claim about the identity being unique to sloppy wording – after all, if a group has a single member, that member is of course unique?

Building a model


SVG graph illustrating Amdahl's Law (Photo credit: Wikipedia)
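For reference (my addition, not part of the original post), the quantity the figure plots is the standard Amdahl's law speed-up: with a fraction P of the work parallelisable across N processors and the remaining (1 − P) forced to run serially,

```latex
% Amdahl's law: speed-up S(N) on N processors when a fraction P of the work
% can be parallelised and the rest must run serially.
\[
S(N) \;=\; \frac{1}{(1 - P) + \dfrac{P}{N}},
\qquad
\lim_{N \to \infty} S(N) \;=\; \frac{1}{1 - P}.
\]
```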

Had a good discussion with my supervisor today – he essentially said, in so many words, "expect to produce nonsense for 18 months" (meaning experimental results that do not seem very useful). This was helpful, as it put my worries about the last two weeks of tinkering at the edges of building the first "experiment" – a logical model of a NoC – into perspective.

The work goes on.

(And a new book arrived – Using OpenMP: Portable Shared Memory Parallel Programming. Having consciously avoided "middleware" when writing my QD, I decided I did actually need to know about it after all.)
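For anyone wondering what OpenMP actually looks like, here is a minimal sketch of my own (a toy example, not taken from the book): a single pragma turns an ordinary loop into a parallel one, with a reduction clause to keep the partial sums separate.

```c
/* A minimal OpenMP sketch (my own toy example): sum an array in parallel.
   Compile with, e.g., gcc -fopenmp sum.c -o sum */
#include <stdio.h>
#include <omp.h>

int main(void)
{
    enum { N = 1000000 };
    static double a[N];
    double sum = 0.0;

    for (int i = 0; i < N; i++)
        a[i] = 1.0;

    /* The pragma asks the runtime to split the loop across threads;
       the reduction clause gives each thread its own partial sum,
       combined at the end. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < N; i++)
        sum += a[i];

    printf("sum = %f (up to %d threads)\n", sum, omp_get_max_threads());
    return 0;
}
```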

Mail on Sunday forced to admit its climate change article flawed


The Mail on Sunday has published this “clarification” over last week’s claims about global warming:

 

An original version of this article sought to make the fairest updated comparison with the 0.2C warming rate stated by the IPCC in 2007.

It drew on the following sentence in the draft 2013 summary: ‘The rate of warming over the past 15 years… of 0.05C per decade is smaller than the trend since 1951, 0.12C per decade.’ This would represent a reduction in the rate of warming by a little under one half.

But critics argued that the 0.2C warming rate in the 2007 report relates only to the previous 15 years whereas the 0.12C figure in the forthcoming report relates to the half-century since 1951. They pointed out that the equivalent figure in the 2007 report was 0.13C.

This amended article compares the 0.05C per decade observed in the past 15 years with the 0.2C per decade observed in the period 1990-2005 and with the prediction that this rate per decade would continue for a further 20 years.

A sentence saying that the IPCC now projects warming by 2035 to be between 0.4 and 1.0C, which was reproduced accurately from the leaked document, has been deleted, following representations that these figures were an IPCC typographic error.

I think it is a good thing that they are willing to admit they got it wrong – even if they are now moving the goalposts in their attempt to justify their claim that scientists got it all wrong.

“Dreaming in Code” – a review


I did not actually read Dreaming in Code: Two Dozen Programmers, Three Years, 4,732 Bugs, and One Quest for Transcendent Software – I listened to it as I pounded treadmills and pulled cross-trainers and so on in the gym.

1-2-3 boingmash… Mark Frauenfelder, Xeni Jardin, Cory Doctorow and Mitch Kapor. (Photo credit: Wikipedia)

That ought to be a giveaway that it doesn't actually contain any code or maths or anything else that might require dedicated concentration, but that does not mean it is not worth reading (or listening to) if you are a programmer, manage programmers, or are in some way responsible for the development or purchase of software (it is plain that few, if any, people at the DWP have read this book, given their travails over the "Universal Credit" project – someone should stick a copy in each minister's red box pronto).

I have never worked as a professional software developer – though I have written code for money – but I still found this book had a lot of insight; it even manages to explain things such as the halting problem and infinite recursion in a way that non-computer-scientists are likely to grasp, without boring those of us who know what these are.

The book is incomplete, though, in that it was written before what looks like the final collapse of the project it describes – the Chandler PIM – in 2008/9, when founder Mitch Kapor withdrew. Chandler sounded like a great idea (like Universal Credit?) but, as the project drags on and on, one begins to wonder what on earth the development team were up to for most of the time they worked on it.

Well worth reading.

One problem with the audio though – I know the American fashion is to mispronounce French words, and I don't want to sound like a European prig (after all, this book is about a vital technology in which the US leads the world) – but it goes too far when Albert Camus's name is repeatedly made to rhyme with bus!

Virtual memory and a new operating system


Block diagrams of a single AsAP processor and the 6×6 AsAP 1.0 chip (Photo credit: Wikipedia)

This is going to be one of those blog posts where I attempt to clarify my thoughts by writing them down … which also means I might change my mind as I go along.

My problem is this: I have elected, as part of my PhD, to explore the prospect of building a virtual memory system for Network-on-Chip (NoC) computers. NoCs have multiple processors – perhaps 16, or 64, or (in the near future) many more – all on one piece of silicon and all connected by a packet-switched network. The numbers are important, because having that many processors (certainly a number greater than 16) means that the so far more typical bus-based interconnects do not work, and that in turn means the different processors cannot easily be told which other processor is trying to access the same sliver of off-chip memory that they are after.

As a result, instead of increasing computing speed by having more processors crunch a problem in parallel, the danger is that computing efficiency falls off because either (A) each processor is confined to a very small patch of memory to ensure it does not interfere with other processors' memory accesses, or (B) some very complicated and expensive (in time) logic is applied to ensure that each processor does know what accesses are being made, or (C) some combination of the above, e.g., a private area which the processor can access freely and a shared area where some logic in software polices accesses.

None are perfect – (A) could limit processor numbers, (B) could be slow while (C) could be slow and also not work so well, so limiting processor numbers. So (C) is the worst of both worlds? Well, (C) is also, sort-of, my area of exploration!
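To pin option (C) down a little, here is a hypothetical sketch of my own – the memory layout, names and locking scheme are all invented for illustration, not anyone's real NoC design: a core writes to its private region freely, while every write to the shared region goes through a small piece of software policing (here a simple per-page lock).

```c
/* Hypothetical sketch of option (C): a private region with free access plus
   a shared region policed in software.  Layout and names are invented for
   illustration only. */
#include <stdint.h>
#include <stdio.h>
#include <stdatomic.h>

#define PAGE_SIZE    4096
#define SHARED_PAGES 16

static uint8_t private_area[PAGE_SIZE];                 /* this core only: no checks */
static uint8_t shared_area[SHARED_PAGES][PAGE_SIZE];    /* policed by software       */
static _Atomic int shared_lock[SHARED_PAGES];           /* 0 = free, 1 = held        */

/* Software policing of shared memory: take the page's lock, write, release. */
static void shared_write(int page, int offset, uint8_t value)
{
    int expected = 0;
    while (!atomic_compare_exchange_weak(&shared_lock[page], &expected, 1))
        expected = 0;                        /* spin until the page lock is free */
    shared_area[page][offset] = value;
    atomic_store(&shared_lock[page], 0);
}

int main(void)
{
    private_area[0] = 42;        /* private memory: no co-ordination needed */
    shared_write(3, 0, 42);      /* shared memory: every access pays the policing cost */
    printf("%d %d\n", private_area[0], shared_area[3][0]);
    return 0;
}
```

The policing cost on every shared access is exactly the "could be slow" worry above.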

Other researchers have already built a virtual memory system for another NoC, the Intel 48-core SCC. I don't want simply to repeat their work in any case (I doubt that would impress my examiners either), so here, roughly, are my thoughts:

  • There is a choice between a page-based VM and one that manages objects. As an experimental idea the choice of managing objects quite appeals – but it also seems difficult to have a system that is efficient and manages objects without it sitting on top of some sort of page-based system.
  • What is the priority for a VMM? To provide a shared space for the operating system and its code (too easy?), or to deliver memory to applications? Should this then be a virtual machine layer underneath the operating system? (This is what the SCC folk did – RockyVisor).
  • Given that message passing seems a better fit for NoCs than shared memory in any case – how should message-passing systems integrate with a VMM? Should we go down the route advocated by the builders of the Barrelfish operating system and absolutely rule out shared memory as a basis for inter-processor co-operation – using the VMM only as a means of allocating memory rather than anything else (see the sketch after this list)? (I think, yes, probably.)
  • But if the answer to the above is 'yes', are we sacrificing efficiency for anti-shared-memory dogma? I worry we may be.
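As promised above, here is a hypothetical sketch of the message-passing-only idea – all message formats, node roles and names are invented for illustration, and the on-chip network is stood in for by a direct function call. The point is simply that the VMM only allocates: a node asks a designated memory-server node for a frame by message and then uses that frame privately, with no shared-memory coherence assumed anywhere.

```c
/* Hypothetical sketch: memory allocation by message passing only.
   A "memory server" node hands out page frames in reply to allocation
   messages; the requester then uses the frame privately.  Names and
   message formats are invented for illustration. */
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE  4096
#define NUM_FRAMES 64

enum msg_type { MSG_ALLOC_PAGE, MSG_ALLOC_REPLY };

struct msg {
    enum msg_type type;
    int           sender;    /* requesting node id        */
    int           frame;     /* frame number in the reply */
};

static uint8_t phys_mem[NUM_FRAMES][PAGE_SIZE];  /* the off-chip memory pool  */
static int     next_free_frame;                  /* touched by the server only */

/* The memory-server node: the only node that touches the allocator state,
   so no locks are needed - requests arrive one message at a time. */
static struct msg memory_server_handle(struct msg request)
{
    struct msg reply = { MSG_ALLOC_REPLY, request.sender, -1 };
    if (request.type == MSG_ALLOC_PAGE && next_free_frame < NUM_FRAMES)
        reply.frame = next_free_frame++;
    return reply;
}

int main(void)
{
    /* Node 7 asks for a page by message (a direct call here stands in for
       the packet-switched network). */
    struct msg req   = { MSG_ALLOC_PAGE, 7, -1 };
    struct msg reply = memory_server_handle(req);

    if (reply.frame >= 0) {
        phys_mem[reply.frame][0] = 42;   /* node 7 now uses the frame privately */
        printf("node %d got frame %d\n", reply.sender, reply.frame);
    }
    return 0;
}
```

The attraction is that all the co-ordination lives in one place; the worry, as the last bullet says, is whether funnelling every allocation through messages gives up too much efficiency.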

Any thoughts would be very welcome.

(Along the way I found a good – and reasonably priced – book that describes a working paging system: What Makes It Page?: The Windows 7 (x64) Virtual Memory Manager.)

Regression to the mean and climate change stupidity


The figure above is from one of the most important and most influential scientific papers ever published: Regression Towards Mediocrity in Hereditary Stature, in volume 15 of the Journal of the Anthropological Institute of Great Britain and Ireland (in fact JSTOR claims it is copyright to them from 1886, but I am betting that the copyright has lapsed).

In this paper the eminent Victorian scientist Francis Galton showed that the children of taller-than-average parents tended to be shorter than their parents, and vice versa – an example of the phenomenon we would now call "regression towards the mean".

Such regressions do not replace longer-term trends (you can see from the figure above that Galton estimated the average height of adult males to be just over 5 foot 8 inches – about my height – whereas today the figure is closer to 5 foot 10 inches); instead they are a reflection of the random noise in the system.
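To put that slightly more formally (my own gloss, not Galton's notation): if parent and child heights are jointly normal with common mean μ, the same spread, and correlation 0 < ρ < 1, then

```latex
% Regression to the mean for jointly normal parent/child heights with common
% mean \mu, equal standard deviations and correlation 0 < \rho < 1
% (my own gloss, not Galton's notation).
\[
\mathbb{E}\!\left[\,\text{child height} \mid \text{parent height} = x\,\right]
  \;=\; \mu + \rho\,(x - \mu).
\]
```

Since ρ is strictly less than 1, the expected child height lies between x and μ: unusually tall parents tend to have children shorter than themselves, and unusually short parents taller, purely as a matter of noise – no long-term trend is implied either way.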

But although this has been understood by scientists since 1886, it seems it has yet to penetrate the Mail on Sunday – which claimed that because the retreat of Arctic sea ice in 2013 did not match 2012's all-time record, the science of climate change was dead in the water.

In fact the trend, regression towards the mean notwithstanding, is pretty clear – as the plot below ought to make obvious to anyone able to read any sort of plot…

Sea ice extent

Mail on Sunday 92% wrong on climate change


GlobwarmNH (Photo credit: Wikipedia)

Sadly, I increasingly fear, in future decades our children and their children are likely to look back on this second decade of the twenty-first century as a wasted opportunity to put the findings of science into action.

The scientific consensus on climate change is clear and stable. The only “argument” is about just how rapidly the threat is growing. Yet that argument – in fact often just a refinement of figures based on better measurements and more refined models – is being used to claim that the science is not agreed and that, somehow, it is legitimate to claim that climate change denial is good science.

The latest example has been the misreporting and misrepresentation of the leaked findings of a draft report from the Intergovernmental Panel on Climate Change (IPCC).

First of all – leaked findings of draft reports are not generally to be regarded as scientific reports – science does not work that way. Papers are reviewed and finalised for good reasons.

Secondly, misreporting renders even your attempts to manipulate such leaked draft reports little better than chip paper.

And that is just what the Mail on Sunday – one of the leading engines of the science-denial brigade in the UK – have been caught doing.

As this excellent blog shows – the Mail on Sunday claimed that scientists were about to reduce their estimate of the long-term warming by 50% when, in fact, the reduction was about 8% (or one one-hundredth of a kelvin).

Such journalism – personally I think this sort of reportage belongs in a comic, not a serious national newspaper – poisons public debate all too often. And, without wanting to comment on the individual views of the journalist concerned here – one David Rose – it is plain there is a determined attempt by some to have ideology trump science. That cannot be allowed to stand.

 

Why, oh why, oh why?


I am writing this sitting on the toilet – I will probably be here for some time.

Not because of some minor medical emergency, but because the hotel I am staying in requires me to use a ridiculous login system to access its internet service, and while I can get the service (just about) throughout the room, it is only here that the signal is strong enough to complete the authentication process.

Given the service is free to guests, what is the point of the authentication? Particularly, what is the point of the ludicrous “type in these four words we give you on this voucher” authentication that Bitbuzz insist on?

Is the loss to the hotel so great from having an open internet connection that it is worth leaving your paying customers in the sort of foul mood I am in? I doubt it.