Programmatic adventures in the hyperbolic plane


Recursion is dangerous. So dangerous that in my professional life I’m explicitly banned from using it in code. After all, who would want their aeroplane to stall, their car to crash or their fridge to defrost because of a stack overflow?

But it is also, most (but not all) programmers would agree, beautiful and more than a bit mysterious. Indeed one of the biggest selling popular science/mathematics books of all time centres on recursion as a possible partial explanation of human consciousness.

In the last week or so I’ve been attempting to use recursion in R to draw a tessellating pattern of squares on the conformal representation of the hyperbolic plane – an idea from M.C. Escher via Nobel laureate Sir Roger Penrose.

The code here is where I’ve got to.

#!/usr/bin/env Rscript

library(ggplot2)
library(ggforce)

basicLengths<-seq(from=0, to=1.0, by=1/250000)
projLengths<-basicLengths * (2 /(1 + basicLengths * basicLengths))

getBestMatch<-function(numIn)
{
  repVec<-basicLengths[which.min(abs(numIn - projLengths))]
  return(repVec)
}


drawConformalLine <-function(startX, startY, endX, endY)
{
  #Length and angle of line to starting conformal point
  lengthToStart <- sqrt(startX * startX + startY * startY)
  angleToStart <- atan2(startY, startX)
  #And to end Point
  lengthToEnd <- sqrt(endX * endX + endY * endY)
  angleToEnd <- atan2(endY, endX)
  #Project
  newStartLength<-lengthToStart * (2 / (1 + lengthToStart * lengthToStart))
  newEndLength<-lengthToEnd * (2 / (1 + lengthToEnd * lengthToEnd))
  newStartX<-newStartLength * cos(angleToStart)
  newStartY<-newStartLength * sin(angleToStart)
  newEndX<-newEndLength * cos(angleToEnd)
  newEndY<-newEndLength * sin(angleToEnd)

  #length.out avoids a zero 'by' step when the line is vertical or horizontal
  xpoints<-seq(from=newStartX, to=newEndX, length.out=51)
  ypoints<-seq(from=newStartY, to=newEndY, length.out=51)
  euclidLine<-data.frame(xpoints, ypoints)
  #Conformal line (reverse projection)
  lengths<-sqrt(euclidLine$xpoints * euclidLine$xpoints + euclidLine$ypoints * euclidLine$ypoints)
  newLengths<-vector()
  for (i in 1:length(lengths))
  {
    newLengths<-c(newLengths, getBestMatch(lengths[i]))
  }
  angles<-atan2(euclidLine$ypoints, euclidLine$xpoints)
  newPoints<-data.frame(xp = cos(angles) * newLengths, yp = sin(angles) * newLengths)
  return(newPoints)
}

recursiveSquares<-function(setOfPoints, base, radius, sign)
{
  if (base + radius > 0.75){
    return (setOfPoints)
  }
  #recurse first - rbind-ing the recursive result onto setOfPoints again would duplicate every accumulated point
  setOfPoints<-recursiveSquares(setOfPoints, base + radius, radius/2, sign)
  positiveCorner = (sign * base + sign * radius)
  negativeCorner = (sign * base)
  setOfPoints<-rbind(setOfPoints, drawConformalLine(positiveCorner, positiveCorner, positiveCorner, negativeCorner))
  setOfPoints<-rbind(setOfPoints, drawConformalLine(positiveCorner, negativeCorner, negativeCorner, negativeCorner))
  setOfPoints<-rbind(setOfPoints, drawConformalLine(negativeCorner, negativeCorner, negativeCorner, positiveCorner))
  setOfPoints<-rbind(setOfPoints, drawConformalLine(negativeCorner, positiveCorner, positiveCorner, positiveCorner))
  return (setOfPoints)
}


partitions<-100
distances<-seq(from=1, to=partitions, by=1)
distances<-exp(distances/partitions)/(2 * distances)
circles<-data.frame(x0=rep(0, partitions), y0=rep(0, partitions), r=distances)
t <- ggplot() + geom_circle(aes(x0=x0, y0=y0, r=1-r, color=rgb(r,1-r,1)), data=circles) + scale_color_identity()
t <- t + geom_circle(aes(x0=0, y0=0, r=1), color="black", data=data.frame(x0=0, y0=0, r=1)) + coord_fixed()
distances<- 1-distances
n_distances<-(distances * 2)/(1 + distances * distances)
n_circles<-data.frame(x0=rep(0, partitions), y0=rep(0, partitions), r=n_distances)
y <- ggplot() + geom_circle(aes(x0=x0, y0=y0, r=r, color=rgb(r,1-r,1)), data=n_circles) + scale_color_identity()
y <- y + geom_circle(aes(x0=0, y0=0, r=1), color="black", data=data.frame(x0=0, y0=0, r=1)) + coord_fixed()
square<-0.4
squarePoints<-data.frame()
squarePoints<-recursiveSquares(squarePoints, 0, square, 1)
nPoints<-data.frame()
nPoints<-recursiveSquares(nPoints, 0, square, -1)
squarePoints<-rbind(squarePoints, nPoints)
#color set outside aes() so the points really are drawn magenta
t<-t + geom_point(data=squarePoints, aes(x=xp, y=yp), color="magenta")
t<-t + geom_point(data=squarePoints, aes(x=-xp, y=yp), color="magenta")
print(t)


I had several problems to solve, and recursion seemed a good fit for some of them, though in the end I’ve only been able to make partial progress.

Here are some of the problems and how I tackled them:

1. Finding the straight lines

In the conformal representation of the hyperbolic plane ‘straight lines’ – in the sense of the shortest routes between two points – are actually represented by the arc of the circle that joins the two points and meets the boundary of the representation of the plane orthogonally (ie., at right angles).

For points that lie on the same polar angle in the circle that’s easy – the ‘straight line’ that joins them is simply the Euclidean line that passes through both of them and the origin (centre of the circular representation of the plane). The circle they are linked by in the conformal representation is of infinite radius.

But what about other, non-degenerate, cases? Perhaps there is an algorithmic method to work this out? But if there is I don’t know what it is and I couldn’t find it online either!

What I do know (again thanks to Sir Roger) is that if we substitute the projective representation for the conformal representation we can join the points by Euclidean lines. To go from the conformal to the projective representation of the hyperbolic plane we preserve the polar angle of points but magnify the distance from the origin by \frac{2R^2}{R^2+r^2_c}, where R is the radius of the bounding circle (so R=1 in our case) and r_c is the length of the Euclidean line from the centre (origin) to the point.

The great advantage of the projective representation is that straight lines between points are just what we think of them as being in Euclidean geometry – in other words both the shortest length between two points and at a constant angle.

So to get the points on the line in the conformal representation the thing to do seemed to be to convert the points to the projective representation, calculate the straight line (very easy) and shrink everything back by the factor \frac{1+r^2_c}{2} .

But the problem with that is that, with the exception of the start and end points, we don’t know what r_c is to begin with – indeed that is more or less what we are trying to calculate.

My answer to that is to generate lookup tables (relatively quickly done using R’s vectorisation) and a function that indexes one to the other:

basicLengths<-seq(from=0, to=1.0, by=1/250000)
projLengths<-basicLengths * (2 /(1 + basicLengths * basicLengths))

getBestMatch<-function(numIn)
{
  repVec<-basicLengths[which.min(abs(numIn - projLengths))]
  return(repVec)
}
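As it happens, the projection can also be inverted in closed form – solving the quadratic r_p(1 + r^2_c) = 2r_c for r_c – which would avoid the lookup table entirely. A sketch (the function name is my own):

```r
# r_p = 2 r_c / (1 + r_c^2) rearranges to r_p r_c^2 - 2 r_c + r_p = 0,
# a quadratic in r_c whose root inside the unit disc is:
conformalFromProjective <- function(r_p) {
  ifelse(r_p == 0, 0, (1 - sqrt(1 - r_p * r_p)) / r_p)
}

# round trip: conformal radius -> projective radius -> back again
r_c <- 0.6
r_p <- 2 * r_c / (1 + r_c * r_c)
conformalFromProjective(r_p)  # recovers 0.6
```

The lookup table still works, of course – this is just the analytic shortcut.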

2. Generating the squares

This is where the recursion comes in – I settled on the idea of four squares touching at each vertex, meaning that as we travel out from the centre the Euclidean side lengths of the squares halve.

Here is (a simplified) version of the code:

recursiveSquares<-function(setOfPoints, base, radius)
{
  if (base + radius > 0.75){
    return (setOfPoints)
  }
  setOfPoints<-recursiveSquares(setOfPoints, base + radius, radius/2)
  positiveCorner = (base + radius)
  negativeCorner = (base)
  setOfPoints<-rbind(setOfPoints, drawConformalLine(positiveCorner, positiveCorner, positiveCorner, negativeCorner))
  setOfPoints<-rbind(setOfPoints, drawConformalLine(positiveCorner, negativeCorner, negativeCorner, negativeCorner))
  setOfPoints<-rbind(setOfPoints, drawConformalLine(negativeCorner, negativeCorner, negativeCorner, positiveCorner))
  setOfPoints<-rbind(setOfPoints, drawConformalLine(negativeCorner, positiveCorner, positiveCorner, positiveCorner))
  return (setOfPoints)
}

Those of you who have read the classic Structure and Interpretation of Computer Programs may immediately recognise a weakness here – by not using tail recursion I am putting more stress on the stack.

And it’s true that an alternative tail recursive form (this is the full code) works:

recursiveSquares<-function(setOfPoints, base, radius, sign)
{
  if (base + radius > 0.75){
    return (setOfPoints)
  }
  positiveCorner = (sign * base + sign * radius)
  negativeCorner = (sign * base)
  setOfPoints<-rbind(setOfPoints, drawConformalLine(positiveCorner, positiveCorner, positiveCorner, negativeCorner))
  setOfPoints<-rbind(setOfPoints, drawConformalLine(positiveCorner, negativeCorner, negativeCorner, negativeCorner))
  setOfPoints<-rbind(setOfPoints, drawConformalLine(negativeCorner, negativeCorner, negativeCorner, positiveCorner))
  setOfPoints<-rbind(setOfPoints, drawConformalLine(negativeCorner, positiveCorner, positiveCorner, positiveCorner))
  recursiveSquares(setOfPoints, base + radius, radius/2, sign)
}

But it is still liable to bust the stack – indeed the whole system is quite finely balanced. Make the initial squares too small and it is easy to exhaust the stack – though the tail recursive form does seem a little more robust.
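That is no surprise, in fact: R does not perform tail-call elimination, so even the tail recursive form pushes a frame per call. A plain loop sidesteps the stack altogether – this sketch reuses drawConformalLine from above and is otherwise equivalent to recursiveSquares:

```r
# Iterative equivalent of recursiveSquares: march outward from the centre,
# halving the radius each step, until base + radius crosses the 0.75 cut-off.
# No recursion, so the stack depth stays constant however small the squares get.
iterativeSquares <- function(setOfPoints, base, radius, sign) {
  while (base + radius <= 0.75) {
    positiveCorner <- sign * (base + radius)
    negativeCorner <- sign * base
    setOfPoints <- rbind(setOfPoints,
      drawConformalLine(positiveCorner, positiveCorner, positiveCorner, negativeCorner),
      drawConformalLine(positiveCorner, negativeCorner, negativeCorner, negativeCorner),
      drawConformalLine(negativeCorner, negativeCorner, negativeCorner, positiveCorner),
      drawConformalLine(negativeCorner, positiveCorner, positiveCorner, positiveCorner))
    base <- base + radius
    radius <- radius / 2
  }
  setOfPoints
}
```

The floating-point cut-off condition still limits the depth in practice, but at least it is no longer the stack that gives out first.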

3. Getting a tessellation/exploiting the dihedral symmetry

Unfortunately the tendency of the stack to blow up makes it very difficult (as far as I can see anyway) to write some elegant code that will seamlessly deliver the tessellations sought. But some simple group theory can help us get some more patterns.

The pattern above shows rotational and reflective symmetry – in other words the shape has the symmetries of a square (the dihedral group D_4 ).

We use this in the code through the ‘sign’ factor in the calls to recursiveSquares and then in the two lines:

t<-t + geom_point(data=squarePoints, aes(x=xp, y=yp), color="magenta")
t<-t + geom_point(data=squarePoints, aes(x=-xp, y=yp), color="magenta")

These effectively reflect earlier results.
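One hypothetical way to make the symmetry fully explicit would be a helper that generates the whole D_4 orbit of a point set, instead of hand-coding each reflection – a sketch (the function name is my own):

```r
# All eight symmetries of the square applied to a set of (xp, yp) points:
# the four rotations and four reflections of the dihedral group D4.
d4Orbit <- function(pts) {
  rbind(
    data.frame(xp =  pts$xp, yp =  pts$yp),   # identity
    data.frame(xp = -pts$yp, yp =  pts$xp),   # rotate 90
    data.frame(xp = -pts$xp, yp = -pts$yp),   # rotate 180
    data.frame(xp =  pts$yp, yp = -pts$xp),   # rotate 270
    data.frame(xp = -pts$xp, yp =  pts$yp),   # reflect in the y axis
    data.frame(xp =  pts$xp, yp = -pts$yp),   # reflect in the x axis
    data.frame(xp =  pts$yp, yp =  pts$xp),   # reflect in y = x
    data.frame(xp = -pts$yp, yp = -pts$xp))   # reflect in y = -x
}
```

With that in hand, a single geom_point over d4Orbit(squarePoints) would replace the pair of reflection calls.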

Representing equal distances in the conformal representation of the hyperbolic plane


(This is mainly about posting a pretty picture.)

As I have previously remarked, the last couple of times I have tried to read Roger Penrose’s The Road to Reality I have stumbled over the discussion of non-Euclidean geometry in Chapter 2 (of a 34 chapter book).

Now I feel more confident about the maths but I am also taking more time and exploring the issues – after all I’ve had the book for more than a decade so there has been no rush so far.

In Chapter 2 Penrose presents the conformal (angles are preserved) representation of a hyperbolic plane through one of Escher’s woodcuts – my picture (generated with some R code) dispenses with Escher’s fish (see this blog post from Allyson Redhunt for a further discussion) and instead just looks at the way equal distances are seen in the projection.

In this representation the black line at the edge is at infinity, but all the other circles are equidistant – I’ve applied a colouring effect so that as we approach the edge they don’t just look like a solid block.

The R code is shown below.

#!/usr/bin/env Rscript
library(ggplot2)
library(ggforce)
partitions<-1000
distances<-seq(from=1, to=partitions, by=1) 
distances<-exp(distances/partitions)/(2 * distances)
circles<-data.frame(x0=rep(0, partitions), y0=rep(0, partitions), r=distances)
t <- ggplot() + geom_circle(aes(x0=x0, y0=y0, r=1-r, color=rgb(r,1-r,1)), data=circles) + scale_color_identity()
t <- t + geom_circle(aes(x0=0, y0=0, r=1), color="black", data=data.frame(x0=0, y0=0, r=1)) + coord_fixed()

In praise of Roger Penrose


Roger Penrose has been awarded a share in the Nobel Prize for physics and I could not be more pleased.

It is not that I have met him or even attended a lecture by him and nor do we even see him much on TV – but I owe him a debt for his insight and also for his ability to educate and inform.

A few years ago I bought his “Fashion, Faith and Fantasy” – his stern critique of what he sees as the fashion of string theory and assorted similar ideas. I’m not going to pretend it’s an easy book, and much of it was well over my head – but it is still chock-full of insight. From it I went back to an older copy of “Cycles of Time”. This is a shorter book and was much more heavily marketed as being for the “lay reader” but, actually, I found much of it much harder to get to grips with – if you don’t really understand conformal geometry (and I didn’t) it’s pretty hard to follow. But, and this is a big but, it has an absolutely brilliant explanation of entropy in its opening chapters and if, like me, you never really got to grips with this subject (because, I maintain, it was so poorly explained) as an undergraduate, this really cuts through the fog.

It mattered to me because its discussion of phase spaces gave me an insight into the stochastic nature of memory management systems and the way in which entropy can help us explain latency in computer systems. I wrote quite a bit about that in my PhD thesis and I am convinced it’s a subject that would repay a lot more investigation. Sir Roger collected an additional citation for that (not that he needed it of course).

Only last night I again made an attempt on the massive “The Road to Reality” – previously I’ve never managed to get past the introductory discussion of non-Euclidean geometry in chapter two, but I feel a bit more confident about that now, so I’m giving it another go.

Representing one terabit of information


I picked up my copy of Roger Penrose’s Cycles of Time: An Extraordinary New View of the Universe and reread his opening chapter on the nature of entropy (if you struggle with this concept as a student then I recommend the book for this part alone – his exposition is brilliant).

My next thought was to see if I could model his comments about a layer of red paint sitting on top of blue paint – namely could I just write a program that showed how random molecular movements would eventually turn the whole thing purple. It seemed like a nice little programming project to while away a few hours – and the colours would be lovely too.

Following Penrose’s outline we would look at a ‘box’ (as modelled below) containing 1000 molecules. The box would only be purple if 499, 500 or 501 molecules were red (or blue), otherwise it would be shaded blue or red.

[Figure: a three-dimensional grid of boxes]

And we would have 1000 of these boxes along each axis – in other words 10^9 boxes and 10^{12} molecules – with 5 \times 10^8 boxes starting as blue (or red).

Then on each iteration we could move, say 10^6 molecules and see what happens.
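The full-scale version is exactly what we cannot store, but the update rule itself can be sketched at toy scale – the box count, molecules per box and ‘purple’ criterion below are my own downsized choices, purely for illustration, not Penrose’s numbers:

```r
# Toy-scale sketch of the mixing process: 1000 boxes of 100 molecules each,
# with all the red molecules starting in the first half of the boxes.
set.seed(42)
nBoxes <- 1000
perBox <- 100
red <- c(rep(perBox, nBoxes / 2), rep(0, nBoxes / 2))

# each iteration moves one red molecule between two randomly chosen boxes
for (step in 1:100000) {
  pair <- sample(nBoxes, 2)
  if (red[pair[1]] > 0 && red[pair[2]] < perBox) {
    red[pair[1]] <- red[pair[1]] - 1
    red[pair[2]] <- red[pair[2]] + 1
  }
}

# share of boxes that now look roughly 'purple' (a deliberately looser
# criterion than Penrose's 499-501 out of 1000)
mean(red >= 0.45 * perBox & red <= 0.55 * perBox)
```

Even a run this small gives a feel for the drift towards a mixed state – scaling it up a millionfold is precisely the storage problem.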

But while the maths of this is simple, the storage problem is not – even if we just had a bit per molecule it is one terabit of data to store and page in and out on every iteration.

I cannot see how compression would help – initially the representation would be highly compressible as all data would be a long series of 1s followed by a long series of 0s. But as we drove through the iterations and the entropy increased that would break down – that is the whole point, after all.

I could go for a much smaller simulation but that misses demonstrating Penrose’s point – that even with the highly constrained categorisation of what constitutes purple, the mixture turns purple and stays that way.

So, algorithm gurus – tell me how to solve this one?

Update: Redrew the box to reflect the rules of geometry!

Entropy and randomness in communications, a quick introduction


Last year I remarked on how Roger Penrose‘s discussion of entropy in Cycles of Time did more to explain the physical nature of the concept (to me, at least) than many years of formal education.

And now, another book, Information and Coding Theory has given me some real insight into how computers deliver pseudo-randomness and why they look for sources of entropy.

The maths of all this are deceptively simple, but are of fundamental importance in information theory.

First of all, we can think of what sort of information events convey. There is a famous aphorism about the habits of bears, the direction from which the Sun rises and the title of the head of the Catholic Church. No newspaper or TV bulletin would ever bother to tell us, on a daily basis, that the Sun has indeed risen from the East. In effect that event conveys no information because it is completely expected. Toss a coin, though, and the result is of interest precisely because the outcome is not known in advance.

Hence we can posit an information function I(e) which tells us how much information we can derive from event e , or, if we think of symbols in a data stream we can call this I(s_i) – the amount of information we can derive if the next symbol we see is s_i .

From our earlier discussion we can see that the result of this function is dependent on the probability of the event occurring or the symbol being received – i.e. if we knew the Sun was always going to rise in the East we can derive no information from that very thing happening. Hence:

I(s_i) is a function of  p_i where p_i is the probability that the next symbol to be seen is s_i . As we will only consider ‘memoryless’ sources we will also insist that p_i is independent of time and of any previously seen symbol. Hence:

I(s_is_j) = I(s_i) + I(s_j) (i.e. the information we can gain from seeing the symbol s_i followed by the symbol s_j is the sum of the information from each).

Taking these points together we can define I(s_i) = \log\frac{1}{p_i} = -\log(p_i)

We can see that I tends to infinity if the probability of an event tends to zero (think of the Sun rising in the West!) and tends to 0 if nothing unexpected happens.

The entropy of a system (communication) H(S) then becomes:

H(S) = \sum\limits_i^q p_i I(s_i) = \sum\limits_i^q p_i \log\frac{1}{p_i} = -\sum\limits_i^q p_i \log(p_i)
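To make the formulae concrete, here is a quick numeric illustration for a biased coin (the probabilities are my own example), working in bits, i.e. log base 2:

```r
# A coin with p(heads) = 0.9, p(tails) = 0.1
p <- c(0.9, 0.1)
info <- log2(1 / p)   # I(s_i): ~0.15 bits for a head, ~3.32 bits for a tail
H <- sum(p * info)    # entropy: expected information per toss
round(H, 3)           # 0.469 - well below the 1 bit of a fair coin
```

The rare tail carries a lot of information, but it is rare, so on average each toss tells us less than half a bit.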

Review of “The Black Cloud”


[Image: cover of “The Black Cloud”]

My interest in astronomy and astrophysics comes from childhood and when I was much, much younger I had an (unscientific) fondness for the “steady state” theory of cosmology, which, in the early 1970s was not as thoroughly discredited as it is today (though, of course, many newer cosmologies borrow from it or show similarities to it – for instance Roger Penrose‘s proposal in Cycles of Time).

Part of the attraction of the steady state cosmology was the figure of Fred Hoyle, or at least how I imagined him: blunt speaking, no nonsense, scientific genius. But until I read The Black Cloud I had not read any of his scientific or literary works.

The book is fascinating as a period piece and not a bad read as a piece of science fiction either – though the overall tone and dialogue reminds me of “Journey into Space” – a 1950s BBC science fiction radio serial recently re-broadcast.

But here is a novel with differential equations, computer program listings (presumably in machine code of some sort as it certainly is not a high level language) and a description of the (then) pioneering technology of pulse code modulation. Not all the science is good, though Hoyle cannot be blamed for all of that: although it does not use the term, the view of artificial intelligence here is the conventional one of the time, but also one that fifty years of rapidly advancing computing power has failed, thus far at least, to sustain.

Mixed with that, though, is a fair dose of Little Englandism, enormous doses of sexism and a quite frightening view into how Hoyle thinks society should be organised – namely with politicians, the people we choose, removed and the dictatorship of the scientists instituted. Stalin ruled in the name of science too.

Hoyle’s preface implies it would be a mistake to ascribe the views of Chris Kingsley, the chief advocate of crushing politics, to himself, but the character sounds far too much like him – the man who once exploded with anger when a snotty PhD student called Stephen Hawking pointed out a flaw in his calculations because he thought it would weaken his attempted blackmail of politicians – for the denial to be credible.

It’s a great book and an easy read, so I do recommend it.

It’s official: we’re getting bigger all the time says Nobel Committee


[Image via Wikipedia: “Prevailing model of the origin and expansion o...”]

The decision to award this year’s Nobel Prize for Physics to three astrophysicists who, through measuring the brightness of distant supernovae showed that the expansion of the universe is accelerating, is simply the highest possible confirmation of what most if not all in the field have accepted for a decade or more.

That idea is a decisive break from the cosmology I was taught in 1987 – then the argument was whether the geometry of space-time was closed (ie., the big bang would be followed by a big crunch as, like a ball thrown in the air, eventually everything fell back to the starting point), parabolic – ie., the expansion and mass were in exact balance and at an infinite time in the future the expansion would halt – or hyperbolic, with the expansion too fast for gravity to eventually halt or reverse it.

What has been honoured today does not directly affect these three choices – presumably the expansion could accelerate and then slow and then reverse – but the theory seems pretty clear – the acceleration and the “dark energy” (code for “we don’t know”) that is causing it means the space-time will not stop expanding.

So, does this mean our universe is a “one shot deal” – began with the Big Bang and extending for ever? Not necessarily. Designing experiments to deduce what might have existed before the Big Bang is obviously very difficult (how can you find out what happened before time began?) but there are physical (as opposed to meta-physical) theories and arguments about observational evidence.

Roger Penrose’s Cycles of Time has one theory (though I’ve not got that far yet) and I have seen Neil Turok on TV outline another (an aside: I don’t know Professor Turok, but I used to know his parents who were leading anti-apartheid activists and members of my local branch of the Labour Party – small world, very big universe).

Evolution and the second law of thermodynamics


[Image via Wikipedia: “The Blue Marble”]

When I wrote my first piece about Roger Penrose‘s Cycles of Time – one of the links offered to me was to a Christian fundamentalist blogger who claimed that the second law of thermodynamics showed evolution could not have taken place:

E. Evolution contradicts the Second law of Thermodynamics

In the Theory of Evolution it is proposed that “simple life” evolved into more complex life forms by Spontaneous Generation. Both Darwin’s Theory of Evolution and the Spontaneous Generation model both directly contravene the Law of Biogenesis.

The Second Law Of Thermodynamics

The Second Law of Thermodynamics states that the disorder in a system increases, rather than decreases. 

The problem for this argument’s advocates is that Penrose demolishes it in brilliant style. I won’t quote from him directly, but I will try to summarise his argument.

Penrose begins by actually restating the fundamentalist argument in a much wider sense – it is life itself that on a naive view would appear to violate the second law – after all our bodies do not melt away (so long as we live), but remain highly ordered and as we grow (eg our hair or nails) appear to create order out of disorder.

The key to this is the Sun. The first thing a secondary school science student is taught is that the Sun supplies the Earth with energy but, in fact, this is not true in the sense that the Sun does not provide a net increase in energy on Earth – if it did then our planet would continually heat up until it reached an equilibrium. The Earth re-radiates the Sun’s energy back into space at an equal rate to that at which it is received.

What the Sun is, though, is much hotter than surrounding space and so it sends the Earth a number of high energy (yellow light) photons. When the Earth re-radiates the Sun’s energy it does so at a lower temperature than the Sun – essentially at infra-red frequencies – so many more photons are radiated back into space than are received. More photons means a greater phase space and hence it means a higher entropy. So the Sun continually supplies the Earth with low entropy energy which processes on the Earth – including life – convert into high entropy energy.
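The photon arithmetic can be sketched numerically – the temperatures below are rough standard figures I am assuming for illustration, not taken from Penrose:

```r
# Energy per photon scales roughly with the source temperature (Wien's law),
# so for a balanced energy budget the photon counts scale inversely with it.
T_sun   <- 5800  # K, solar surface temperature (assumed round figure)
T_earth <- 255   # K, Earth's effective radiating temperature (assumed)
T_sun / T_earth  # roughly 23 infra-red photons out per solar photon in
```

More photons out than in, for the same energy, is exactly the entropy export the argument needs.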

For instance when we eat food we convert that low entropy food source (eg an egg) into high entropy heat energy. The food source itself ultimately derived its energy from the low entropy energy source that is the Sun, and so on.

Of course, all the time, the Sun’s own entropy is increasing, but we don’t need to worry about the consequences of that for a few billion more years.

It’s a brilliant, beautiful, argument though it is also one that is seldom, if ever, taught in schools.

The second law of thermodynamics and the history of the universe


[Image via Wikipedia: “Oxford Physicist Roger Penrose to Speak at Bro...”]

I had to go on quite a long plane journey yesterday and I bought a book to read – Roger Penrose’s work on cosmology: Cycles of Time: An Extraordinary New View of the Universe.

I bought it on spec – it was on the popular science shelves: somewhere I usually avoid at least for the physical sciences, as I know enough about them to make hand waving more annoying than illuminating, but it seemed to have some maths in it so I thought it might be worthwhile.

I have only managed the first 100 pages of it so far, so have not actually reached his new cosmology, but already feel it was worth every penny.

Sometimes you are aware of a concept for many years but never really understand it, until some book smashes down the door for you. “Cycles of Time” is just such a book when it comes to the second law of thermodynamics. At ‘A’ level and as an undergraduate we were just presented with Boltzmann’s constant and told it was about randomness. If anybody talked about configuration space or phase space in any meaningful sense it passed me by.

Penrose gives a brilliant exposition of what entropy is all about, in both intuitive and mathematical form, but also squares the circle by saying that, at heart, there is an imprecision in the law. And his explanation of why the universe moves from low entropy to high entropy is also brilliantly simple but also (to me at least) mathematically sound: as the universe started with such a low entropy in the big bang, a random walk process would see it move to higher entropy states (larger volumes of phase space).

There are some frustrating things about the book – but overall it seems great. I am sure I will be writing more about it here, if only to help clarify my own thoughts.

In the meantime I would seriously recommend it to any undergraduate left wondering what on earth entropy really is. In doing so I am also filled with regret at how I wasted so much time as an undergrad: university really is wasted on the young!

(On breakthrough books: A few years ago I had this experience with Diarmaid MacCulloch’s Reformation and Protestantism. People may think that the conflict in the North of Ireland is about religion – but in reality neither ‘side’ really knows much about the religious views of ‘themuns’. That book ought to be compulsory reading in all Ireland’s schools – North and South. Though perhaps the Catholic hierarchy would have some issues with that!)