Astronomy was my first scientific love, though I have sadly neglected it since I got my astrophysics degree a quarter of a century ago – but I am seriously considering buying a telescope, especially as they are now so cheap.

I think I can get a 5″ (I don’t think in metric in these cases) Maksutov for about £400 – around the cash price my parents paid for my 6″ Newtonian 35 years ago. I have not done the maths, but surely that means a superior/more complex design is now available for perhaps a fifth or a sixth of the real-terms 1970s price: why is that? Robots making the mirrors/lenses?

I want a Maksutov or similar because setting this thing up in London is not going to be much use except for looking at the Moon – I will need to take it away on holiday to make the best of it.

I have just had this problem and it took me a while to figure out how to fix it, when the answer really ought to be up there in lights – so here is my attempt to help anyone else hitting the same issue.

I just rebooted my remote server, which uses KVM to host a test-bed Linux install, for the first time since I put the whole KVM kit and caboodle on it. But when I tried to restart the KVM guest I got an error message:

Error starting domain: internal error Network 'default' is not active.
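For completeness, here is a sketch of the standard cure for this message, assuming libvirt’s virsh tool is installed (your setup may differ): the ‘default’ network simply needs to be started, and marked to start automatically so the next reboot does not bite you again.

```shell
# Start libvirt's 'default' NAT network now, and have it come up
# automatically on every boot (run as root or via sudo).
virsh net-start default
virsh net-autostart default
```

After that, starting the domain again should succeed.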

Had a letter from Birkbeck today telling me I had passed all my exams – so the issue now is finishing the MSc project and so getting the degree. (I suppose, in theory at least, as I have yet to be formally awarded it, I could now claim to have reached postgraduate diploma level.)

Right now, on the project, I am testing small kernel patches to see if a localised page replacement policy makes any noticeable difference for systems that are thrashing, or are in danger of it (ie spending nearly all their time paging to and from disk as opposed to doing any real computing).

The first patch I tried – forcing the largest and one other large (not necessarily second-largest) process to write any dirty pages to disk – had no noticeable effect. Having thought about it a bit more, I now realise it is unlikely that file-backed pages will be dirty in this way for many processes, so in practice all this code is likely to have done little but degrade performance.

What I really need is an aggressive rifle through the biggest process’s page stack – essentially accelerating the hands of the CLOCK (of the CLOCK page replacement algorithm) for this process (or diminishing the handspread, to use a term often seen in this context) compared with processes in general. So that’s next.
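To make the idea concrete, here is a toy, user-space sketch of one CLOCK sweep – my own illustration, not the kernel code. Each character of REF stands for the referenced bit of one resident page of the target process; “accelerating the hand” for a victim process simply means running a sweep like this over its pages more often than over everyone else’s.

```shell
#!/bin/sh
# Toy CLOCK sweep: REF holds one referenced bit per page ("1" = recently
# used). The hand visits each page once; a clear bit marks a reclaim
# candidate, a set bit gets a second chance (it is cleared and spared).
REF="101101"
hand=1
n=${#REF}
candidates=""
i=0
while [ "$i" -lt "$n" ]; do
  bit=$(printf '%s' "$REF" | cut -c "$hand")
  if [ "$bit" = "0" ]; then
    # Hand found the bit already clear: page is a reclaim candidate.
    candidates="$candidates $((hand - 1))"
  else
    # Second chance: clear the referenced bit and move on.
    REF=$(printf '%s' "$REF" | sed "s/./0/$hand")
  fi
  hand=$(( hand % n + 1 ))
  i=$((i + 1))
done
echo "reclaim candidates:$candidates"
```

With the bits above, pages 1 and 4 (counting from zero) start with their referenced bits clear, so they are the ones the hand picks out on this pass.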

But that is somewhat beyond me! And there does not appear to be a wikipedia article on this, so you may have to wait for some time before you see me try to take it on.

Interestingly, Woodin’s Wikipedia entry accurately suggests that his earlier work was sceptical as to the truth of Cantor’s continuum hypothesis – something that his latest work turns on its head.

Gödel’s Incompleteness Theorems are one of the cornerstones of modern mathematical thought, but they are also a major blot on the mathematical landscape – as they establish an inherent limit on the ability of mathematicians to describe the mathematical world: the first theorem (often thought of as the theorem) states that no consistent (ie free of contradiction) axiomatic system is capable of describing all the facts about the natural numbers.
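For reference, here is a standard modern statement of the first theorem (my wording, not the book’s):

```latex
% First incompleteness theorem, in its standard modern form
\textbf{Theorem (G\"odel, 1931).} Let $T$ be a consistent, effectively
axiomatised theory strong enough to express elementary arithmetic. Then
there is a sentence $G_T$ in the language of $T$ with
\[
  T \nvdash G_T .
\]
% G\"odel's original proof also showed $T \nvdash \lnot G_T$ under the
% stronger assumption of omega-consistency; Rosser later weakened this
% to plain consistency.
```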

To today’s physical scientists – used to concepts such as relativity and quantum uncertainty – the broad idea that there could be an uncertainty at the heart of mathematics is maybe not so difficult to take, but it is fair to say it broke a lot of mathematical hearts in the 1930s when first promulgated. (This book – Gödel’s Proof – offers an excellent introduction for the non-mathematician who is mathematically competent – ie someone like me!)

Gödel thought at the time that this kink in mathematical reality could be smoothed out by a better understanding of infinities in mathematics – and, according to Richard Elwes’s cover article in this week’s New Scientist (seemingly available online only to subscribers), Hugh Woodin of UC Berkeley is now claiming to have shown just that.

Along the way, this new hypothesis of “Ultimate L” also demonstrates that Cantor’s continuum hypothesis is correct. I do not claim to understand “Ultimate L” and, in any case, as is their style, the New Scientist do not print the proof, only a description in layman’s terms. I do have a basic understanding of the continuum hypothesis, though, and so can sketch the essential points that “Ultimate L” claims to have settled.

Georg Cantor showed that there are multiple infinities, the first of which, the so-called ℵ0 (aleph null), is the infinity of the countable numbers – eg 1, 2, 3… and so on. Any infinite set whose members can be paired off with the counting numbers in this way has a cardinality of ℵ0. (And, as the New Scientist point out, this is the smallest infinity – eg if you thought that, say, there must be half as many even numbers as there are natural numbers, you are wrong: the set of even numbers also has cardinality ℵ0 – 2 is element 1 of the set, 4 is element 2 and so on: ie a natural number can be assigned to every member of the set of even numbers, so that set too has cardinality ℵ0.)
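The even-numbers pairing is just the following function, written out (a standard example, not anything from the New Scientist piece):

```latex
% Bijection between the naturals and the even numbers E
\[
  f : \mathbb{N} \to E, \qquad f(n) = 2n .
\]
% Every natural number $n$ hits exactly one even number $2n$, and every
% even number $m$ is hit by exactly one natural number $m/2$ -- so $f$ is
% a bijection, and any set admitting such a bijection with $\mathbb{N}$
% has cardinality $\aleph_0$.
```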

The continuum hypothesis concerns what ℵ1 – the next biggest infinity – might be. Cantor’s hypothesis is that ℵ1 is the cardinality of the real numbers (the continuum): I discuss why this is infinite and a different infinity from the natural numbers here.

We can show that this set has cardinality 2^ℵ0 – a number very much bigger than ℵ0. But is there another infinity in between?
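In standard set-theory notation, the two claims in play are:

```latex
% Cardinality of the continuum, and the continuum hypothesis (CH)
\[
  |\mathbb{R}| = 2^{\aleph_0} > \aleph_0
  \qquad\text{(the inequality is Cantor's theorem),}
\]
\[
  \text{CH:}\quad 2^{\aleph_0} = \aleph_1 ,
\]
% ie no cardinal sits strictly between $\aleph_0$ and $2^{\aleph_0}$.
```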

Mathematicians have concentrated on looking at whether any projections (a word familiar to me now from relational algebra) of the set of reals have a cardinality between ℵ0 and 2^ℵ0 – if any did, then it would be clear that the reals could not have the cardinality ℵ1 but some other, higher, aleph.

No projections with a different cardinality have been found, but that is not the same as a proof they do not exist. But if Woodin’s theory is correct then none exist.

(Just one more chance to plug the brilliant Annotated Turing: if you are interested in computer science you should really read it! This is the book that first got me interested in all this.)

The tactics employed by comment spammers continue to fascinate me, and I have noticed something new, which indicates either a change of tactic or an adaptation by the spammers – reminding me of the way bacteria respond to antibiotics – or perhaps just indicates I am being over-sensitive.

Twice in the last 48 hours I have had comments from people who are clearly responding to the content of the blog post but who are also linking to an explicitly and solely commercial page: one was for an XML editor and the other a video on “how to be a hacker” (of the LulzSec variety as opposed to the kernel-patcher type).

Now, this blog is currently doing pretty well in Google on a number of the technical areas it discusses and traffic is slowly rising as a result. (Interestingly Google ranks the HTTPS pages much higher than their HTTP cousins, but that’s another issue and not one I am going to discuss here, because I really have no idea why that is.)

Therefore it may well be quite valuable to comment-spam this site if you are looking for people interested in XSLT or the Kronecker delta or whatever. But you are also up against a pretty good spam filter in Akismet, so the usual “your blog is great” crap is not going to make it.

So, like a bacterium faced with penicillin, the spammers mutate and devote more energy to survival. Or perhaps people who sell a product are genuinely interested in what I write here – though something tells me it is more likely to be the former!

Not quite sure why, as obviously the hostapd service comes up on boot and the /etc/network/interfaces file has that exact mode-setting line. But the bridge comes up with the wireless card disabled, and while the mode-setting line will enable it, hostapd needs a restart before everything works.

Anyway, the fact that I know what to do means it is easy to restore the network to full capacity, but it would be nice if it worked off the bat.
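For anyone hitting the same thing, the workaround amounts to kicking hostapd after boot – sketched below for a Debian-style system of this era; the interface names and init-script path are illustrative and may differ on your box.

```shell
# After boot, restart hostapd so the wireless card is re-enabled and
# joins the bridge properly (paths/names illustrative; run as root).
/etc/init.d/hostapd restart

# Then check the bridge picked the card up (needs bridge-utils):
brctl show
```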

I have already pointed out how German-language books dominate in LaTeX, and now I have found another area where German-language books suggest the Germans are well ahead of English speakers in their use of FOSS.

I can find three books on the Kernel-based Virtual Machine (KVM) in German:

(Strangely, though, none of these seems to be available at present.) In English there is only a technical report for Fedora 12 – the Fedora 12 Virtualization Guide – though, admittedly, it would appear to be in print!

The main use of MD5 – at least if my computer is any guide – is to check that a file you have downloaded from the internet or elsewhere is what it says it is.

In fact, in this general use, MD5 is not encrypting anything – instead it produces a “message digest”: a 128-bit number that is a hash of the supplied file. The problem with collisions is that two different files could give the same hash value (ie the same MD5 digest), and you could be left thinking you had the genuine file when you did not.
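In practice the check is a one-liner with the md5sum tool from GNU coreutils – the digest is always 128 bits (32 hex digits), however large the input:

```shell
# Hash some data; the digest length never changes with input size.
printf '%s' "hello" | md5sum
# prints: 5d41402abc4b2a76b9719d911017c592  -

# Typical download check: compare against the digest the site publishes
# (filename here is illustrative):
# echo "<published-digest>  downloaded.iso" | md5sum -c
```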

But that 128-bit hash value plainly is not going to give you back the file – unlike in CSI: Miami and everywhere else you see a “let’s enhance that” computer-graphics gimmick, in the real world you cannot get more information out than you put in: a 128-bit number will not magically transform into a 5 MB file even if you can reverse the hashing.

But that was not the issue with the Sun – they appeared to be using MD5 to hash short passwords, and in that case, at least in theory, being able to crack MD5 could give the original information back.

My last posting – made in a hurry while I was waiting for a large SCP transfer to complete – has generated more traffic than anything else in the last month: possibly because it was mildly topical and largely because it was retweeted by John Rentoul, one of the UK’s leading political commentators and all-round good egg.

Maybe I was being a bit naive with it – I took what the New Scientist reported the US Department of Homeland Security as saying about the MD5 hashing algorithm (in short, that it is completely broken and should not be used), added LulzSec’s claim to have cracked the Sun’s MD5-based password system, and drew what I thought was the obvious conclusion: that an MD5 crack was in some way related to LulzSec’s attack on the Sun’s website last Monday night.

But at least one person who ought to know more about this than me – forensic investigator Jonathan Krause – has taken issue with it and indeed with the whole idea that MD5 is a major security risk:

I have to admit I find this all a bit puzzling, as the web is full of stories like “brute-force algorithm can crack 1.5 million MD5 hashes per second” and so on, as well as sites that let you look up previously brute-forced hashes. (Of course 1.5 million per second is not a lot in a key space of 2^128.)
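A quick back-of-the-envelope calculation shows just how little 1.5 million hashes per second buys against the full key space:

```latex
% Time to exhaust a 128-bit key space at 1.5 million hashes/second
\[
  \frac{2^{128}}{1.5\times 10^{6}\ \mathrm{s}^{-1}}
  \approx \frac{3.4\times 10^{38}}{1.5\times 10^{6}}
  \approx 2.3\times 10^{32}\ \text{seconds}
  \approx 7\times 10^{24}\ \text{years}.
\]
```

Which is why practical attacks target short, low-entropy passwords or precomputed lookup tables, not the whole 128-bit space.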

Yet, on the other hand, I can find no concrete example (the disputed LulzSec crack at the Sun excepted) of someone claiming to have made practical use of an MD5 crack.