Our lousy past

Male human head louse, Pediculus humanus capitis (Photo credit: Wikipedia)

I do not write about biology-related issues here much – my formal study of the subject ended with a B grade at ‘O’ level in 1982 – but a New Scientist article on the evolutionary history of the (various) human lice (which does not yet appear to be online) is just too fascinating to ignore.

Primate lice differ from most species of wingless insects of the order Phthiraptera in that they suck blood: most lice just live on dead skin and similar detritus. Nor are all primates infested – orangutans and gibbons do not suffer. But the human head louse shares a common ancestor with the louse of the chimpanzee – just as we and chimpanzees share common ancestors.

But it turns out there is more than one species of human head louse, and it is likely that the rarer forms – found in two groups, the first in the Americas and Asia and the second only in Nepal and Ethiopia – are descended from the lice of other (now extinct) hominids. The most common form of head louse can be dated back about 6 million years, but the less common forms appear to have established themselves on Homo sapiens only about 0.5 million years ago.

Then there are the pubic lice – commonly known as crabs – which, as the name suggests, live on pubic hair. These are not descended from head lice but, it appears, from the lice of the gorilla, and crossed to humans about 3 million years ago. This leaves open the prospect that humans had sex with gorillas or (perhaps more likely, as it still happens today) ate gorilla meat.

Head and pubic lice are a public health menace but in general pose no serious threat. Not so the clothes louse. Typhus – the disease these carry – killed millions in Europe in the 20th century (particularly in times of war) and still kills tens of thousands of people across the world every year. Yet it would appear the clothes louse is merely a mutated form of the head louse.

In experiments, head lice transferred to clothes die in massive numbers, but a few have a genetic predisposition to survive and will then reproduce prolifically. It is this overwhelming number that may make them deadly, rather than any other particular characteristic. The genetics suggest that humans began to wear clothes (as we became less hairy and gained new skills and tools) perhaps 170,000 years ago.

My scalp feels quite itchy now. So I’ll stop.

Eta Carinae: humanity’s death sentence?

Drawing of a massive star collapsing to form a black hole. Energy released as jets along the axis of rotation forms a gamma ray burst that lasts from a few milliseconds to minutes. Such an event within several thousand light years of Earth could disrupt the biosphere by wiping out half of the ozone layer, creating nitrogen dioxide and potentially cause a mass extinction. (Photo credit: Wikipedia)

Probably not, thankfully. But this supermassive star system, some 7,500 light years from Earth (i.e., very roughly 500 million times further away from us than the Sun), could really be some sort of threat.
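That distance comparison is easy to sanity-check. One light year is roughly 63,241 astronomical units (Earth–Sun distances) – a conversion factor supplied here by me, not taken from the article:

```python
# Sanity check of the distance comparison quoted above.
# 1 light year is about 63,241 astronomical units (Earth-Sun distances).
AU_PER_LIGHT_YEAR = 63_241

distance_ly = 7_500                       # distance to Eta Carinae
distance_au = distance_ly * AU_PER_LIGHT_YEAR
print(f"{distance_au:,} AU")              # about 474 million - 'very roughly 500 million'
```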

New discoveries in astronomy suggest we could find out quite soon – any day now (and for the next few thousand years) – just how dangerous it is. Indeed, for reasons I discuss below, sooner might be better than later.

At the core of Eta Carinae is a very massive star, perhaps the biggest in our galaxy, at about 30 solar masses. Stars of this size burn their basic nuclear fuel so quickly that they cannot generate enough internal pressure for very long to stave off gravity. A time comes when they start to collapse under their own weight – a process which, like a stone in free fall, accelerates. As it does, it drives the star’s core temperature ever higher, burning successively heavier elements in a fusion ‘reactor’ – indeed every element on Earth heavier than iron was generated in processes like this – before eventually triggering a supernova.

Such a supernova would see the star shed mass and emit more radiation than the rest of the galaxy combined. But even that would not stop the star’s collapse, which would continue at an ever accelerating rate and lead to an emission of the most deadly form of radiation known – a gamma ray burst – as the remnant heads towards becoming a black hole.

If such a burst hit the Earth then the consequences could be absolutely devastating – damaging our atmosphere as well as potentially exposing anyone on the side of the planet facing the burst to huge quantities of ionising radiation (how much we do not know, as this has not happened, at least on human timescales).

Gamma ray bursts are believed to be emitted along the polar axes of rotation of the collapsing star, so if Eta Carinae were to blow tonight (or rather, this night 7,500 years ago) we would almost certainly be okay, given that we do not think its poles currently point anywhere near us.

The exploding star would, though, turn night into day for perhaps a few weeks or months. But the unknown factor is how the Eta Carinae system might change over time: it is a binary system and the energy released as the star’s collapse began could upset the apple cart.

So why bring all this up now? Well, as reported in this week’s New Scientist, astronomers have confirmed that a star in a distant galaxy (67 million light years away), recently seen to go supernova, exploded just three years after a so-called “supernova imposter” event in which it shed a small but significant proportion of its mass. This is only the second time such a phenomenon has been observed by professional astronomers.

Eta Carinae was seen to flare up in just this way – not two or three years ago, though, but in the 1840s. Perhaps we are now 170 years overdue for the biggest fireworks display ever seen?

Not all astronomers agree. Some suggest Eta Carinae still has many thousands of years to go before it starts to run out of fuel and so begins its final collapse. The truth is, we just do not know.

Unreal Tournament at the forefront of AI research (really)

Human brain – midsagittal cut (Photo credit: Wikipedia)

I am not much of a computer games player, but I do have a fondness for Unreal Tournament – a networked shoot-em-up at which I have always been hopeless, if enthusiastic (though I’ve not played for a few years now).

So I was pleasantly surprised to read that Unreal is now, according to the New Scientist, at the forefront of artificial intelligence research (the article is for subscribers only at present).

Next week Unreal bots will battle human players at the IEEE Conference on Computational Intelligence and Games in Granada, Spain, and if a bot can convince human players it is real then its developers could win $7,000. In past years the bots have only won a maximum of $2,000 – the money that goes to the best bot that is not convincing as a human.

This year, though, hopes seem high that one bot – ‘Neurobot’ – has a real crack at the $7,000 prize (it came second to ICE-CIG amongst the bots last year, but Neurobot’s developers, from Imperial College in London, are hoping that the improvements they have made put it in pole position).

The interesting thing about Neurobot is the algorithm/concept being used – the bot does not try to use computational power to fully absorb the scene and act on every piece of information, but instead discriminates using the principles of “global workspace theory” (GWT), which states that the human brain only pushes a small number of things into the forefront of thought – the “global workspace”. Neurobot models the brain’s GWT with about 20,000 simulated neurons, as opposed to the estimated 120 billion in the human brain.

Neurobot’s prospects for success might then suggest that the barrier to successful AI has not really been the inability of computers to match the computational power of the human brain, but the failure, thus far at least, of AI researchers to model how the brain works. In other words, we are not really as clever as we like to think – a thought which dominated much of the later work of Alan Turing, as discussed in Alan Turing: The Enigma (which I am still listening to, though I am now down to the final three hours of thirty).

Burn baby burn: human spontaneous combustion explained

Remember: Spontaneous Human Combustion is a real threat. (Confucius) (Photo credit: Sim Dawdler)

Human spontaneous combustion is sometimes classed alongside water divination – a myth that is strongly held. Yet this week’s New Scientist (currently only available to subscribers) gives what looks like (to my unqualified eye) a good explanation by Brian J. Ford, based on his recent paper in The Microscope. The abstract says:

Last November, a 42-year-old man was standing outside a record store in Sweden, apparently waiting for someone.
Suddenly fire appeared from his clothing and he burst into flames. He blazed from within and formed into a fireball as he fell to the ground. The man, who remains anonymous, narrowly escaped with his life. It was an astonishing and ghoulish episode but it wasn’t the first. There have been a number of reports of people catching fire, and most of them are almost completely destroyed in the conflagration. In the space of minutes, people have been consumed by fire, and all that remains is a heap of ash from which the legs protrude. It is a horrifying spectacle, which has been written about for centuries.

And not only is Ford convinced of the scientific validity of the idea of spontaneous human combustion – he’s also convinced it has nothing to do with the standard explanation: that heavy drinkers and alcoholics burn after they have pickled their flesh in alcohol. He soaked flesh in alcohol and showed that it would not burn. He also rejects the ‘wick’ theory – that human clothing acts like a candle wick for liquefied human fat.

Instead, his explanation is that acetone – a highly flammable chemical produced in ketosis, when the body’s cells are starved of food (excessive dieting, alcoholism, diabetes, overdoing it in the gym or teething can all cause this) – is the cause of spontaneous combustion. When he burnt pork flesh marinated in acetone – made up to model clothed humans – it burned with the characteristic pattern of human spontaneous combustion: ‘a pile of smoking cinders with protruding limbs’.

People with ketosis may already be seriously ill, and the risk of spontaneous combustion is low: Ford estimates about 120 cases have been recorded in all human history. But if you want to lower the risk then stop smoking (yet another reason to do that!) and avoid wearing synthetic fibres on dry days.

Even if P=NP we might see no benefit

A system of linear inequalities defines a polytope as a feasible region.
The simplex algorithm begins at a starting vertex and moves along the edges of the polytope until it reaches the vertex of the optimum solution. (Photo credit: Wikipedia)

Inspired by an article in the New Scientist, I am returning to a favourite subject – whether P = NP and what the implications would be in the (unlikely) case that this were so.

Here’s a crude but quick explanation of P and NP. P problems are those that can be solved in a known time based on a polynomial (hence P) of the problem’s complexity – i.e., we know in advance how to solve the problem. NP (N standing for non-deterministic) problems are those for which we can quickly (i.e., in polynomial time) verify that a solution is correct but for which we don’t have an algorithmic solution to hand – in other words we have to try all the possible solutions in the hope of hitting the right one.

Reversing one-way functions (used to encrypt internet commerce) is an NP problem – hence it is thought/hoped that internet commerce is secure. On the other hand, drawing up a school timetable is also an NP problem, so solving that would be a bonus.

There is a set of problems, known as NP-complete, any one of which, if shown to be in reality a P problem, would mean that P = NP – in other words there would be no NP problems as such (we are ignoring NP-hard problems here). If it were shown that we lived in a world where P = NP then we would inhabit ‘Algorithmica’ – a land where computers could solve complex problems with, it is said, relative ease.

But what if we actually had polynomial-time solutions to such problems that were too complex to be of much use? The New Scientist article – which examines the theoretical problems faced by users of the ‘simplex algorithm’ – points to just such a case.
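The verify-quickly/solve-slowly asymmetry can be illustrated with subset sum, a classic NP-complete problem (a toy sketch of my own, not something from the article):

```python
from itertools import combinations

def verify(candidate, numbers, target):
    """Checking a proposed answer is fast: just add it up (polynomial time)."""
    return all(x in numbers for x in candidate) and sum(candidate) == target

def solve_by_search(numbers, target):
    """Finding an answer the obvious way means trying every subset -
    exponentially many in len(numbers)."""
    for r in range(len(numbers) + 1):
        for subset in combinations(numbers, r):
            if sum(subset) == target:
                return subset
    return None

nums = [3, 34, 4, 12, 5, 2]
solution = solve_by_search(nums, 9)
print(solution)                    # a subset summing to 9
print(verify(solution, nums, 9))   # True - and checking took no searching
```

Add one more number to `nums` and the search space doubles; verification barely notices.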
The simplex algorithm aims to optimise a multiple-variable problem using linear programming – as in an example they suggest: how do you get bananas from five distribution centres, each with varying levels of supply, to 200 shops with varying levels of demand – a 1,000-dimensional problem. The simplex algorithm involves seeking the optimal vertex in the geometrical representation of this problem.

This was thought to be rendered a problem in P via the ‘Hirsch conjecture‘ – that the maximum number of edges we must traverse to get between any two corners of a polyhedron is never greater than the number of faces of the polyhedron minus the number of dimensions of the problem. While this is true in the three-dimensional world, a paper presented in 2010 and published last month in the Annals of Mathematics – A counterexample to the Hirsch Conjecture by Francisco Santos – has knocked down its universal applicability. Santos found a 43-dimensional shape with 86 faces. If the Hirsch conjecture were valid then the maximum distance between two corners would be 43 steps, but he found a pair at least 44 steps apart.

That leaves another limit – devised by Gil Kalai of the Hebrew University of Jerusalem and Daniel Kleitman of MIT – but this, says the New Scientist, is “too big, in fact, to guarantee a reasonable running time for the simplex method“. Their two-page paper can be read here. They suggest the diameter (maximal number of steps) is $n^{\log(d+2)}$ where $n$ is the number of faces and $d$ the number of dimensions (the Hirsch conjecture is instead $n-d$). So for Santos’s shape we would have a maximal diameter of $\approx 10488$ (this is the upper limit, rather than the actual diameter) – a much bigger figure even for a small-dimensional problem. The paper also refers to a linear programming method that would require, in this case, a maximum of $n^{4\sqrt d}\approx 10^{50}$ steps – not a practical proposition if the dimension count starts to rise.
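A quick order-of-magnitude check of those figures, using only the $n$ and $d$ from Santos’s counterexample:

```python
import math

# Figures quoted above for Santos's counterexample: n faces, d dimensions
n, d = 86, 43

# The Hirsch bound, had it held: n - d steps between any two corners
hirsch_bound = n - d  # 43

# log10 of the n**(4*sqrt(d)) step count quoted for the alternative
# linear-programming method
log10_steps = 4 * math.sqrt(d) * math.log10(n)

print(hirsch_bound, round(log10_steps, 1))  # 43 versus about 10**50.7 steps
```

So the gap between the conjectured bound and the guaranteed one really is tens of orders of magnitude, as the text says.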
(NB I am not suggesting these are the real limits for Santos’s shape; I am merely using the figures as an illustration of the many orders of magnitude of difference they suggest might apply.)

I think these figures suggest that proving P = NP might not be enough, even if it were possible. We might have algorithms in P, but the time required would be such that quicker, if somewhat less accurate, approximations (as often used today) would still be preferred.

Caveat: some/much of the above is outside my maths comfort zone, so if you spot an error shout it out.

More than a game: the Game of Life

Diagram from the Game of Life (Photo credit: Wikipedia)

Conway’s Game of Life has long fascinated me. Thirty years ago I wrote some Z80 machine code to run it on a Sinclair ZX80, and when I wrote BINSIC, my reimplementation of Sinclair ZX81 BASIC, Life was the obvious choice for a demonstration piece of BASIC (and I had to rewrite it from scratch when I discovered that the version in Basic Computer Games was banjaxed).

But Life is much more than a game – it continues to be the foundation of ongoing research into computability and geometry, as the linked article in the New Scientist reports.

For me, though, it’s just fun. When I wrote my first version of it back in 1981 I merely used the rubric in Basic Computer Games – there was no description of gliders or any of the other fascinating patterns that the game throws up – so in a sense I “discovered” them independently, with all the excitement that implies: it is certainly possible to spend hours typing in patterns to see what results they produce and to keep coming back for more.

• “Life.bas” should run on any system that will support the Java SDK – for instance it will run on a Raspberry Pi – follow the instructions on the BINSIC page.
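The rules themselves take only a few lines to state. A minimal sketch – in Python here, rather than the BASIC of Life.bas – showing the glider mentioned above reproducing itself one cell diagonally every four generations:

```python
from collections import Counter

def step(live):
    """One generation of Conway's Life; `live` is a set of (x, y) cells."""
    # Count live neighbours of every cell adjacent to a live cell
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly 3 neighbours; survival on 2 or 3
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# A glider - one of the patterns mentioned above
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)
# After four generations the glider reappears shifted one cell diagonally
print(state == {(x + 1, y + 1) for (x, y) in glider})  # True
```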
A more up to date version may be available in the Github repository at any given time (for instance, at the time of writing, the version in Git supports graphics plotting while the version in the JAR file on the server only supports text plotting). On the other hand, at any given time the version in Git may not work at all: them’s the breaks. If you need assistance then just comment here or email me: adrianmcmenamin at gmail.

Smell you later (but I won’t vote for you)

Garbage (Photo credit: lowellbellew)

There is a fascinating article in New Scientist this week on the science of disgust. Candidates in America have already been using paper impregnated with the smell of rotting garbage and covered in pictures of their opponents. Coming to Britain soon?

FatFonts – coming to an infographic near you?

Infographics are familiar to most heavy users of the internet, and in my professional life I have recommended clients make more use of them to convey complex arguments and statistics to the wider world.

Now, reports New Scientist (the article seems currently only to be available to subscribers), infographics could be given extra impact through “FatFonts” – a numeric font developed at the University of St Andrews in Fife and the University of Calgary. In this the weight (inked area) of the font is proportional to the size of the number – hence 2 has twice as much inked area as 1 (within the same overall area for the numbers – see the photograph of Sicily and Mount Etna being mapped with FatFonts).

The advantage, say the developers, is that it allows both broad information – Mount Etna looks darker because it is higher – and precise data (here to a precision of 1 – 99) to be combined (overall the system works for the range 1 – 999). So it could be ideal both for those scanning a graphic out of casual interest and for those looking for reusable and accurate data. FatFonts are now to be tested with users and compared to alternative methods such as heat maps.
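The ink-proportionality idea is simple enough to mock up. A toy ASCII sketch of my own (nothing like the real FatFonts glyph designs) in which each digit fills that many sub-cells of a 3×3 grid:

```python
# Toy version of the FatFonts idea: each digit's "glyph" is a 3x3 grid in
# which the number of inked sub-cells equals the digit, so inked area is
# directly proportional to value (2 carries twice the ink of 1).

def glyph(digit):
    """Render a digit 0-9 as three rows of a 3x3 grid, filled row by row."""
    cells = ["#" if i < digit else "." for i in range(9)]
    return ["".join(cells[0:3]), "".join(cells[3:6]), "".join(cells[6:9])]

for d in (1, 2, 4, 8):
    ink = sum(row.count("#") for row in glyph(d))
    print(d, ink, glyph(d))  # ink count always equals the digit
```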
Not one for @johnrentoul – but could 1 in 6 of us be dead this time next year?

It sounds like an outrageous idea, fit only for conspiracy theorists and the otherwise unhinged, but – at the risk of attracting the interest of John Rentoul’s “Questions To Which The Answer is No” (QTWTAIN) list, or of sounding like I want a job on the Daily Express – the serious point is this: it is not impossible, if not yet likely, that as many as one in six of the planet’s human population could be felled inside a year by a (mutated) H5N1 flu virus.

I have just read a really very scary article by Debora MacKenzie to this effect in the current edition of the New Scientist:

Two research teams have found that a handful of mutations allow H5N1 to spread like ordinary flu while staying just as deadly, at least in ferrets. Given that ordinary flu can infect a third of humanity in a season and that half the people who catch H5N1 die, the implications are not hard to fathom.

It sounds like the gravest of public health emergencies to me, and indeed the point of the article is not to scare people but to insist that governments and scientists get their act together in tackling the emergency.

So far the way in which this recent work has reached limited public consciousness has been over the issue of censorship of the experimental results – the US authorities in particular are worried that describing the genetic modifications required to allow H5N1 to spread widely (currently it is widespread in birds but very rarely transmitted to humans, and will not transmit from human to human at all) would let terrorists build a bioweapon. It is not difficult to understand the fear, given the basic maths.
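The “basic maths” is just the product of the two fractions in that quotation:

```python
# One in six: a third of humanity infected in a season, half of those dying
attack_rate = 1 / 3    # fraction ordinary flu can infect in a season
case_fatality = 1 / 2  # reported fatality rate of human H5N1 cases
risk = attack_rate * case_fatality
print(f"{risk:.3f} of the population - about 1 in {round(1 / risk)}")
```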
“Big data” suggests online poker “relatively benign”

This week’s New Scientist reports (currently only available to subscribers) that the “big data” revolution has now encompassed online poker – with data collected on four million online players between September 2009 and March 2010.

I have never played online poker – I do like the game, but I am hopeless at it and losing money brings me no thrill! All the same, the results are fascinating to me, and I think they are also important in public policy terms. The UK relatively recently liberalised its laws on gambling – but in the face of much controversy and moral panic the legislation was never implemented in full. Horror stories of gambling addiction abounded.

But is addiction a big problem? Not for most players, it seems. Kahlil Philander (what a great name!) of the University of Nevada, Las Vegas (not the person who collected the data – that was Ingo Fiedler of the University of Hamburg) says “online poker is a relatively benign activity for 95 to 99 per cent of users”. The other 1 – 5% are a mixture of “pathological gamblers” and professionals. Are there other policy areas where we would let the issues facing perhaps less than 1% of the population block what is benign for the others? I am not convinced.

The US, despite draconian laws on online gambling and what look like attempts to enforce the law extra-territorially, provided 23.7% of all players; next came Germany (where it is also supposedly illegal but there is no real enforcement) with 9.6%, followed by (fully legal) gamblers in France (7.4%), Russia (6.7%), Canada (5.7%) and the UK (4.5%). Half of online players played for less than an hour a month, while 6% played for more than 100 hours. And about 94% of players pay poker sites less than $500 in a six-month period (in fact about a quarter of players pay the sites less than a dollar in six months, and more than half pay around $2.40 a month or less).

So, the games are not a threat to most people – but are they a realistic way of making money? The answer very clearly is no, and what money professionals make is down to very hard work, according to a sidebar on the main article.

Essentially it is much easier to lose money in poker than it is to win it, and for most of us luck (or rather random processes), not skill, will dominate our rate of loss or return. Typical (losing) players have to play 1,560 hands (more, I am sure, than I will manage in a lifetime) before skill predominates over luck; for professionals the number rockets to 35,450.
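Hand counts like those come from the statistics of small edges: expected winnings grow linearly with hands played, while luck (one standard deviation of the random swings) grows only with the square root. A rough sketch of that calculation, with illustrative numbers of my own rather than figures from the study:

```python
import math

def hands_until_skill_dominates(win_rate, stdev):
    """Hands needed before the expected skill edge (win_rate * n) exceeds one
    standard deviation of luck (stdev * sqrt(n)), i.e. n > (stdev/win_rate)**2.
    Both arguments are in the same units (say, big blinds per hand).
    The example figures below are hypothetical, not from the poker study."""
    return math.ceil((stdev / win_rate) ** 2)

# e.g. a small edge of 0.05 bb/hand against swings of 2 bb/hand per hand
print(hands_until_skill_dominates(0.05, 2.0))  # 1600 hands
```

Halve the edge and the required number of hands quadruples – which is one way to see why the figure for professionals, playing opponents nearly as good as themselves, is so much larger.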