# Online translation: a new way to learn a language fast?

This week’s New Scientist reports (online link below – it’s a short piece in the physical edition on p. 19) that Duolingo – a free online service designed to help people learn a new language by translating web content – is working very well.

To probe the site’s effectiveness, Roumen Vesselinov at the City University of New York used standard tests of language ability… he found that students needed an average of 34 hours to learn the equivalent of … the first semester of a university Spanish course.

I have just been over to Duolingo’s site myself – refreshing some French – and it is certainly easy to use. The site’s blog shows that this project has some strong values and has set itself some big targets – it looks well worth exploring.

# Hiding in plain silence via Skype

Skype 1.0 running on an Android 2.2 device (Photo credit: Wikipedia)

This week’s New Scientist reports that Polish computer scientist Wojciech Mazurczyk and his colleagues have found a way to use the silences in Skype calls to hide encrypted data.

Silence in a Skype call is carried in 70-bit packets, rather than the 130-bit packets that carry speech. SkypeHide allows users to inject encrypted data into those 70 bits.

An eavesdropper listening to the call would therefore hear only silence.
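The mechanics of this kind of packet steganography can be sketched in a few lines. The sketch below is my own toy illustration of the general idea – the actual SkypeHide encoding has not been published – with packets modelled as (kind, payload-bits) pairs:

```python
from itertools import islice

SILENCE_CAPACITY = 70   # bits in a silence packet, per the article
SPEECH_BITS = 130       # bits in a speech packet (left untouched)

def to_bits(data):
    """Turn bytes into a stream of bits, most significant first."""
    for byte in data:
        for i in range(7, -1, -1):
            yield (byte >> i) & 1

def embed(packets, secret):
    """Overwrite the payloads of silence packets with secret bits."""
    bits = to_bits(secret)
    stego = []
    for kind, payload in packets:
        if kind == "silence":
            chunk = list(islice(bits, SILENCE_CAPACITY))
            payload = chunk + payload[len(chunk):]
        stego.append((kind, payload))
    return stego

def extract(stego, n_bytes):
    """Collect bits from silence packets and reassemble the bytes."""
    bits = [b for kind, payload in stego if kind == "silence"
            for b in payload[:SILENCE_CAPACITY]]
    out = bytearray()
    for i in range(0, n_bytes * 8, 8):
        byte = 0
        for b in bits[i:i + 8]:
            byte = (byte << 1) | b
        out.append(byte)
    return bytes(out)

# Demo: a 'call' with two silence packets can carry 140 hidden bits.
call = [("speech", [0] * SPEECH_BITS), ("silence", [0] * SILENCE_CAPACITY),
        ("speech", [0] * SPEECH_BITS), ("silence", [0] * SILENCE_CAPACITY)]
stego = embed(call, b"hi!")
print(extract(stego, 3))  # b'hi!'
```

The point of the trick is that the packet sizes – and hence what an eavesdropper hears – are unchanged; only the contents of the "silent" payloads differ.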

Of course that wouldn’t stop somebody delving into the packets and rooting out the encrypted data – whether they could decrypt it is another matter.

In the end Skype probably cannot be trusted for secure communications because its algorithms are proprietary – we simply do not know in detail how it works and whether anybody is cracking it.

Having worked with opposition politicians who use Skype to evade state intrusion, this lack-of-trust-by-design has always bothered me: but it is hard to explain one-way functions to most people anyway.

SkypeHide is due to be publicly demonstrated in June at a steganography conference in Montpellier.

# Pykrete revisited

pykrete meets hammer (Photo credit: Genista)

The current issue of New Scientist has a short but interesting piece about pykrete – the material, made of ice and sawdust, once proposed as the basis for aircraft carrier production during the Battle of the Atlantic – a conflict at its very peak 70 years ago.

In essence: while Britain, America and the Soviet Union between them could, by the end of 1942, deploy forces superior to the Nazis’ and deliver hammer blows – such as that seen at Stalingrad and, in a smaller but still strategically vital way, in the Western Desert – Britain was in severe danger of running out of food and fuel because of losses to the U-boats in the Atlantic.

The battle was fought in science and engineering as much as in bullets, bombs and torpedoes. Radar (or RDF, as the British called it) and sonar (ASDIC was the British name) were not invented during the conflict, but they were improved and perfected as a direct result (the cavity magnetron – now found in almost every western home, inside the microwave oven – was an essential innovation, invented in 1940 and deployed to devastating effect for centimetric radar in US and British planes during the battle). And, of course, the greatest secret of all – the British/Polish cracking of the Enigma machine – was also central (the British got back “in” to the German navy Enigma in December 1942).

Pykrete was part of this scientific battle – based on the ideas of Geoffrey Pyke, the archetypal dotty scientist (and, according to Wikipedia, first cousin of Magnus Pyke, so amiable eccentricity was plainly a family characteristic). I first read of pykrete in Giles Foden’s Turbulence – and to be honest the New Scientist article doesn’t take me much beyond the novel, except to confirm some of the more bizarre episodes in the book (such as Mountbatten’s HQ being in cellars underneath Smithfield meat market) and the rather odd vignette of Canadian archivists claiming to know nothing of detailed plans they bandied about 20 years ago (does someone fear al-Qaida or the North Koreans are building a pykrete boat?).

The New Scientist piece does suggest, though, that some of the wilder hopes for pykrete were misconceived, but in truth we still don’t know whether it would have been viable. By late 1942 the crack in Enigma, combined with longer-range aircraft, faster cargo ships, centimetric radar (which allowed much finer resolution and so made it easier to pick out U-boats on the surface) and Leigh Lights, meant that the balance of forces was shifting dramatically against the Kriegsmarine, and the question of whether pykrete could have worked was rendered moot.

• Anyone interested in the role of science in the Second World War would be well advised to see if they can pick up a copy of Brian Johnson’s The Secret War: now 35 years old – and an accompaniment to the BBC series of the same name (which for the first time revealed the truth of “Station X” and the Enigma crack) – it is a tale of genius and derring-do, and the good guys win in the end.

# Some questions about the science of magic chocolate

(Photo credit: Wikipedia)

I have to be careful here, as it’s not unknown for bloggers to be sued in the English courts for the things they write about science. So I will begin by saying I am not, and have no intention of, casting aspersions on the integrity of any of the authors of the paper I am about to discuss. Indeed, my main aim is to ask a few questions.

The paper is “Effects of Intentionally Enhanced Chocolate on Mood”, published in 2007 in issue 5 of volume 3 of Explore: The Journal of Science and Healing, by Dean Radin and Gail Hayssen, both of the Institute of Noetic Sciences in California, and James Walsh of Hawaiian Vintage Chocolate.

The reason it came to my attention today is because it was mentioned in the “Feedback” diary column of the current issue of the New Scientist:

the authors insist that in “future efforts to replicate this finding… persons holding explicitly negative expectations should not be allowed to participate for the same reason that dirty test tubes are not allowed in biology experiments”. [Correspondent] asks whether this may be “the most comprehensive pre-emptive strike ever” against any attempt to replicate the results.

But I want to ask a few questions about the findings of the report which are, in summary, that casting a spell over chocolate makes it a more effective mood improver.

In their introduction to the paper the authors state:

Cumulatively, the empirical evidence supports the plausibility that MMI [mind-matter interaction] phenomena do exist.

Unfortunately, the source quoted for this is a book – Entangled Minds – so I cannot check whether it is based on peer-reviewed science. But you can read this review (as well as those on Amazon) and make your own mind up.

Again, not doubting their sincerity, I do have to question their understanding of physics when they state:

Similarities between ancient beliefs about contact magic and the modern phenomenon of quantum entanglement raise the possibility that, like other ethnohistorical medical therapies once dismissed as superstition – eg, the use of leeches and maggots in medicine – some practices such as blessing food may reflect more than magical thinking or an expression of gratitude.

The study measured the mood of the eaters of chocolate over a week. Three groups ate chocolate “blessed” in various ways and one ate unblessed chocolate.

The first thing that is not clear (at least to me) is the size of each group. The experiment is described as having been designed for 60 participants, but the paper then states that 75 signed informed consents before reporting that 62 “completed all phases of the study”. Does that mean that 13 dropped out during it? As readers of Bad Pharma will know, it is an error simply to ignore drop-outs (if there were any – as I say, it is not clear).

The researchers base their conclusion that -

This experiment supports the ethnohistorical lore suggesting that the act of blessing food, with good intentions, may go beyond mere superstitious ritual – it may also have measurable consequences

- substantially on the changes in mood on one day – day 5 of the 7.

The researchers say that the p-value for their finding on that day is 0.0001 – ie, if chance alone were at work, a difference this large would be expected in roughly 1 in 10,000 such experiments.

I have to say I am just not convinced – not by their statistics, which I am sure are sound, but by the argument. Too small a sample, too short a period, too many variables being measured (ie, days and different groups), a lack of clarity about participation, and so on. But I would really appreciate it if someone with a stronger background in statistics than mine had a look.
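One way to see the "too many variables" problem is to simulate it. The sketch below is my own toy, nothing to do with the paper's actual data: it assumes 21 independent comparisons (roughly three group-versus-control contrasts over seven days – my assumption, for illustration) and asks how often at least one comes up "significant" at p < 0.05 when chance alone is at work. Under the null hypothesis a p-value is just a uniform random draw:

```python
import random

random.seed(42)

N_TESTS = 21       # e.g. 3 group-vs-control contrasts x 7 days (an assumption)
ALPHA = 0.05
TRIALS = 100_000

# Under the null hypothesis, each test's p-value is uniform on [0, 1],
# so a "significant" result is simply a draw below ALPHA.
hits = sum(
    any(random.random() < ALPHA for _ in range(N_TESTS))
    for _ in range(TRIALS)
)
familywise_rate = hits / TRIALS

print(f"Chance of at least one 'significant' result: {familywise_rate:.2f}")
# Analytically: 1 - 0.95**21 ≈ 0.66
```

About two times in three, a completely null experiment of this shape would show *something* at the 5% level. To be fair, a day-5 p-value of 0.0001 would survive a Bonferroni correction across 21 tests, but the general worry about measuring many variables and reporting the best one stands.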

# Our lousy past

Male human head louse, Pediculus humanus capitis (Photo credit: Wikipedia)

I do not write much about biology-related issues here: my official participation ended with a B grade at ‘O’ level in 1982. But a New Scientist article on the evolutionary history of the (various) human lice (which does not yet appear to be online) is just too fascinating to ignore.

Primate lice differ from most species in the wingless insect order Phthiraptera in that they suck blood: most lice just live on dead skin and similar detritus. Nor are all primates infested – orangutans and gibbons do not suffer. But the human head louse shares a common ancestor with the chimpanzee’s louse – just as we and chimps share common ancestors.

But it turns out there is more than one species of human head louse, and it is likely that the rarer forms – found in two groups, the first in the Americas and Asia and the second only in Nepal and Ethiopia – are descended from the lice of other (now extinct) hominids. The most common form of head louse can be dated back about 6 million years, but the less common forms appear to have established themselves on Homo sapiens only about 0.5 million years ago.

Then there are the pubic lice – commonly known as crabs – which, as the name suggests, live on pubic hair. These are not descended from head lice but, it appears, from the lice of the gorilla, and crossed to humans about 3 million years ago. This leaves open the prospect that humans had sex with gorillas or (perhaps more likely, as it still happens today) ate gorilla meat.

Head and pubic lice are a public health menace but in general pose no serious threat. Not so the clothes louse. Typhus – the disease it carries – killed millions in Europe in the 20th century (particularly in times of war) and still kills tens of thousands of people across the world every year. Yet it would appear the clothes louse is merely a mutated form of the head louse.

In experiments, head lice transferred to clothes die in massive numbers, but a few have a genetic disposition to survive and then reproduce in massive numbers. It may be this overwhelming number that makes them deadly, rather than any other particular characteristic. The genetics suggest that humans began to wear clothes (as we became less hairy and gained new skills and tools) perhaps 170,000 years ago.

My scalp feels quite itchy now. So I’ll stop.

# Eta Carinae: humanity’s death sentence?

Drawing of a massive star collapsing to form a black hole. Energy released as jets along the axis of rotation forms a gamma ray burst that lasts from a few milliseconds to minutes. Such an event within several thousand light years of Earth could disrupt the biosphere by wiping out half of the ozone layer, creating nitrogen dioxide and potentially cause a mass extinction. (Photo credit: Wikipedia)

Probably not, thankfully. But this supermassive star system, some 7,500 light years from Earth (ie, very roughly 500 million times further away from us than the Sun), could really be some sort of threat.
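The "very roughly 500 million" figure is easy to check, using the standard values for a light year and the Earth–Sun distance:

```python
LIGHT_YEAR_M = 9.4607e15   # metres in one light year
AU_M = 1.496e11            # metres in one astronomical unit (Earth-Sun distance)

distance_ly = 7_500
ratio = distance_ly * LIGHT_YEAR_M / AU_M
print(f"Eta Carinae is ~{ratio:.2e} Earth-Sun distances away")
# ~4.7e8, i.e. roughly 500 million
```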

New discoveries in astronomy suggest we could find out quite soon – any day now (and for the next few thousand years) – just how dangerous it is. Indeed, for reasons I discuss below, sooner might be better than later.

At the core of Eta Carinae is a very massive star, perhaps the biggest in our galaxy, at about 30 solar masses. Stars of this size burn their basic nuclear fuel so quickly that they cannot generate enough internal pressure for very long to stave off gravity. A time comes when they start to collapse under their own weight – a process which, like a stone in free fall, accelerates. But as it does so, it also drives the star’s core temperature to ever higher values, fusing ever heavier elements (indeed, every element on Earth heavier than iron was generated in such stellar deaths) – before eventually triggering a supernova.

Such a supernova would see the star shed mass and emit more radiation than the rest of the galaxy combined. But even that would not stop the star’s collapse, which would continue at an ever-accelerating rate and lead to an emission of the most deadly form of radiation known – a gamma ray burst – as the remnant heads towards becoming a black hole.

If such a burst hit the Earth, the consequences could be absolutely devastating – damaging our atmosphere as well as potentially exposing anyone on the side of the planet facing the burst to huge quantities of ionising radiation (how much we do not know, as this has not happened, at least on human timescales).

Gamma ray bursts are believed to be emitted along the polar axes of rotation of the collapsing star, and so if Eta Carinae were to blow tonight (or rather, this night 7,500 years ago) we would almost certainly be okay, given that we do not think those axes currently point anywhere near us.

The exploding star would, though, turn night into day for perhaps a few weeks or months. But the unknown factor is how the Eta Carinae system might change over time: it is a binary system, and the energy released as the star’s collapse began could upset the apple cart.

So why bring all this up now? Well, as reported in this week’s New Scientist, astronomers have confirmed that a star in a distant galaxy (67 million light years away), recently seen to go supernova, exploded just three years after a so-called “supernova impostor” event in which it shed a small but significant proportion of its mass. This is only the second time such a phenomenon has been observed by professional astronomers.

Eta Carinae was seen to flare up not two years ago but in the 1840s. Perhaps we are now 170 years overdue for the biggest fireworks display ever seen?

Not all astronomers agree. Some suggest Eta Carinae still has many thousands of years to go before it starts to run out of fuel and begins its final collapse. The truth is, we just do not know.

# Unreal Tournament at the forefront of AI research (really)

I am not much of a computer games player, but I do have a fondness for Unreal Tournament – a networked shoot-em-up at which I have always been hopeless if enthusiastic (though I’ve not played for a few years now).

Human brain – midsagittal cut (Photo credit: Wikipedia)

So I was pleasantly surprised to read that Unreal is now, according to the New Scientist, at the forefront of artificial intelligence research (the article is subscribers-only at present).

Next week Unreal bots will battle human players at the IEEE Conference on Computational Intelligence and Games in Granada, Spain, and if a bot can convince human players it is real then its developers could win $7,000. In past years the bots have only won a maximum of $2,000 – the money that goes to the best bot that is not convincing as a human.

This year, though, hopes seem high that one bot – ‘Neurobot’ – has a real crack at the $7,000 prize (it came second to ICE-CIG amongst the bots last year, but Neurobot’s developers, from Imperial College London, are hoping that the improvements they have made put it in pole position).

The interesting thing about Neurobot is the algorithm/concept being used – the bot doesn’t try to use computational power to fully absorb the scene and act on every piece of information, but instead discriminates using the principles of “global workspace theory” (GWT), which holds that the human brain pushes only a small number of things into the forefront of thought – the “global workspace”.

Neurobot models the brain’s global workspace with about 20,000 simulated neurons, as opposed to the estimated 120 billion in the human brain.
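As a crude illustration of the global-workspace idea – this is my own sketch, not Neurobot's actual architecture, and the percept names and salience scores are invented – many parallel detectors score what the bot perceives, and only the most salient percept is "broadcast" for the rest of the system to act on, so the bot never reasons about the whole scene at once:

```python
def global_workspace(percepts, capacity=1):
    """Select the few most salient percepts for 'broadcast';
    everything else stays outside the workspace and is ignored this tick."""
    ranked = sorted(percepts, key=lambda p: p["salience"], reverse=True)
    return ranked[:capacity]

# One game tick: parallel detectors each report a percept with a salience score.
percepts = [
    {"name": "wall ahead",   "salience": 0.2},
    {"name": "enemy firing", "salience": 0.9},
    {"name": "ammo nearby",  "salience": 0.4},
]

focus = global_workspace(percepts)
print(focus[0]["name"])  # the bot acts only on "enemy firing"
```

The appeal of this design is less raw computation, not more: the hard part is choosing what to ignore.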

Neurobot’s prospects of success might then suggest that the barrier to successful AI has not really been the inability of computers to match the computational power of the human brain, but the failure, thus far at least, of AI researchers to model how the brain works. In other words, we are not really as clever as we like to think – a thought which dominated much of the latter work of Alan Turing, as discussed in Alan Turing: The Enigma (which I am still listening to – though I am now down to the final three hours of thirty).

# Burn baby burn: human spontaneous combustion explained

Human spontaneous combustion is sometimes classed alongside water divining as a strongly held myth.

Yet this week’s New Scientist (currently only available to subscribers) gives what looks like (to my unqualified eye) a good explanation by Brian J. Ford, based on his recent paper in The Microscope.

Remember: Spontaneous Human Combustion is a real threat. (Confucius) (Photo credit: Sim Dawdler)

The abstract for that says:

Last November, a 42-year-old man was standing outside a record store in Sweden, apparently waiting for someone. Suddenly fire appeared from his clothing and he burst into flames. He blazed from within and formed into a fireball as he fell to the ground. The man, who remains anonymous, narrowly escaped with his life. It was an astonishing and ghoulish episode but it wasn’t the first. There have been a number of reports of people catching fire, and most of them are almost completely destroyed in the conflagration. In the space of minutes, people have been consumed by fire, and all that remains is a heap of ash from which the legs protrude. It is a horrifying spectacle, which has been written about for centuries.

And not only is Ford convinced of the scientific validity of the idea of spontaneous human combustion – he’s also convinced it has nothing to do with the standard explanation: that heavy drinkers and alcoholics burn after they have pickled their flesh in alcohol. He soaked flesh in alcohol and showed that it would not burn.

He also rejects the ‘wick’ theory: that human clothing acts like a candle wick for liquefied human fat.

Instead, his explanation is that acetone – a highly flammable chemical produced in ketosis, when the body’s cells are starved of food (excessive dieting, alcoholism, diabetes, overdoing it in the gym or teething can all cause this) – is the cause of spontaneous combustion.

When he burnt pork flesh marinated in acetone – made up to model clothed humans – it burned with the characteristic pattern of human spontaneous combustion: ‘a pile of smoking cinders with protruding limbs’.

People with ketosis may already be seriously ill, and the risk of spontaneous combustion is low: Ford estimates about 120 cases have been recorded in all human history. But if you want to lower the risk then stop smoking (yet another reason to do that!) and avoid wearing synthetic fibres on dry days.

# Even if P=NP we might see no benefit

A system of linear inequalities defines a polytope as a feasible region. The simplex algorithm begins at a starting vertex and moves along the edges of the polytope until it reaches the vertex of the optimum solution. (Photo credit: Wikipedia)

Inspired by an article in the New Scientist I am returning to a favourite subject – whether P = NP and what the implications would be in the (unlikely) case that this were so.

Here’s a crude but quick explanation of P and NP: P problems are those that can be solved in a time we know in advance, bounded by a polynomial (hence P) in the size of the problem – ie, we have an efficient algorithm to hand. NP (the N stands for non-deterministic) problems are those for which we can quickly (ie, in polynomial time) verify that a proposed solution is correct, but for which we have no efficient algorithm to find one – in practice we may have to try an exponential number of candidate solutions in the hope of hitting the right one. Reversing one-way functions (used to secure internet commerce) is thought to be such a problem – hence it is hoped that internet commerce is secure. On the other hand, drawing up a school timetable is also an NP problem, so solving that would be a bonus. There is a set of problems, known as NP-complete, any one of which being shown to be, in reality, a P problem would mean that P = NP – in other words there would be no NP problems as such (we are ignoring NP-hard problems here).
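The verify-fast/solve-slow asymmetry is easy to see with subset sum, a classic NP-complete problem (this is my own toy example, not from the article): given a list of numbers, is there a subset adding up to a target? Checking a proposed subset takes linear time; the only general method we know for finding one tries exponentially many candidates.

```python
from itertools import combinations

def verify(nums, target, indices):
    """Checking a certificate is cheap: O(len(indices)) additions."""
    return sum(nums[i] for i in indices) == target

def solve(nums, target):
    """No known polynomial algorithm: try all 2^n subsets."""
    for r in range(len(nums) + 1):
        for indices in combinations(range(len(nums)), r):
            if verify(nums, target, indices):
                return indices
    return None

nums = [3, 9, 8, 4, 5, 7]
certificate = solve(nums, 15)   # brute force: up to 2**6 candidates here
print(certificate, verify(nums, 15, certificate))
```

With six numbers the brute force is instant; with sixty it is hopeless – while verifying a claimed answer stays trivial either way. That gap is the whole P-versus-NP question in miniature.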

If it was shown we lived in a world where P=NP then we would inhabit ‘algorithmica’ – a land where computers could solve complex problems with, it is said, relative ease.

But what if we actually had polynomial-time solutions to these problems, but they were too complex to be of much use? The New Scientist article – which examines the theoretical problems faced by users of the ‘simplex algorithm’ – points to just such a case.

The simplex algorithm aims to optimise a multiple-variable problem using linear programming – as in an example they suggest: how do you get bananas from five distribution centres, each with varying levels of supply, to 200 shops with varying levels of demand? That is a 1,000-dimensional problem.

The simplex algorithm involves seeking the optimal vertex in the geometrical representation of this problem. This was thought to be rendered a problem in P via the ‘Hirsch conjecture‘ – that the maximum number of edges we must traverse to get between any two corners of a polyhedron is never greater than the number of faces of the polyhedron minus the number of dimensions of the problem.

While this is true in the three-dimensional world, a paper presented in 2010 and published last month in the Annals of Mathematics – “A counterexample to the Hirsch Conjecture”, by Francisco Santos – has knocked down its universal applicability. Santos found a 43-dimensional shape with 86 faces. If the Hirsch conjecture were valid, the maximum distance between two corners would be 86 − 43 = 43 steps, but he found a pair at least 44 steps apart.

That leaves another limit – devised by Gil Kalai of the Hebrew University of Jerusalem and Daniel Kleitman of MIT – but this, says the New Scientist, is “too big, in fact, to guarantee a reasonable running time for the simplex method”. Their two-page paper can be read here. They suggest the diameter (the maximal number of steps) is at most $n^{\log(d+2)}$, where $n$ is the number of faces and $d$ the number of dimensions. (The Hirsch conjecture’s bound is instead $n-d$.)

So for Santos’s shape we would have a maximal diameter of $\approx 10488$ (this is the upper limit, rather than the actual diameter) – a much bigger figure, even for a problem of modest dimension. The paper also refers to a linear programming method that would require, in this case, a maximum of $n^{4\sqrt d}\approx 10^{50}$ steps – not a practical proposition if the dimension count starts to rise. (NB: I am not suggesting these are the real limits for Santos’s shape; I am merely using the figures as an illustration of the many orders of magnitude of difference they suggest might apply.)
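Taking the formulas at face value, the gulf between these bounds is easy to tabulate. A sanity check only: the article does not say which base the logarithm in the Kalai–Kleitman bound uses (base 10 is my assumption below; base 2 gives a much larger figure), so treat the middle number as illustrative rather than the real diameter:

```python
import math

n, d = 86, 43                   # faces and dimensions of Santos's shape

hirsch = n - d                  # the disproved Hirsch bound: 43 steps
kk = n ** math.log10(d + 2)     # Kalai-Kleitman-style bound (base-10 log assumed)
lp = n ** (4 * math.sqrt(d))    # the other method's worst case: ~10^50 steps

print(f"Hirsch bound:            {hirsch}")
print(f"Kalai-Kleitman-style:    ~10^{math.log10(kk):.1f}")
print(f"n^(4*sqrt(d)):           ~10^{math.log10(lp):.1f}")
```

Even on the most generous reading, the worst-case guarantees blow up many orders of magnitude faster than the Hirsch-style linear bound, which is the article's point: a polynomial guarantee that large is no guarantee at all in practice.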

I think these figures suggest that proving P = NP might not be enough, even if it were possible. We might have algorithms in P, but the time required would be such that quicker, if somewhat less accurate, approximations (as often used today) would still be preferred.

Caveat: Some/much of the above is outside my maths comfort zone, so if you spot an error shout it out.

# More than a game: the Game of Life

Diagram from the Game of Life (Photo credit: Wikipedia)

Conway’s Game of Life has long fascinated me. Thirty years ago I wrote some Z80 machine code to run it on a Sinclair ZX80, and when I wrote BINSIC, my reimplementation of Sinclair ZX81 BASIC, Life was the obvious choice for a demonstration piece of BASIC (I had to rewrite it from scratch when I discovered that the version in Basic Computer Games was banjaxed).

But Life is much more than a game – it continues to be the foundation of ongoing research into computability and geometry, as the linked article in the New Scientist reports.

For me, it’s just fun though. When I wrote my first version of it back in 1981 I merely used the rubric in Basic Computer Games – there was no description of gliders or any of the other fascinating patterns the game throws up – so in a sense I “discovered” them independently, with all the excitement that implies: it is certainly possible to spend hours typing in patterns to see what they produce, and to keep coming back for more.
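For anyone who wants to "discover" gliders programmatically rather than in BASIC, the whole game fits in a few lines of Python – a set-based sketch on an unbounded grid, using the standard rules (a dead cell with exactly three live neighbours is born; a live cell with two or three survives):

```python
from collections import Counter

def step(cells):
    """One generation: count each position's live neighbours, then apply
    the birth-on-3, survive-on-2-or-3 rules."""
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for x, y in cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {cell for cell, n in neighbour_counts.items()
            if n == 3 or (n == 2 and cell in cells)}

# The glider: after four generations it reappears one cell down and right.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
pattern = glider
for _ in range(4):
    pattern = step(pattern)
print(pattern == {(x + 1, y + 1) for x, y in glider})  # True
```

The set representation means the grid is effectively infinite – only live cells and their neighbours are ever examined – so gliders can wander as far as they like.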

• “Life.bas” should run on any system that supports the Java SDK – for instance, it will run on a Raspberry Pi – follow the instructions on the BINSIC page. A more up-to-date version may be available in the Github repository at any given time (for instance, at the time of writing, the version in Git supports graphics plotting, while the version in the JAR file on the server only supports text plotting). On the other hand, at any given time the version in Git may not work at all: them’s the breaks. If you need assistance then just comment here or email me adrianmcmenamin at gmail.