Tag Archives: New Scientist

Patenting reality


(I was about to post something about this when I noticed the Stephen Fry nomination of Turing’s Universal Machine as a great British “innovation” and decided to write about that first … but the two dovetail as I hope you can see.)

Patent (Photo credit: brunosan)

I was alerted to this by an article in the latest edition of the New Scientist (subscription link) on whether scientific discoveries should be patentable. The New Scientist piece by Stephen Ornes argues strongly and persuasively that the maths at the heart of software should be protected from patents. But having now read the original article Ornes is replying to, I think he has missed the full and horrific scale of what is being proposed by David Edwards, a retired associate professor of maths at the University of Georgia at Athens.

Of course I am not suggesting that Edwards himself is evil, but his proposal certainly is: he writes, in the current issue of the Notices of the American Mathematical Society (“Platonism is the Law of the Land”), that not just mathematical discoveries but all scientific discoveries should be patentable: indeed he explicitly cites general relativity as an idea that could have been covered by a patent.

Edwards is direct in stating his aim:

Up until recently, the economic consequences of these restrictions in intellectual property rights have probably been quite slight. Similarly, the economic consequences of allowing patents for new inventions were also probably quite slight up to about 1800. Until then, patents were mainly import franchises. After 1800 the economic consequences of allowing patents for new inventions became immense as our society moved from a predominately agricultural stage into a predominately industrial stage. Since the end of World War II, our society has been moving into an information stage, and it is becoming more and more important to have property rights appropriate to this stage. We believe that this would best be accomplished by Congress amending the patent laws to allow anything not previously known to man to be patented.

Part of me almost wants this idea to be enacted, because, like the failure of the prohibition of alcohol, it would teach an unforgettable lesson. But as someone who cares about science and the good that science could do for humanity, I find it deeply chilling.

For instance, it is generally accepted that there is some flaw in our theories of gravity (general relativity) and quantum mechanics, in that they do not sit happily beside one another. Making them work together is a great task for physicists. And if we do it – if we find some new theory that links these two children of the 20th century – perhaps it will be as technologically important as it is scientifically significant (after all, quantum mechanics gave us the transistor and general relativity the Global Positioning System). But if that theory were locked inside some sort of corporate prison for twenty or twenty-five years, the technological breakthroughs could be delayed just as long.

Another reason why exercise keeps you younger?


Reading through a copy of the New Scientist from a few weeks back (2 February edition), I was struck by a comment in an article on the effects of sleep on the human body, from Nancy Wesensten, a psychologist at the Walter Reed Army Institute of Research in Maryland:

Sleeping deteriorates like everything else does as you age… People have more difficulty falling asleep, and that could account for the cognitive decline we see in normal ageing.

Until I started a vigorous exercise regime about 16 months ago, I really did find it difficult to fall asleep. Since then, while I don’t have my partner’s ability to more or less doze off as soon as my head hits the pillow, I generally no longer have a problem.

I have often seen claims made for exercise as a means of maintaining mental acuity – perhaps there is some substance to those claims and this is the reason?

Why I would not want to fly in a Dreamliner (yet)


A Faraday cage in operation: the woman inside is protected from the electric arc by the cage. Photograph taken at the Palais de la Découverte (Discovery Palace). (Photo credit: Wikipedia)

The world’s Dreamliners are currently grounded while regulators and the manufacturer try to sort out problems with the plane’s batteries – which supply a heavy-duty electrical system that replaces the more traditional (and heavier) hydraulic controls found in other planes.

I imagine, and hope, that the battery problems can be sorted out – though the lithium-ion system chosen is notorious for overheating and fire risk – or “unexpected rapid oxidisation”, as an earlier (non-aviation) lithium-ion battery fire problem was described.

But what worries me about the planes is a different issue: their outer shell is made of plastic – again considerably lighter than traditional aircraft materials, but lacking the qualities of a Faraday cage.

The Faraday cage effect is what makes traditional airliners (and motor cars) safe from lightning strikes. Lightning represents a terrific concentration of energy but, actually, relatively little charge – so when lightning strikes a conducting sheet of metal, like a car or an airliner, the current flows around the outside of the shell and the strike is rendered harmless to whoever is inside (in contrast, poor conductors like human flesh burn up, which is what makes us so vulnerable).

Now, the Dreamliner has a metal substructure which is designed to replicate the effect of a Faraday Cage but, having read a critical piece on this in the current edition of the New Scientist, I am not convinced it has been tested enough to be reliable. Anyone who has flown through the heart of an electrical storm – as I did a few years ago coming out of Tbilisi – will understand just how essential it is that the Dreamliner’s electrical properties are fully reliable.

Update: I am a hopeless speller and, as was pointed out to me, I mis-spelled ‘lightning’ throughout this the first time round. Apologies.

Online translation – a new way to learn a language fast?


Flags of Spain and Mexico

This week’s New Scientist reports (online link below – it’s a short piece in the physical edition on p. 19) that Duolingo – a free online service designed to help people learn a new language by translating web content – is working very well.

To probe the site’s effectiveness, Roumen Vesselinov at the City University of New York used standard tests of language ability… he found that students needed an average of 34 hours to learn the equivalent of … the first semester of a university Spanish course.

I have just been over to Duolingo’s site myself – refreshing some French – and it is certainly easy to use. The site’s blog shows that this project has some strong values and has set itself some big targets – it looks well worth exploring.

Hiding in plain silence via Skype


Skype 1.0 running on an Android 2.2 device
Skype 1.0 running on an Android 2.2 device (Photo credit: Wikipedia)

This week’s New Scientist reports that Polish computer scientist Wojciech Mazurczyk and his colleagues have found a way to use the silence in Skype calls to hide data.

Silence in Skype is signified by 70-bit packets, instead of the 130-bit packets that carry speech. SkypeHide allows users to inject encrypted data into those 70 bits.
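The mechanics can be illustrated with a toy sketch in Python – purely my own illustration of the principle, not the real SkypeHide code (the function names and the representation of packets as lists of bits are invented for the example):

```python
# Toy illustration of hiding data in "silence" packets: overwrite silence
# packets with payload bits, leave speech packets untouched. Packet sizes
# follow the article (70 bits silence, 130 bits speech); the rest is an
# invented simplification.

SILENCE_BITS = 70
SPEECH_BITS = 130

def embed(packets, payload):
    """Overwrite each silence packet with the next 70 bits of payload."""
    bits = iter(payload)
    stego = []
    for packet in packets:
        if len(packet) == SILENCE_BITS:
            # next(bits, 0) pads with zeros once the payload runs out
            stego.append([next(bits, 0) for _ in range(SILENCE_BITS)])
        else:
            stego.append(packet)  # speech passes through untouched
    return stego

def extract(packets):
    """Collect the bits carried by every silence-sized packet."""
    hidden = []
    for packet in packets:
        if len(packet) == SILENCE_BITS:
            hidden.extend(packet)
    return hidden
```

To an observer the call still consists of ordinary-looking speech and silence packets; only someone who knows to read the silence recovers the payload – which, as the article notes, would itself be encrypted.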

An eavesdropper listening to the call would therefore hear nothing.

Of course that wouldn’t stop somebody delving into the packets and rooting out the encrypted data – whether they could decrypt that is another matter.

In the end, Skype probably cannot be trusted for secure communications because its algorithms are proprietary – we simply do not know in detail how it works or whether anybody is cracking it.

Having worked with opposition politicians who use Skype to evade state intrusion, I have always been bothered by this lack of trust by design: but it is hard to explain one-way functions to most people anyway.

SkypeHide is due to be publicly demonstrated in June at a steganography conference in Montpellier.

Pykrete revisited


pykrete meets hammer (Photo credit: Genista)

The current issue of New Scientist has a short but interesting piece about pykrete – the material, made of ice and sawdust, once proposed as the basis for aircraft carrier production during the Battle of the Atlantic – a conflict at its very peak 70 years ago.

In essence: while Britain, America and the Soviet Union between them could, by the end of 1942, deploy forces superior to the Nazis’ and deliver hammer blows – such as that seen at Stalingrad and, in a smaller but still strategically vital way, in the Western Desert – Britain was in severe danger of running out of food and fuel because of losses to the U-boats in the Atlantic.

The battle was fought in science and engineering as much as in bullets, bombs and torpedoes. Radar (or RDF, as the British called it) and sonar (ASDIC was the British name) were not invented during the conflict, but they were improved and perfected as a direct result (the cavity magnetron – now found in almost every western home, in a microwave oven – was an essential innovation, invented in 1940 and deployed to devastating effect for centimetric radar in US and British planes during the battle). And, of course, the greatest secret of all – the British/Polish cracking of the Enigma machine – was also central (the British got back “in” to the German naval Enigma in December 1942).

Pykrete was part of this scientific battle – based on the idea of Geoffrey Pyke, the archetypal dotty scientist (and, according to Wikipedia, first cousin of Magnus Pyke, so amiable eccentricity was plainly a family characteristic). I first read of pykrete in Giles Foden’s Turbulence – and, to be honest, the New Scientist article doesn’t take me much beyond the novel, except to confirm some of the more bizarre episodes in the book (such as Mountbatten’s HQ being in cellars underneath Smithfield meat market) and the rather odd vignette of Canadian archivists claiming to know nothing of detailed plans they once bandied about 20 years ago (does someone fear Al-Qaida or the North Koreans are building a pykrete boat?).

The New Scientist piece does suggest, though, that some of the wilder hopes for pykrete were misconceived – but in truth we still don’t know whether it would have been viable. By late 1942 the crack in Enigma, combined with longer-range aircraft, faster cargo ships, centimetric radar (which allowed much finer resolution and so made it easier to pick out U-boats on the surface) and Leigh Lights, meant that the balance of forces was shifting dramatically against the Kriegsmarine, and the question of whether pykrete could have worked was rendered moot.

  • Anyone interested in the role of science in the Second World War would be well advised to see if they can pick up a copy of Brian Johnson’s Secret War: now 35 years old – and an accompaniment to the BBC series of the same name (which for the first time revealed the truth of “Station X” and the Enigma crack) – it is a tale of genius and derring-do, and the good guys win in the end.

Some questions about the science of magic chocolate



I have to be careful here, as it’s not unknown for bloggers to be sued in the English courts over things they write about science. So I will begin by saying that I am not casting, and have no intention of casting, aspersions on the integrity of any of the authors of the paper I am about to discuss. Indeed, my main aim is to ask a few questions.

The paper is “Effects of Intentionally Enhanced Chocolate on Mood”, published in 2007 in issue 5 of volume 3 of “Explore: The Journal of Science and Healing”, by Dean Radin and Gail Hayssen, both of the Institute of Noetic Sciences in California, and James Walsh of Hawaiian Vintage Chocolate.

The reason it came to my attention today is because it was mentioned in the “Feedback” diary column of the current issue of the New Scientist:

the authors insist that in “future efforts to replicate this finding… persons holding explicitly negative expectations should not be allowed to participate for the same reason that dirty test tubes are not allowed in biology experiments”. [Correspondent] asks whether this may be “the most comprehensive pre-emptive strike ever” against any attempt to replicate the results.

But I want to ask a few questions about the findings of the report, which are, in summary, that casting a spell over chocolate makes it a more effective mood improver.

In their introduction to the paper the authors state:

Cumulatively, the empirical evidence supports the plausibility that MMI [mind-matter interaction] phenomena do exist.

Unfortunately, the source quoted for this is a book – Entangled Minds – so I cannot check whether it is based on peer-reviewed science. But you can read this review (as well as those on Amazon) – and make your own mind up.

Again, not doubting their sincerity, I do have to question their understanding of physics when they state:

Similarities between ancient beliefs about contact magic and the modern phenomenon of quantum entanglement raise the possibility that, like other ethnohistorical medical therapies once dismissed as superstition – eg, the use of leeches and maggots in medicine – some practices such as blessing food may reflect more than magical thinking or an expression of gratitude.

The study measured the mood of the eaters of chocolate over a week. Three groups ate chocolate “blessed” in various ways and one ate unblessed chocolate.

The first thing that is not clear (at least to me) is the size of each group. The experiment is described as having been designed for 60 participants, but the paper then states that 75 signed informed consents, before reporting that 62 “completed all phases of the study”. Does that mean that 13 dropped out during it? As readers of Bad Pharma will know, it is an error to simply ignore drop-outs (if they are there – as I say, it is not clear).

The researchers base their conclusion that -

This experiment supports the ethnohistorical lore suggesting that the act of blessing food, with good intentions, may go beyond mere superstitious ritual – it may also have measurable consequences

- substantially on the changes in mood on one day – day 5 of the 7.

The researchers say that the p-value for their finding on that day is 0.0001 – i.e., if chance alone were at work, a result at least this extreme would be expected only about once in 10,000 times.

I have to say I am just not convinced – not by their statistics, which I am sure are sound, but by the design. Too small a sample, too short a period, too many variables being measured (i.e., days and different groups), a lack of clarity about participation, and so on. But I would really appreciate it if someone with a stronger background in statistics than mine had a look.
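The worry about measuring mood on many separate days can be made concrete with a little simulation (the group size and trial count below are my own assumptions for illustration, not figures from the paper): generate mood scores with no real effect at all, compare the groups separately on each of seven days, and count how often at least one day looks “significant”.

```python
import random
import statistics

def t_stat(a, b):
    """Welch t statistic for two independent samples."""
    se = (statistics.variance(a) / len(a) + statistics.variance(b) / len(b)) ** 0.5
    return (statistics.mean(a) - statistics.mean(b)) / se

def family_wise_error(n_days=7, n_per_group=15, trials=2000):
    """Fraction of simulated null experiments in which at least one of the
    daily comparisons crosses |t| > 2.05 (roughly p < 0.05) purely by chance."""
    hits = 0
    for _ in range(trials):
        for _day in range(n_days):
            a = [random.gauss(0, 1) for _ in range(n_per_group)]
            b = [random.gauss(0, 1) for _ in range(n_per_group)]
            if abs(t_stat(a, b)) > 2.05:
                hits += 1
                break  # one spurious "significant" day is enough
    return hits / trials

random.seed(42)
rate = family_wise_error()
print(rate)  # roughly 0.3
```

With seven looks at the data, a no-effect experiment throws up at least one nominally significant day about 30 per cent of the time – which is why multiple comparisons need correcting for (though, to be fair, a p of 0.0001 on day 5 would survive such a correction).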

Our lousy past


Male human head louse, Pediculus humanus capitis (Photo credit: Wikipedia)

I do not write about biology-related issues here much – my official participation in the subject ended with a B grade at ‘O’ level in 1982 – but a New Scientist article on the evolutionary history of the (various) human lice (which does not yet appear to be online) is just too fascinating to ignore.

Primate lice differ from most species of the order Phthiraptera (wingless insects) in that they suck blood: most lice just live on dead skin and similar detritus. Nor are all primates infested – orang-utans and gibbons do not suffer. But the human head louse shares a common ancestor with the chimpanzee’s louse – just as we and chimps share common ancestors.

But it turns out there is more than one species of human head louse, and it is likely that the rarer forms – found in two groups, the first in the Americas and Asia and the second only in Nepal and Ethiopia – are descended from the lice of other (now extinct) hominids. The most common form of head louse can be dated back to about 6 million years ago, but the less common forms appear to have established themselves on Homo sapiens only about 0.5 million years ago.

Then there are the pubic lice – commonly known as crabs – which, as the name suggests, live on pubic hair. These are descended not from head lice but, it appears, from the lice of the gorilla, and crossed to humans about 3 million years ago. This leaves open the prospect that humans had sex with gorillas or (perhaps more likely, as it still happens today) ate gorilla meat.

Head and pubic lice are a public health menace but in general pose no serious threat. Not so the clothes louse. Typhus – the disease these carry – killed millions in Europe in the 20th century (particularly in times of war) and still kills tens of thousands of people across the world every year. Yet it would appear the clothes louse is merely a mutated form of the head louse.

In experiments, head lice transferred to clothes die in massive numbers, but a few have a genetic predisposition to survive and will then reproduce in massive numbers. It may be this overwhelming number that makes them deadly, rather than any other particular characteristic. The genetics suggest that humans began to wear clothes (as we became less hairy and gained new skills and tools) perhaps 170,000 years ago.

My scalp feels quite itchy now. So I’ll stop.

Eta Carinae: humanity’s death sentence?


Drawing of a massive star collapsing to form a black hole. Energy released as jets along the axis of rotation forms a gamma ray burst that lasts from a few milliseconds to minutes. Such an event within several thousand light years of Earth could disrupt the biosphere by wiping out half of the ozone layer, creating nitrogen dioxide and potentially cause a mass extinction. (Photo credit: Wikipedia)

Probably not, thankfully. But this supermassive star system, some 7,500 light years from Earth (i.e., very roughly 500 million times further away from us than the Sun), could really be some sort of threat.
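That “roughly 500 million times” figure is easy to check with back-of-the-envelope arithmetic (constants rounded):

```python
LIGHT_YEAR_M = 9.461e15  # metres in one light year
AU_M = 1.496e11          # astronomical unit: mean Earth-Sun distance, in metres

distance_m = 7500 * LIGHT_YEAR_M  # Eta Carinae's distance in metres
ratio = distance_m / AU_M         # how many Earth-Sun distances away it is
print(f"{ratio:.2e}")             # → 4.74e+08, i.e. roughly 500 million
```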

New discoveries in astronomy suggest we could find out quite soon – any day now (and for the next few thousand years) – just how dangerous it is. Indeed, for reasons I discuss below, sooner might be better than later.

At the core of Eta Carinae is a very massive star, perhaps the biggest in our galaxy, at something like 100 solar masses. Stars of this size burn their basic nuclear fuel so quickly that they cannot generate enough internal pressure for very long to stave off gravity. A time comes when they start to collapse under their own weight – a process which, like a stone in free fall, accelerates. As it does, it also drives the star’s core temperature to ever higher values, fusing successively heavier elements up to iron (the elements on Earth heavier than iron were forged in the explosive deaths of stars like this) – before eventually triggering a supernova.

Such a supernova would see the star shed mass and emit more radiation than the rest of the galaxy combined. But even that would not stop the star’s collapse, which would continue at an ever accelerating rate and lead to an emission of the most deadly form of radiation known – a gamma ray burst – as the remnant heads towards becoming a black hole.

If such a burst hit the Earth, the consequences could be absolutely devastating – damaging our atmosphere as well as potentially exposing anyone on the side of the planet facing the burst to huge quantities of ionising radiation (how much, we do not know, as this has not happened, at least on human timescales).

Gamma ray bursts are believed to be emitted in the direction of the polar axes of rotation of the collapsing star and so if Eta Carinae were to blow tonight (or rather this night, 7,500 years ago) we would almost certainly be okay, given that we do not think those currently point anywhere near us.

The exploding star would, though, turn night into day for perhaps a few weeks or months. But the unknown factor is how the Eta Carinae system might change over time:  it is a binary system and the energy released as the star’s collapse began could upset the apple cart.

So why bring all this up now? Well, as reported in this week’s New Scientist, astronomers have confirmed that a star in a distant galaxy (67 million light years away), recently seen to go supernova, exploded just three years after a so-called “supernova imposter” event in which it shed a small but significant proportion of its mass. This is only the second time such a phenomenon has been observed by professional astronomers.

Eta Carinae was seen to flare up in just that way – not a few years ago but in the 1840s. Perhaps we are now 170 years overdue for the biggest fireworks display ever seen?

Not all astronomers agree. Some suggest Eta Carinae still has many thousands of years to go before it starts to run out of fuel and so begins its final collapse. The truth is, we just do not know.

Unreal Tournament at the forefront of AI research (really)


I am not much of a computer games player, but I do have a fondness for Unreal Tournament – a networked shoot-em-up game at which I have always been hopeless, if enthusiastic (though I’ve not played for a few years now).

Human brain – midsagittal cut (Photo credit: Wikipedia)

So I was pleasantly surprised to read in the New Scientist that Unreal is now at the forefront of artificial intelligence research (subscribers only at present).

Next week Unreal bots will battle human players at the IEEE Conference on Computational Intelligence and Games in Granada, Spain, and if a bot can convince the human players it is real, its developers could win $7,000. In past years the bots have only won a maximum of $2,000 – the money that goes to the best bot that is not convincing as a human.

This year, though, hopes seem high that one bot – ‘Neurobot’ – has a real crack at the $7,000 prize (it came second to ICE-CIG among the bots last year, but Neurobot’s developers, from Imperial College London, are hoping that the improvements they have made put it in pole position).

The interesting thing about Neurobot is the algorithm/concept being used – the bot does not try to use computational power to fully absorb the scene and act on every piece of information, but instead discriminates using the principles of “global workspace theory” (GWT), which holds that the human brain pushes only a small number of things into the forefront of thought – the “global workspace”.

Neurobot models the brain’s GWT with about 20,000 simulated neurons as opposed to the estimated 120 billion in the human brain.

Neurobot’s prospects of success might then suggest that the barrier to successful AI has not really been the inability of computers to match the computational power of the human brain, but the failure, thus far at least, of AI researchers to model how the brain works. In other words, we are not really as clever as we like to think – a thought which dominated much of the latter work of Alan Turing, as discussed in Alan Turing: The Enigma (which I am still listening to – though I am now down to the final three of its thirty hours).