Crazy ideas you have in the bath


You know how it is… you go for a run, and then, lying in the bath, you read a New Scientist article about Dark Energy, and you think of two crazy ideas which you hope some respectable scientist will at least have stuck a paper about on arXiv, so you can say "I thought of that in the bath and it might even be right…"

Except that you can find no such papers, so you are reduced to looking like the crackpot you are by posing them here:


  • Inertial mass is caused by the gravitational field of a certain amount of matter that has been trapped in collapsed dimensions. Those dimensions are always the same distance away from any given point, so inertial mass is the same anywhere in the universe.
  • Dark energy is caused by the ‘evaporation’ via Hawking radiation or similar of our universe (sadly I am not the first to have thought of this particular piece of crack-pottery, so I won’t be collecting a Nobel prize for it). Further searching reveals there is even an arXiv paper on such an idea after all.

Russia Today (@RT_com) broadcasts fiction, not news


Last week's New Scientist reports that Russia Today – the Kremlin's propaganda channel, subsidised to broadcast lies in support of the Russian Federation's hostility to any country in Russia's "near abroad" that dares to travel down the path of democracy and the rule of law – went one further when it started churning out stories claiming the Zika outbreak was the result of a failed science experiment.

The basis of their report was "the British dystopian TV series Utopia". Yes, they broadcast fiction as news, and for once it was not a question of interpretation.

Here’s the product description from Amazon:

The Utopia Experiments is a legendary graphic novel shrouded in mystery. But when a small group of previously unconnected people find themselves in possession of an original manuscript, their lives suddenly and brutally implode.

Targeted swiftly and relentlessly by a murderous organisation known as The Network, the terrified gang are left with only one option if they want to survive: they have to run. But just as they think their ordeal is over, their fragile normality comes crashing down once again.

The Network, far from being finished, are setting their destructive plans into motion. The gang now face a race against time, to prevent global annihilation.

Islamism is bad for your health – and not just in the obvious ways


Ashura demonstration in Freedom Square, Tehran, during the 1979 Iranian revolution (Photo credit: Wikipedia)

Thanks to the New Scientist I have discovered that Islamic fundamentalism can have more damaging effects than just its attack on science, reason, liberty and equality: it can also damage your health.

Evidence from Iran, where the 1979 revolution led to both men and women adopting far more conservative modes of dress, is that the incidence of multiple sclerosis also began to increase – in fact, according to this paper in the British Medical Journal, the incidence of MS increased eightfold between 1989 and 2006.

Scientists think the most likely reason is that Iranians' skin was much less exposed to the Sun, and consequently vitamin D production fell (as the New Scientist notes, "vitamin D" produced this way is technically not a vitamin at all, but that's a different story). The evidence linking low vitamin D levels to a variety of autoimmune diseases, including MS, is also growing.

It seems Chomsky was right (and what might it mean?)


Noam Chomsky (Photo credit: Duncan Rawlinson, Duncan.co)

(Before any of my “political” friends think I have obviously suffered a serious blow to the head, I am talking about his theories on grammar and not his idiotic politics…)

In the late 1950s Noam Chomsky proposed that we have a natural capacity to process grammar and thus to use language – in essence that our brain is hard-wired to use language.

It was, and is, a controversial theory (though Chomsky would not agree), but this week new evidence has been published to support it – and, as outlined in the New Scientist, you can even conduct a thought experiment on yourself to test it.

Writing in the Proceedings of the National Academy of Sciences (the US National Academy, that is), Jennifer Culbertson and David Adger consider whether language learners pick up language patterns by observing statistical regularities in existing speakers' usage or – as Chomskyan theory would suggest – apply some form of "hard-wired" rule to process grammar.

To do this they presented subjects (English speakers) with a "new" limited language based on common words in English. The subjects were then asked to judge which of two forms of a new phrase in this "language" – made by combining elements of the limited language they had already seen – would be correct. If they picked one form, they were likely using statistical inference – choosing the form that looked closest to the forms they had already seen; if they picked the other, they were likely using an internal grammar machine in their brains.

And this is where you can test yourself… (shamelessly nicked from the New Scientist, as this example does not appear to be in the article itself):

Here are two phrases in the new language:

  • shoes blue
  • shoes two

So which of the following phrases is correct in this language:

  • shoes two blue
  • shoes blue two

If, as I did, you picked "shoes blue two" and not "shoes two blue", then you are favouring a semantic hierarchy over a frequency-based approach – in English "two" usually precedes "blue", but "blue" is a stronger modifier of the noun than "two".

In fact people chose the semantic hierarchy about 75% of the time – strongly suggesting that we do have an internal grammar engine running inside our heads.

(Chomsky himself appears dismissive of the study, despite it appearing to confirm his work – "like adding a toothpick to a mountain". Tells you quite a lot about him, I think.)

What are the practical implications? I think it points to a limit to the effectiveness of things like big-data-based machine translation, if all it relies on is statistical inference. Inside a decade big data has made machine translation far more practical than the previous 50 years of AI research managed, but the quest for a way to compute grammar is still going to matter.


Why we’ll never meet aliens


First page from the manuscript explaining the general theory of relativity (Photo credit: Wikipedia)

Well, the answer is pretty plain: Einstein's theory of general relativity – which even in the last month has added to its already impressive list of predictive successes – tells us that to travel at the speed of light a massive body would require an infinite amount of propulsive energy. In other words, things are too far away, and travel is too slow, for us ever to hope to meet aliens.

But what if – and it's a very big if – we could communicate with them instantaneously? GR tells us massive bodies cannot travel at light speed – or rather, along a null world line, which is what really matters if you want to be alive when you arrive at your destination – but information has no mass as such.

Intriguingly, an article in the current edition of the New Scientist looks at ways in which quantum entanglement could be used to pass information – instantaneously – across any distance at all. Quantum entanglement is one of the stranger things we can see and measure today – Einstein dismissed it as "spooky action at a distance" – and essentially means that we can take two paired particles and, by measuring the state of one, instantaneously see the other member of the pair fall into the complementary state (e.g., if the paired particles are electrons and we measure one's quantum spin, the other is instantly seen to have the opposite spin – no matter how far away it is at the time).
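As a caricature of that anti-correlation, here's a few lines of Python. To be clear, this is a classical stand-in of my own devising: it reproduces only the perfect anti-correlation, not the genuinely quantum statistics (Bell's theorem shows no pre-assigned values like these can):

```python
import random

def measure_pair():
    """Toy spin-singlet: the two outcomes are always opposite, however
    far apart the particles are when measured. A classical caricature -
    real entanglement cannot be mimicked by pre-assigned values."""
    alice = random.choice(["up", "down"])
    bob = "down" if alice == "up" else "up"
    return alice, bob

for _ in range(3):
    print(measure_pair())  # e.g. ('up', 'down') - always opposite
```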

Entanglement does not allow us to transmit information though, because of what the cosmologist Antony Valentini calls, in an analogy with thermodynamic “heat death”, the “quantum death” of the universe – in essence, he says that in the instants following the Big Bang physical particles dropped into a state in which – say – all electron spins were completely evenly distributed, meaning that we cannot find electrons with which to send information – just random noise.

But – he also suggests – inflation, the super-rapid expansion of the very early universe, may have left us with a very small proportion of particles that escaped "quantum death" – just as inflation meant that the universe is not completely smooth, because it pushed things apart at such a rate that random quantum fluctuations were left as a permanent imprint.

If we could find such particles we could use them to send messages across the universe at infinite speed.

Perhaps we are already surrounded by such "messages": those who theorise about intelligent life elsewhere in the universe are puzzled that we have not yet detected any signs of it, despite now knowing that planets are extremely common. That might suggest intelligent life is very rare, or very short-lived, or that – by looking at the electromagnetic spectrum – we are simply barking up the wrong tree.

Before we get too excited I have to add a few caveats:

  • While Valentini is a serious and credible scientist who has published papers which show, he says, the predictive power of his theory – such as matching the observed characteristics of the cosmic microwave background (an "echo" of the big bang) – his views are far from the scientific consensus. (NB: he's not the one speculating about alien communication – that's just me.)
  • To test the theories we would have to either be incredibly lucky or detect the decay products of a particle – the gravitino – for which we have little evidence beyond a pleasing theoretical symmetry between what we know about "standard" particle physics and theories of quantum gravity.
  • Even if we did detect and capture such particles, they alone would not allow us to escape the confines of general relativity: they are massive, so while they could in theory let two parties communicate instantly, the parties themselves would still be confined by GR's spacetime. Communicating with aliens would require both us and them in some way to use particles that are already out there – perhaps whizzing about since the big bang itself.

But we can dream!

Update: You may want to read Andy Lutomirski's comment which, I think it's fair to say, is a one-paragraph statement of the consensus physics. I am not qualified to say he's wrong and I'm not trying to – merely looking at an interesting theory. And I have tracked down Antony Valentini's 2001 paper on this too.


Schooling, heritability and IQ


In recent weeks, in the UK, there has been renewed interest in the question of heritability and educational performance, after Dominic Cummings, the outgoing advisor to Michael Gove, the education secretary, claimed that some sort of left-wing conspiracy in the educational establishment – "the blob", as Cummings calls it – was resisting the facts of science over the issue.

Tory house journal The Spectator joined in the debate, publishing a piece by psychology lecturer Kathryn Asbury which talks of a "genetically sensitive school". I don't know about you, but that sounds like nothing good to me.

So it is a pleasure to read the counter blast by Steven Rose, professor emeritus of biology at the Open University, in this week’s New Scientist.

To quote just two paragraphs of Rose’s article…

Psychometricians have by and large settled on a figure of 50 per cent for heritability based on what is now seen as a simplistic calculation that variance in a given environment for a trait – such as IQ – equals the sum of genetic and environmental contributions, plus a small component for the interaction of these two inputs. Robert Plomin, Gove’s behavioural genetics advisor and a prominent spokesman for this long psychometric tradition, puts it higher, at around 70 per cent, the figure cited by Cummings.

However, the calculation is almost meaningless. It depends on there being a uniform environment – fine if you are studying crop or milk yields, where you can control the environment and for which the measure was originally derived, but pretty useless when human environments vary so much. Thus some studies give a heritability estimate of 70 per cent for children in middle class families, but less than 10 per cent for those from poor families, where the environment is presumably less stable. And it is a changing environment, rather than changing genes, which must account for the increase in average IQ scores across the developed world by 15 points over the past century, to the puzzlement of the determinists.
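For what it's worth, the "simplistic calculation" Rose describes can be written in one line – this is the standard variance decomposition (the notation is mine, not Rose's):

V_{phenotype} = V_{genes} + V_{environment} + V_{gene \times environment}, \qquad h^2 = V_{genes} / V_{phenotype}

A heritability of 50 per cent simply means V_{genes} accounts for half the variance measured in one particular environment – which is exactly why the figure can swing from 70 per cent for middle-class children to under 10 per cent for poor ones.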

Read more novels and you’ll be a better person


Crime and Punishment (Photo credit: Wikipedia)

As a part-time PhD student with a full-time job, choosing what to read often feels like a moral dilemma as much as anything else. That book on MPI programming? The one on the Irish War of Independence and Civil War? Or one of the many novels I have bought and not got round to reading? Each carries its own little parcel of guilt as well as pleasure.

But a new scientific study – reported briefly in this week's New Scientist and published in Science Xpress (the abstract is here) – suggests that good novels really do broaden the mind and allow us better to understand our fellow human beings.

In the study, volunteers were randomly divided into one of three groups – readers of (quality) literary fiction, readers of popular fiction, and non-readers – and the readers of literary fiction later proved better able to empathise with other people based on their facial expressions (a sign of the so-called 'theory of mind' – in other words, a model of how others' minds work).

To an extent this feels like science confirming what is intuitively obvious – surely we have all read novels that have changed the way we feel about the world and other people. In the last few years I can think of The Go-Between and Crime and Punishment as two personal examples, but there are plenty more – for instance, Things Fall Apart is brilliant for the way it explores the psychological impact of colonisation.


Update: You may have noticed I have written ‘three’ groups, while the abstract mentions five – the New Scientist says three groups, which is where I picked this up from.

Dietary myths debunked by the New Scientist


Body Mass Index (BMI) (Photo credit: Wikipedia)

I always think it's good to get rid of myths about the human diet – so here are six, courtesy of last week's New Scientist.

1. Drink eight glasses of water per day

Turns out we get plenty of water from food and from drinks such as tea and coffee (the idea that these dehydrate is also debunked).

2. Sugar makes children hyperactive

No scientific evidence for this one at all (to be honest I have always associated this claim with America – it's not one you really see made in Britain in any case).

3. “Detox diets” get rid of poisons such as PCBs.

Apparently it would take six to ten years of zero exposure to get rid of just half of these sorts of chemicals from our muscles. As zero exposure is not possible, neither is that – and as for a six-week diet, forget it. You can, of course, stop smoking and cut down on drinking, but it is for regulators to cut our exposure to harmful chemicals: a diet is not going to do it.
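The arithmetic here is just exponential decay. A minimal sketch – the 7.5-year half-life is purely illustrative, picked from the middle of the six-to-ten-year range quoted above:

```python
# Fraction of a stored chemical remaining after time t, for a given
# elimination half-life: remaining = 0.5 ** (t / half_life)
half_life_years = 7.5      # illustrative: mid-range of the 6-10 years quoted
detox_weeks = 6

remaining = 0.5 ** ((detox_weeks / 52) / half_life_years)
print(f"After a {detox_weeks}-week 'detox', {remaining:.1%} is still there")
# After a 6-week 'detox', 98.9% is still there
```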

4. Antioxidant supplements help you live longer

Scientific studies show that taking antioxidant supplements may actually impair your body's defences by weakening the natural mechanisms by which our cells manufacture antioxidants to tackle free radicals.

5. Being a bit overweight means you will die sooner.

Obesity is one thing – being overweight another. Obesity, certainly a BMI over 35, is correlated with a higher risk of premature death. But a BMI of 25–29 is a different matter: being overweight may make you more susceptible to illnesses that affect the quality of life, but there is no evidence to suggest it increases mortality.
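For reference, BMI is simply weight in kilograms divided by the square of height in metres. A quick sketch using the bands mentioned above (the category labels are mine, paraphrasing the point, not anything official):

```python
def bmi(weight_kg, height_m):
    """Body Mass Index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

def verdict(b):
    # Bands as discussed above: 25-29 overweight, over 35 firmly obese.
    if b < 25:
        return "not overweight"
    if b < 30:
        return "overweight - no evidence of increased mortality"
    return "obese - over 35 is correlated with premature death"

b = bmi(85, 1.75)
print(f"BMI {b:.1f}: {verdict(b)}")  # BMI 27.8: overweight - ...
```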

6. The “paleo diet” is the way to go

We have no great idea what was eaten in the stone age, or even how healthy those who lived then really were. What is more, humans have since evolved the genetic ability to digest some of the foods the "paleo diet" says we should avoid – indicating a flawed argument (indeed the scientists on whose work the original claims for the paleo diet were based have revised their ideas to account for this – the diet's advocates are seriously trailing the evidence).

“Crowd sourcing” to play a key role in fundamental physics experiment


CERN accelerators (Photo credit: Cédric.)

Ordinary people are to be asked to make a contribution to an experiment which aims to determine key facts about the nature of the physical universe – reports the New Scientist.

Particle physicists at CERN – the joint European laboratory famous for the Large Hadron Collider – are conducting an experiment, AEgIS, to determine whether anti-matter interacts with the gravitational field in the same way as matter.

Most of our universe seems to be made of matter – a mystery in itself, because there is no simple explanation of why matter should outnumber anti-matter – and the two forms annihilate one another in a burst of energy when they meet, which makes it difficult to conduct experiments with anti-matter.

Anti-matter particles pair up with matter particles – so for the electron, the negatively charged particle in our everyday atoms, there is an anti-matter positron: a positively charged particle which looks like an electron except that it appears to 'go backwards' in quantum physics experiments (i.e., if we can show an electron carrying negative charge in one direction, we can show a positron going in the opposite direction – and backwards in time! – without violating physics' fundamental laws). Richard Feynman's brilliant QED – The Strange Theory of Light and Matter is strongly recommended if you want to know more about that.

Conventionally it is assumed gravity interacts with matter and anti-matter in the same way, but in reality our deep physical understanding of gravity is poor. For while Einstein's general relativity theory – which describes gravity's effects and has stood up to every test thrown at it – is widely seen as one of the great triumphs of 20th-century physics, it is fundamentally incompatible with how other "field" theories (like that for electricity) work, and as a force gravity is much, much weaker than the other fundamental forces – all of which suggests there is a deeper explanation waiting to be found for gravity's behaviour.

Showing that anti-matter interacts with the gravitational field differently from matter would open up huge new theoretical possibilities. Equally, showing that anti-matter and matter are gravitationally equivalent would help narrow down the holes in our theoretical understanding of gravity.

How can the public help? Well, on 16 August (just after the New Scientist article was printed) CERN asked for the public’s help in tracing the tracks made by particles in experiments: these tracks are then analysed to judge how gravity impacted on the particles (some of which will be anti-matter).

The public can help CERN analyse many more tracks and – crucially – help calibrate CERN’s computer analysis software.

It is expected that there will be further requests for help – so it might be worth keeping your eyes on the AEgIS site if you are interested in helping. (The tutorials are still up, but all the current tasks have been completed.)


Incompleteness in the natural world


Gödel Incompleteness Theorem (Photo credit: janoma.cl)

A post inspired by Gödel, Escher, Bach; Complexity: A Guided Tour; an article in this week's New Scientist about the clash between general relativity and quantum mechanics; and personal humiliation.

The everyday incompleteness: this is the personal humiliation bit. For the first time ever I went on a "Parkrun" today – the 5km Finsbury Park run – but I dropped out after 2km, at the top of a hill and about 250 metres from my front door: I simply thought, this is meant to be a leisure activity and I am not enjoying it one little bit. I can offer some excuses – it was really the first time I had ever run outdoors, so it was a bit silly to try a semi-competitive environment; and I had not warmed up properly, so the first 500 metres were about simply getting breathing and limbs into co-ordination – mais qui s'excuse, s'accuse (he who excuses himself, accuses himself).

But the sense of incompleteness I want to write about here is not that everyday incompleteness, but a more fundamental one – our inability to fully describe the universe, or rather, a necessary fuzziness in our description.

Let’s begin with three great mathematical or scientific discoveries:

The diagonalisation method and the "incompleteness" of the real numbers: In 1891 Georg Cantor published one of the most beautiful, important and accessible arguments in set theory – his diagonalisation argument, which proved that the infinity of the real numbers is qualitatively different from, and greater than, the infinity of the counting numbers.

The infinity of the counting numbers is just what it sounds like – start at one and keep going and you go on infinitely. This is the smallest infinity – called aleph null (\aleph_0 ).

Real numbers include the irrationals – those which cannot be expressed as ratios of counting numbers (Pythagoras shocked himself by discovering that \sqrt 2 was such a number). So the reals are all the numbers along a number line – every single infinitesimal point along that line.

Few would disagree that there are, say, an infinite number of points between 0 and 1 on such a line. But Cantor showed that the number was uncountably infinite – i.e., we cannot just start counting from the first point and keep going. Here’s a brief proof…

Imagine we start to list all the points between 0 and 1 (in binary) – and we number each point, so…

1 is 0.00000000…..
2 is 0.100000000…..
3 is 0.010000000……
4 is 0.0010000000….
n is 0.{(n – 2) zeros}1{000……}

You can see this can go on a countably infinite number of times….

and so on. Now we decide to 'flip' the 0 or 1 at the index number – the nth digit of the nth number – so we get:

1 is 0.1000000….
2 is 0.1100000….
3 is 0.0110000….
4 is 0.00110000….

And so on. But although we have already used up all the counting numbers, we are now generating new numbers which we have not been able to count – this means we have more than \aleph_0 numbers in the reals, surely? But, you argue, let's just interleave these new numbers into our list, like so….

1 is 0.0000000….
2 is 0.1000000…..
3 is 0.0100000….
4 is 0.1100000….
5 is 0.0010000….
6 is 0.0110000….

And so on. This is just another countably infinite set, you argue. But, Cantor responds, do the 'diagonalisation' trick again and you get…

1 is 0.100000…..
2 is 0.110000….
3 is 0.0110000….
4 is 0.1101000…
5 is 0.00101000…
6 is 0.0110010….

And again we have new numbers, busting the countability of the set. And the point is this: no matter how many times you add the new numbers produced by diagonalisation into your counting list, diagonalisation will produce numbers you have not yet accounted for. From set theory you can show that while the counting numbers are of order (analogous to size) \aleph_0 , the reals are of order 2^{\aleph_0} – a far, far bigger number, literally an uncountably bigger one.
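For the curious, here is the trick in a few lines of Python. Note this is the textbook 'single number' version of the argument – the flipped digits along the diagonal are collected into one new number, which then cannot match any entry in the list:

```python
def diagonalise(listing):
    """Given (a finite prefix of) an enumeration of binary expansions,
    build a number differing from the nth entry at its nth digit -
    so it cannot appear anywhere in the enumeration."""
    flip = {"0": "1", "1": "0"}
    return "0." + "".join(flip[row[n]] for n, row in enumerate(listing))

# The first few entries of the listing above, with the leading '0.' stripped:
listing = ["000000", "100000", "010000", "001000", "000100", "000010"]
print(diagonalise(listing))  # 0.111111 - differs from entry n at digit n
```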

Gödel's Incompleteness Theorems: These are not amenable to a blog-post-length demonstration, but amount to this – we can state mathematical truths that we know to be true, yet we cannot design a complete, self-contained proof system that proves them. The analogy with diagonalisation is that we know how to write out any real number between 0 and 1, but we cannot design a system (such as a computer program) that will write them all out – we have to keep 'breaking' the system by diagonalising it to find the missing numbers our rules will not generate for us. Gödel's demonstration of this in 1931 was profoundly shocking to mathematicians, as it appeared to many of them to completely undermine the very purpose of maths.

Turing's Halting Problem: Very closely related to both Gödel's incompleteness theorems and Cantor's diagonalisation proof is Alan Turing's formulation of the 'halting problem'. Turing proposed a basic model of a computer – what we now refer to as a Turing machine – as an infinite paper tape plus a head that reads from and writes to the tape. The tape's contents can be interpreted as instructions to move, to write to the tape, or to change the machine's internal state (and that state can determine how the instructions are interpreted).

Now such a machine can easily be made to go into an infinite loop, e.g.:

  • The machine begins in the 'start' state and reads the tape. If it reads a 0 or 1 it moves to the right and changes its state to 'even'.
  • If the machine is in the state 'even' it reads the tape. If it reads a 0 or 1 it moves to the left and changes its state to 'start'.

You can see that if the tape is marked with two 0s, two 1s, or any combination of 0 and 1 in the first two places, the machine will loop for ever – as the sketch below shows.
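Here's a minimal sketch of that two-state machine in Python, with a step cap standing in for 'forever' (the state names and moves are just the ones in the bullets above; I've assumed the machine halts if it reads anything other than a 0 or 1, since no rule covers that case):

```python
def run(tape, max_steps=20):
    state, pos = "start", 0
    for step in range(max_steps):
        if tape[pos] not in (0, 1):        # no rule for this symbol: halt
            return f"halted after {step} step(s)"
        if state == "start":
            pos, state = pos + 1, "even"   # read 0 or 1: move right
        else:
            pos, state = pos - 1, "start"  # read 0 or 1: move left
    return f"still looping after {max_steps} steps"

print(run([0, 0]))        # ping-pongs between the two cells forever
print(run([0, "blank"]))  # halted after 1 step(s)
```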

The halting problem is this: can we design a Turing machine that will tell us whether a given machine with given instructions will fall into an infinite loop? Turing proved we cannot – and did so without having to discuss any particular methodology. Here's my attempt to recreate his proof:

We can model any other Turing machine through a set of instructions on the tape, so if we have machine T we can have it model machine M with instructions I: i.e., T(M, I)

Let us say T can tell whether M will halt or loop forever with instructions I – we don't need to understand how it does this, just suppose that it does. So if (M, I) will halt, T writes 'yes'; otherwise it writes 'no'.

Now let us design another machine T^\prime that takes T(M,I) as its input, but where T^\prime loops forever if T writes 'yes' and halts if T writes 'no'.

Then we have:

M(I) halts or loops – T(M, I) halts – T^\prime loops forever.

But what if we feed T^\prime the input of T^\prime(T(M, I))?

M(I) halts or loops – T(M, I) halts – T^\prime(T(M,I)) loops forever – T^\prime(T^\prime(T(M,I))) – ??

Because if the second T^\prime(T^\prime(T(M,I))) halted then that would imply that the first had halted – but it is meant to loop forever, and so on…

As with Gödel, we have reached a contradiction, and so we must conclude that we cannot build a Turing machine (computer) that can solve the halting problem.
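The same contradiction can be sketched in Python. A sketch only – the entire point of the proof is that the `halts` function below cannot actually be implemented:

```python
def halts(program, argument):
    """Turing's hypothetical T: decide whether program(argument) halts.
    The proof shows no such function can exist - this is a stand-in."""
    raise NotImplementedError("uncomputable")

def t_prime(program):
    """Turing's T': do the opposite of whatever halts() predicts."""
    if halts(program, program):
        while True:        # halts() said 'halts', so loop forever
            pass
    else:
        return "halted"    # halts() said 'loops', so halt

# The contradiction: consider t_prime(t_prime).
# If halts(t_prime, t_prime) is True, t_prime(t_prime) loops forever - False.
# If it is False, t_prime(t_prime) halts - so it should have been True.
# Either way we contradict ourselves, so halts() cannot exist.
```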

Quantum mechanics: The classic Copenhagen formulation of quantum mechanics states that the uncertainty of the theory collapses when we observe the world, but the "many worlds" interpretation suggests that the various outcomes all actually take place and we are just experiencing one of them at any given time. The experimental backup for the many worlds theory comes from quantum 'double-slit' experiments, which suggest particles leave traces of their multiple states in every 'world'.

What intrigues me is this: what if our limiting theorems – the halting problem, Gödel's incompleteness theorems, the uncountable infinite – were actually the equivalents of the Copenhagen formulation, and in fact maths was also a "many worlds" domain where the incompleteness of the theories is the deeper reality – in other words, where the Turing machine can both loop forever and halt? This is probably, almost certainly, a very naïve analogy between the different theories, but, lying in the bath and contemplating my humiliation via incompleteness this morning, it struck me as worth exploring at least.