You know how it is … you go for a run, and then, lying in the bath, you read a New Scientist article about Dark Energy and think of two crazy ideas which you hope some respectable scientist will at least have stuck on arXiv, so you can at least say “I thought of that in the bath and it might even be right…”
Except that you can find no such papers, so you are reduced to looking like the crackpot you are by posing them here:
Inertial mass is caused by the gravitational field of a certain amount of matter that has been trapped in collapsed dimensions. Those dimensions are always the same distance away from any given point, so inertial mass is the same anywhere in the universe.
Dark energy is caused by the ‘evaporation’ via Hawking radiation or similar of our universe (sadly I am not the first to have thought of this particular piece of crack-pottery, so I won’t be collecting a Nobel prize for it). Further searching reveals there is even an arXiv paper on such an idea after all.
Last week’s New Scientist reports that Russia Today – the Kremlin’s propaganda channel subsidised to broadcast lies in support of the Russian Federation’s hostility to any country in Russia’s “near abroad” that dares to travel down the path of democracy and the rule of law – went one further when it started churning out stories about how the Zika outbreak was a result of a failed science experiment.
The basis of their report was “the British dystopian TV series Utopia”. Yes, they broadcast fiction as news and for once it was not a question of interpretation.
Here’s the product description from Amazon:
The Utopia Experiments is a legendary graphic novel shrouded in mystery. But when a small group of previously unconnected people find themselves in possession of an original manuscript, their lives suddenly and brutally implode.
Targeted swiftly and relentlessly by a murderous organisation known as The Network, the terrified gang are left with only one option if they want to survive: they have to run. But just as they think their ordeal is over, their fragile normality comes crashing down once again.
The Network, far from being finished, are setting their destructive plans into motion. The gang now face a race against time, to prevent global annihilation.
Scientists think the most likely reason is that Iranians’ skin was much less exposed to the Sun, and consequently vitamin D production fell (as the New Scientist notes, “vitamin D” produced in this way is technically not a vitamin at all, but that’s a different story). The evidence that vitamin D production is closely linked to a variety of autoimmune diseases, including MS, is also growing.
(Before any of my “political” friends think I have obviously suffered a serious blow to the head, I am talking about his theories on grammar and not his idiotic politics…)
In the late 1950s Noam Chomsky proposed that we have a natural capacity to process grammar and thus to use language – in essence that our brain is hard-wired to use language.
It was, and is, a controversial theory (though Chomsky would not agree), but this week new evidence has been published to support it – and, as outlined in the New Scientist, you can even conduct a thought experiment on yourself to test it.
Writing in the Proceedings of the National Academy of Sciences (the US National Academy that is), Jennifer Culbertson and David Adger consider whether language learners pick up language patterns by observing statistical patterns from existing speakers/users or – as the Chomskian theory would suggest – apply some form of “hard wired” rule to process grammar.
To do this they presented subjects (English speakers) with a “new” limited language based on common words in English. The subjects were then asked to judge whether a new phrase in this “language” – made by combining elements of the limited language they had already seen – would be correct in one of two forms. If they picked one form then they would likely be using some form of statistical inference – picking a form that looked closest to the forms they had already seen – if they picked another they were likely using an internal grammar machine in their brains.
And this is where you can test yourself … (shamelessly nicked from the New Scientist, as this example does not appear to be in the article itself):
Here are two phrases in the new language:
So which of the following phrases is correct in this language:
shoes two blue
shoes blue two
If, as I did, you picked “shoes blue two” and not “shoes two blue” then you are favouring a semantic hierarchy and not a frequency-based approach – in English two usually precedes blue, but blue is a stronger modifier of the noun than two.
In fact people chose the semantic hierarchy about 75% of the time – strongly suggesting that we do have an internal grammar engine running inside our heads.
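The two competing strategies in the forced choice can be captured in a toy model. (The word lists and orderings below are my own illustration of the set-up, not necessarily the study's actual stimuli.)

```python
# Toy model of the two strategies a subject might use when judging
# "shoes two blue" vs "shoes blue two" in a post-nominal language.

# Surface/statistical cue: the familiar English order of the modifiers,
# as in "two blue shoes".
english_modifier_order = ["two", "blue"]

# Semantic hierarchy cue: adjectives modify the noun more tightly than
# numerals, so they should sit closer to it (1 = closest).
closeness_to_noun = {"blue": 1, "two": 2}

def surface_strategy(noun, modifiers):
    """Keep the modifiers in their familiar English order, after the noun."""
    return [noun] + [m for m in english_modifier_order if m in modifiers]

def hierarchy_strategy(noun, modifiers):
    """Place semantically tighter modifiers (adjectives) nearer the noun."""
    return [noun] + sorted(modifiers, key=lambda m: closeness_to_noun[m])

print(" ".join(surface_strategy("shoes", ["two", "blue"])))    # shoes two blue
print(" ".join(hierarchy_strategy("shoes", ["two", "blue"])))  # shoes blue two
```

The two strategies diverge on exactly this test item, which is what lets the experiment tell them apart.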
(Chomsky himself appears to be dismissive of the study, despite it appearing to confirm his work – “like adding a toothpick to mountain”. Tells you quite a lot about him, I think.)
What are the practical implications? I think it points to a limit to the effectiveness of things like big data based machine translation, if all that relies on is statistical inference. Inside a decade big data has made machine translation much more practical than the previous 50 years of AI research, but the quest for a way to compute grammar is still going to matter.
Well, the answer is pretty plain: Einstein’s theory of general relativity – which even in the last month has added to its already impressive list of predictive successes – tells us that to travel at the speed of light a massive body would require an infinite amount of propulsive energy. In other words, things are too far away and travel too slowly for us ever to hope to meet aliens.
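The infinite-energy claim follows directly from the relativistic kinetic-energy formula E = (γ − 1)mc²; a quick numerical sketch (the 1,000 kg probe is my own illustrative choice) shows the energy diverging as v approaches c:

```python
import math

def kinetic_energy_joules(mass_kg, v_frac_of_c):
    """Relativistic kinetic energy E = (gamma - 1) * m * c^2."""
    c = 299_792_458.0  # speed of light, m/s
    gamma = 1.0 / math.sqrt(1.0 - v_frac_of_c ** 2)
    return (gamma - 1.0) * mass_kg * c ** 2

# Energy needed to push a 1,000 kg probe ever closer to c -- it blows up
# without bound as v -> c, which is the "infinite energy" in the text.
for v in (0.9, 0.99, 0.999, 0.999999):
    print(f"{v}c: {kinetic_energy_joules(1000, v):.3e} J")
```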
But what if – and it’s a very big if – we could communicate with them, instantaneously? GR tells us massive bodies cannot travel at the speed of light, or rather cannot travel along a null worldline – which is what really matters if you want to be alive when you arrive at your destination – but information has no mass as such.
Intriguingly, an article in the current edition of the New Scientist looks at ways in which quantum entanglement could be used to pass information – instantaneously – across any distance at all. Quantum entanglement is one of the stranger things we can see and measure today – Einstein dismissed it as “spooky action at a distance” – and essentially means that we can take two paired particles and, by measuring the state of one, instantaneously see the other member of the pair fall into a corresponding state (e.g., if the paired particles are electrons and we measure one’s quantum spin, the other is instantly seen to have the opposite spin – no matter how far away it is at the time).
Entanglement does not allow us to transmit information though, because of what the cosmologist Antony Valentini calls, in an analogy with thermodynamic “heat death”, the “quantum death” of the universe – in essence, he says that in the instants following the Big Bang physical particles dropped into a state in which – say – all electron spins were completely evenly distributed, meaning that we cannot find electrons with which to send information – just random noise.
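Why the correlation alone carries no message can be seen in a toy simulation (a purely classical stand-in of my own devising – it reproduces the anticorrelated outcomes but not the genuinely quantum statistics):

```python
import random

random.seed(0)

def measure_entangled_pair():
    """Model the observable outcomes for a spin-singlet pair: each side
    sees a random result, but the two sides are perfectly anticorrelated.
    (A classical stand-in for illustration only.)"""
    alice = random.choice(["up", "down"])
    bob = "down" if alice == "up" else "up"
    return alice, bob

results = [measure_entangled_pair() for _ in range(10)]

# The correlation across the pair is perfect...
assert all(a != b for a, b in results)

# ...but Alice's own record is just a random sequence: she cannot choose
# her outcomes, so she cannot encode a message in them, and nothing
# usable travels faster than light.
print([a for a, _ in results])
```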
But, he also suggests, inflation – the super-rapid expansion of the very early universe – may also have left us with a very small proportion of particles that escaped “quantum death”, just as inflation meant the universe is not completely smooth: it pushed things apart at such a rate that random quantum fluctuations were left as a permanent imprint.
If we could find such particles we could use them to send messages across the universe at infinite speed.
Perhaps we are already surrounded by such “messages”: those who theorise about intelligent life elsewhere in the universe are puzzled that we have not yet detected any signs of it, despite now knowing that planets are extremely common. That might suggest either that intelligent life is very rare or very short-lived, or that – by looking at the electromagnetic spectrum – we are simply barking up the wrong tree.
Before we get too excited I have to add a few caveats:
While Valentini is a serious and credible scientist and has published papers which show, he says, the predictive power of his theory (NB he’s not the one speculating about alien communication – that’s just me) – such as the observed characteristics of the cosmic microwave background (an “echo” of the big bang) – his views are far from the scientific consensus.
To test the theories we would have to either be incredibly lucky or detect the decay products of a particle – the gravitino – we have little evidence for beyond a pleasing theoretical symmetry between what we know about “standard” particle physics and theories of quantum gravity.
Even if we did detect and capture such particles, they alone would not allow us to escape the confines of general relativity – they are massive, so while they could in theory let two parties communicate instantly, the parties themselves would still be confined by GR’s spacetime. Communicating with aliens would require us and them in some way to use such particles that were already out there, and had perhaps been whizzing about since the big bang itself.
But we can dream!
Update: You may want to read Andy Lutomirski’s comment which, I think it’s fair to say, is a one-paragraph statement of the consensus physics. I am not qualified to say he’s wrong and I’m not trying to – I am merely looking at an interesting theory. And I have tracked down Antony Valentini’s 2001 paper on this too.
In recent weeks, in the UK, there has been renewed interest in the question of heritability and educational performance, after Dominic Cummings, the outgoing adviser to Michael Gove, the education secretary, claimed that some sort of left-wing conspiracy in the educational establishment – “the blob”, as Cummings calls it – was resisting the facts of science over the issue.
Tory house journal The Spectator joined in the debate, publishing a piece by psychology lecturer Kathryn Asbury which talks of a “genetically sensitive school”. I don’t know about you but that sounds like nothing good to me.
Psychometricians have by and large settled on a figure of 50 per cent for heritability based on what is now seen as a simplistic calculation that variance in a given environment for a trait – such as IQ – equals the sum of genetic and environmental contributions, plus a small component for the interaction of these two inputs. Robert Plomin, Gove’s behavioural genetics advisor and a prominent spokesman for this long psychometric tradition, puts it higher, at around 70 per cent, the figure cited by Cummings.
However, the calculation is almost meaningless. It depends on there being a uniform environment – fine if you are studying crop or milk yields, where you can control the environment and for which the measure was originally derived, but pretty useless when human environments vary so much. Thus some studies give a heritability estimate of 70 per cent for children in middle class families, but less than 10 per cent for those from poor families, where the environment is presumably less stable. And it is a changing environment, rather than changing genes, which must account for the increase in average IQ scores across the developed world by 15 points over the past century, to the puzzlement of the determinists.
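The point about the environment can be seen in a toy simulation, a sketch under the same simplistic additive model the paragraph criticises (the numbers are my own illustrative choices, not from any study): with identical genetics, the heritability estimate swings wildly just because the environmental variance changes.

```python
import random

random.seed(42)

def heritability(env_sd, n=100_000, gene_sd=1.0):
    """Estimate h^2 = Var(G) / Var(P) under the simplistic additive
    model P = G + E, for a population with a given environmental spread."""
    genes = [random.gauss(0, gene_sd) for _ in range(n)]
    env = [random.gauss(0, env_sd) for _ in range(n)]
    pheno = [g + e for g, e in zip(genes, env)]

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    return var(genes) / var(pheno)

# Identical genetics in both runs; only the environmental spread changes.
print(heritability(env_sd=0.5))  # stable environment: roughly 0.8
print(heritability(env_sd=3.0))  # variable environment: roughly 0.1
```

The "70 per cent for middle-class children, under 10 per cent for poor children" pattern in the studies above is exactly this effect: the estimate measures the environment's variability as much as anything genetic.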
As a part-time PhD student with a full-time job, choosing what to read often feels like a moral dilemma as much as anything else. That book on MPI programming? The one on the Irish War of Independence and Civil War? Or one of the many novels I have bought and not got round to reading? Each carries its own little parcel of guilt as well as pleasure.
But a new scientific study – reported briefly in this week’s New Scientist and published in Science Xpress (the abstract is here) – suggests that good novels really do broaden the mind and allow us to better understand our fellow human beings.
A study in which volunteers were randomly divided into one of three groups – readers of (quality) literary fiction, readers of popular fiction, and non-readers – showed that the readers of literary fiction were afterwards better able to judge other people’s feelings from their facial expressions (a sign of the so-called ‘theory of mind’ – in other words, your model of how other minds work).
To an extent this feels like scientific confirmation of the intuitively obvious – surely we have all read novels that have changed the way we feel about the world and other people. In the last few years I can think of The Go-Between and Crime and Punishment as two personal examples, but there are plenty more – for instance, Things Fall Apart is brilliant for the way it explores the psychological impact of colonisation.
Update: You may have noticed I have written ‘three’ groups, while the abstract mentions five – the New Scientist says three groups, which is where I picked this up from.