Actually, I didn’t realise I had bought a fax machine until the laser printer I knew I had bought turned up and I read on the packaging that it was also a fax machine.
(Fax is perhaps the most disruptive technology I’ve seen rise and fall in my time as an adult – I last used one in 2005 as far as I can recall but only 15 years earlier they were seen as cutting edge – but no matter…)
Are printers like fax machines in another way too? Destined to all but disappear as the tyranny of the screen grows ever stronger? The reasons that motivated me to buy the laser printer (and not just another cheap inkjet that produces shoddy output and falls apart after a few months) make me think not.
Paper is much more flexible than a screen – try scribbling a note on your screen and see how that goes.
Paper is the ‘rest energy’ form – it’s true that printing a page takes a lot more energy than clicking on an HTML link, but paper is more or less the zero-energy, zero-technology form of reading something – it’s generally easier to do than reading something on a screen (and if you drop a page you don’t generally risk losing your ability to read either, at least not until you buy a new set of eyes).
Paper’s flexibility makes it easier to see links – this is a killer application for paper in my field of software engineering (though maybe not all engineers would agree) – you can see much more information at once.
Many screens aren’t really very good for reading – screens on small devices in particular just aren’t very good for reading text. When we print something we generally print it at a size that’s optimised for reading.
Screens tire your eyes in the way that paper just doesn’t.
Getting a laser printer as opposed to an ink jet feels like a bit of an indulgence but as every other cheaper printer we’ve had over the years has generally fallen apart quickly I am hoping it is going to deliver long-term satisfaction.
We were given an Echo Dot for Christmas. And it’s just brilliant.
I have to admit I was pretty cynical – my principal experience with Apple’s voice-activated “Siri” is that it doesn’t understand my accent (even though you can select an Irish voice for the output, forget about it for the input).
But this is really great. It sits in the kitchen and has essentially replaced the digital radio and the fact that you can ask it (simple) things is a bonus.
One of the best things about it is that it allows me to spend 10 – 15 minutes listening to “Morning Ireland” on RTÉ Radio 1 every morning as I eat my toast – easy access to that perspective on world events (and on what the only country with a land border with the UK thinks about what is happening here) is a great thing to have.
Recently I had to write some code to generate a pseudorandom number in a system with very limited sources of entropy. So, of course I turned to Donald Knuth and, in particular, Volume 2 – Seminumerical Algorithms – of the magisterial The Art of Computer Programming.
Reading through the questions/exercises I then came across this one:
Prove that the middle-square method using 2n-digit numbers to the base b has the following disadvantage: if the sequence includes any number whose most significant n digits are zero, the succeeding numbers will get smaller and smaller until zero occurs repeatedly.
(Knuth rates this question as a ’14’ on his scale of difficulty, which places a ’10’ at about a minute to solve and a ’20’ at about twenty minutes – so it should probably take 7 or 8 minutes, but I’ve spent much, much longer on it than that!)
A quick explanation: the middle-square method is a naive random number generation method (though it was actually first suggested by none other than John von Neumann) where we take an n-digit number, square it, and take the middle n digits of the result as our next seed or random number.
At first I thought I’d found an example where Knuth’s proposition appears to be false.
Let $b = 10$ and the seed number be $N_0 = 60$: then every subsequent number in the sequence is also 60, because $60^2 = 3600$ and the middle two digits of 3600 are 60 (obviously that’s as useless as repeated zeros for a random number generator.) But the problem with that is that, although it demonstrates a weakness in the middle-square method, it doesn’t fit Knuth’s definition of the problem. What is $n$ here? If $n = 2$ ($2n = 4$ being the minimum width at which the ‘middle’ digits are distinct from the whole number), then $N_0 = 0060$ and $N_1 = 0036$, and so the sequence does shrink towards zero just as Knuth says (thanks to Hagen von Eitzen for clarifying this for me.)
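A minimal sketch of the method in Python makes both behaviours easy to see (the helper name is mine, not Knuth’s – this follows his framing of 2n-digit numbers base b):

```python
# Middle-square in Knuth's framing: x is a 2n-digit number base b;
# square it (giving up to 4n digits) and keep the middle 2n digits.
def middle_square(x, n, b=10):
    return (x * x // b**n) % b**(2 * n)

# n = 1: two-digit numbers. 60 is a fixed point: 60^2 = 3600 -> "60".
assert middle_square(60, 1) == 60

# n = 2: the same value read as the four-digit number 0060, whose most
# significant two digits are zero -- now the sequence shrinks to zero.
x, seen = 60, []
for _ in range(5):
    seen.append(x)
    x = middle_square(x, 2)
print(seen)  # [60, 36, 12, 1, 0]
```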
So let’s look at the general case. (I also owe this explanation to Hagen von Eitzen, as compared to my very long-winded first attempt – though any errors that follow are mine, not his.)
So we have a number $X$ which we think of as a $2n$-digit number – though the first $n$ digits are 0. Then $X < b^n$ (as the largest an $n$-digit number base $b$ can be is $b^n - 1$.)
Thus, as the largest $X^2$ can be is $(b^n - 1)^2 < b^{2n}$:
if we square a $2n$-digit number then the biggest number of digits we can get in the output is $4n$, but in our case we only have $n$ significant digits to worry about, so the biggest size $X^2$ can be is $2n$ digits – again, the leading $2n$ digits of the $4n$-digit square will be 0.
So, to apply the middle-square method we need only lose the lower $n$ digits – i.e., take $X' = \lfloor X^2 / b^n \rfloor$.
From the above:
$X' = \lfloor X^2 / b^n \rfloor \le X^2 / b^n = X \cdot (X / b^n) < X$ (since $X < b^n$).
If $X \ge 1$ then $X'$ will always be less than $X$, so the sequence decreases until zero occurs repeatedly.
Far too much debate in the UK about responding to SARS-CoV-2 has been about short-termist responses. So, for instance, just as every lockdown has begun to have real effect it has been lifted in the name of the economy.
The result is that we are now in the longest lockdown of all, we have the deepest economic setback of any major economy (though obviously there are other things going on to cause that too) and we have one of the highest death rates in the world. Not good.
It ought to be becoming clearer to more people that, actually, even after a mass vaccination programme (the one area where the UK has, thankfully, done well), the virus will not be gone from our lives. I don’t think there will, in my lifetime, be a return to what was fully “normal” as recently as December 2019.
Over time we can expect, as a species, to see the threat from the virus diminish as, like the common cold, more children who catch it while young grow to adulthood with a fully primed immune system. For the rest of us there will be vaccines – and there will also be mutations that may threaten our vaccine-acquired immunity.
We cannot stop dangerous mutations arising – so long as the virus is in circulation it will mutate and, if a mutation improves the virus’s ability to evade vaccines, those mutations will spread.
We can, though, slow the speed of the spread of any mutation through – you guessed it – social distancing, mask wearing and test-and-trace protocols. So these may eventually be relaxed as vaccination reaches more and more people, but it is hard to see them ever going away completely. I don’t expect you are going to be let into a hospital without wearing a mask for very many years to come, for instance.
Once we come to terms with the fact that we are here for the long haul we need to start reordering our society in that light. One of the things that surely must follow is some form of immunity/vaccination passport.
Until recently I thought this was a terrible idea – but since I recognised the truly long-term nature of the threat I have come to see such passports as inevitable, and necessary, and so the key issue is how they are introduced and used.
My initial thoughts are that firstly they should be a citizen’s right – everyone should be able to get one and access shouldn’t depend on wealth.
Secondly they should be regulated to an international standard that, as far as is practical, protects privacy and avoids unnecessary state monitoring. Or to be more direct: if the Russian (or any other) state wants to insist its citizens carry the equivalent of an electronic tag with them everywhere there isn’t much we can do to stop it, but we could say such devices are not recognised for use here.
Thirdly – and related to the first point – with the obligation to have one should come the right to access services. Public bodies or other service providers might have legitimate reasons to restrict access to those who have been vaccinated or are otherwise certificated, but they should not be able to refuse access to anyone who meets the criteria either. In other words if your body or company requires access to the information the passport contains then it must also submit to the responsibilities that come with it.
Surprised and pleased to find that, a quarter of a century after I released it to a distinctly unmoved world – and a decade after I first mentioned it on this blog – the first piece of software I published, a not particularly brilliant program that allowed you to predict the result in a given UK constituency from a national opinion poll, is still available on an FTP server – ftp://ftp.demon.nl/pub/Museum/Demon/ibmpc/win3/apps/election/
Can’t actually run this on a 64-bit Windows system, and the source (in Borland C++) is long gone…
In general I hate the term “neo-liberal” – as in the last decade it has become a fashionable way for some people on the left to say “things I don’t like” whilst giving their (often irrational) personal preferences a patina of intellectual credibility.
Glen O’Hara looks at the accusation that the last Labour government was “neoliberal” in some detail and I’m not going to reheat his arguments here, but as he says:
This rise in public spending was not only imagined in liberal terms—as a new contract between consumers and providers. For the emphasis on neoliberalism also misses the fact that the Blair agenda sought specifically to rebuild the public sphere around a new vision of a larger, more activist but more responsive and effective state. First through targets—and then, when they seemed not to deliver strong improvement, through decentralised commissioning and choice—the government sought to improve public-sector performance in a way that would be visible on the ground, and so maintain its relevance and political support.
But the term is not in itself meaningless – personally I find it much more useful as a tool of analysis when applied to how governments across much of the western world have approached the private (and not the public) sector over the last forty years. For sure there has been privatisation too – expanding the role of the private sector – but certainly in the UK the long-term picture has not been a story of a shrinking state, but of a state spending money in different ways. See the chart, which plots public spending as a proportion of GDP since the end of the 1950s – and notably this does not include the massive covid-19-driven spike in public spending in 2020 and 2021.
What has diminished is both state ownership and (much more importantly, I believe) state partnership with key economic sectors that provide private goods and services – until, perhaps, today (as I discuss below).
As my example (from the US, but the argument applies more widely), let me look at AT&T. Today what was once the American Telephone and Telegraph Company is still the world’s largest telecommunications concern, but it’s a very different beast from the company of that name of forty years ago. Now it competes in a cut-throat global market; then it was a highly regulated, privately owned classical monopoly utility.
No doubt its break-up from 1984 onwards meant Americans got smaller phone bills (if they use land lines at all) but what has the overall balance for society been?
Reading Brian Kernighan’s UNIX: A History and a Memoir and the earlier The Idea Factory you get the impression that subjecting corporates to cut-throat competition has not all been about wins for the consumer. The “Bell System” monopoly paid for a massive research operation that delivered the transistors that made the digital age possible and the Unix that now dominates, and an awful lot else besides.
AT&T’s shareholders didn’t reap the massive windfalls seen by people who invested in Amazon twenty years ago, but their stock paid a consistent dividend and the economy in which they operated also generally grew steadily. Investors got a stable return and AT&T also had the ability to risk capital on long-term research and development.
The neoliberal revolution in the private sector has indeed given us Amazon (and Apple) and with it massive disruption that often is beneficial to humanity as a whole (think of the evaporation of poverty in much of east and south east Asia). But has it delivered fundamental advances in human knowledge of the scale and power that the older regulated capitalism did? I feel less than fully convinced.
The counter-case is, of course, in the field of bio-medicine. The enormous productive power that globalised capitalism possesses is, even as I write this, churning out the product of the most spectacular scientific effort in all human history – vaccines against covid-19. No previous human generation has been able to do what we now believe – for very good reasons – we can: meet a newly emerged global epidemic in open combat and win.
But the story of the vaccine is also a story of partnership between state and capital. Governments have shared risk with the pharmaceutical companies, but competition has also played its part – which to me suggests a future beyond a neo-liberal approach to the private sector in key industrial areas. The state should not be trying to pick winners but sharing risks and building an economic ecosystem where the balance of risks and rewards means the aim is not the next 10,000% return on investment but an environment where good research can thrive.
I know this is in danger of sounding very motherhood-and-apple-pie, and we should be wary of just propping up existing market giants because they happen to be market giants. So let me make a suggestion: imagine if the UK government decided that, instead of forever spending large amounts on office suites from large software houses, installed on millions upon millions of computers in schools, hospitals and police stations, it indicated it was willing to pay a premium price for a service contract of, say, 5–7 years with whoever could turn one of the existing free software office suites into a world-class competitor – and, more than that, that it was willing to provide capital, as an active investor, to the two or three companies that came forward with the best initial proposals?
The private sector would be shouldering much of the risk but would be aiming for a good reward (while free software’s built-in escrow mechanism would also mean that the private contractor couldn’t just take the money and ‘steal’ the outcome). Ultimately citizens (globally) could expect to see real benefits and, of course, we would hope any current monopolist would see competition coming and be incentivised to innovate further.
London, where I am writing this, is now perhaps the global centre of the covid-19 pandemic, thanks to a mutation of the virus that has allowed it to spread more easily. This mutation may not have come into existence in the South East of England but it has certainly taken hold here, and about 2% of London’s population currently have symptomatic covid.
In response all primary and secondary schools, which were due to open tomorrow, will be effectively closed and teaching will go online.
Suddenly the availability of computing resources has become very important – because unlike the spring lockdown, where online teaching was (generally) pretty limited, this time around the clear intention is to deliver a full curriculum – and that means one terminal per pupil. But how many homes have multiple computers capable of handling this? If you have two children between the ages of 5 and 18, and two adults working from home, it is going to be a struggle for many.
Thus this could have been the moment that low cost diskless client devices came into their own – but (unless we classify mobile phones as such) they essentially don’t exist. The conditions for their use have never been better – wireless connections are the default means of connecting to the internet and connections are fast (those of us who used to use X/Windows over 28kbit dial-up think so anyway).
Why did it not happen? Perhaps because of the fall in storage costs: if screen and processor costs haven’t fallen as fast as RAM and disk, then thin clients get proportionally more expensive. Or perhaps it’s that even the fat clients are thin these days: a £115 Chromebook probably can’t realistically act as a server in the way a laptop costing six times as much might.
But it’s also down to software choices and legacies. We live in the Unix age now – Android mobile phones and Mac OS X machines, as well as Linux devices, are all running some version of an operating system that was born out of an explicit desire to create an effective means of sharing time and resources across multiple users. But we are also still living in the Microsoft Windows era – and although Windows has been able to support multiple simultaneous users for many years, few people recognise that, and even fewer know how to activate it (especially as it has been marketed as an add-on and not the built-in feature we see with Unix). We (as in the public at large) just don’t think in terms of getting a single, more powerful device and running client machines on top of it – indeed most users run away at the very idea of invoking a command-line terminal, so encouraging experimentation is difficult too.
Could this ever be fixed? Well, of course, Chromebooks are sort of thin clients, but they tie us to an external provider and don’t liberate us to use our own resources (well, not easily – there is a Linux under the covers, though). Given the low cost of the cheapest Chromebooks it’s hard to see how a challenger could make a true thin-client model work – though maybe a few educational establishments could lead the way, giving pupils and students thin clients that connect to both local and central resources from the moment they are switched on?
So if I cheat – just a little bit by rounding predictions to the nearest 1% – and go with the “reversion to the mean” approach to state swings (see the discussion in the previous post), my model doesn’t actually look too bad:
So I made a bit of a mess of this the first time round, and when it ended up on Slashdot (I had assumed that wasn’t happening, as the delay between posting and it appearing was a few days) I was left rather embarrassed.
With thanks to all those who commented and pointed out the errors – here’s my attempt to do it properly. I haven’t agreed with all the comments – e.g., it was pointed out that RAID wasn’t an option in the mid 80s so shouldn’t be part of a comparison now, but I think this is about real-world choices, so RAID would probably feature, etc…
The original paper then compared the cost of computing power (estimated at about $50,000 per MIPS) with the cost of memory – e.g., if you compress data to save on memory space you have to use additional computing power to access it. There the trade-off was calculated to be about 5 bytes of memory per instruction per second.
What do these comparisons look like now? The original paper explicitly ruled out applying these sort of comparisons to PCs – citing limited flexibility in system design options and different economics. But we won’t be so cautious.
The five minute rule updated
Let us consider a case with 2TB SSD disks. These cost about £500 (and probably about $500, we are approximating) and let’s say we are going with a RAID 5 arrangement – so actually we need 4 ‘disks per disk’ (£2000) and the cost is then 0.0001 penny per kilobyte (about 4 orders of magnitude less than 35 years ago).
As discussed above I am using the RAID figure even though RAID wasn’t a practical option in 1985 because I think this is about real world choices not theoretical limitations.
And memory – for simplicity we are going to say 128GB of DRAM costs us £1000 (an over-estimate but fine for this sort of calculation). That means memory costs about 0.001p per KB – a fall of around 7 orders of magnitude.
Using the same regimen as the original paper – but considering 4KB pages – and assuming that the disk system supports 10,000 accesses per second, the cost for a disk works out at about 5p per access per second (about 5–6 orders of magnitude less than in 1985). We ignore the cost of a disk controller here, but conceivably we might want to add another few pence to that figure.
If we then think that making one 4KB page resident in memory saves 1 access per second, the disk cost saved is 5p at the cost of 0.004p. Saving 0.1 accesses per second saves 0.5p at the cost of 0.004p and the break even point is roughly 0.0008 accesses per second – or alternatively we need to hold pages in memory for 1/0.0008 seconds – 1250 seconds or about 20 minutes.
Alternatively this means caching systems should aim to hold items that are accessed every 20 minutes or so.
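As a sanity check, the break-even arithmetic can be run directly – a sketch using the rounded prices above (they are approximations, not measurements):

```python
# Break-even interval for keeping a 4KB page in RAM rather than on SSD,
# using the rounded prices above (all figures in pence).
disk_cost_per_aps = 50_000 / 10_000   # £500 disk, 10,000 accesses/sec -> 5p per access/sec
ram_cost_per_kb = 0.001               # ~£1000 per 128GB, rounded as in the text
page_cost = 4 * ram_cost_per_kb       # a resident 4KB page costs ~0.004p

break_even_aps = page_cost / disk_cost_per_aps   # 0.0008 accesses per second
interval_seconds = 1 / break_even_aps
print(round(interval_seconds))  # -> 1250, i.e. about 20 minutes
```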
And the trade off between computing power and memory…
As mentioned above, back in 1985, computing power was estimated to cost about $50,000 per Million Instructions Per Second (MIPS). These days single core designs are essentially obsolete and so it’s harder to put a price on MIPS – good parallel software will drive much better performance from a set of 3GHz cores than hoping a single core’s 5GHz burst speed will get you there. But, as we are making estimates here we will opt for 3000 MIPS costing you £500 and so a single MIPS costing 17p, and a single instruction per second costing (again approximating) 0.00002 pence. (Contrast this with a low-end microcontroller which might give you an IPS for about 0.00003p or a bit less).
Computing power has thus become about 5 orders of magnitude cheaper – but as we note above memory prices have fallen at an even faster rate.
Now an instruction per second costs about 0.00002p and a byte of memory costs about 0.000001p (0.001p per KB), so we need to save about 20 bytes of memory to make spending an additional instruction per second worthwhile – against about 5 bytes in 1985 – meaning that cost-efficient data compression of easily accessible memory is even harder to justify than it was.
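Spelled out (again a sketch using the rounded pence figures from above):

```python
# The CPU-vs-memory trade-off, using the rounded prices above (in pence).
cost_per_ips = 0.00002        # one instruction per second, from £500 / 3000 MIPS
cost_per_byte = 0.001 / 1000  # memory at ~0.001p per KB -> ~0.000001p per byte

# Bytes of memory a compression scheme must save to justify spending
# one extra instruction per second on decompression:
bytes_per_instruction = cost_per_ips / cost_per_byte
print(round(bytes_per_instruction))  # -> 20 (against about 5 in 1985)
```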