Microsoft versus Linux: did we win after all?


At the end of John le Carré’s Smiley’s People, George Smiley is congratulated for having triumphed in his life’s struggle with Karla, the éminence grise of the KGB, and is told “George, you won” – to which the British spymaster, perhaps shamed by his need to adopt his opponent’s tactic of threatening the innocent, replies “Did I?”

It feels a bit like that this weekend when I look back on what is surely Microsoft’s humbling in the face of Android’s triumph. (I don’t claim to be any sort of central figure in this – I just mean I know we have won, but I don’t know what we have won, given the compromises required to secure victory.)

Free software made the victory possible – but the freedom that counted was the ‘as in beer’ one: Linux proved to be a cheaper platform for the hardware manufacturers to use. I do not detect any greater public understanding of the ideas of the free software movement than a decade ago – even if so many of the old arguments against its use have been killed by the onrushing Android juggernaut.

Indeed, the fact that Apple, whose business model is even more fundamentally hostile to free software than that of Microsoft, are doing so well suggests that no sort of ideological battle has been won at all – for so many consumers it is “shiny thing make it all better” (and Apple do do a fine line in shinies).

And the site of the great battles of the past – the desktop – has become something hardly worth fighting over. Windows 8 stinks – I replaced it on one of my daughter’s computers with Ubuntu recently – but I suspect what has made it such a turkey for Microsoft is not the tiny numbers who, like me, are getting rid of it, but the falling sales of desktops and laptops in the developed world’s markets.

The coming HTML5 disaster


HTML5 official logo (official since 1 April 2011). Photo credit: Wikipedia

About 18 months ago I got my first Android phone. One of the first applications I downloaded on it was for Facebook. It had some quirks but it worked fine.

Not long after I was prompted to ‘upgrade’ to the next version, which I duly did.

The supposed upgrade was (and is) a disaster. Slow, difficult to understand, a mess.

I had always wondered why Facebook had not simply rolled back the upgrade and tried again. But now I know. To cut their costs they had based their iOS and Android applications on a common HTML5 core. A common code base eliminated the need to maintain two separate blocks of complex code, presumably with two sets of developers.

But it didn’t work. By all accounts the iOS version made the Android one look slick, and this week it was axed in favour of an Objective-C-based application. Hopefully a Java-based Android replacement is also in the works.

But I suspect sloth will be the least of HTML5’s problems. Turning markup into executable code just sounds like a recipe for trouble, and it’s only just started.

@AmazonKindle have broken their Cloud Reader too


Having checked out the alternatives to the Kindle that Amazon provides, the current state of play is:

  • Linux: no native alternative offered
  • Windows: the Kindle app crashes on 64-bit Wine, so I cannot tell you what that’s like
  • Mac OS X: seems to render pages exactly as broken as the Kindle
  • Android: the app is as broken as the Kindle itself
  • Cloud Reader: supposedly what Linux users should be using, but this too is broken, failing to render characters properly in just the same way as the Kindle.

The binomial distribution, part 1


Lognormal
Image via Wikipedia

I think there are now going to be a few posts here which are essentially about me rediscovering some A-level maths probability theory and writing it down as an aid to memory.

All of this relates to whether the length of time pages spend in the working set is governed by a stochastic (probabilistic) process or a deterministic one. Why does it matter? Well, if the process were stochastic then in low-memory situations a first-in, first-out approach, or a simple single-queue LRU approach, to page replacement might work well in comparison to the 2Q LRU approach currently in use. It is an idea worth a little exploring, anyway.

So, now the first maths aide-mémoire – simple random/probabilistic processes are binomial: something happens or it does not. If the probability of it happening in a unit time period is p then the probability it will not happen is 1 - p = q .  For instance, this might be the probability that an atom of Uranium-235 undergoes \alpha particle decay (the probability that one U-235 atom will decay is set by its half-life of 700 million years, i.e., 2.21\times10^{16} seconds, or a probability, if my maths is correct, of a given individual atom decaying in any particular second of approximately 3.1\times10^{-17} ).
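For the record, here is the arithmetic worked through, assuming simple exponential decay (so the per-second probability is \ln 2 divided by the half-life expressed in seconds):

p = \frac{\ln 2}{t_{1/2}} = \frac{0.693}{2.21\times10^{16}\,\mathrm{s}} \approx 3.1\times10^{-17}\ \mathrm{per\ second}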

(In operating systems terms my thinking is that if the time pages spent in a working set were governed by similar processes then there would be a half-life for every page that is read in. If we discarded pages after they had been in the system for such a half-life, or better yet some multiple of it, then we could have a simpler page replacement system – we would not need a CLOCK algorithm, just record the time a page entered the system, stick it in a FIFO queue, and discard it when the entry time was more than a half-life ago.

An even simpler scheme might be to discard pages once the number stored rose above a certain ‘half-life’ limit. Crude, certainly, but maybe the simplicity would compensate for the lack of sophistication.

Such a system would not work very well for a general/desktop operating system – as the graph for the MySQL daemon referred to in the previous blog shows, even one application can seem to show different distributions of working set sizes. But what if you had a specialist system where the OS only ran one application – then tuning might work: perhaps that could even apply to mass electronics devices, such as Android phones – after all, the Android (Dalvik) VM is what is being run each time.)
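As an entirely hypothetical sketch of the scheme described above – the class and method names are mine, and the ‘times’ are just abstract tick counts rather than real clock values:

```java
import java.util.ArrayDeque;

// A minimal sketch of half-life-based page discard: pages enter a FIFO
// queue stamped with their load time, and anything resident longer than
// the chosen "half-life" is dropped from the head of the queue.
public class HalfLifeFifo {
    static class Page {
        final int frame;      // page frame number
        final long loadedAt;  // tick at which the page entered the working set
        Page(int frame, long loadedAt) { this.frame = frame; this.loadedAt = loadedAt; }
    }

    private final ArrayDeque<Page> queue = new ArrayDeque<>();
    private final long halfLife;

    public HalfLifeFifo(long halfLife) { this.halfLife = halfLife; }

    public void load(int frame, long now) {
        queue.addLast(new Page(frame, now));
    }

    // Discard every page resident longer than the half-life. Because the
    // queue is FIFO, the oldest pages are always at the head, so there is
    // no need for a CLOCK-style scan over all frames.
    public int evictExpired(long now) {
        int evicted = 0;
        while (!queue.isEmpty() && now - queue.peekFirst().loadedAt > halfLife) {
            queue.removeFirst();
            evicted++;
        }
        return evicted;
    }

    public int resident() { return queue.size(); }
}
```

The appeal is exactly the simplicity argued for in the post: one queue, one timestamp per page, and eviction is a constant-time check at the head.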

An example of the poor editing in O’Reilly’s “Programming Android”


IMG_3030s
Image by 小宗宗 via Flickr

OK, I don’t really want to sound like I am bashing this book – Programming Android: Java Programming for the New Generation of Mobile Devices – because, by its very nature, writing a technical book must be highly demanding in terms of accuracy, and I see no sign of any mistakes – just what I think is poor editing. See if you agree…

So, the book is discussing how to serialize classes using Android’s Parcelable interface, and makes this correct point about serializing an enum type:

“Be sure, though, to think about future changes to data when picking the serialized representation. Certainly it would have been much easier in this example to represent state as an int whose value was obtained by calling state.ordinal. Doing so, however, would make it much harder to maintain forward compatibility for the object. Suppose it becomes necessary at some point to add a new state … this trivial change makes new versions of the object completely incompatible with earlier versions.”

But then discussing de-serialization, the book states, without comment:

“The idiomatic way to do this is to read each piece of state from the Parcel in the exact same order it was written in writeToParcel (again, this is important), and then to call a constructor with the unmarshaled [sic] state.”

Now, technically, these passages are not in disagreement – but it is clearly the case that the de/serialization process is tightly coupled to the data design – something that ought to be pointed out here too, especially if we are going to make a big deal of it at the serialization stage.

Making sense of Android’s complex development process


Image representing Android as depicted in Crun...
Image via CrunchBase

Back in about 1997 I bought a book about this new programming environment – it seemed something bigger than a language but smaller than an operating system – called Java.

Back then the idea seemed great – write once, run anywhere – but there was a lot of scepticism and, of course, Microsoft tried to poison the well through the tactics of “embrace and extend” with their J++ offering. All of that made it look as though Java was going nowhere.

I wrote a couple of applets – one was a “countdown” timer for Trevor Phillips‘s mayoral election website in 1999, another was a SAX-based parser for the largely Perl-based content management system I wrote for the Scottish Labour Party the following year, ahead of the 2001 election. But no one seemed to like applets much – it seems ridiculous now, but the 90K download needed for the SAX parser really slowed down the Scottish party’s site, even though I was pretty proud of the little newsticker it delivered (along with annoying teletype noises as it went). I forgot about Java.

But, of course, that was wrong. Java is the programming language du jour these days, though Microsoft’s responses to the success of Java and the failure of J++ – C# and .NET – are also big.

Android is, of course, Java’s most prominent offering these days – literally millions of people will be running Android apps even as I write this, and thousands of Android phones are being bought across the world daily. Time to get reacquainted, especially as my new job is once more about political communications.

But, as I discovered with C++ when I came back to it after over a decade for my MSc, Java has moved on a fair bit in that time and, unlike with C++, I cannot say all the progress seems positive. Indeed, Java seems to thrive on a particularly ugly idiom, with developers encouraged to write constructors of anonymous classes inside the argument lists of method calls – ugh.
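For anyone who has not met it, the idiom I mean looks something like this – a plain-Java example of my own (nothing Android-specific), with an anonymous Comparator constructed right inside the Arrays.sort call:

```java
import java.util.Arrays;
import java.util.Comparator;

// The idiom complained about above: an anonymous class constructed
// directly in a method's argument list. Perfectly legal Java, but the
// comparison logic ends up buried inside the call site.
public class AnonDemo {
    public static String[] sortByLength(String[] words) {
        String[] copy = words.clone();
        Arrays.sort(copy, new Comparator<String>() {   // anonymous class,
            @Override                                  // declared inline in
            public int compare(String a, String b) {   // the argument list
                return Integer.compare(a.length(), b.length());
            }
        });
        return copy;
    }
}
```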

I can certainly see the beauty of Groovy more clearly than ever, too – though being an old-time Perl hacker makes me resent Java’s heavy-duty static typing in any case.

To help me through all this I have been reading O’Reilly‘s Programming Android: Java Programming for the New Generation of Mobile Devices. Now, usually O’Reilly’s books are all but guaranteed to be the best, or close to the best, on offer, but I have my doubts that is the case with this one – it seems sloppily edited (e.g. at different times it is difficult to follow whether one is being advised to use the Android SDK or the Eclipse editor) and falls between being a comprehensive introduction to Android programming and a guide for Java hackers to get on with it. It feels less than ordered, to be honest.

Now, maybe this is a function of the language and the complexity of the environment, I don’t know. But I would welcome any alternative recommendations if anyone has some.