Perl: I still love it


For the first time in a few years I am mucking about with Perl scripts.

I know the language is in decline and Perl 6 has become the ultimate in vapourware, but I have to report I still love it.

 


Similarity, difference and compression



I am in York this week, being a student and preparing for the literature review seminar I am due to give on Friday – the first staging post on the PhD route, at which I have to persuade the department I have been serious about reading around my subject.

Today I went to a departmental seminar, presented by Professor Ulrike Hahne of Birkbeck College (and formerly of Cardiff University). She spoke on the nature of “similarity” – as is the nature of these things, it was a quick rattle through a complex subject, and if the summary that follows is inaccurate then I am to blame and not Professor Hahne.

Professor Hahne is a psychologist, but she has worked with computer scientists and so her seminar did cut into computer science issues. She began by stating that it was fair to say that all things are equally the same (or different) – in the sense that one can find an infinite number of ways in which any two things can be categorised as the same (object A weighs less than 1kg, object B weighs less than 1kg, they both weigh less than 2kg, and so on). I am not sure I accept this argument in its entirety – in what way is an object different from itself? But that’s a side issue, because her real point was that similarity and difference are products of human cognition, which I can broadly accept.

So how do we measure similarity and difference? Well, the “simplest” way is to measure the “distance” between two stimuli in the standard geometric way – this is how we measure the difference between colours in a colour space (about which more later), i.e., the square root of the sum of the squares of the differences along each dimension. This concept has even been developed into the “universal law of generalisation”. This idea has achieved much but has major deficiencies.
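
Just to make that concrete, here is a minimal Perl sketch of the geometric measure applied to two colours in RGB space (the colour values are simply made up for illustration):

#!/usr/bin/perl
use strict;
use warnings;

# Euclidean distance between two colours in RGB space
sub colour_distance {
    my ($a, $b) = @_;    # each is an array ref: [R, G, B]
    my $sum = 0;
    $sum += ($a->[$_] - $b->[$_]) ** 2 for 0 .. 2;
    return sqrt($sum);
}

print colour_distance([255, 0, 0], [200, 30, 30]), "\n";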

Professor Hahne outlined some of the alternatives before describing her interest in (and defence of) the idea that the key to difference is the number of mental transformations required to change one thing into another – for instance, how different is a square from a triangle? Two transformations are required: first to think of the triangle and then to replace the square with the triangle, and so on.

In a more sophisticated way, the issue is the Kolmogorov complexity of the transformation. The shorter the program we can write to make the transformation, the more similar the objects are.
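
Kolmogorov complexity itself is uncomputable, but a standard trick is to approximate it with an off-the-shelf compressor – the so-called normalised compression distance. Here is a rough Perl sketch of that idea using Compress::Zlib (the two strings are just placeholders):

#!/usr/bin/perl
use strict;
use warnings;
use Compress::Zlib;
use List::Util qw(min max);

# Approximate the Kolmogorov complexity of a string by the length
# of its zlib-compressed form
sub c { return length(compress($_[0])); }

# Normalised compression distance: smaller means more similar
sub ncd {
    my ($x, $y) = @_;
    return (c($x . $y) - min(c($x), c($y))) / max(c($x), c($y));
}

print ncd('a square', 'a triangle'), "\n";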

This, it strikes me, has an important application in computer science, or at least it could have. To go back to the colour space issue: when I wrote the Perl module Image::Pngslimmer I had to write a lot of code that computed geometric distances between colour points – a task Perl is quite poor at, as maths is slow there. This was to implement the so-called “median cut” algorithm (pleased to say that the Wikipedia article on the median cut algorithm cites my code as an example, and it wasn’t even me who edited it to say that, at least as far as I can remember!), where colours are quantised to those at the centre of “median cut” boxes in the colour space. Perhaps there is a much simpler way to make this transformation and so more quickly compress the PNG?

I asked Professor Hahne about this and she confirmed that her collaborator Professor Nick Chater of Warwick University is interested in this very question. When I have got this week out the way I may have a look at his published papers and see if there is anything interesting there.

 

Betamax versus VHS in your browser


Everyone knows the story, even if, unlike me, they are not old enough to remember it: the video format VHS overcame the superior Betamax format to dominate the home video market in the 80s and 90s.

Of course, the claims that Beta was superior are rather tendentious but the fact that video producers stuck with a Beta format for their own tapes long after the rest of us switched to a VHS monoculture must say something.

Now, inside your browser, the same thing has happened. As is noted in today’s Guardian, the 1987-vintage GIF format refuses to die. These days you see far fewer static GIFs than even a few years ago (though they are still out there in large numbers) – JPEG and PNG dominate. But you’ll have to look very hard for an MNG (the ‘official’ PNG analogue of the animated GIF) or even APNGs, the unofficial but more widely used attempt to animate PNGs.

A few years ago I wrote a Perl PNG module – Image::Pngslimmer – which replicated many of the functions of the C libpng library, so I could use some of that in CGI code without having to switch from Perl to C and back again. Then – this was 2006 or so – PNG support was quite weak in browsers and GIFs were far more plentiful.

PNG is a superior format to GIF (especially for line drawings and similar – for general photographs JPEG is the superior choice, unless you truly demand a lossless format) and it is a good thing that it has edged GIF out in many places. But it seems we will be stuck with GIFs for many years yet.

Making sense of Android’s complex development process



Back in about 1997 I bought a book about this new programming environment – it seemed something bigger than a language but smaller than an operating system – called Java.

Back then the idea seemed great – write once, run anywhere – but there was a lot of scepticism and, of course, Microsoft tried to poison the well through the tactics of “embrace and extend” with their J++ offering. All of that made it look as though Java was going nowhere.

I wrote a couple of applets – one was a “countdown” timer for Trevor Philips’s mayoral election website in 1999, another was a SAX-based parser for the largely Perl-based content management system I wrote for the Scottish Labour Party the following year, ahead of the 2001 election. But no one seemed to like applets much – it seems ridiculous now, but the 90K download needed for the SAX parser really slowed down the Scottish party’s site, even though I was pretty proud of the little newsticker it delivered (along with annoying teletype noises as it went). I forgot about Java.

But, of course, that was wrong. Java is the programming language du jour these days, though Microsoft’s responses to the success of Java and the failure of J++ – C# and .NET – are also big.

Android is, of course, Java’s most prominent offering these days – literally millions of people will be running Android apps even as I write this, and thousands of Android phones are being bought across the world daily. Time to get reacquainted, especially as my new job is once more about political communications.

But, as I discovered with C++ when I came back to it after over a decade for my MSc, Java has moved on a fair bit in that time and, unlike with C++, I cannot say all the progress seems to be positive. Indeed, Java seems to thrive on a particularly ugly idiom, with developers being encouraged to write constructors of anonymous classes in the headers of functions – ugh.

I can certainly see the beauty of Groovy more clearly than ever, too – though being an old-time Perl hacker makes me resent Java’s heavy-duty static typing in any case.

To help me through all this I have been reading O’Reilly’s Programming Android: Java Programming for the New Generation of Mobile Devices. Now, usually O’Reilly’s books are all but guaranteed to be the best, or close to the best, of any on offer, but I have my doubts that is the case with this one – it seems to be sloppily edited (e.g., at different times it is difficult to follow whether one is being advised to use the Android SDK or the Eclipse editor) and it falls between being a comprehensive introduction to Android programming and a guide for Java hackers to get on with it. It feels less than ordered, to be honest.

Now, maybe this is a function of the language and the complexity of the environment, I don’t know. But I would welcome any alternative recommendations if anyone has some.

Writing more code to avoid writing any of the report?



I have managed to churn out 160 lines of working C today – which I think is quite good going, though, according to this, maybe I could have churned out 400 of C++ or even 960 of Perl (I love Perl but the mind boggles).

My program will tell you how many pages are present or how many have been swapped (it has the ability to do a bit more too, but I have not exploited that even though it is essentially there) – you can fetch it from github here: valext (the name reflects the idea that this was going to be an extension to Valgrind but then I discovered/realised that was never going to work).
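
For the curious, the core of the idea on Linux is to read the 64-bit per-page entries in /proc/<pid>/pagemap and count the flag bits. The real program is in C and does rather more, but here is a very rough Perl sketch of the same thing, assuming 4KB pages and a perl built with 64-bit integers:

#!/usr/bin/perl
# Rough sketch only: count pages present in RAM or swapped out,
# by reading /proc/<pid>/pagemap (one 64-bit entry per virtual page)
use strict;
use warnings;

my $pid = shift || $$;
my $page_shift = 12;    # 4KB pages assumed

open(my $maps, '<', "/proc/$pid/maps") or die "cannot open maps: $!";
open(my $pagemap, '<', "/proc/$pid/pagemap") or die "cannot open pagemap: $!";
binmode $pagemap;

my ($present, $swapped) = (0, 0);
while (my $line = <$maps>) {
    my ($start, $end) = map { hex } $line =~ /^([0-9a-f]+)-([0-9a-f]+)/ or next;
    for (my $page = $start >> $page_shift; $page < $end >> $page_shift; $page++) {
        seek($pagemap, $page * 8, 0) or next;
        next unless read($pagemap, my $entry, 8) == 8;
        my $bits = unpack('Q', $entry);
        $present++ if $bits & (1 << 63);   # bit 63: page is in RAM
        $swapped++ if $bits & (1 << 62);   # bit 62: page is swapped out
    }
}
print "pages present: $present, pages swapped: $swapped\n";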

Anyway, what I now have to face up to is whether I am churning out code to avoid actually writing any of my MSc project report – I have a feeling that some of that is going on and think I need to start rereading Writing for Computer Science – which helped me greatly with the proposal.

“Analyzing Computer System Performance with Perl::PDQ”



My MSc project is ambitious – sometimes it seems too ambitious: to test various ways in which the Linux VMM and scheduler could be modified to improve performance under high levels of stress.

Books are an essential guide in computer science, perhaps the essential guide, and having a book that would help me get to grips with issues such as measuring the Linux kernel’s use of the VM subsystem, and how to design tests of application behaviour and page-faulting rates and so on, is increasingly important.

So, is Analyzing Computer System Performance with Perl::PDQ any good? (I found it by trawling round Amazon, but please send me links to anything else, from scientific papers upwards.) Has anybody got some alternative suggestions?

Smaller PNG graphic files from your LaTeX


I use LaTeX (LyX as the front end) to generate the occasional bits of mathematical script that appear here. Is it bad to admit it gives me a small thrill to see how the LaTeX is transformed into something mathematical? It makes me feel like a proper scientist/mathematician. Well, I’ve done it now…

WordPress.com will not display PostScript natively (or if it can, I have never been able to get it to work), so I usually open the .ps files in the GIMP and convert them to a PNG – PNGs being optimised to handle things like line drawings and graphical images of text.

The PNGs that come out of this are of a reasonable size, but they could be smaller. Some years ago I wrote some Perl code to do just that (I needed something that worked in a CGI environment, and while Perl is not very fast even at integer maths, it was the best option).

That code is available on CPAN as Image::Pngslimmer, and on a typical Linux/BSD (OS X too?) install – I am assuming you have perl and the CPAN module installed – you should be able to get it on to your box with sudo cpan Image::Pngslimmer. That may ask you a lot of questions before it installs if you have never used CPAN before but, while YMMV, the default answers should probably work.

You can write your own script or just use this quick and dirty one for PNGs of 64K or less:

#!/usr/bin/perl -w
use strict;
use Image::Pngslimmer();

# Quick and dirty: only looks at the first 64K of the source PNG
my $srcfile = shift or die "Usage: $0 some.png > smaller.png\n";
open(my $inf, '<', $srcfile) or die "Cannot open $srcfile: $!\n";
binmode $inf;
read($inf, my $buffer, 65535) or die "Cannot read $srcfile: $!\n";
close($inf);

# Palettise the colours, drop non-critical chunks, then recompress harder
my $blob2 = Image::Pngslimmer::indexcolours($buffer);
my $blob3 = Image::Pngslimmer::discard_noncritical($blob2);
binmode STDOUT;
print Image::Pngslimmer::zlibshrink($blob3);

To use this, copy the script above into your editor and save it with a name such as shrinker.pl, then execute it like this: perl shrinker.pl some.png > smaller.png – which will create a new file, smaller.png, from some.png (which is left untouched).

The results can be reasonably impressive. This file – created by cutting from the GIMP – comes in at 14.4KB (NB the graphics you see here are preprocessed by wordpress.com, click on the graphics to see the original):

Proof graphic, uncompressed

But after a bit of processing with the above script, this comes in at 9.7KB:

Compressed proof graphic

Finally, please do not say this doesn’t matter because your pipe is so fat. Millions of these things are being pumped through the internet every minute and the collective saving of bandwidth really would matter if we could deliver it…

530 lines of Javascript


I have just written that amount of code in what I persist in thinking of as a toy language (it was actually somewhat longer until I refactored the code to group some common functions together).

I had to do this for a coursework exercise – a lot of effort to be honest for what is at stake – processing a rather large XML file with some XSL. The Javascript essentially manages the parameters.

At the end my conclusion is that I don’t really see why anybody would want to write that much client code if they could possibly help it. Of course it transfers the computational burden to the client – but at the cost of hundreds of lines of interpreted code which is essentially under the control of the people who write the engines in the Firefox and IE browsers. In the real world that points towards a support nightmare.

Having written a fair bit of Perl (and Ajax) stuff in the past, the whole thing felt unnatural – dozens of lines, much of it designed to handle the differences between the browser engines, that could have been handled simply on the server side.
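
For comparison, the server-side equivalent in Perl is short. Something like this sketch using XML::LibXSLT would do it – the file names and the parameter are made up for illustration:

#!/usr/bin/perl
use strict;
use warnings;
use XML::LibXML;
use XML::LibXSLT;

# Apply a stylesheet to an XML document on the server side,
# passing in a single (made-up) stylesheet parameter
my $xml   = XML::LibXML->load_xml(location => 'data.xml');
my $style = XML::LibXML->load_xml(location => 'transform.xsl');

my $xslt       = XML::LibXSLT->new();
my $stylesheet = $xslt->parse_stylesheet($style);

my $result = $stylesheet->transform($xml,
    XML::LibXSLT::xpath_to_string(sortorder => 'descending'));

print $stylesheet->output_as_bytes($result);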

One thing I was convinced of was the potential power of XSLT – though I was not quite prepared for the revelation that it is Turing complete (i.e., it would be possible to write some XSL to carry out any task solvable through a finite number of mechanical steps). Though I shudder to think how big a stylesheet would be required for all but the smallest of tasks.

But the potential power of XSLT is not the same as being able to think of many practical uses for it!

Some advice you should ignore


I have an interest in graphic formats – I wrote the Perl package Image::Pngslimmer a while back when I was hacking some Perl database code that delivered (via Ajax) graphs and photographs.

I built the whole website for fun and learning purposes, and so used PNG graphics when JPEGs would have been more appropriate (for the photographs at least).

So an article that begins:

Virtually every developer will use bitmaps at times in their programming. Or if not in their programming, then in a website, blog, or family photos. Yet many of us don’t know the trade-offs between a GIF, JPEG, or PNG file – and there are some major differences there. This is a short post on the basics which will be sufficient for most, and a good start for the rest. Most of this I learned as a game developer (inc. Enemy Nations) where you do need a deep understanding of graphics.

has my attention.

But what a load of rubbish it turns out to be. The author plainly doesn’t know what the “trade-offs” are either, as he states:

PNG – 32-bit (or less), lossless compression, small file sizes – what’s not to like. Older versions of some browsers (like Internet Explorer) would display the transparent pixels with an off-white color but the newer versions handle it properly. Use this (in 32-bit mode using 8 bits for transparency) for everything.

…and…

JPEG – 24-bit only (i.e. no transparency), lossy (can be lossless but compressions drops a lot), small file sizes. There is no reason to use this format unless you have significant numbers of people using old browsers. It’s not a bad format, but it is inferior to PNG with no advantages.

This is seriously bad advice and it is also technically wrong.

First, PNG is not a lossless format. Many PNGs are indeed lossless, but they do not have to be. A substantial part of the code in the Image::Pngslimmer package is about using the median cut algorithm to produce a lossy PNG out of a lossless one, by producing a best-match set of colours for the image and then converting each pixel to the colour in the set which is closest to it in RGB space (the median cut algorithm ensures that there are more members of the set in the parts of RGB space where there are more colours in the original image).
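
The second half of that – converting each pixel to its nearest palette colour – is just the geometric distance measure again. Here is a hedged sketch of that step; the palette below is made up, whereas the real code in Image::Pngslimmer builds one with median cut:

#!/usr/bin/perl
use strict;
use warnings;

# Map a pixel to the nearest colour in a (made-up) palette,
# using plain Euclidean distance in RGB space
my @palette = ([0, 0, 0], [255, 255, 255], [200, 30, 30], [30, 30, 200]);

sub nearest_colour {
    my ($pixel) = @_;          # array ref: [R, G, B]
    my ($best, $best_dist);
    for my $colour (@palette) {
        my $dist = 0;
        $dist += ($pixel->[$_] - $colour->[$_]) ** 2 for 0 .. 2;
        ($best, $best_dist) = ($colour, $dist)
            if !defined $best_dist || $dist < $best_dist;
    }
    return $best;
}

my $quantised = nearest_colour([210, 40, 35]);
print "@$quantised\n";    # picks [200, 30, 30] from the palette above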

Second, JPEG remains by far the best way of representing photographs on the real-world internet, where the trade-off between size and quality is important. Look closely at the colour boundaries in a JPEG, especially a highly compressed one, and you can see how the edges are blurred and colours cross the boundaries – that is an artefact of the compression and indeed represents a loss (once compressed in this way, the quality of the original image cannot be recovered). But it is also a highly efficient algorithm and can produce much smaller image files than the lossless PNG equivalent without any noticeable diminution in quality for the average viewer.

As a parallel, think of MP3 and FLAC for audio. Like FLAC, PNG is (or can be) lossless and compressed – so delivering a much smaller file than the source WAV or similar. But MP3s can be much smaller again without any noticeable diminution in quality for most people on most equipment.

Yes, more and more people are connecting at 25Mb/s, but then more and more people are also connecting using GPRS, EDGE and low bandwidth 3G connections. Size still matters.

PNG is absolutely superior for representing line drawings (including artefacts such as graphs) as no one wants to see the lines start to fuzz. But that is a pretty limited subset of images on the internet.

So here is some proper advice: use JPEGs (at a quality setting of around 85) for almost all photographs you want to put on the internet. If you have a line drawing or similar, prefer PNGs. If you have a truly lossless image (remember someone may have compressed it before it got to you) and you need to keep it that way, then use the PNG format: but expect your file size to be very large.
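
And if you want to script that advice rather than click through the GIMP, something like this Image::Magick (PerlMagick) sketch should do it – the file names are made up and you will, of course, need the module installed:

#!/usr/bin/perl
use strict;
use warnings;
use Image::Magick;

# Convert a (made-up) photograph to a JPEG at quality 85
my $image = Image::Magick->new;
my $err = $image->Read('holiday-snap.png');
die "$err" if "$err";

$image->Set(quality => 85);

$err = $image->Write('holiday-snap.jpg');
die "$err" if "$err";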