Britain used to lead the world in the deployment of a cutting-edge technology: the fax machine.
Back when I graduated from university in the late 1980s, fax machines were the technology every business had to have, and Britain had more of them per head of population than anywhere else.
Today I work in an office where we have decided having one is no longer necessary.
Why all this? Well, one thing I remember from the era of the fax machine is the frequent reference to “Huffman coding”. Indeed, back in the days when free software graphics libraries were in short supply, I investigated whether I could convert faxes into other graphics formats and kept coming across the term.
Here’s my attempt to explain it…
We have a symbol sequence $x_1, x_2, \dots, x_N$; these have a probability of occurring of $p_1, p_2, \dots, p_N$, where $\sum_{i=1}^{N} p_i = 1$. We rearrange the symbols so that $p_1 \geq p_2 \geq \dots \geq p_N$.
Now take the two least probable symbols (or, if more than two are tied, any two of the least probable symbols) and combine them, denoting the combination by one of the symbols picked, to produce a new symbol sequence with $N-1$ members, e.g. $x_1, x_2, \dots, x_{N-1}$, with probabilities $p_1, p_2, \dots, p_{N-2}, p_{N-1} + p_N$.
This process can then be repeated until we have simply one symbol with probability 1.
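The repeated combining step can be sketched in Python with a priority queue (the function and symbol names here are my own, illustrative choices, not from the original description):

```python
import heapq

def combine_until_one(probs):
    """Repeatedly merge the two least probable symbols until one remains.

    probs: dict mapping symbol -> probability.
    Returns the merges performed, each as
    (least_probable, next_least_probable, combined_probability).
    """
    # heap entries are (probability, symbol); ties break on the symbol name
    heap = [(p, s) for s, p in probs.items()]
    heapq.heapify(heap)
    merges = []
    while len(heap) > 1:
        p1, s1 = heapq.heappop(heap)  # least probable symbol
        p2, s2 = heapq.heappop(heap)  # next least probable symbol
        merges.append((s1, s2, p1 + p2))
        # denote the combined symbol by joining the two names
        heapq.heappush(heap, (p1 + p2, s1 + "+" + s2))
    return merges
```

For four symbols with probabilities 0.5, 0.25, 0.125 and 0.125 this performs three merges, the last producing the single symbol with probability 1.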
To encode this, we can now ‘go backwards’ to produce an optimal code.
We start by assigning the empty code to the final combined symbol, then, passing back up the tree, we expand each combined symbol, adding a ‘1’ to the ‘left-hand’ symbol and a ‘0’ to the right (or vice versa): a combined symbol with code $c$ splits into $c1$ and $c0$.
Here’s a worked example: we have the (binary) symbols 0, 1, 10 and 11, which have probabilities of 0.5, 0.25, 0.125 and 0.125.
So combining the two least probable symbols gives the sequence 0, 1, 10 with probabilities 0.5, 0.25, 0.25; combining again gives 0, 1 with probabilities 0.5, 0.5; and combining once more gives the single symbol 0 with probability 1.
Now, to encode:
Start: the single symbol 0 gets the empty code; expanding: 0 = 1, 1 = 0
Expanding again: 0 = 1, 1 = 01, 10 = 00
And finally: 0 = 1, 1 = 01, 10 = 001, 11 = 000
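The whole procedure, forward combining and then the backward code assignment, can be sketched as follows (my own illustrative implementation; the 1/0 choice at each split is a free convention, and with the left-gets-‘1’ convention used here this run reproduces the codes of the worked example):

```python
import heapq
import itertools

def huffman_codes(probs):
    """Build Huffman codes for a dict mapping symbol -> probability."""
    counter = itertools.count()  # tie-breaker: never compare trees directly
    # heap entries: (probability, tiebreak, tree), where a tree is either
    # a plain symbol or a (left, right) pair of trees
    heap = [(p, next(counter), s) for s, p in probs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p1, _, t1 = heapq.heappop(heap)  # least probable
        p2, _, t2 = heapq.heappop(heap)  # next least probable
        heapq.heappush(heap, (p1 + p2, next(counter), (t1, t2)))

    codes = {}
    def walk(tree, prefix):
        if isinstance(tree, tuple):    # a combined symbol: expand it
            left, right = tree
            walk(left, prefix + "1")   # '1' to the 'left-hand' symbol
            walk(right, prefix + "0")  # '0' to the right
        else:
            codes[tree] = prefix or "0"  # edge case: a single-symbol source
    _, _, root = heap[0]
    walk(root, "")
    return codes
```

Running `huffman_codes({"0": 0.5, "1": 0.25, "10": 0.125, "11": 0.125})` yields the codes above: 0 = 1, 1 = 01, 10 = 001, 11 = 000.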
This code has an average length of $0.5 \times 1 + 0.25 \times 2 + 0.125 \times 3 + 0.125 \times 3 = 1.75$ bits per symbol.
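As a quick check, the average length can be computed directly, and for these probabilities it matches the entropy of the source exactly, which is what makes the code optimal:

```python
import math

# worked-example probabilities and the lengths of the codes 1, 01, 001, 000
probs = [0.5, 0.25, 0.125, 0.125]
lengths = [1, 2, 3, 3]

avg_length = sum(p * l for p, l in zip(probs, lengths))
entropy = -sum(p * math.log2(p) for p in probs)

print(avg_length)  # 1.75 bits per symbol
print(entropy)     # 1.75 -- the code meets the entropy bound exactly
```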
(In fact, pure Huffman coding is not generally used in practice, as other forms, e.g. adaptive Huffman coding, offer better performance, though the underlying principles are the same.)