Considering the chances of a decoding error (i.e., having more errors than our error correction code can handle)…

where *p* is the probability of a bit flip and *n* is the length of the code.
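As a quick numerical sanity check, the no-flip and at-least-one-flip probabilities can be computed directly from *p* and *n*. This is only a sketch – the values of *p* and *n* below are illustrative (*n* = 7 is an assumption, not a figure from this post):

```python
def p_no_flip(p, n):
    """Probability that none of the n bits flip, each bit flipping independently with probability p."""
    return (1 - p) ** n

def p_any_flip(p, n):
    """Probability that at least one of the n bits flips."""
    return 1 - (1 - p) ** n

# Illustrative values: p = 0.1 (the bit-flip rate quoted later in the post), n = 7 (an assumption)
print(p_no_flip(0.1, 7))   # ≈ 0.478
print(p_any_flip(0.1, 7))  # ≈ 0.522
```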

So in our case that gives

But we can also work out the probability of *k* bit flips, using the binomial distribution:

$$P(k\ \text{flips}) = \binom{n}{k}\, p^k\, (1-p)^{n-k}$$
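The binomial calculation is easy to check numerically – a sketch, with illustrative parameter values (*n* = 7 is an assumption; the post's code length isn't restated here):

```python
from math import comb

def p_k_flips(p, n, k):
    """Binomial probability of exactly k bit flips among n bits, each flipping with probability p."""
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

# Illustrative values: p = 0.1, n = 7 (n is an assumption)
p, n = 0.1, 7
for k in range(n + 1):
    print(k, p_k_flips(p, n, k))

# The probabilities over all possible k sum to 1
print(sum(p_k_flips(p, n, k) for k in range(n + 1)))  # ≈ 1.0
```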

So what are the prospects of a decoding error? This is only a lower bound because – as the table in the previous post showed – some errors might be detected and some not for a given Hamming distance:


For us , so as , the lower bound in our case is , which isn’t bad even for such a noisy channel.

But what is the guaranteed success rate?

Here we are looking at:

$$\sum_{k=0}^{v} \binom{n}{k}\, p^k\, (1-p)^{n-k}$$

(Recalling that a Hamming distance of *d* gives *v* = ⌊(*d* − 1)/2⌋ bits of error correction)

In our case this gives:

This shows the power of error correction – even though there is a 10% chance of an individual bit flipping, we can keep the chance of an uncorrectable error down to just over 5%.
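That guaranteed success rate is just the binomial sum for at most *v* flips. A sketch of the calculation: *n* = 4 and *v* = 1 below are assumptions, chosen only because they reproduce the roughly 5% failure figure quoted above – the post's actual code parameters may differ:

```python
from math import comb

def guaranteed_success(p, n, v):
    """P(at most v bit flips): the error stays within the code's guaranteed correction radius."""
    return sum(comb(n, k) * p ** k * (1 - p) ** (n - k) for k in range(v + 1))

p = 0.1       # 10% chance of an individual bit flipping, as in the post
n, v = 4, 1   # assumed code length and correction capability (illustrative only)

print(guaranteed_success(p, n, v))       # ≈ 0.948
print(1 - guaranteed_success(p, n, v))   # ≈ 0.052 – "just over 5%"
```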

###### Related articles

- The difference between Hamming distance and arithmetic difference (cartesianproduct.wordpress.com)
- The elusive capacity of networks (eurekalert.org)
- The elusive capacity of data networks (phys.org)
- “Inexact” chips save power by fudging the maths (pcpro.co.uk)