More on error correction


Considering the chances of a decoding error (i.e. having more errors than our error correction code can handle)…

P(\text{no errors}) = (1 - p)^n where p is the probability of a bit flip and n the length of the code.

So in our case (p = 0.1, n = 4) that gives P(\text{no errors}) = (0.9)^4 = 0.6561.

But we can also work out the probability of k bit flips, using the binomial distribution:

P(k \text{ errors}) = {}_nC_k \, p^k (1 - p)^{n - k}
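
If you want to check these numbers yourself, here is a minimal Python sketch of the binomial formula (the helper name p_k_errors is just my own label, not anything standard):

```python
from math import comb

def p_k_errors(k: int, n: int, p: float) -> float:
    """Probability of exactly k bit flips among n transmitted bits."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Our 4-bit code over a channel with a 10% per-bit flip rate:
print(p_k_errors(0, 4, 0.1))  # ~0.6561, matching (0.9)^4 above
```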

So what are the prospects of a decoding error? This is the lower bound (only a lower bound because – as the table in the previous post showed – for a given Hamming distance some errors might be detected and some not):

P(\text{total errors}) = \sum_{k = d}^{n} {}_nC_k \, p^k (1 - p)^{n - k}

For us d = 4 and n = 4, so the sum has just the single k = 4 term; since 0! = 1 gives {}_4C_4 = 1, the lower bound in our case is (0.1)^4 (0.9)^{4 - 4} = 0.0001, which isn’t bad even for such a noisy channel.
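
The same tail sum in a short Python sketch (again, the function name is just illustrative):

```python
from math import comb

def p_decoding_error_lower_bound(d: int, n: int, p: float) -> float:
    """Probability of d or more bit flips among n bits."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(d, n + 1))

print(p_decoding_error_lower_bound(d=4, n=4, p=0.1))  # ~0.0001; only k = 4 contributes
```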

But what is the guaranteed success rate?

Here we are looking at:

\sum_{k=0}^{\lfloor\frac{d - 1}{2}\rfloor} {}_nC_k \, p^k (1 - p)^{n - k}

(Recalling that d \geq 2v + 1 for v bits of error correction)

In our case d = 4, so \lfloor\frac{d - 1}{2}\rfloor = 1 and we sum over k = 0 and k = 1:

{}_4C_0 \, p^0 (1 - p)^4 + {}_4C_1 \, p^1 (1 - p)^3 = 0.6561 + 0.2916 = 0.9477
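
And a sketch of this guaranteed-success sum, with t = \lfloor\frac{d - 1}{2}\rfloor bits of correction (function name again my own):

```python
from math import comb

def p_guaranteed_success(d: int, n: int, p: float) -> float:
    """Probability of at most floor((d - 1) / 2) bit flips,
    all of which the code is guaranteed to correct."""
    t = (d - 1) // 2
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(t + 1))

print(p_guaranteed_success(d=4, n=4, p=0.1))  # ~0.9477
```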

This shows the power of error correction – even though there is a 10% chance of any individual bit flipping, we can keep the chance of an uncorrectable error down to just over 5% (1 - 0.9477 = 0.0523).