Summing an infinite series

I dare say what I am about to write is no great revelation; indeed, I was probably taught this for ‘A’ level maths but forgot it…

Anyway, I was reading about the Fast Fourier Transform (FFT) which, when it was discovered in the 1960s, radically improved the speed of signal analysis. Older algorithms analysed N signal samples in O(N^2) time, while the FFT manages it in O(N \log_2 N) time.
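For contrast, here is what the older, direct approach looks like; a rough sketch of my own (not a reference implementation), where each of the N outputs is a sum over all N inputs, hence the O(N^2) cost:

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform: each of the N outputs is a
    sum over all N inputs, so the work grows as N**2."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]
```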

The FFT does this by relying on the fact that one can combine two N-sample transforms into a single 2N-sample transform in N operations, i.e.,

N \rightarrow 2N in N operations
2N \rightarrow 4N in 2N operations
4N \rightarrow 8N in 4N operations

Hence we can build up to N samples in \log_2 N doublings. But how many operations does that take?
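To make the combining step concrete, here is a minimal sketch of a recursive radix-2 FFT in Python (my own illustration of the usual Cooley–Tukey scheme, not code from any particular library):

```python
import cmath

def fft(x):
    """Recursive radix-2 FFT for a list whose length is a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])  # transform of the even-indexed half
    odd = fft(x[1::2])   # transform of the odd-indexed half
    out = [0j] * n
    for k in range(n // 2):
        # One "butterfly" per k: a twiddle-factor multiply plus two
        # additions combines the two half-size transforms.
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out
```

The loop runs N/2 times to merge two N/2-point transforms into one N-point transform, which is exactly the "combine two blocks into one twice the size, paying the block size in operations" bookkeeping above.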

[Figure: FFT of the Mona Lisa. Public domain image.]

For the above example (N = 8) we can see it is \frac{N}{2} + \frac{N}{4} + \frac{N}{8}: the final doubling that produced the N samples cost \frac{N}{2} operations, the one before it \frac{N}{4}, and so on. For the more general case this becomes N(2^{-1} + 2^{-2} + ...), i.e. N times an infinite series.

For the overall complexity of the algorithm to come out at O(N \log_2 N), this series should add up to 1. Does it?
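Before the proof, a quick numerical sanity check (a sketch of my own; the name chain_cost is made up): tally \frac{N}{2} + \frac{N}{4} + ... + 1 for a few values of N and compare with N.

```python
def chain_cost(n):
    """Total cost n/2 + n/4 + ... + 1 of the chain of doublings,
    charging m operations to combine two m-sample blocks into 2m."""
    ops, m = 0, n // 2
    while m >= 1:
        ops += m
        m //= 2
    return ops

for n in (8, 64, 1024, 2**20):
    print(n, chain_cost(n), chain_cost(n) / n)
# The ratio creeps up towards 1 as n grows: 0.875, 0.984..., 0.999...
```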

Yes, it does, and here’s a proof…

We can rewrite our series (taking r = \frac{1}{2}) as…

X = r + r^2 + r^3 + r^4 + ...

Then, dividing through by r,

\frac{X}{r} = 1 + r + r^2 + r^3 + ... = 1 + X

Hence \frac{X}{r} - X = 1. (Rearranging the series term by term like this is safe because it converges for |r| < 1.)

As we know r = \frac{1}{2}, we get 2X - X = 1, so X = 1. Our series does indeed sum to 1.
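A final numerical check of the result (again a sketch of my own, using nothing beyond the series itself): the partial sums of 2^{-1} + 2^{-2} + ... head straight for 1, and the rearranged equation solves to X = \frac{r}{1-r}, which is exactly 1 at r = \frac{1}{2}.

```python
def partial_sum(r, terms):
    """Partial sum r + r**2 + ... + r**terms of the geometric series."""
    return sum(r ** k for k in range(1, terms + 1))

r = 0.5
for terms in (4, 8, 16, 32):
    print(terms, partial_sum(r, terms))
# 0.9375, 0.99609375, 0.999984..., 0.9999999997...

# Solving X/r - X = 1 for general r gives X = r / (1 - r):
print(r / (1 - r))  # 1.0 when r = 1/2
```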