Taylor expansion of a probability density function


To further improve my understanding of probabilities I am reading Data Analysis: A Bayesian Tutorial – which I thoroughly recommend to anyone with basic mathematical knowledge who wants to grasp the key concepts in this area.

But I have a query about one of the concepts it introduces: a Taylor expansion of a probability density function around its maximum to give a confidence interval (this stuff matters for the day job as well as the computer science).

We have a probability density function P(x) (in this context this is a posterior pdf from Bayes's equation, but that is not important, as far as I can tell, to the maths).

The maximum of P(x) is at x = x_0, so the maximum value is P(x_0) and \left.\frac{dP}{dx}\right|_{x_0} = 0.

To get a smoother function we look at L = \log_e P. As the logarithm is a strictly monotonic function, L also has its maximum at x_0, and so \left.\frac{dL}{dx}\right|_{x_0} = 0.
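As a sanity check (my own example, not the book's): for a Gaussian pdf P(x) = A\exp\left(-\frac{(x - x_0)^2}{2\sigma^2}\right), we get L(x) = \log_e A - \frac{(x - x_0)^2}{2\sigma^2}, which is exactly quadratic in (x - x_0) – so for a Gaussian the expansion that follows is not an approximation at all.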

So expanding L around x_0, according to the book, gives this:

L(x) = L(x_0) + \frac{1}{2}\left.\frac{d^2L}{dx^2}\right|_{x_0}(x - x_0)^2 + \ldots (we ignore the higher terms).
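Filling in the next step (as I understand it – this is where the confidence interval comes from): exponentiating both sides turns the quadratic back into a Gaussian,

P(x) \approx P(x_0)\exp\left[\frac{1}{2}\left.\frac{d^2L}{dx^2}\right|_{x_0}(x - x_0)^2\right]

and since L has a maximum at x_0 the second derivative there is negative, so this is a bell curve of width \sigma = \left(-\left.\frac{d^2L}{dx^2}\right|_{x_0}\right)^{-1/2}, giving the familiar x_0 \pm \sigma error bar.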

I am not sure why the derivatives are the coefficients of the expansion – I can read up on that later – but given that, I understand why there is no \frac{dL}{dx} term: it vanishes because \left.\frac{dL}{dx}\right|_{x_0} = 0.
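For reference – this is the standard Taylor series, not anything specific to the book – the full expansion of L about x_0 is

L(x) = \sum_{n=0}^{\infty}\frac{1}{n!}\left.\frac{d^nL}{dx^n}\right|_{x_0}(x - x_0)^n

so each coefficient is the nth derivative at x_0 divided by n!, which matches the \frac{1}{2} in front of the second-order term above.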

OK … well this is the power of blogging as a means of clarifying thought: just as I was about to ask my question – why isn't the first term dependent on (x - x_0)? – I realised the answer. The first term is, in fact, the zeroth term of the expansion, and so the dependency on (x - x_0) is in fact a dependency on (x - x_0)^0 = 1.
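To convince myself numerically, here is a minimal sketch (my own, not from the book; the choice of a Beta(4, 8) posterior is purely for illustration) comparing a pdf with its quadratic-in-L approximation:

    import numpy as np
    from scipy.stats import beta

    # Posterior chosen purely for illustration: Beta(4, 8), whose log-pdf
    # is smooth with an interior maximum.
    a, b = 4.0, 8.0
    x0 = (a - 1) / (a + b - 2)  # mode of the Beta(a, b) pdf

    # Second derivative of L = log P at the mode, via central differences.
    h = 1e-5
    L = lambda x: beta.logpdf(x, a, b)
    d2L = (L(x0 + h) - 2 * L(x0) + L(x0 - h)) / h**2

    sigma = (-d2L) ** -0.5  # width of the Gaussian approximation

    # Compare the true pdf with the Taylor/Gaussian approximation near x0.
    for x in [x0 - sigma, x0, x0 + sigma]:
        approx = beta.pdf(x0, a, b) * np.exp(0.5 * d2L * (x - x0) ** 2)
        print(f"x={x:.3f}  true={beta.pdf(x, a, b):.4f}  approx={approx:.4f}")

The two agree well near the maximum and drift apart further out in the tails – which is fine, because the region within about one \sigma of x_0 is exactly what the confidence interval describes.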
