# Taylor expansion of a probability density function

To further improve my understanding of probabilities I am reading *Data Analysis: A Bayesian Tutorial* – which I thoroughly recommend for anyone with basic mathematical knowledge who wants to grasp the key concepts in this area.

But I have a query about one of the concepts it introduces: a Taylor expansion of a probability density function around its maximum to give a confidence interval (this stuff matters for the day job as well as the computer science).

We have a probability density function $P(X)$ (in this context this is a posterior pdf from Bayes's theorem, but that is not important, as far as I can tell, to the maths).

The maximum, at $X = X_0$, satisfies $\left.\frac{dP}{dX}\right|_{X_0} = 0$ and $\left.\frac{d^2P}{dX^2}\right|_{X_0} < 0$.
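As a quick numerical sanity check (my own toy example, not from the book), take the unnormalised beta-shaped pdf $P(X) = X^3(1-X)^7$, whose maximum is at $X_0 = 3/10$, and confirm the two conditions with finite differences:

```python
# Check that at the maximum X0 of a pdf, dP/dX = 0 and d2P/dX2 < 0.
# Toy pdf (illustrative choice only): P(X) = X^3 (1 - X)^7, mode at X0 = 3/10.

def P(x):
    return x**3 * (1 - x)**7

X0 = 3 / 10   # analytic mode of this pdf
h = 1e-5      # step for central finite differences

dP = (P(X0 + h) - P(X0 - h)) / (2 * h)            # first derivative at X0
d2P = (P(X0 + h) - 2 * P(X0) + P(X0 - h)) / h**2  # second derivative at X0

print(dP)   # ≈ 0 at the maximum
print(d2P)  # negative at the maximum
```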

To get a smoother function we look at $L = \ln P(X)$. As the logarithm is a strictly monotonic function, $L$ has its maximum at the same point $X_0$.
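A one-line illustration of that monotonicity point (again using my own toy pdf, not the book's): maximising $P$ and maximising $\ln P$ over a grid picks out the same $X$:

```python
import math

# Because ln is strictly monotonic, P and ln P peak at the same X.
# Toy pdf (illustrative only): P(X) = X^3 (1 - X)^7, mode at 0.3.

def P(x):
    return x**3 * (1 - x)**7

xs = [i / 1000 for i in range(1, 1000)]       # grid over (0, 1)
peak_P = max(xs, key=P)                        # argmax of P
peak_L = max(xs, key=lambda x: math.log(P(x))) # argmax of ln P

print(peak_P, peak_L)  # the same point
```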

So expanding $L$ about $X_0$, according to the book, gives this:

$$L = L(X_0) + \frac{1}{2}\left.\frac{d^2L}{dX^2}\right|_{X_0}(X - X_0)^2 + \cdots$$

(we ignore the higher-order terms).
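The truncated expansion can be checked numerically. In this sketch (my own toy pdf again, with the second derivative estimated by finite differences) the quadratic approximation tracks $L$ closely near the maximum, and $\sigma = \left(-\left.\frac{d^2L}{dX^2}\right|_{X_0}\right)^{-1/2}$ gives the width of the Gaussian the expansion implies:

```python
import math

# Compare L = ln P with its quadratic Taylor approximation about the maximum.
# Toy pdf (illustrative only): P(X) = X^3 (1 - X)^7, mode at X0 = 3/10.

def P(x):
    return x**3 * (1 - x)**7

def L(x):
    return math.log(P(x))

X0 = 3 / 10
h = 1e-4
L2 = (L(X0 + h) - 2 * L(X0) + L(X0 - h)) / h**2  # d2L/dX2 at X0 (negative)

def L_quad(x):
    # No linear term: dL/dX vanishes at the maximum X0.
    return L(X0) + 0.5 * L2 * (x - X0)**2

x = 0.32
print(L(x), L_quad(x))        # nearly equal close to the maximum

sigma = 1 / math.sqrt(-L2)    # width of the implied Gaussian (the error bar)
print(sigma)
```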

I am not sure why the derivatives are the coefficients of the expansion, but I can read up on that later. Given that, I understand why there is no first-order term: $\left.\frac{dL}{dX}\right|_{X_0} = 0$ at the maximum.

OK … well this is the power of blogging as a means of clarifying thought: just as I was about to ask my question (why isn't the first term dependent on $X$?) I realised the answer. The first term is, in fact, the *zeroth* term of the expansion, and so the dependency on $X$ is in fact a dependency on $X_0$.

