Reviving this blog … with a question about binary trees


Work changes and a determination to actually finish my PhD mean I really should make a bit of an effort here and so I will.

Here is a puzzle about binary trees that has been bothering me – it comes out of my PhD research…

In that research I am investigating how to implement virtual memory on a many-core Network-on-Chip (NoC) system. Essentially I have been building and running emulators.

The core of the emulation is the “memory tree” – the connection between the cores of the NoC and the global memory controller. The “Bluetree” memory tree is a binary tree arrangement, with the root node (level 0) connecting to the memory controller and the leaves at the NoC tiles (with each tile having a single processor).

At every non-leaf level of the tree there are multiplexors (mux) with a buffer that can hold a single memory packet – so if two requests for global memory arrive at once and the buffer is free there needs to be an arbitration process to decide which packet gets to occupy the buffer and which has to wait.

We have 128 leaf nodes – as in this tree…

Binary tree with 128 leaf nodes

With this many leaf nodes, the question of the arbitration policy of the muxes is important. A naïve approach would be to say: if two packets contest for the buffer, let the packet on the left win the first time and the packet on the right the next time – after all, this guarantees no starvation.
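To make the policy concrete, here is a toy sketch in Python (my own invented names and structure – nothing like the real Bluetree implementation) of a mux with a single-packet buffer whose priority bit alternates on contested rounds:

```python
class NaiveMux:
    """Toy model of one mux: a single-packet buffer plus the naive
    policy in which the left input wins a contested round and then
    priority passes to the right input, alternating thereafter."""

    def __init__(self):
        self.buffer = None            # holds at most one memory packet
        self.left_has_priority = True

    def offer(self, left, right):
        """Offer a packet on each input (None = no packet this cycle).
        The winner occupies the buffer; the loser must wait."""
        if self.buffer is not None:
            return                    # buffer occupied: both inputs wait
        if left is not None and right is not None:
            # Contested round: the priority holder wins, then the
            # priority flips - so neither input can starve.
            self.buffer = left if self.left_has_priority else right
            self.left_has_priority = not self.left_has_priority
        elif left is not None or right is not None:
            self.buffer = left if left is not None else right

    def forward(self):
        """Pass the buffered packet up the tree, freeing the buffer."""
        packet, self.buffer = self.buffer, None
        return packet


mux = NaiveMux()
mux.offer("L-packet", "R-packet")
print(mux.forward())  # L-packet: left wins the first contested round
mux.offer("L-packet", "R-packet")
print(mux.forward())  # R-packet: priority has passed to the right
```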

But it is actually a hopeless policy … the leftmost leaf node (we shall number from zero) faces no opposition, while the rightmost (number 127) loses out to every other node.

But the number of a leaf node alone is not sufficient to describe its relative number of blockings – for instance, leaf node 16 appears to be in a better place than leaf node 15 – as, going up the tree, leaf node 15 can be blocked at level 6 (the root node is at level 0), at level 5, level 4, level 3 and level 2, while leaf node 16 is only blocked at level 1.
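To make that concrete, here is a minimal Python sketch (my own framing, not anything from the emulator) that walks a leaf’s path up the tree and records the levels at which it arrives on the right-hand input of a mux. The exact level labels depend on where you place the first mux, so they may differ by one from the numbering above:

```python
def right_input_levels(leaf, mux_levels=7):
    """Levels (root mux = 0) at which this leaf's path enters a mux
    on the right-hand input, assuming 2**mux_levels leaves numbered
    left to right from zero."""
    levels = []
    node = leaf
    for level in range(mux_levels - 1, -1, -1):  # walk up from the leaf
        if node & 1:          # odd index => right child of its parent
            levels.append(level)
        node >>= 1
    return sorted(levels)

print(right_input_levels(15))  # [3, 4, 5, 6] - blocked often, low down
print(right_input_levels(16))  # [2] - a single, higher, blocking point
```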

Practical testing of this naïvely fair approach in the emulator gives results like the following. As can be seen, the right-hand part of the tree performs appallingly – but even there, there is a lot of variation:

some test results

My question is this: what is the correct (i.e., correct mathematically) way of describing the order of the leaf nodes?

From “The Foundations of Mathematics”


This is a fragment of a question from the first chapter of “The Foundations of Mathematics”:

Record whether you think the following statements are true or false:

(a) All of the numbers 2, 5, 17, 53, 97 are prime.

(b) Each of the numbers 2, 5, 17, 53, 97 is prime.

(c) Some of the numbers 2, 5, 17, 53, 97 are prime.

(d) Some of the numbers 2, 5, 17, 53, 97 are even.


For what it’s worth, I thought all of these statements were true.

Could someone explain this contradiction to me?


Reading on with Julian Havil’s Gamma: Exploring Euler’s Constant and inspired by his discussion of the harmonic series, I come across this:

\frac{1}{1-e^x} = 1 + e^x + e^{2x} + e^{3x} + ...

Havil calls this a “non-legitimate binomial expansion” and it seems to me it can be generalised:

(1 - r^x)^{-1}= 1 + r^x + r^{2x} + r^{3x} + ...

as 1 = (1 - r^x)(1 + r^x + r^{2x} + r^{3x} + ...) = 1 + r^x + r^{2x} + r^{3x} + ... - r^x - r^{2x} - r^{3x} - ...

And, indeed if we take x=-1, r=2 we get:

\frac{1}{1-2^{-1}} = 2 = 1 + \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + ... in the limit.

But if we have x \geq 0 it is divergent and the identity, which seems algebraically sound to me, breaks down. E.g., r=2, x=2:

\frac{1}{1-4} = -\frac{1}{3} = 1 + 4 + 16 + 64 + ...
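A throwaway numerical check in Python shows the contrast: with r=2, x=-1 the partial sums creep up towards 2, while with r=2, x=2 they race off to infinity rather than towards -\frac{1}{3}:

```python
def partial_sums(r, x, terms=12):
    """Partial sums of the series 1 + r**x + r**(2x) + r**(3x) + ..."""
    total, sums = 0.0, []
    for n in range(terms):
        total += r ** (n * x)
        sums.append(total)
    return sums

print(partial_sums(2, -1)[-1])  # ~1.9995, approaching 1/(1 - 2**-1) = 2
print(partial_sums(2, 2)[:5])   # [1.0, 5.0, 21.0, 85.0, 341.0] - divergent
```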

So what is the flaw in my logic?

In Pursuit of the Traveling Salesman


In Pursuit of the Traveling Salesman: Mathematics at the Limits of Computation

A representation of the relation among complexity classes, which are subsets of each other. (Photo credit: Wikipedia)

This is a strange little book – and while I am happy to recommend it, I am not really clear in my own mind what sort of a book it is meant to be.

A popular description of a famous problem in the “NP” space and a gentle introduction to the whole issue of computational complexity and complexity classes? Not really: it assumes a bit too much knowledge for that.

So, a canter round the basic maths of the TSP and complexity? Not really that either, as it is light on mathematical specifics.

Still, it’s a decent book.

Was new maths really such a disaster?


Freeman Dyson (Photo credit: Wikipedia)

On holiday now, so I fill my time – as you do – by reading books on maths.

One of these is Julian Havil’s Gamma: Exploring Euler’s Constant.

The book begins with a foreword by Freeman Dyson, in which he discusses the general failure, as he sees it (and I’d be inclined to agree), of mathematical education. He first dismisses learning by rote and then condemns “New Mathematics” as – despite its efforts to break free of the failures of learning by rote – an even greater disaster.

Now, I was taught a “new maths” curriculum up to the age of 16 and I wonder if it really was such a disaster. The one thing I can say is that it didn’t capture the beauty of maths in the way that the ‘A’ level (16 – 18) curriculum came close to doing. At times I really wondered what it was all about – at the age of 15, matrix maths seemed excessively abstract.

But many years later I can see what all that was about, and I do think that my new maths education has given me quite a strong grounding in a lot of the fundamentals of computing and the applied maths it involves.

To the extent that this blog has an audience, I know that it is read by those with an interest in maths and computing, and I would really welcome views on the strengths and weaknesses of the new maths approach.

A better demonstration of the product rule


Inspired by The Theoretical Minimum: What You Need to Know to Start Doing Physics: here’s a better proof/justification for the product rule in differential calculus than the one I set out here last month.

We will start with what we will treat as an axiomatic definition of the differential of the function y=f(x):

\frac{dy}{dx} = \frac{df(x)}{dx} = \frac{f(x+\Delta x) - f(x)}{\Delta x} as \Delta x \rightarrow 0

In this case we have y=f(x)g(x), so \frac{dy}{dx} = \frac{f(x + \Delta x)g(x +\Delta x) - f(x)g(x)}{\Delta x}, again as \Delta x \rightarrow 0

From our definition we can substitute for f(x+\Delta x) and g(x + \Delta x) and simplifying our notation for presentational reasons so that \frac{df(x)}{dx} = f^{\prime} etc:

f(x+\Delta x) = f^{\prime}\Delta x + f(x)

g(x+\Delta x) = g^{\prime}\Delta x + g(x)

Substituting these in gives y^{\prime} = \frac{(f^{\prime}\Delta x + f(x))(g^{\prime}\Delta x + g(x)) - f(x)g(x)}{\Delta x}, which, after expanding and dividing through by \Delta x, becomes:

y^{\prime} =f^{\prime}g^{\prime}\Delta x + g(x)f^{\prime} + \frac{f(x)g(x)}{\Delta x} + g^{\prime}f(x) - \frac{f(x)g(x)}{\Delta x}

=f^{\prime}g^{\prime}\Delta x + g(x)f^{\prime} +g^{\prime}f(x)

As \Delta x \rightarrow 0 the first term falls to zero and so we are left with:

y^{\prime}=f^{\prime}g(x) + g^{\prime}f(x)

Which, of course, is the product rule.
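As a quick sanity check, here is a small Python sketch comparing a finite-difference derivative of f(x)g(x) against f^{\prime}g(x) + g^{\prime}f(x) for a couple of everyday functions:

```python
import math

def derivative(func, x, dx=1e-6):
    """Symmetric finite-difference approximation to d(func)/dx."""
    return (func(x + dx) - func(x - dx)) / (2 * dx)

f, f_prime = math.sin, math.cos   # f(x) = sin x, f'(x) = cos x
g, g_prime = math.exp, math.exp   # g(x) = e**x, g'(x) = e**x

x = 1.3
lhs = derivative(lambda t: f(t) * g(t), x)   # d(fg)/dx, numerically
rhs = f_prime(x) * g(x) + g_prime(x) * f(x)  # the product rule
print(lhs, rhs)  # the two values agree to around six decimal places
```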

Update: See this most excellent comment from Professor Rubin.
