How much memory do you need?


What’s the best way to speed up your computing experience? As many know, the most cost-effective way is often not to buy a new computer with a faster processor but to add more memory to an existing system.

The plot below, based on some results from my PhD research, shows just how this works…

In my case I am measuring how long it takes a processor of a fixed speed to execute a given program while I vary the amount of memory available.

My research centres on “network-on-chip” systems, so the issue here is how much memory is available locally (i.e., on the chip). If instructions and data are not available locally they have to be fetched in from some more “distant” store (whether that is system global memory or even a disk). And if space on the chip is limited then you generally have to evict some other piece of memory to make way for the piece needed immediately. Yet if you need the evicted piece again in the future you have to take the time needed to fetch it back in, and so on.

In this case we are using 4096-byte (4K) blocks of memory – i.e., 4K pages. So when the x-axis says 10 pages, that means there is 40K available locally, and so on.

I am testing all this on the OVPsim emulator and I have no independent means of accurately timing how long the process takes – but I can count the number of instructions it takes to complete the task.

Factors that affect the number of instructions taken are – in order of likely importance:

  • the time taken to fetch whole pages in and out of memory – a “hard fault” occurs when a page has to be evicted and a new page brought in instead (the earliest hard faults are not as slow as the later ones because, until the full complement of pages is in use, no eviction is required);
  • the time taken to handle pages that are in memory but not immediately available for execution – the number of translation mappings that tell the central processor where they are is limited (to 8 in this test case), so for each fault we first have to check whether the page is present but unmapped. If an unmapped page is present we have a “soft fault”, which is much quicker to process than a hard fault as no eviction etc. is required;
  • picking the next page that will be evicted if we have a hard fault – in this case we aim to pick the page that was least recently used, but even then we have to use an approximation. This process is not the same as evicting the page – it merely indicates which one is the best candidate for eviction should that become necessary. (A rough sketch of this fault-handling logic follows the list.)
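
To make those costs concrete, here is a rough sketch in C of how a page fault might be handled in a system like this. It is purely illustrative – handle_fault(), remap(), evict(), fetch_page() and the frames array are names I have invented for this post, not the actual code – but it shows why a soft fault is cheap, why a hard fault is expensive, and why both the soft-fault check and the choice of eviction victim involve scanning every local page.

    /* Illustrative sketch only: hypothetical names, not the real fault handler. */
    #define LOCAL_PAGES   24          /* 4K pages held on-chip                     */
    #define MAPPING_SLOTS 8           /* translation mappings; not modelled below  */

    struct local_page {
        unsigned int virt_page;       /* virtual page number held in this frame */
        unsigned int last_used;       /* timestamp for approximate LRU          */
        int          present;         /* is anything loaded in this frame?      */
    };

    static struct local_page frames[LOCAL_PAGES];
    static unsigned int now;          /* crude logical clock                    */

    /* Stubs standing in for the expensive parts. */
    static void remap(int frame, unsigned int vp)      { (void)frame; (void)vp; }
    static void evict(int frame)                       { (void)frame; }
    static void fetch_page(int frame, unsigned int vp) { (void)frame; (void)vp; }

    /* Called when virtual page 'vp' faults. */
    void handle_fault(unsigned int vp)
    {
        int i, victim = 0;

        /* Soft-fault check: scan every local frame to see if the page is
         * already on chip but merely unmapped.  This scan gets longer as
         * the number of local pages grows. */
        for (i = 0; i < LOCAL_PAGES; i++) {
            if (frames[i].present && frames[i].virt_page == vp) {
                remap(i, vp);                       /* soft fault: cheap */
                frames[i].last_used = ++now;
                return;
            }
        }

        /* Hard fault: pick a victim by approximate LRU (another scan over
         * all the local frames), evict it if the frame is occupied, and
         * fetch the new page in. */
        for (i = 1; i < LOCAL_PAGES; i++)
            if (!frames[i].present ||
                frames[i].last_used < frames[victim].last_used)
                victim = i;

        if (frames[victim].present)
            evict(victim);                  /* early hard faults skip this */
        fetch_page(victim, vp);
        frames[victim].present   = 1;
        frames[victim].virt_page = vp;
        frames[victim].last_used = ++now;
        remap(victim, vp);
    }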

The number of hard faults is reduced by increasing the amount of memory available (until, of course, you have so much memory you never need to evict a page). But as you increase the amount of memory you also make checking for soft faults and picking the next candidate for eviction slower – because there are more pages to check.

And this is what we see in the plot. When memory is in short supply, adding even a single extra page can make a big difference. There is then a long range where the decrease is not so great but is constant. This reflects the fact that memory addresses are not needed at random and that programs show ‘locality of reference’ (ie if you need one address at one moment you are quite likely to need a nearby address at the next). This locality means that adding extra pages can have a limited impact once you have enough to cover the range of likely picks in a reasonable time frame. Adding extra memory then means that the changes between the different phases of locality become smoother, so we still see a reduction in time but not anywhere near as pronounced as before.
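
As a minimal illustration of what that locality means for paging (my own toy example, not code from the research): a sequential pass over a big array touches each 4K page 1,024 times in a row (4096 bytes divided by 4-byte words) before moving on to the next page, so a demand-paging system suffers at most one hard fault per page however few frames it can keep resident.

    #include <stddef.h>
    #include <stdint.h>

    /* Toy example of spatial locality, not code from the research:
     * consecutive accesses stay on the same 4K page for 1024 iterations. */
    #define N_WORDS (256u * 1024u)          /* 1MB of 32-bit words */

    static uint32_t data[N_WORDS];

    uint64_t sum_sequential(void)
    {
        uint64_t sum = 0;
        for (size_t i = 0; i < N_WORDS; i++)
            sum += data[i];                 /* neighbouring addresses */
        return sum;
    }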

Then we get to the point – about 30 pages in this case – where we are close to covering the full range of addresses used, even when we switch between phases. Here we see a sudden fall again until – at about 33–35 pages – we seem to have covered every single address our program will ever use.

After that, having more memory makes the system slightly slower (NB in most real-world desktop and laptop systems adding more memory does not slow the system down – so don’t worry about that!).

The lesson: if your computer is really slow and you don’t want to replace it, add more memory. But if your computer already has so much memory that it doesn’t know what to do with it and it is still slow, you have a bigger problem!

Why stop at twelve?


 

There is an election coming in Britain and so politicians are setting out their stalls.

Education Secretary Nicky Morgan has now said that her priority will be a “war on innumeracy” and an insistence that children who leave primary schools should know their “twelve times tables”.

But why 12? Until 1971 British (and Irish) coinage was divided into shillings of 12 pence (and 20 shillings made a pound). Back then twelve times tables were of practical use – and naturally enough that carried on for some time afterwards (I was taught my “times tables” in P3 in 1972/73 and 12 was very much part of the programme).

Latterly though, as I understand it, teaching has concentrated on 1 – 10 for what ought to be obvious reasons. So going back to 12 is plainly nothing more than a conservative politician’s attempt to appeal to nostalgic and reactionary notions that the world was better in the past and everything has gone downhill.

I can see no reason at all why learning the 11 and 12 times tables actually improves mathematical understanding, as opposed to providing an additional, but limited, practical skill. And if the idea is to be in the “top 5 for maths” then I’d suggest that increasing children’s understanding of maths matters a bit more than their ability to recall from memory what 11 x 12 is.

Of course, before someone objects, knowing your 12 times tables is useful: but then so is knowing the 13 times tables or the 60 times tables – so why stop at 12?

Indeed, in this age, if we are going to stop at somewhere above 10, shouldn’t it be a power of 2? Knowing the 16 times tables would be a big advantage for any would-be programmer.

It’s a slightly ridiculous idea to demand that our children are taught the 16 times tables, but no more or less than the 12 times tables – so I am putting it out there!

Squashing the LRU bug


Typical actions taken upon a virtual-physical address translation (photo credit: Wikipedia)

Just for once I did not rush to an online forum and say I had found a bug in a product – and I was right not to.

Having tried three different cross-compiler toolchains I convinced myself that the issue was plainly not the compiler (or, to be more accurate in this case, assembler) output but some process in my code that was causing corruption. And sure enough I found that I was mangling the physical addresses of my page frames.

Thanks to the way OVPsim operates – by default it provides physical mappings on demand for the full 4GB address space of a 32-bit register – this mangling did not generate a page fault, but it did mean that for certain sequences of instructions – particularly those where lots of page faults were likely to occur – memory was being corrupted.

Changing one line of assembly – so that the virtual address was written to the virtual address slot, and not the physical address slot, in my simple list of page table entries – fixed that.
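
For illustration only – the real fix was a single assembly instruction, and the structure and field names below are my own invention rather than the actual page table code – the mix-up amounted to something like this:

    #include <stdint.h>

    /* Hypothetical page table entry: one slot for the virtual page
     * address and one for the physical frame address. */
    struct pte {
        uint32_t virt;                   /* virtual page address   */
        uint32_t phys;                   /* physical frame address */
    };

    void record_mapping(struct pte *entry, uint32_t virt, uint32_t phys)
    {
        /* entry->phys = virt;     the bug: the virtual address went into
                                   the physical slot, mangling the frame  */
        entry->virt = virt;          /* fix: each address in its own slot */
        entry->phys = phys;
    }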

So now the code works – at least I think it does!

Curiouser and curiouser – the case of the LRU bug


A 256Kx4 Dynamic RAM chip on an early PC memory card (photo by Ian Wilson; photo credit: Wikipedia)

My LRU queue bug is continuing to puzzle me – and it’s not as simple as a data misalignment. In fact it does not appear to be a data misalignment issue at all: previously I was trapping a lot of hardware exceptions under that heading because it was a common fault when I got the code wrong, but a closer examination showed this one to be an illegal opcode exception.

How that could be caused by the size of the local memory we were simulating was beyond me – but perhaps some code was being pushed out of alignment and an illegal instruction created, I thought.

But that does not appear to be the issue at all – in fact the really puzzling thing is that the exact same string of opcodes, at the same addresses, is present with both the working memory sizes and the “broken” memory sizes, yet runs without a problem in the former.

The only difference seems to be that where the broken code (ie the setup with a number of 4K memory pages that is not a multiple of 4) raises an illegal opcode exception, the good code raises a page fault.

It looks like it might be a bug in the simulator itself – and having written that I am hoping that the bad workman’s curse now befalls me and I quickly find it was all my fault to begin with. But as of now I am drawing a blank.

LRU queue strangeness


Schematic illustration of how an MMU works (photo credit: Wikipedia)

For the last week or so I have been writing and then debugging (and so mainly debugging) a least-recently-used (LRU) page replacement system on my Microblaze simulation.

Perhaps I shouldn’t have bothered – I had a working first-in-first-out (FIFO) system after all. But no one seriously uses FIFO, so I had to write some LRU code.

I thought I had it working tonight – it ran through the target exercise in about 6 million instructions (as the MMU in a Microblaze is crude, memory gets loaded in and out ‘by hand’ and so the instruction count measures time/efficiency) when the system had 32 4K local pages, and in about 10.5 million instructions when it had 24 4K pages available locally – all seems fine: fewer pages means more time is required.

But then things started to get weird – testing the system with 28 pages took about 9 million instructions, but when I tried to use 26 pages I had to break the code after it had executed 14 trillion instructions.

Indeed it seems to only work for a very limited number of page counts. How odd – though a typical debugger’s story: a week in, finally thinking I’d cracked it, when some really strange behaviour manifests itself.

Update: It appears to be an unaligned data exception issue. Somewhere along the line a piece of code relies on the LRU queue being a multiple of 4 in length, would be my guess…
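
If that guess is right, the mechanism would be something like the sketch below. Everything in it is hypothetical – the structure, field names and layout are mine, not the real code – but it shows how putting a word-sized field straight after a byte-per-entry queue whose length is not a multiple of 4 leaves that field at an unaligned address, which is exactly the sort of thing that raises an unaligned data exception on a Microblaze.

    #include <stdio.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical layout, not the real code: one byte per queue entry,
     * followed immediately (no padding, as in a hand-laid-out memory
     * area) by a word-sized field. */
    #define LRU_QUEUE_LEN 26            /* one of the "broken" page counts */

    struct __attribute__((packed)) lru_state {
        uint8_t  queue[LRU_QUEUE_LEN];  /* LRU queue, one byte per page    */
        uint32_t head;                  /* word-sized field straight after */
    };

    int main(void)
    {
        size_t off = offsetof(struct lru_state, head);

        /* With 26 entries 'head' sits at offset 26, not a multiple of 4,
         * so a word access to it would fault on a Microblaze that traps
         * unaligned accesses. */
        printf("head offset = %zu, word aligned: %s\n",
               off, (off % 4 == 0) ? "yes" : "no");
        return 0;
    }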

There is no such thing as “alternative medicine”


There is no such thing as “alternative medicine” – any more than there is “alternative mathematics”.

Yes, there are different ways to practise medicine, just as there are different ways of calculating the value of pi – but the fundamental remains: something is either medically valid or it is not. If it is not – and that means if it cannot prove its efficacy – then it is not any sort of medicine.

I am moved to write this by two things – firstly, by what appears to be a major outbreak of measles in the United States that has been brought about not by a failure of medicine but by a cold and deliberate attempt to undermine medicine, and secondly, by the current vogue in the UK of otherwise rational people saying they would support the Green Party – a party dedicated, amongst other things, to the promotion of anti-science in medicine.

The US measles outbreak is not just because of the “irresponsible and dishonest” work of Andrew Wakefield – a man who manipulated research findings in a way that stood to bring substantial financial benefits to lots of people he was associated with. Wakefield has been thoroughly discredited by anyone with the remotest concern for scientific credibility.

What has fuelled the outbreak is the belief by some – too many – that somehow they know better than science that their “alternative” medicine is better. In fact they are putting the lives of their children at risk through their refusal to accept medicine. If you want a contemporary example of “shouting fire in a crowded theatre” then this, surely, is it.

As for the Green Party – which were reported to be as high as 11% in one opinion poll last week and which have also won the right to appear in leaders’ debates on UK television – at least they are no longer explicitly describing homoeopathy (also known as “water”) as “natural medicine”, and they have also seemingly distanced themselves from past demands that scientific research only follow government-approved routes (Lysenko-ism, anybody?). But they still state this on their policy website:

Current theory and practice place too much emphasis on interventions at the biochemical and individual levels, too little on the social and ecological.

Of course, perhaps that is just a meaningless left-over from their past days as new age hippies – but it has a distinct anti-science tone to my ears, especially as it is followed up with the statement that their aim is to:

…develop a new public health consciousness, which, through individual and collective action, will challenge vested interests and promote the personal, social and political changes needed to achieve improved states of health.

Wouldn’t actually some new “biochemical interventions” also be a good idea? I certainly think so, but if health is all about lifestyle you might feel otherwise.

Then there is this “aim”:

To develop health services which place as much emphasis on illness prevention, health promotion and the development of individual and community self-reliance as on the treatment and cure of disease. Such services will of necessity be empowering, participatory and democratic and their development will be guided by users’ own perceptions of their health needs.

By now I am beginning to feel distinctly uncomfortable – to me the above statement is seriously anti-scientific in tone and intent: truth, after all, is not democratic.

And, then, under policies we have this:

We will work to reduce the number of interventions in childbirth, and change the culture of the NHS so that birth is treated as a normal and non-medical event, in which mothers are empowered and able to be in control.

(Shades here of the Soviet approach – and the claim that pain was all in the mind of the mother – as described in Red Plenty. Most mothers I know were very glad of medical help at birth, especially in the form of pain relief. Is the Green Party’s policy really to “change the culture” of the NHS to limit access to that?)

And this…

The safety and regulation of medicines will be controlled by a single agency. This agency will ensure that medicines meet minimum safety standards and provide clear labelling of both ingredients and side-effects. The agency will cover existing synthetic medicines as well as those considered as natural or alternative medicines.

Considered by whom to be “alternative” medicines? It’s not a purely rhetorical question as the Greens’ current sole MP, Caroline Lucas, has a disappointing record as an advocate of “alternative” medicine campaigns funded by commercial interests in this field.

Not fit to hold public office


John Mason, Scottish National Party MSP for Glasgow Shettleston, has (today) tabled the following motion in the Scottish Parliament:

That the Parliament notes that South Lanarkshire Council has issued guidance concerning the appointment and input of chaplains and religious organisations in schools; understands that some people believe that God created the world in six days, some people believe that God created the world over a longer period of time and some people believe that the world came about without anyone creating it; considers that none of these positions can be proved or disproved by science and all are valid beliefs for people to hold, and further considers that children in Scotland’s schools should be aware of all of these different belief systems.
