The cost of soft/minor faults


Here are complete graphs of the memory use and the fault count for a run of ls -lia.

ls -lia memory use

Fault count for ls -lia

As you can see, there are only soft/minor faults here, as one would expect (this was a machine with a lot of memory): the C library code that ls relies on will already have been in memory, and presumably the filesystem information was cached in memory too.
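
If you just want the totals rather than the full trace valext produces, the kernel keeps per-process fault counters that getrusage() exposes. Here is a minimal sketch (my own, not part of valext) that runs ls -lia as a child and prints how many of its faults were soft/minor and how many were hard/major:

/* Run "ls -lia" as a child and report its soft/minor and hard/major fault
 * counts from the rusage figures the kernel keeps for waited-for children. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/resource.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();
    if (pid < 0) {
        perror("fork");
        return EXIT_FAILURE;
    }
    if (pid == 0) {
        /* Child: replace ourselves with ls -lia. */
        execlp("ls", "ls", "-lia", (char *)NULL);
        perror("execlp");
        _exit(127);
    }

    int status;
    if (waitpid(pid, &status, 0) < 0) {
        perror("waitpid");
        return EXIT_FAILURE;
    }

    /* RUSAGE_CHILDREN covers the children we have waited for (here, just ls).
     * ru_minflt: faults serviced without I/O (soft/minor).
     * ru_majflt: faults that needed a read from disk (hard/major). */
    struct rusage ru;
    getrusage(RUSAGE_CHILDREN, &ru);
    printf("soft/minor faults: %ld\nhard/major faults: %ld\n",
           ru.ru_minflt, ru.ru_majflt);
    return EXIT_SUCCESS;
}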

But there are a lot of soft faults, and these too have a cost, even if it is nothing like the cost of a hard fault. For a start, each soft page fault means a trap into the kernel to fix up the page tables, and it almost certainly indicates a miss in the processor cache as well.
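
To get a feel for that cost, here is a rough sketch (an illustration of the mechanism, not a measurement from the run above): it times two passes over a block of freshly mapped anonymous memory, where the first pass triggers a soft fault on every page and the second pass touches pages that are already mapped.

/* First touch of each freshly mapped anonymous page takes a soft/minor fault;
 * a second pass over the same, now mapped, pages takes none. The absolute
 * numbers vary by machine: this is illustrative, not a benchmark. */
#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mman.h>
#include <time.h>
#include <unistd.h>

static double now_seconds(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(void)
{
    const size_t page = (size_t)sysconf(_SC_PAGESIZE);
    const size_t pages = 1 << 15;           /* 128 MiB with 4 KiB pages */
    char *buf = mmap(NULL, page * pages, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    double t0 = now_seconds();
    for (size_t i = 0; i < pages; i++)
        buf[i * page] = 1;                  /* first touch: soft fault */
    double t1 = now_seconds();
    for (size_t i = 0; i < pages; i++)
        buf[i * page] = 2;                  /* already mapped: no fault */
    double t2 = now_seconds();

    printf("first pass  (faulting): %.0f ns per page\n", (t1 - t0) * 1e9 / pages);
    printf("second pass (mapped):   %.0f ns per page\n", (t2 - t1) * 1e9 / pages);
    munmap(buf, page * pages);
    return 0;
}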

The paper linked here also gives a lot of information about Linux's generation of soft/minor faults and their performance impact: it seems the kernel is designed to deliver system-wide flexibility at the expense of performance.
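
That trade-off is easy to see from user space: by default the kernel maps pages lazily, taking a soft fault on every first touch, but a program that would rather pay the cost up front can ask for eager population. A minimal sketch, assuming Linux's MAP_POPULATE mmap flag, comparing the two:

/* By default pages are mapped lazily (one soft/minor fault per first touch);
 * with MAP_POPULATE the kernel wires the mapping up front, so the touch loop
 * should record few or no additional minor faults. */
#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mman.h>
#include <sys/resource.h>
#include <unistd.h>

static long minor_faults(void)
{
    struct rusage ru;
    getrusage(RUSAGE_SELF, &ru);
    return ru.ru_minflt;
}

static void touch_and_report(const char *label, int extra_flags)
{
    const size_t page = (size_t)sysconf(_SC_PAGESIZE);
    const size_t pages = 1 << 14;
    char *buf = mmap(NULL, page * pages, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS | extra_flags, -1, 0);
    if (buf == MAP_FAILED) {
        perror("mmap");
        return;
    }
    long before = minor_faults();
    for (size_t i = 0; i < pages; i++)
        buf[i * page] = 1;
    printf("%-22s %ld minor faults while touching %zu pages\n",
           label, minor_faults() - before, pages);
    munmap(buf, page * pages);
}

int main(void)
{
    touch_and_report("lazy (demand paging):", 0);
    touch_and_report("eager (MAP_POPULATE):", MAP_POPULATE);
    return 0;
}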

Hard faults and soft faults in the real world


I ran Audacity under valext and here is the graph of real memory use:

Real memory use of Audacity

And here is the soft and hard fault count:

Audacity fault count

My surmise as to what you can see here? Lots of initialising, with memory use shooting up and down, though the low level of hard faults suggests much of this comes from libraries already loaded in the system. Then pages get swapped out as nothing happens: Audacity never actually displayed a window (I am not sure why), and I killed it after it had been running for about two and a half minutes of virtual time (around 24 hours of wall clock time), as that was more than enough time to produce something on screen!

Still, as a first test of the tool, that was not bad.