Here is a complete graph of the memory use and fault count for a run of ls -lia.
As you can see, there are only soft/minor faults here – as one would expect on a machine with plenty of memory: the C library code that ls relies on will already be loaded in memory (and presumably the filesystem information was cached too), so nothing has to be read in from disk.
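If you want to check this for yourself, the counters are easy to get at: the kernel keeps per-process minor and major fault totals and hands them back in a struct rusage. The sketch below is not the tool used to produce the graph above, just a minimal example I am adding for illustration – it forks, execs the command you give it and prints the counts reported by getrusage(); on a machine with a warm page cache it should show zero major faults for ls -lia.

```c
/* faultcount.c - a rough sketch (not the tool used for the graph above):
 * run a command and report the page fault counts the kernel kept for it,
 * via getrusage(RUSAGE_CHILDREN).
 *
 * Build: gcc -o faultcount faultcount.c
 * Run:   ./faultcount ls -lia
 */
#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <sys/resource.h>

int main(int argc, char *argv[])
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s command [args...]\n", argv[0]);
        return 1;
    }

    pid_t pid = fork();
    if (pid < 0) {
        perror("fork");
        return 1;
    }
    if (pid == 0) {
        /* child: run the requested command, e.g. ls -lia */
        execvp(argv[1], &argv[1]);
        perror("execvp");
        _exit(127);
    }

    int status;
    waitpid(pid, &status, 0);

    /* RUSAGE_CHILDREN: totals for waited-for children - here, just the one */
    struct rusage ru;
    getrusage(RUSAGE_CHILDREN, &ru);

    /* ru_minflt: soft/minor faults (no disk I/O needed)
       ru_majflt: hard/major faults (page had to be read from disk) */
    printf("minor (soft) faults: %ld\n", ru.ru_minflt);
    printf("major (hard) faults: %ld\n", ru.ru_majflt);
    printf("peak RSS:            %ld kB\n", ru.ru_maxrss);
    return 0;
}
```

GNU time's verbose mode (/usr/bin/time -v ls -lia) reports the same minor and major fault counters if you would rather not write any code.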
But there are a lot of soft faults, and these too have a cost, even if it is nothing like the cost of a hard fault. For a start, each soft page fault almost certainly indicates a miss in the processor cache.
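To see where soft faults come from (and that they occur even when no disk access is involved at all), here is another small sketch – again my own illustration rather than anything from the linked material. It maps an anonymous region with mmap() and then touches one byte per page; each first touch takes a minor fault, because the kernel only wires up a page on first access.

```c
/* minorfaults.c - touching each page of a freshly mmap()ed anonymous
 * region triggers soft/minor faults: the memory is "there" but each
 * page is only set up by the kernel on first access.
 * (Transparent huge pages may reduce the count on some systems.)
 *
 * Build: gcc -o minorfaults minorfaults.c
 */
#include <stdio.h>
#include <sys/mman.h>
#include <sys/resource.h>
#include <unistd.h>

static long minor_faults(void)
{
    struct rusage ru;
    getrusage(RUSAGE_SELF, &ru);
    return ru.ru_minflt;
}

int main(void)
{
    const size_t npages = 1024;
    size_t pagesize = (size_t)sysconf(_SC_PAGESIZE);
    size_t len = npages * pagesize;

    char *region = mmap(NULL, len, PROT_READ | PROT_WRITE,
                        MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (region == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    long before = minor_faults();

    /* write one byte per page so every page is faulted in */
    for (size_t i = 0; i < len; i += pagesize)
        region[i] = 1;

    long after = minor_faults();

    printf("pages touched: %zu\n", npages);
    printf("minor faults:  %ld\n", after - before);

    munmap(region, len);
    return 0;
}
```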
The paper linked here also gives a lot of information about Linux’s generation of soft/minor faults and their performance impact; it seems the kernel is designed to deliver system-wide flexibility at the expense of performance.