A very first draft…


Our proposal here is to meet, head on, the need to use a memory space much greater than the locally available low-latency storage. The problems faced by programmers of today’s many- and multi-core devices are, in a sense, far from new: in the 1960s an earlier generation of programmers, experimenting with virtual memory and time-sharing computers and running programs generated by the first generation of high-level languages, was struck by the phenomenon of thrashing, where computing time was lost to excessive memory management [23]. The solutions we propose here draw on the lessons learned at that time, but adapt those methods significantly to account for the realities of multicore devices with very limited local storage.

We propose to enable virtual memory [24] for such systems, using a novel form of paging which, instead of transferring whole pages across the memory hierarchy, concentrates on a minimal transfer of partial pages [50]. Hard faults still impose a significant penalty in lost cycles, but the cost of paging memory in and out is amortised across multiple reads and writes. This avoids unnecessary transfers and limits congestion in the memory interconnect, which we find to be a significant determinant of program performance.
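
To make the idea concrete, here is a minimal simulation sketch of partial-page (sub-block) transfer, written in Python purely for illustration: the page size, block size and cycle costs are invented numbers for the example, not measurements from our system. On a hard fault only the referenced sub-block of a page is fetched; later references to other parts of the same page fetch further blocks without a whole-page copy.

    # Illustrative sketch only: partial-page ("sub-block") paging.
    # All parameters below are invented for the example.
    PAGE_SIZE = 4096        # bytes per virtual page (assumed)
    BLOCK_SIZE = 128        # bytes moved per partial fill (assumed)
    HARD_FAULT_COST = 500   # cycles to handle a hard fault (assumed)
    BLOCK_FETCH_COST = 60   # cycles to move one block (assumed)

    resident = {}           # page number -> set of block indices held locally
    cycles = 0

    def access(address):
        """Simulate one memory access under partial-page paging."""
        global cycles
        page = address // PAGE_SIZE
        block = (address % PAGE_SIZE) // BLOCK_SIZE
        if page not in resident:
            # Hard fault: map the page but fetch only the referenced block.
            cycles += HARD_FAULT_COST + BLOCK_FETCH_COST
            resident[page] = {block}
        elif block not in resident[page]:
            # Page already mapped: fetch just the missing block.
            cycles += BLOCK_FETCH_COST
            resident[page].add(block)
        # Otherwise the word is already local and costs nothing extra here.

    # Example: a strided scan touches 16 pages but only 4 blocks of each.
    for addr in range(0, 16 * PAGE_SIZE, 1024):
        access(addr)
    print(cycles, "cycles;", sum(len(b) for b in resident.values()), "blocks fetched")

Under whole-page transfer the same scan would move sixteen full 4 KB pages across the interconnect; here only the 64 blocks actually touched are transferred, which is the kind of saving described above.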

One of the lessons of the 1960s was that maintaining a program’s working set in low-latency memory was the most effective way of limiting paging and thrashing [22]. Much research at that time, and into the 1970s and beyond, concentrated on effective ways to keep the working set of pages in fast memory, and innovation continued into the 1990s and later, for instance with the 2Q algorithm in Linux [38]. Here, though, we establish that better performance in embedded and real-time manycore systems is unlikely to come from a better page replacement algorithm, but rather from a reorientation towards better management of the internals of the pages.

Page replacement algorithms for general-purpose computing are generally robust [49], and with the decreasing cost of memory much recent research has concentrated on using large page sizes, not least to minimise TLB misses [9]. Here we return to earlier research findings which emphasise the efficiency of small pages on small systems, and we show that combining these with a FIFO page replacement policy may, perhaps counter-intuitively, deliver more time-predictable or even faster performance than attempts to use some version of a least-recently-used policy. We also show that the choice of page replacement algorithm is likely to be strongly related to the variation in working set size over a program’s lifecycle.
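
For concreteness, the sketch below counts page faults under FIFO and LRU replacement over the same reference string. The frame count and the reference string are invented for illustration only; the point is simply that the two policies can behave very similarly, and that it is the variation in working set size that tends to separate them.

    # Illustrative sketch only: comparing FIFO and LRU fault counts.
    from collections import OrderedDict, deque

    def fifo_faults(refs, frames):
        """Count page faults under FIFO replacement."""
        queue, resident, faults = deque(), set(), 0
        for page in refs:
            if page not in resident:
                faults += 1
                if len(resident) == frames:
                    resident.discard(queue.popleft())  # evict oldest arrival
                resident.add(page)
                queue.append(page)
        return faults

    def lru_faults(refs, frames):
        """Count page faults under LRU replacement."""
        resident, faults = OrderedDict(), 0
        for page in refs:
            if page in resident:
                resident.move_to_end(page)             # mark most recently used
            else:
                faults += 1
                if len(resident) == frames:
                    resident.popitem(last=False)       # evict least recently used
                resident[page] = True
        return faults

    # Invented reference string: a simple looping access pattern.
    refs = [0, 1, 2, 3, 0, 1, 4, 0, 1, 2, 3, 4] * 3
    print("FIFO faults:", fifo_faults(refs, 3), "LRU faults:", lru_faults(refs, 3))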

Concerns about power have always been important for embedded devices, but the changing characteristics of hardware mean that new issues such as “dark silicon” [28] will become important as we move deeper into the coming manycore era. We demonstrate here that power constraints may, to some extent, be mitigated by a loosening of bottlenecks elsewhere in a real-time system, particularly in the memory interconnect: in a system where power is no constraint, congestion there dominates performance; where power concerns limit processor activity, they may also lessen that congestion.

Further, we demonstrate that some frequency scaling in manycore systems may not limit performance if it contributes to lessening congestion in the memory interconnect: results which suggest that the dark silicon issue may not be as great a barrier to performance in real-world applications as might be expected.

Computer scientists’ lousy citation style


I am reading this book: Soft Real-Time Systems: Predictability vs. Efficiency, and I am struck, once again, by the truly lousy style of publication reference that seems to be preferred by so many computer scientists.

The style used in the book appears to be that favoured by the American Mathematical Society – the so-called “authorship trigraph” – with references made up of letters from the author’s name followed by the last two digits of the year of original publication, e.g., [Bak91], which points to this entry in the bibliography:

[Bak91]        T.P. Baker. Stack-based scheduling of real-time processes. Journal of Real Time Systems, 3, 1991.

Now it is possible that, were I an expert in the field, I might recognise this reference, but it is far from helpful. When referencing papers written by multiple authors the system is hopeless – using the first letter of each of the first three authors’ surnames and ignoring the rest, e.g., [DGK+02] is a real reference in the book to a paper with eight authors. I really doubt many people would get that straight away.

But at least this reference system conveys slightly more information than the IEEE‘s citation system, which demands that papers are merely referenced by a bracketed number in the text, e.g., [1].

These reference systems are so widely used that I worried my own use of the Chicago system – which specifies author and year, e.g., (Denning, 1970) – would be frowned upon at Birkbeck, but a re-reading of the regulations showed their demand was simply for a consistent and well-recognised system.

The ACM, too, promotes a sensible citation format, e.g., [Denning 1970].

Does this matter? Yes. I am sure many readers of these books and papers are students who are conducting literature reviews or similar exercises. Reading the original reference may often be important, and having to flick back and forth to a bibliography to decode an incomprehensible reference is not calculated to add to the sum of human happiness.

(I don’t have any real complaints about the book though – except that the translation is plainly a bit stilted – for instance, the first sentence of the book refers to real-time systems being investigated “in the last years”, a fairly common syntactic mistake from non-native English speakers and one that the editors really ought to have corrected. But the occasional infelicity of language does not detract from the book’s overall appeal.)