Procrastination time: what does this mean?


I am supposed to be writing some more stuff for my PhD thesis, but this feels like a more legitimate way to procrastinate than others…

I have plotted the performance of a model of a multi-core processor system and, because it was something to do, applied a Fourier Transform to the data (the black plots in the first chart):

[Chart: 2k LRU performance]

So, my question is: what does the second chart mean? If it means anything, of course.

I get that the real component is plotted on the x-axis and the imaginary component on the y-axis, but is there any information conveyed in the chart?
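One way to think about it: the real and imaginary parts of each Fourier coefficient individually depend on where the sampled trace happens to start – shifting the data in time rotates every coefficient in the complex plane without changing its size – so a real-versus-imaginary scatter mostly shows phase, which here is an artefact of where sampling began. The magnitude (how strong each frequency is) and the phase (where it sits in its cycle) are the usual meaningful split. A small NumPy sketch with a made-up signal (not the data above) illustrates the point:

    import numpy as np

    # Made-up stand-in for a performance trace (not the real data):
    # an oscillation with period 50 samples, plus noise.
    rng = np.random.default_rng(42)
    t = np.arange(1000)
    signal = np.sin(2 * np.pi * t / 50) + 0.3 * rng.standard_normal(1000)

    spectrum = np.fft.rfft(signal)         # complex Fourier coefficients
    freqs = np.fft.rfftfreq(1000, d=1.0)   # cycles per sample

    # Shifting the trace in time rotates each coefficient in the
    # complex plane: magnitudes are unchanged, real parts are not.
    shifted = np.fft.rfft(np.roll(signal, 17))
    print(np.allclose(np.abs(spectrum), np.abs(shifted)))  # True
    print(np.allclose(spectrum.real, shifted.real))        # False

    # The magnitude spectrum is where the information lives: its peak
    # recovers the period-50 oscillation buried in the noise.
    magnitude = np.abs(spectrum)
    peak = np.argmax(magnitude[1:]) + 1    # skip the DC term
    print(freqs[peak])                     # 0.02, i.e. period 50

So if there is anything worth reading in the transformed data, it should show up as peaks in the magnitude spectrum – periodicities in the model's behaviour – rather than in the shape of the real-versus-imaginary cloud.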

Human-powered time travel is real


I have run 1,568,000 metres so far this year, in 556,440 seconds – an average speed of about 2.82 m/s, for a Lorentz factor of roughly 1.000000000000000044. But thanks to special relativistic time dilation this is, in fact, 556440.0000000000246 seconds of all you non-runners’ time.

So, yes, I have travelled about 0.0000000000246 seconds – some 25 picoseconds – into the future.
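For anyone who wants to check the arithmetic, here it is as a short Python sketch (the speed of light is hard-coded; at running pace the Lorentz factor is so close to 1 that it is safer to use the series expansion γ ≈ 1 + v²/2c² than to ask double-precision floating point to compute it directly):

    distance_m = 1_568_000   # metres run so far this year
    time_s = 556_440         # seconds spent running
    c = 299_792_458          # speed of light in m/s

    v = distance_m / time_s  # about 2.82 m/s
    beta2 = (v / c) ** 2     # (v/c)^2, about 8.8e-17

    # 1 - beta2 rounds to exactly 1.0 in double precision, so computing
    # 1/sqrt(1 - beta2) would just return 1.0; use the first-order
    # expansion gamma - 1 ~ beta2 / 2 instead.
    gain_s = time_s * beta2 / 2
    print(f"seconds gained on the non-runners: {gain_s:.3e}")  # ~2.5e-11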

How to export from LyX to LaTeX


All of this information is out there, but it took me more than a few hours to successfully manage this on my Mac, so here are the steps, using LyX and TeXShop.


  1. Go to File->Export->LyX Archive
    This generates a zipped tarball of your .lyx file and, crucially, all the other elements (such as graphics and bibliography) in a hierarchy of files.
  2. Unpack the tarball (e.g. tar -xvzf yourarchive.tar.gz) in a suitable place.
    You now have all the files you need in a standalone hierarchy (it is best to put the tarball in its own directory before unpacking, as the subdirectories it creates will otherwise land wherever you happen to be).
  3. Convert the LyX file to raw LaTeX.
    At this point all you have is a LyX archive; if you really need LaTeX you must convert the .lyx file. On a Mac the easiest way is to point LyX.app at the .lyx file in your archive, open it, and run another export – this time to LaTeX (pdflatex). This will create a .tex file in your archive. (I messed this stage up in the original posting – thanks, as always, to Paul Rubin for putting me on the straight and narrow here.)

  4. Make sure your bibliography is in the right place.
    Your .bib file needs to be in the same directory as your .tex file. So if your .tex file is in the bizarro directory, your bib file must be too. (I said before it should have the same name as your .tex file, but that is not necessary.)
  5. Open your .tex file in TeXShop.
    You may need to do some editing, for instance if your .tex file refers to your bibliography under some path. For the example above, the bibliography reference should look like this:
    \bibliography{bizarrothings}

  6. Run the following sequence of commands in TeXShop (a command-line equivalent is sketched below):
    Typeset LaTeX
    Typeset BibTeX
    Typeset LaTeX
    Typeset LaTeX

    Hopefully you now have a compiled PDF with all the correct references, which means that if you package up your archive directory it will contain the correct .tex file (it will also contain the PDF and the .lyx file unless you delete them first).
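As an aside, if you would rather run step 6 outside TeXShop, the same latex–bibtex–latex–latex cycle can be driven from the command line. A minimal sketch in Python (pdflatex and bibtex must be on your PATH; "thesis" is a stand-in for your own file's basename):

    import subprocess

    basename = "thesis"  # stand-in: BibTeX wants the basename, no .tex

    # LaTeX to collect the \cite keys, BibTeX to build the bibliography,
    # then LaTeX twice more so references and citations all resolve.
    for cmd in (["pdflatex", f"{basename}.tex"],
                ["bibtex", basename],
                ["pdflatex", f"{basename}.tex"],
                ["pdflatex", f"{basename}.tex"]):
        subprocess.run(cmd, check=True)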

Hope this is helpful – if I have missed anything, let me know in the comments.


A very first draft…


Our proposal here is to meet head on the need to use a memory space much greater than the locally available low-latency storage. The problems faced by programmers of today’s many- and multi-core devices are, in a sense, far from new: in the 1960s an earlier generation of programmers, experimenting with virtual memory and time-sharing computers, and using programs generated by the first generation of high-level languages, were struck down by the phenomenon of thrashing, where computing time was lost to excessive memory management [23]. The solutions we propose here draw on the lessons learned at that time, but significantly adapt the methods to account for the realities of using multicore devices with very limited local storage.

We propose to enable virtual memory [24] for such systems, using a novel form of paging which, instead of transferring whole pages across the memory hierarchy, concentrates on a minimal transfer of partial pages [50]. Here hard faults still impose a significant penalty of lost cycles but the cost of paging in and out of memory is amortised across multiple memory reads and writes. This avoids unnecessary transfer and limits congestion in the memory interconnect, which we find to be a significant determinant of program performance.
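To make the partial-page idea concrete, here is a toy sketch in Python (the sizes and code are invented for illustration, not taken from our simulator): each resident page records which fixed-size chunks are valid, a hard fault maps the page but fetches only the chunk actually touched, and later accesses fill further chunks on demand.

    PAGE_SIZE = 128   # bytes per page (illustrative only)
    CHUNK_SIZE = 16   # bytes moved per transfer

    resident = {}     # page number -> set of valid chunk indices

    def access(addr):
        """Classify a byte access under partial-page paging."""
        page, offset = divmod(addr, PAGE_SIZE)
        chunk = offset // CHUNK_SIZE
        if page not in resident:
            resident[page] = {chunk}   # hard fault: map page, fetch one chunk
            return "hard fault"
        if chunk not in resident[page]:
            resident[page].add(chunk)  # page already mapped: one more chunk
            return "partial fill"
        return "hit"

    # Whole-page paging would move 128 bytes on the first touch of each
    # page; here only the 16-byte chunks actually used are transferred.
    for addr in (0, 4, 32, 130, 0):
        print(addr, access(addr))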

One of the lessons of the 1960s was that maintaining a program’s working set in low-latency memory was the most effective way of limiting paging and thrashing [22]. Much research at that time, and into the 1970s and later, concentrated on effective ways to maintain the working set of pages in fast memory, and innovation continued into the 1990s (and beyond), for instance with the 2Q algorithm in Linux [38]. Here, though, we establish that better performance in embedded and real-time manycore systems is unlikely to come from a better page replacement algorithm but from a reorientation towards better management of the internals of the pages.

Page replacement algorithms for general-purpose computing are generally robust [49] and, with the decreasing cost of memory, much recent research has concentrated on using large page sizes, not least to minimise TLB misses [9]. We return instead to earlier research findings which emphasise the efficiency of small pages on small systems, and we show that combining these with a FIFO page replacement policy may, perhaps counter-intuitively, deliver more time-predictable or even faster performance than attempts to use some version of a least-recently-used policy, though we also show that the choice of page replacement algorithm is likely to be strongly related to the variation in working set size over a program’s lifecycle.
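A toy comparison illustrates the point (a sketch over synthetic reference strings, not our full evaluation): with a stable working set that fits in memory, FIFO and LRU fault identically, and on a cyclic scan that slightly overflows memory LRU collapses to a fault on every access just as FIFO does – so the cheaper, more predictable policy loses nothing.

    from collections import OrderedDict, deque

    def count_faults(refs, frames, policy):
        """Count page faults for a reference string under FIFO or LRU."""
        faults = 0
        if policy == "fifo":
            resident, order = set(), deque()
            for p in refs:
                if p not in resident:
                    faults += 1
                    if len(resident) == frames:
                        resident.discard(order.popleft())  # evict oldest
                    resident.add(p)
                    order.append(p)
        else:  # LRU via an ordered dict, least recent at the front
            resident = OrderedDict()
            for p in refs:
                if p in resident:
                    resident.move_to_end(p)        # refresh recency
                    continue
                faults += 1
                if len(resident) == frames:
                    resident.popitem(last=False)   # evict least recent
                resident[p] = True
        return faults

    loop = [0, 1, 2] * 20     # stable working set of 3 pages, 3 frames
    scan = [0, 1, 2, 3] * 20  # cyclic scan of 4 pages through 3 frames
    print(count_faults(loop, 3, "fifo"), count_faults(loop, 3, "lru"))  # 3 3
    print(count_faults(scan, 3, "fifo"), count_faults(scan, 3, "lru"))  # 80 80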

Concerns about power have always been important for embedded devices, but the changing characteristics of hardware mean new issues such as “dark silicon” [28] will be important as we move deeper into the coming manycore era. We demonstrate here that constraints on power may be, to some extent, mitigated by a loosening of bottlenecks elsewhere in a real-time system, particularly in the memory interconnect: in a system where power is no constraint, congestion here dominates performance; where power concerns limit processor activity, they may also lessen congestion.

Further, we present results showing that some frequency scaling in manycore systems may not limit performance if it contributes to lessening congestion in the memory interconnect: results that suggest the dark silicon issue may not be as big a barrier to performance in real-world applications as might be expected.

Getting a job


I have, essentially, two sets of skills and experience.

One is as a political campaigner and communicator. I did well out of that for a while and, more than that, did some things I am proud of and feel really privileged to have had a chance to be part of.

But it’s fair to say that road seems to have hit a dead end. If you want to run a serious, progressive campaign then I am certainly still interested, but I am not sure there is much of that out there today.

So then there are the other skills – ones that I am told are in high demand.

Namely as a software designer/writer/developer.

I can do this and I am much better these days than I used to be: unlike, say, my running, I am still getting faster and sharper. C/C++/Embedded/Perl/Linux/Groovy/DSLs/R/Deep Learning – I can tick all those boxes.

But where to begin? The reputation of IT recruitment agencies is pretty grim, though I have no direct experience of it. I have registered with one, but I am being sent invitations to be a senior C++ engineer in Berlin on a salary of €150,000 per annum, which even I think is probably a bit over-ambitious for someone with no commercial experience.

(NB: If you want to see what I have done have a look at https://github.com/mcmenaminadrian).