I am working on a simulation environment for a Network-on-Chip (NoC). The idea is that we can take a memory reference string – generated by Valgrind – and test how long it would take to execute under different memory models for the NoC. It’s at an early stage and still quite crude.
The dataset it is working with is of the order of 200GB, but that covers 18 threads of execution, so, very roughly speaking, it is 18 datasets of just over 10GB each. I have written some parallel Groovy/Java code to handle it, and the code seems to work, though there is still a lot to do.
I am running it on the University of York’s compute server – a beast with 32 cores and a lot of memory. But it is slow, slow, slow. My current estimate is that it would take about 10 weeks to crunch a whole dataset. The code is slow because we have to synchronise the threads to model the inherent parallelism of the NoC. The whole thing is a demonstration – with a vengeance – of Amdahl’s Law.
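To make the bottleneck concrete, here is a minimal sketch of the lockstep pattern described above – this is illustrative Java, not the project’s actual code, and the class and method names are my own invention. Each simulated NoC node runs in its own thread, advances one cycle, then waits at a barrier so that no node races ahead; that rendezvous on every cycle is the serialising step that keeps 32 cores from helping much.

```java
import java.util.concurrent.BrokenBarrierException;
import java.util.concurrent.CyclicBarrier;
import java.util.concurrent.atomic.AtomicLong;

// Illustrative sketch of cycle-accurate lockstep simulation.
// Every node thread must reach the barrier before any node may
// begin the next cycle, mimicking the NoC's inherent parallelism.
public class LockstepSketch {

    // Run 'nodes' threads for 'cycles' lockstep cycles; returns the
    // total number of per-node cycle steps executed (nodes * cycles).
    static long runLockstep(int nodes, int cycles) throws InterruptedException {
        CyclicBarrier barrier = new CyclicBarrier(nodes);
        AtomicLong steps = new AtomicLong();
        Thread[] workers = new Thread[nodes];
        for (int i = 0; i < nodes; i++) {
            workers[i] = new Thread(() -> {
                try {
                    for (int c = 0; c < cycles; c++) {
                        // ... process this node's memory references for cycle c ...
                        steps.incrementAndGet();
                        barrier.await(); // no node starts cycle c+1 early
                    }
                } catch (InterruptedException | BrokenBarrierException e) {
                    Thread.currentThread().interrupt();
                }
            });
            workers[i].start();
        }
        for (Thread t : workers) t.join();
        return steps.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("steps: " + runLockstep(4, 1000));
    }
}
```

However cheap the per-cycle work is, every thread pays the barrier rendezvous every cycle, so the barrier cost becomes the serial fraction of the run.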
Even in as long a project as a PhD I don’t have 10 weeks per dataset going free, so this is a problem!
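Amdahl’s Law puts numbers on why more cores won’t rescue this. A quick back-of-the-envelope calculation (the parallel fractions below are illustrative assumptions, not measurements from the simulator):

```java
// Amdahl's Law: speedup on n cores when a fraction p of the work
// parallelises perfectly and the rest is serial.
// The values of p below are illustrative, not measured.
public class Amdahl {
    static double speedup(double p, int n) {
        return 1.0 / ((1.0 - p) + p / n);
    }

    public static void main(String[] args) {
        for (double p : new double[]{0.5, 0.9, 0.99}) {
            System.out.printf("p=%.2f  speedup on 32 cores = %.2fx%n",
                              p, speedup(p, 32));
        }
    }
}
```

Even if 90% of the work parallelises, 32 cores deliver well under an 8x speedup, and a heavily synchronised simulation will sit far below that.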