Parallel Programming News for Week 29/2007
It’s time again for a short survey of what has been going on lately on the net with regard to parallel programming. Actually, I wanted to post something different this week, but since this list has been growing so fast and I was not quite satisfied with my other article anyway, I decided to launch this one early. I hope you enjoy what I have dug up for you. And if not, be sure to leave a comment or drop me a note! Also, in case I have forgotten or simply not found an interesting article, feel free to add it for my benefit and that of your fellow readers. Thanks for caring!
- Lawrence Crowl can be seen and heard talking about the future of C++ and Threads in this Google Talk. Be warned that the talk is 90 minutes long, and if you are a regular reader of this site, some of the content is probably well known to you already. But I bet other parts are not…
- A guy calling himself MenTaLguY (for whatever reason) has an article called “Spin Buffers”: DO NOT USE, in which he rightly advises staying away from spin buffers as introduced here. I could not agree more. For some reason, people keep trying to be clever and optimize away locks without ever having heard that there is such a thing as a memory model, which will screw you badly if you don’t really know what you are doing (a sketch of what can go wrong follows after the list). Another example of this phenomenon is this thread on Joel on Software.
- Christian Terboven takes a first look at tasking in OpenMP in his weblog. He uses the simple C++ example of iterator loops to show some of the power of the new constructs (a sketch of the pattern is included after the list). Recommended reading for anyone interested in the future of OpenMP.
- Tim Mattson has started blogging with this article describing the state of parallel programming in hardware and software. I have never met Tim in person, but I know him from his work on the OpenMP language committee and through his book on Patterns for Parallel Programming. In the article he promises to blog regularly, and I am looking forward to that.
- Geoffrey Wiseman has a preview of what’s in store concurrency-wise in Java SE 7; as I understand it, everything is still at a very early stage. The Fork/Join framework he mentions certainly looks interesting, especially for divide-and-conquer type algorithms (see the sketch after the list). The TransferQueue, on the other hand, did not excite me too much, maybe because it looks a bit like MPI_Ssend and I know I don’t use that very often. But maybe that’s just me 🙄
- There is a nice overview available of the parallel programming systems based on Haskell. And boy, that list is impressive! Transactional memory, systems based on parallelism hints, MPI, data-parallel stuff, events – you name it, Haskell appears to have it. If only I had more time to look into all this stuff…
- Sean Koehl is asking the question “What would you do with 80 cores?”. He highlights the field of model-based computing as one way to put all those cores to good use. I have speculated about using that many cores myself in the past.
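
Since the spin buffer discussion is a nice example of what goes wrong when locks get optimized away, here is a minimal sketch (my own, not taken from MenTaLguY’s article or the original spin buffer write-up) of the classic broken hand-off: one thread writes some data and then sets a plain flag, another thread spins on that flag. Without a proper memory model and synchronization, nothing stops the compiler or the CPU from reordering the two writes, or from never making the flag visible to the reader at all, so the program may print 0 or spin forever:

```cpp
#include <pthread.h>
#include <cstdio>

int  payload = 0;
bool ready   = false;   // plain bool: no visibility or ordering guarantees

void* producer(void*) {
    payload = 42;       // (1) write the data
    ready   = true;     // (2) "publish" it - compiler or CPU may reorder (1) and (2)
    return 0;
}

void* consumer(void*) {
    while (!ready) { }  // spin on a non-atomic flag: may never see the update
    std::printf("payload = %d\n", payload);  // may legally print 0
    return 0;
}

int main() {
    pthread_t p, c;
    pthread_create(&c, 0, consumer, 0);
    pthread_create(&p, 0, producer, 0);
    pthread_join(p, 0);
    pthread_join(c, 0);
    return 0;
}
```

The boring fix is a mutex and a condition variable (or, once the memory model from Lawrence Crowl’s talk above has made it into C++, an atomic flag with the right ordering). The scary part is that the naive version compiles cleanly and will usually appear to work in testing.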
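
For readers who have not seen the proposed OpenMP tasking constructs yet, here is a minimal sketch of the iterator-loop pattern Christian writes about (a toy example of my own, not his code; the syntax follows the OpenMP 3.0 draft and may still change). A plain worksharing loop cannot chunk a std::list because its iterators are not random access, but with tasks a single thread walks the list and spawns one task per element:

```cpp
#include <list>
#include <cmath>

// hypothetical per-element work
void process(double& x) { x = std::sqrt(x); }

void process_all(std::list<double>& data) {
    #pragma omp parallel
    {
        #pragma omp single            // one thread walks the list...
        {
            for (std::list<double>::iterator it = data.begin();
                 it != data.end(); ++it)
            {
                #pragma omp task firstprivate(it)   // ...and spawns a task per element
                process(*it);
            }
        }
    }   // implicit barrier: all tasks have finished here
}
```

Exactly this kind of pointer-chasing loop is where the new constructs should shine compared to what OpenMP 2.5 has to offer.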
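
Finally, to illustrate why fork/join frameworks and divide-and-conquer go together so well: the recursion forks the subproblems as independent tasks and joins on their results before combining them. Because the Java SE 7 API is still in flux, the sketch below (again my own) expresses the same shape with the draft OpenMP tasking constructs instead; the structure (split, fork, join, combine) is what any fork/join framework is built around.

```cpp
#include <cstddef>

// Recursive parallel sum: fork both halves as tasks, join, combine.
double sum(const double* a, std::size_t n) {
    if (n < 1024) {                       // small enough: solve sequentially
        double s = 0.0;
        for (std::size_t i = 0; i < n; ++i) s += a[i];
        return s;
    }
    double left, right;
    #pragma omp task shared(left)         // fork: left half
    left = sum(a, n / 2);
    #pragma omp task shared(right)        // fork: right half
    right = sum(a + n / 2, n - n / 2);
    #pragma omp taskwait                  // join: wait for both child tasks
    return left + right;                  // combine the partial results
}

// Call from within "#pragma omp parallel" / "#pragma omp single"
// so that there is a thread team around to execute the generated tasks.
```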
That’s it, folks, happy reading!