Thoughts on Larry O’Brien’s article at devx.com
Larry O’Brien has written an introductory article on parallel programming with OpenMP on Windows and announced it in his blog. I enjoyed reading the article and think it is a really nice resource for people new to parallel programming. I would like to comment on some parts of it, and since it does not have a comment section (and my comment would be quite long anyway), I will do it here:
Many programmers don’t understand this simple reality: When using mainstream programming languages the only way to take advantage of multiple cores is to explicitly use multithreading.
Wow, what an introduction :D. I hope this is meant as an opening phrase to catch some attention, because I sincerely hope it is not true. I know most of my students understand that one way of taking advantage of multiple cores is to use multithreading. Most of my friends involved in programming do as well. But then again, that is not sufficient proof of anything, and I imagine Larry would have a hard time proving his claim, too ;-). Also, please note the emphasis in my version of the statement: there are certainly other ways to take advantage of multiple cores (think MPI or Erlang; see the little sketch below). It is not for me to judge how mainstream these are, though…
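Just to illustrate what I mean by “other ways”: a minimal MPI sketch looks like the one below, with independent processes passing messages instead of threads sharing memory. It is nothing more than the obligatory hello-world and of course not taken from Larry’s article.

```cpp
#include <mpi.h>
#include <cstdio>

int main(int argc, char** argv)
{
    // MPI launches several independent processes which communicate
    // through messages instead of sharing memory like threads do.
    MPI_Init(&argc, &argv);

    int rank = 0, size = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  // who am I?
    MPI_Comm_size(MPI_COMM_WORLD, &size);  // how many of us are there?

    std::printf("Hello from process %d of %d\n", rank, size);

    MPI_Finalize();
    return 0;
}
```

Start it with something like mpirun -np 4 ./hello (or whatever launcher your MPI implementation ships), and each of the four processes, possibly sitting on its own core or even its own machine, prints its rank.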
OpenMP, a multiplatform API for C++ and Fortran
Don’t forget the C support :P…
simply wrapping a processor-intensive loop in a #pragma block can lead to about a 70 percent performance increase on a dual-core or dual-processor system
I just don’t know why he is so keen on the 70 percent number. He mentioned it earlier in one of his blog posts, which got picked up by Eric Sink, and I was sitting there scratching my head because it contradicts my experience. He talks about embarrassingly parallel programs, and I see no reason why those should not scale to a 100 percent performance increase on a dual-core machine. In fact, I have witnessed speedups like that. Sometimes you can even do better (so-called superlinear speedups). On the other hand, it of course depends on what you measure. Just the parallel region? Then such speedups are possible. The whole program? Then Amdahl’s Law will bite and your performance increase will probably be lower. Still, there is no guarantee it will be anywhere near seventy percent; it may just as well be sixty percent, twenty percent or ninety percent. It totally depends on your application, your algorithms, your parallelization skills, and sometimes I get the feeling even the phases of the moon are involved 8O.
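To make this a bit more concrete, here is a minimal sketch of the kind of loop the article is talking about. The array size and the dummy computation are made up for illustration and not taken from Larry’s article:

```cpp
#include <omp.h>
#include <cmath>
#include <cstdio>
#include <vector>

int main()
{
    const int N = 10000000;                // made-up problem size
    std::vector<double> data(N, 1.0);

    const double start = omp_get_wtime();

    // One pragma is enough to spread the iterations of this
    // embarrassingly parallel loop over all available cores.
    #pragma omp parallel for
    for (int i = 0; i < N; ++i)
    {
        data[i] = std::sqrt(data[i]) * std::sin(data[i] + i);
    }

    const double elapsed = omp_get_wtime() - start;
    std::printf("loop took %f seconds, data[0] = %f\n", elapsed, data[0]);
    return 0;
}
```

Whether you see a 70, 90 or 100 percent improvement depends heavily on whether you time only this loop or the whole program. Amdahl’s Law puts the overall speedup on n cores at 1 / ((1 - p) + p / n), where p is the fraction of the runtime that actually runs in parallel, so even a modest serial part caps the total gain no matter how many cores you throw at it.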
To err is human, but to really screw up you need shared state.
😆 Oh so true. Luckily for us, there are some tools available today to help…
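To show with a small made-up example what kind of screw-up shared state invites: the first loop below looks perfectly harmless, but the unsynchronized updates of the shared variable sum are a classic data race. The reduction clause in the second version lets OpenMP give every thread its own copy and combine the copies at the end:

```cpp
#include <omp.h>
#include <cstdio>

int main()
{
    const int N = 1000000;
    double sum = 0.0;

    // Broken: all threads update the shared variable sum at the same
    // time, so updates get lost and the result changes from run to run.
    #pragma omp parallel for
    for (int i = 0; i < N; ++i)
    {
        sum += 1.0;   // data race!
    }
    std::printf("racy sum:    %f\n", sum);

    sum = 0.0;
    // Fixed: each thread works on a private copy of sum, and OpenMP
    // adds the copies together when the loop is done.
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < N; ++i)
    {
        sum += 1.0;
    }
    std::printf("reduced sum: %f\n", sum);
    return 0;
}
```

Run the racy version with more than one thread and the first number will most likely differ from 1000000 (and from run to run), which is exactly the kind of bug those tools are there to catch.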
In Visual C++ 2005, using OpenMP is as simple as adding the #pragma and compiling with a command-line switch (/openmp)
Do not try this with the free Express Edition, though. I downloaded it once to play with the OpenMP support, and since I usually develop on Linux, this seemed like a great way to toy around with it. The switch is still there as described, yet OpenMP support is completely missing from the Express Edition. It took me some time to figure this out, and I am still not sure why they could not at least have deactivated the switch on the preferences page…
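For completeness, this is what compiling from a Visual Studio 2005 command prompt looks like (the file name is just a placeholder, of course):

```
cl /openmp /O2 parallel_loop.cpp
```

The /openmp switch is all it takes to turn the pragmas on; /O2 is just the usual optimization flag.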
Beyond that, what about when machines start having 16 and 32 cores? Today you might be able to get by without parallelizing your code, but that’s certainly not going to be the case in the not-so-distant future.
That future is here already if you look beyond Intel and AMD, e.g. at Sun’s UltraSPARC T1 processor, which supports 32 threads running at the same time (not exactly mainstream, I admit, and its floating point performance is still not satisfactory, but you will not have to sell your soul to get one today either :D).
This closes my short musings on the article, and as I said in my introduction, it was a true pleasure to read. I was quite glad to see an article about OpenMP and parallel programming in general generating responses and discussion in the blogosphere, and I sure hope this does not die down quickly. As I said in one of my previous posts: we are in the middle of a (parallel) revolution, and this time it’s for real!