Thinking Parallel

A Blog on Parallel Programming and Concurrency by Michael Suess

Now this is just typical. Last week I finally got rid of my job board, because nobody was using it. Just a week later, my advisor asks me where it went, because she would like to announce two job openings. Oh well. Murphy is everywhere. I won’t bring back the job board, because as we say in Germany, one swallow does not make a summer (gotta love those word-for-word translations 😆 ), but I will not miss the chance to tell you about the openings. (more…)

Contrary to what the update frequency of this blog suggests, I am not dead. Not even close; I am very healthy. The only problem is that with my new job, a new house in the works and my family, I am left with very little time for blogging. Life is about setting the right priorities, and at this point in time that unfortunately means I will not be able to keep up my usual posting frequency of one post per week. Instead I am aiming for one post per month now, while leaving the door open to write more as soon as my time permits. There is always enough time for a short newsflash on interesting articles on the net, though: (more…)

I usually don’t post press releases. But this one is different, since my favorite parallel programming system has almost reached its third major release. That’s right, OpenMP 3.0 is right around the corner and this is your chance to get your early fix. The language committee has worked very hard to make it a true revolution. I have told you about the major change before (tasking), and I am absolutely sure this release will push OpenMP even further into the mainstream! So without any further ado, here is the official announcement: (more…)
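For those of you who have not seen the tasking proposal in action yet, here is a minimal sketch of what the new task construct looks like in C. The node type and the process() function are placeholders I made up for illustration; they are not part of the announcement:

    /* Sketch: walking a linked list and turning every node into a task. */
    typedef struct node { struct node *next; int data; } node_t;

    void process(node_t *n) { (void)n; /* work on one node would go here */ }

    void traverse(node_t *head)
    {
        #pragma omp parallel
        {
            #pragma omp single          /* one thread walks the list and creates the tasks */
            {
                for (node_t *n = head; n != NULL; n = n->next) {
                    #pragma omp task firstprivate(n)
                    process(n);         /* each node is processed as its own task */
                }
            }
        }                               /* implicit barrier: all tasks have finished here */
    }

The single construct makes sure only one thread creates the tasks, while the whole team of threads picks them up and executes them.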

I have been very quiet on this blog lately, mostly because my new job and the construction of our new house have kept me rather busy. Having to get up at 6 in the morning to be able to bring our son to kindergarten has not helped my productivity in the evenings either – and of course that’s the time when I am usually writing for this blog. But anyway, I will try to be more productive in the future. What I would like to write about today are two blog posts from Tim Mattson, the first one called Parallel programming environments: less is more and a follow-up called Is anyone dumb enough to think yet another parallel language will solve our problems? I MIGHT be!. While I think Tim raises some very valid points, I still believe his conclusions are in need of some more discussion! 🙂 This post also includes a call to action at the bottom, so be sure to read it until the end! (more…)

While at the ParCo conference two weeks ago, I had the pleasure of meeting Ruud van der Pas again. He is a Senior Staff Engineer at Sun Microsystems and gave a very enlightening talk called Getting OpenMP Up To Speed. What I would like to post about is not the talk itself (although it contains some material that I have wanted to write about here for a long time), but the introduction he used to get our attention. He used an imaginary conversation, which I am reprinting here with his permission. Only one side of the conversation is shown, but it’s pretty easy to fill in the other one: (more…)

It has been a while since I have done a news roundup – therefore it is time for a new one. But before I start, let me pass on a few personal remarks about my present situation. I am in the process of finishing my PhD thesis right now and hope to submit it for review next week. Of course, this also means that I am rather busy at the moment, so the comments on each article I present here are not as verbose as you may be used to. I have also moved back to Leipzig, which is the beautiful city where I was born and raised. Starting in October, I will be working at a company called TomTom WORK, which is a division of TomTom. You may or may not know that company from the label on the navigation system in your car. I will be doing software development using C++. As far as I know, my job has nothing to do with parallel programming, but since I still have my pet project in the works (more on it really soon now) and one article per week is easily sustainable without working in the field directly, I intend to just continue this blog as is. (more…)

The very first step in every successful parallelization effort is always the same: you take a look at the problem that needs to be solved and start splitting it into tasks that can be computed in parallel. This sounds easy, but I can see from my students’ reactions that at least for some of them it is not. This article shows five ways to do just that. It is the short version fit for a blog article; the long version can be found in the book Introduction to Parallel Computing by Grama et al. (more…)
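To make at least the simplest of these approaches concrete, here is a small sketch of a data decomposition in C with OpenMP, where the iteration space of a loop is split up and each thread works on its own chunk. The function and its arguments are made up for this example:

    /* Data decomposition sketch: the loop's iterations are split into
     * chunks and each thread sums the chunk it was given. */
    double sum_array(const double *a, int n)
    {
        double sum = 0.0;
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < n; ++i)
            sum += a[i];    /* every iteration is independent of the others */
        return sum;
    }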

It has been a while since I did this little experiment, but I still find the results interesting. As some of you may know, I teach a class on parallel programming (this is an undergraduate class, by the way – may I have a million dollars in funding now as well, please? 😎 ). The first parallel programming system we teach to our students is OpenMP. There is no written test at the end of the class; instead the students get to do assignments in teams of two, which have to be defended before us. This is really educational for us (and I think for the students as well), because we get to see and find the mistakes our students make. I have compiled a little statistic on the mistakes made by our students, and in this post you will find the results. Why am I posting a list of mistakes? Because I think learning from other people’s mistakes is almost as good as learning from my own, and usually saves quite a lot of time compared to the first option. 😀 (more…)
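The actual statistic is in the full article, but to give you an idea of the kind of mistake I mean, here is one classic example I see regularly (my own reconstruction for illustration, not a verbatim student submission): a temporary variable that is shared by default and therefore raced on by all threads.

    #define N 1000

    /* BUG on purpose: t is declared outside the parallel loop and is
     * therefore shared by default, so all threads write to the same t. */
    void scale_and_shift(const double a[N], double b[N])
    {
        double t;
        #pragma omp parallel for     /* missing private(t) */
        for (int i = 0; i < N; ++i) {
            t = 2.0 * a[i];
            b[i] = t + 1.0;
        }
    }

Declaring t inside the loop body (or adding private(t) to the pragma) fixes the race.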

Mark Nelson does not believe in the hype about multi-cores. And he is right with several of his arguments. The world is not going to end if we cannot write our applications to allow for concurrency, that’s for sure. Since I am working on parallel machines all day, it is easy to become a little disconnected from the real world and think everybody has gotten the message and welcomes our new parallel programming overlords. Some of Mark’s arguments are a little shaky, though, as I hope to show you in this article. Is Mark right? I suspect not, but only time will tell. (more…)

When I started my PhD thesis a couple of years ago, I took some time to look at auto-parallelizing compilers and the research behind them. After all, I wanted to work on making parallel programming easier, and the best way to do that would surely be to let compilers do all the work. Unfortunately, the field appeared to be quite dead at that time. A huge amount of research had been done in the eighties and nineties, yet it all appeared to have settled down. And the compilers I tried could not parallelize more than the simplest loops. I have always asked myself why this was the case, and when I had the chance to talk to Dr. Jay Hoeflinger, he had some very interesting answers for me. He agreed to let me re-ask these questions in an email interview, and this is the result. Thanks, Jay, for sharing your knowledge! (more…)
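To illustrate what I mean by the simplest loops (my own example, not one from the interview): the first loop below has completely independent iterations and is easy prey for an auto-parallelizer, while the second carries a dependence from one iteration to the next that the compiler would have to prove away before it could do anything.

    /* Independent iterations: an auto-parallelizing compiler can split
     * this loop across threads without further analysis. */
    void easy(double *a, const double *b, int n)
    {
        for (int i = 0; i < n; ++i)
            a[i] = 2.0 * b[i];
    }

    /* Loop-carried dependence: a[i] needs the value of a[i-1] computed in
     * the previous iteration, so the loop cannot simply run in parallel. */
    void hard(double *a, int n)
    {
        for (int i = 1; i < n; ++i)
            a[i] = a[i] + a[i - 1];
    }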