Scoped locking vs. critical in OpenMP – a personal shootout
When reading any recent book about C++ and parallel programming, you will probably be told that scoped locking is the best way to handle mutual exclusion and, in general, the greatest invention since hot water. Yet most of these writers are not talking about OpenMP, but about lower-level threading systems. OpenMP has its own rock-star directive for taking care of synchronisation, called critical. Nevertheless, it is possible to implement scoped locking in OpenMP, and I was curious: how would an honest comparison between the two contenders turn out?
Before I start, a word of warning: this article requires some C++ and OpenMP knowledge, as well as some familiarity with general parallel programming concepts. It also concentrates on a very specific problem, which may not interest everyone. You have been warned :P.
Lately, I have been experimenting with concurrency patterns in C++ and OpenMP. One of the first things I tried was to implement a Guard Object; the technique behind it is also commonly referred to as Scoped Locking. I start this article by briefly explaining what scoped locking means (if you already know that, feel free to skip the first few paragraphs) and afterwards compare it to what can be done with the critical directive in OpenMP. My personal conclusion is at the end of this article.
An introduction to Scoped Locking
Mutual exclusion is the most common form of synchronisation in threaded parallel programs. Most threading systems provide data types and API functions to guarantee mutual exclusion (e.g. mutex variables in Pthreads), and I will refer to these types as locks for the rest of this article. A common error when using these functions is forgetting to unset the lock. Never happened to you, you say? Think of the case when there are break, exit or return statements inside the protected region: they give us multiple ways to exit that region and screw up by forgetting to unset the lock, especially when multiple authors work on the same code base.
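To make the hazard concrete, here is a small, hedged sketch using plain OpenMP lock calls – the function contains() and its data are made up for illustration – in which one exit path forgets to unset the lock and leaves every other thread blocked:

    #include <omp.h>
    #include <cstddef>
    #include <vector>

    // Assumed to be initialized elsewhere with omp_init_lock.
    omp_lock_t my_lock;

    bool contains (const std::vector<int>& shared_data, int value)
    {
        omp_set_lock (&my_lock);
        for (std::size_t i = 0; i < shared_data.size (); ++i) {
            if (shared_data[i] == value)
                return true;                 // oops: the lock is never unset on this path
        }
        omp_unset_lock (&my_lock);
        return false;
    }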
Enter guard objects to the rescue. They use the Resource Acquisition Is Initialization (RAII) technique to guarantee that the unset function is always called. What does this mean concretely? A guard object is a simple object whose constructor takes a lock as a parameter and sets it right there in the constructor. The lock is unset in the destructor of the guard object. Since in C++ the destructor is called automatically as soon as the object goes out of scope, the lock will always be released as soon as the program leaves the protected code region. Sometimes an example says more than a thousand words, so here is one (just a snippet, in OpenMP):
- #include "omp_guard.hpp"
- ...
- omp_lock_t my_lock;
- ...
- // initialize lock
- omp_init_lock (&my_lock);
- ...
- // go parallel
- #pragma omp parallel
- {
- // do something sensible in parallel
- ...
- {
- omp_guard my_guard (my_lock);
- // protected region starts here
- // only one thread at a time works here
- }
- // more parallel work
- ...
- }
- omp_destroy_lock (&my_lock);
The important part is the small inner scope inside the parallel region: when the omp_guard object my_guard is declared, its constructor is called automatically (you will see it in a second) and the lock is set. At the closing brace of that inner scope the guard object goes out of scope and its destructor is called automatically, thereby releasing the lock. Not too difficult to understand; nevertheless, there is quite a lot of overhead involved. We will get to that in a second, but first, as promised, the actual implementation of the guard object in OpenMP. Let's start with the header:
    #ifndef OMP_GUARD_HPP_
    #define OMP_GUARD_HPP_

    #include <omp.h>

    /** This is a class for guard objects using OpenMP.
     *  It is adapted from the book
     *  "Pattern-Oriented Software Architecture". */
    class omp_guard {
    public:
        /** Acquire the lock and store a pointer to it */
        omp_guard (omp_lock_t &lock);
        /** Set the lock explicitly */
        void acquire ();
        /** Release the lock explicitly (owner thread only!) */
        void release ();
        /** Destruct guard object */
        ~omp_guard ();

    private:
        omp_lock_t *lock_;  // pointer to our lock
        bool owner_;        // is this object the owner of the lock?

        // Disallow copies or assignment
        omp_guard (const omp_guard &);
        void operator= (const omp_guard &);
    };

    #endif /*OMP_GUARD_HPP_*/
And here goes the actual implementation:
    /** The class contained in this file is a guard object */
    #include "omp_guard.hpp"

    /** Construct guard object and acquire our lock */
    omp_guard::omp_guard (omp_lock_t &lock)
        : lock_ (&lock), owner_ (false)
    {
        acquire ();
    }

    /** Explicitly set our lock */
    void omp_guard::acquire ()
    {
        omp_set_lock (lock_);
        owner_ = true;
    }

    /** Explicitly unset our lock.
     *  Only unset it, though, if we are still the owner. */
    void omp_guard::release ()
    {
        if (owner_) {
            owner_ = false;
            omp_unset_lock (lock_);
        }
    }

    /** Destruct guard object, release the lock */
    omp_guard::~omp_guard ()
    {
        release ();
    }
Figured it out? There is not much for me to explain (of course you need to know a little C++ to understand the example). The owner_ variable is there so that the lock can be released manually via the release() function without being unset a second time when the destructor is called (which is illegal). A short sketch of that manual release follows below.
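A minimal, hedged sketch of the early-release pattern – the helper functions quick_check(), expensive_unprotected_work() and protected_work() are made up for illustration:

    #include <omp.h>
    #include "omp_guard.hpp"

    // Hypothetical helpers, for illustration only.
    static bool quick_check (int v)              { return v < 0; }
    static void expensive_unprotected_work (int) { /* ... */ }
    static void protected_work (int& v)          { ++v; }

    void process (int& shared_value, omp_lock_t& lock)
    {
        omp_guard guard (lock);                    // lock set in the constructor
        int snapshot = shared_value;               // read while protected
        if (quick_check (snapshot)) {
            guard.release ();                      // unset the lock early by hand...
            expensive_unprotected_work (snapshot); // ...so the expensive part runs unlocked
            return;                                // destructor sees owner_ == false and does nothing
        }
        protected_work (shared_value);
    }                                              // destructor releases the lock on this path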
Bear with me, we are almost through with the code examples. I have one more, though: the first example rewritten to use the critical directive, for comparison. Here it is:

    ...
    // go parallel
    #pragma omp parallel
    {
        // do something sensible in parallel
        ...
        #pragma omp critical
        {
            // protected region starts here
            // only one thread at a time works here
        }
        // more parallel work
        ...
    }
Quite a bit shorter, isn’t it? OK, that is enough code for the moment, let’s get to work comparing the two.
Scoped Locking vs. critical – the comparison
First of all, let’s see what both solutions have in common:
- you cannot forget to unset the lock, as the release happens automatically with scoped locking, and forgetting the end of the region is caught at compile time with critical
So the initial problem appears to be solved by both. But wait, there is (unfortunately) more to it. Here are some reasons to use scoped locking and NOT the critical directive:
- it is allowed to leave the locked region with jumps (e.g. break, continue, return); this is forbidden in regions protected by the critical directive
- scoped locking is exception-safe, critical is not
- all unnamed critical regions wait for each other, while with guard objects you can have as many different locks as you like – named critical sections help a bit, but the name must be given at compile time, whereas with scoped locking the lock can be chosen at run time
Let’s go through them one by one. The first point is already a serious blow to the usability of critical: in my introduction I mentioned that it is easy to forget to unset a lock when the protected region of code has multiple points of exit. Well, the fact is, if you have code like that you cannot use the critical directive at all, because the OpenMP specification forbids jumping out of a critical region! Of course there are workarounds (such as filling the critical region with conditional clauses and flags, or resorting to OpenMP locks), but they take away the ease of use of critical in the process. The second point is closely related: it is not allowed to let an exception propagate out of a critical region either. Exceptions are not as common in C++ as they are in other languages (e.g. Java), but this is still a serious limitation. All of this works flawlessly with scoped locking.
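To make the first point concrete, here is a hedged sketch – find_and_remove() and its container are invented for illustration – that returns straight out of the protected region; this is fine with omp_guard, but placing the return inside a critical construct would not be allowed:

    #include <omp.h>
    #include <cstddef>
    #include <vector>
    #include "omp_guard.hpp"

    // The guard's destructor releases the lock on every exit path,
    // including the early return in the middle of the loop.
    bool find_and_remove (std::vector<int>& shared_data, int value, omp_lock_t& lock)
    {
        omp_guard guard (lock);                       // lock set here
        for (std::size_t i = 0; i < shared_data.size (); ++i) {
            if (shared_data[i] == value) {
                shared_data.erase (shared_data.begin () + i);
                return true;                          // lock released by the destructor
            }
        }
        return false;                                 // and here as well
    }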
The third point is a little more difficult to explain. Let’s say I want to implement a library function which can be called from within a parallel region. The function has been made thread-safe: data is passed into it and the critical directive is used to prevent concurrent access to that data. The problem with this scenario is that even though the programmer may pass different data into the function, the critical section will prevent the threads from working on them at the same time! I think I need one more piece of example code to clear things up a bit here; first, the example library function:
    void threadsafe_foo (int& data)
    {
        // do some work
        ...
        #pragma omp critical (threadsafe_foo_1)
        {
            data = rand ();
        }
    }
And now a piece of code that uses it:
    int data1, data2;
    #pragma omp parallel
    {
        threadsafe_foo (data1);
        // do some work
        threadsafe_foo (data2);
    }
See the problem? Even though threadsafe_foo() is called with completely different data, the critical region inside the function does not and cannot know that, and therefore the threads stall each other unnecessarily. With scoped locking this can be avoided by passing a different lock into the function whenever different data are to be protected – and the problem disappears, as the sketch below shows.
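A hedged sketch of what this could look like – the extra lock parameter of threadsafe_foo() and the surrounding caller() function are my own invention, not taken from any existing library:

    #include <omp.h>
    #include <cstdlib>
    #include "omp_guard.hpp"

    // The caller now decides which lock protects which piece of data.
    void threadsafe_foo (int& data, omp_lock_t& lock)
    {
        // do some work
        // ...
        omp_guard my_guard (lock);
        data = std::rand ();
    }

    void caller ()
    {
        int data1 = 0, data2 = 0;
        omp_lock_t lock1, lock2;
        omp_init_lock (&lock1);
        omp_init_lock (&lock2);

        #pragma omp parallel
        {
            threadsafe_foo (data1, lock1);
            // do some work
            threadsafe_foo (data2, lock2);
        }

        omp_destroy_lock (&lock1);
        omp_destroy_lock (&lock2);
    }

Calls that protect data1 and calls that protect data2 can now proceed concurrently, because each piece of data has its own lock.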
OK, so does this mean that everyone is supposed to forget about the critical directive in OpenMP and use scoped locking? I am afraid it is not that easy, since there are also advantages to using critical:
- no need to declare, initialize and destroy a lock
- no need to include, declare and define a guard object yourself, as critical is a part of the language (OpenMP)
- you always have explicit control over where your critical section ends, whereas the end of the scoped lock is implicit
- works for C and C++ (and FORTRAN)
- less overhead, as the compiler can translate it directly into lock and unlock calls, whereas it has to construct and destruct an object for scoped locking
The first two points on this list are about simplicity and ease of use. It is simply less work to use the critical directive, because it is part of the language. You can count the lines in the examples above if you do not believe me (but actually it is fairly obvious without counting 🙄 ). The third point is easily worked around by opening a new scope around the protected code, as done in the examples above, but if you are not used to that (and to braces that seem to exist without a reason) it will look strange. The fourth point is only valid if you are (like me) also doing work in other languages, as scoped locking does not work in e.g. C, while the critical directive works in every language supported by OpenMP (C / C++ / FORTRAN).
The fifth point may be a serious one for your application. A critical directive can usually be translated directly into lock operations at compile time by the compiler, so there is no (or at least very little) overhead involved at run time. This is not the case for scoped locking: for every thread entering the protected region, the run-time system needs to construct and destruct an object. I am not aware of any magic the compiler could do to reduce this overhead, but then again I am by no means a compiler expert; if you know of a technique, please be sure to leave a comment. This overhead may be negligible, but then again it may not be, and it may seriously impact your performance. The only way to be sure is to measure it yourself for your specific application.
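If you want a first impression for your own setup, a rough, hedged micro-benchmark sketch along these lines might help; the iteration count and the trivial counter increment are arbitrary placeholders for your real workload:

    #include <omp.h>
    #include <cstdio>
    #include "omp_guard.hpp"

    int main ()
    {
        const int iterations = 100000;
        long counter = 0;

        omp_lock_t lock;
        omp_init_lock (&lock);

        // Variant 1: guard object per iteration.
        double start = omp_get_wtime ();
        #pragma omp parallel for
        for (int i = 0; i < iterations; ++i) {
            omp_guard guard (lock);
            ++counter;
        }
        double guard_time = omp_get_wtime () - start;

        // Variant 2: critical directive.
        start = omp_get_wtime ();
        #pragma omp parallel for
        for (int i = 0; i < iterations; ++i) {
            #pragma omp critical
            ++counter;
        }
        double critical_time = omp_get_wtime () - start;

        std::printf ("scoped locking: %f s, critical: %f s\n", guard_time, critical_time);

        omp_destroy_lock (&lock);
        return 0;
    }

The absolute numbers mean little on their own; what matters is how the two variants compare on your compiler with your real protected work inside the region.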
Conclusion
Congratulations on reading this far 😆 . I promised you a shootout; what I did not promise you was a clear winner. I cannot deliver one either, as both solutions have their merits, as explained in the previous sections. The ability to jump out of protected regions (via exceptions, break or whatever) speaks heavily for scoped locking, yet the simplicity and performance of critical is nothing to sneeze at. I will put up a recommendation here that at least I am willing to adhere to: start by using critical in your OpenMP programs and beware of its limitations. When you encounter any of the problems mentioned here, don’t spend your valuable time implementing ugly workarounds, but try switching to the more general solution of scoped locking instead. When you do, be sure to measure whether you take a performance hit.
P.S.
That’s it for today. This has been my longest article so far, and a lot more low-level than the previous ones as well. If I have forgotten anything regarding the two techniques, if something was not sufficiently clear or if you have anything else to add, please do not hesitate to leave a comment and I will try to clarify / discuss the issues.