Arene Base
Fundamental Utilities For Safety Critical C++
Arene Base provides facilities for synchronizing operations across threads.
There is no single public header for this subpackage; each facility serves a specific use case, and there is unlikely to be overlap between their uses.
The Bazel target is
The synchronization subpackage provides backports of std::latch and std::barrier, as well as additional facilities for synchronizing operations between threads.
Note: use of these synchronization mechanisms may lead a thread to block. Care should be taken to ensure this is permitted by the relevant coding standard.
arene::base::latch is a backport of std::latch. It is a single use latching synchronization object. The constructor takes the number of events needed to trigger the latch.
Threads then call the count_down or arrive_and_wait member functions to decrease the count.
Threads can call wait or arrive_and_wait member functions to wait for the count to reach zero. This is a blocking wait that cannot be interrupted. When the count reaches zero, all waiting threads are unblocked, and can return from their wait or arrive_and_wait calls. Calls to wait after the count is already zero return immediately.
For example, you can use a latch to wait for background processing to complete.
The call to done.wait() will block until all the threads have executed done.count_down().
Header:
arene::base::barrier is a backport of std::barrier. It is a "loop" synchronization primitive, used to synchronize a number of threads involved in a macro task with repeated synchronization points. The "loop" may involve one or more loop statements such as for or while, or may be a "conceptual" loop where each thread involved in the task performs a series of steps with synchronization points between.
The template parameter is the type of the completion function that is executed at the end of each loop iteration; if it is omitted, then a default do-nothing completion function is used.
The constructor takes a count of threads that will need to arrive at the barrier each loop iteration, and an optional completion function, which must be of the specified type. If the completion function is omitted, then a default-constructed instance is used instead; this will result in an error if the type is not default-constructible.
Each loop iteration, threads arrive at the barrier, and wait for it to be signalled. When the specified number of threads have arrived, the completion function is invoked, waiting threads are woken, and the barrier is reset for the next loop iteration.
Threads arrive at the barrier by calling one of the following functions:
- arene::base::barrier::arrive: This arrives at the barrier and returns an arene::base::barrier::arrival_token, which must later be passed to arene::base::barrier::wait by that thread in order to wait for the loop iteration to end. If the loop iteration has already ended before the call to arene::base::barrier::wait, then the call returns immediately.
- arene::base::barrier::arrive_and_drop: This arrives at the barrier but does not wait for the loop iteration to end, and the thread may no longer arrive at the barrier in future iterations: the thread count is permanently decreased by one.
- arene::base::barrier::arrive_and_wait: This is equivalent to calling b.wait(b.arrive()) for an arene::base::barrier instance named b.

The thread that arrives at the barrier last, causing the remaining thread count to reach zero, executes the completion function and then wakes all waiting threads blocked in wait or arrive_and_wait. Any threads that have called arrive but not wait will not block when they call wait. The counter is then reset and threads can begin arriving for the next loop iteration.
A thread may pass a numerical value to arrive, aggregating arrivals from multiple threads. The total number of arrivals during a given loop iteration must not exceed the current thread count of the barrier object; exceeding it is a precondition violation.
Threads that call arrive must be careful not to access any resources that require synchronization with the completion function between the call to arrive and the call to wait, as the completion function is invoked based on the number of threads that have arrived, not the number of threads that have waited. Failure to maintain this assumption will result in data races.

The example below shows a task being split across multiple worker threads, which synchronize using an arene::base::barrier:
Here, the worker threads each do their portion of the work, and then wait for the others to finish before finish_task is invoked, and then all the threads start work on the next iteration. The main thread does whatever processing it needs to do, and then signals that the background threads should stop, and waits for them to finish.
Header:
arene::base::manual_reset_event is a low-level synchronization object. It has two states: signalled and not signalled. Initially it is not signalled.
When it is not signalled, then any threads that call arene::base::manual_reset_event::wait will block until it is signalled. Calling arene::base::manual_reset_event::signal will change the state to signalled, and wake any threads blocked in wait.
When the event is signalled, then any threads that call wait will return immediately.
Calling arene::base::manual_reset_event::reset will change the state from signalled to not signalled. If reset is called immediately after signal, then it is possible that some of the waiting threads will not have woken, and will thus miss the signal, and remain blocked. If this is undesirable, then an additional mechanism should be used to ensure that all threads have woken before calling reset.
Once the event has been changed back to not signalled, threads that call wait will block again.
Header: