Convert Existing C++ Code To Run As Thread



Often you need to execute certain code as a thread, and it is one of those “occasionally running” pieces of code for which setting up a thread pool is simply overkill. You can launch a thread and pass your required code as a function, but most of the time even the thread management that comes with a single thread is too much work. In this blog, we will look at a simple technique to solve this problem. Moreover, we will create a generic executor class template which can be reused with any future code you write. You will need C++14 or higher to use this technique.


Newer C++ standards offer multiple templates for multi-threading requirements. Here are the ones we will use for the sample code in this blog:


std::packaged_task

The class template std::packaged_task wraps any callable target (function, lambda expression, bind expression, or another function object) so that it can be invoked asynchronously. Its return value, or any exception thrown, is stored in a shared state which can be accessed through std::future objects.


std::future

The class template std::future provides a mechanism to access the result of asynchronous operations:

  • An asynchronous operation (created via std::async, std::packaged_task, or std::promise) can provide a std::future object to the creator of that asynchronous operation.

  • The creator of the asynchronous operation can then use a variety of methods to query, wait for, or extract a value from the std::future. These methods may block if the asynchronous operation has not yet provided a value.

  • When the asynchronous operation is ready to send a result to the creator, it can do so by modifying the shared state (e.g. std::promise::set_value) that is linked to the creator's std::future.


Now let’s create a reusable template class which can run any piece of code as a thread.
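
The original post shows this class as an image; below is a minimal sketch of what such a threaded_executor.h could contain, matching the five points described next. The post only names 'AddAndExecuteTask'; the other identifiers here are our assumptions.

```cpp
// threaded_executor.h -- hypothetical reconstruction, assuming C++14.
#pragma once

#include <functional>
#include <future>
#include <thread>
#include <utility>
#include <vector>

template <typename InputT, typename OutputT>
class ThreadedExecutor {
public:
    // Package the callable, launch it on a detached thread immediately,
    // and keep the std::future for later result retrieval.
    void AddAndExecuteTask(std::function<OutputT(InputT)> task, InputT input) {
        std::packaged_task<OutputT(InputT)> packaged(std::move(task));
        futures_.push_back(packaged.get_future());
        std::thread(std::move(packaged), std::move(input)).detach();
    }

    // Wait for every task to finish (via its future) and return all results.
    std::vector<OutputT> WaitForAllResults() {
        std::vector<OutputT> results;
        for (auto& f : futures_) {
            results.push_back(f.get());
        }
        return results;
    }

private:
    std::vector<std::future<OutputT>> futures_;
};
```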



Here’s what this simple piece of code is doing:

  1. Defines a template class for executing any function object as a thread.

  2. The signature of the function object (its input and output types) is templatized through template parameters.

  3. Input function object is packaged as a std::packaged_task object.

  4. We retrieve a std::future from the std::packaged_task object created around the input function, then pass the packaged task to a thread, to execute as a detached thread. We also store the retrieved std::future so the thread's result can be collected later.

  5. A new interface (function) is added to the class which waits for all threads to finish (by waiting on their futures) and returns the results from all threads together.

Now this simple piece of code in C++ is much more powerful than it appears. You can turn any small piece of code into a thread. Let’s take an example client (I will save the above template in threaded_executor.h, and the sample client in a file called threaded_client.cpp).




The client code here is very simple and is meant for demonstration only. It does not use any locks, but real-world code will also need a synchronization mechanism between threads. You can always define a lock at a higher level and pass it to all threads to synchronize activity across them.


In the example client code above for our new executor class, we define an integer with the value 10 and then create two threads: one which increments the number by 10, and another which subtracts 5 from it. We pass the actual code to increment or decrement the number through lambda functions, though these could be defined as standalone functions too. Passing code through a lambda is what lets us take existing code and run it as a thread. Also, since the lambda captures by reference, we don’t need to pass ‘number’ to (or return it from) the threaded code, so we use ‘void*’ as a placeholder input type and pass ‘nullptr’ there. The lambdas (or standalone functions) passed to the executor must have a signature with ‘void*’ as the input type and integer as the return type; this signature is fixed when our ‘ThreadedExecutor’ class is instantiated, and the compiler will enforce it for us.


We also put a small delay at the start of the addition and subtraction to make sure both pieces of code execute at almost the same time (since ‘AddAndExecuteTask’ always starts the thread immediately, without the delay this simple example would always execute the tasks in the order they were added).


Let’s compile and run this simple client code.


[root@wte]# gcc threaded_client.cpp -o threaded_client -std=c++17 -I ./ -lstdc++ -lpthread


[root@wte]# ./threaded_client

Thread-0 Result=20

Thread-1 Result=15

Number value after both threads completed: 15


[root@wte]# ./threaded_client

Thread-0 Result=15

Thread-1 Result=5

Number value after both threads completed: 15


[root@wte]# ./threaded_client

Thread-0 Result=20

Thread-1 Result=15

Number value after both threads completed: 15


As we can see (and expect), the final value of the number is always the same, while the order in which the two arithmetic operations execute is random, because the two operations run at the same time as threads.


Now this simple piece of code can be used to convert any of your existing C++ code to run as a thread! Just include our new executor class in your existing code, capture the piece of code you want to run as a thread in a lambda (or a standalone function), pass it to the executor, and that’s it!