1. We dynamically allocate a stack frame on entry, and that dynamic memory allocation sticks around until the resumable function completes.
2. We always construct a promise-future pair for the result (or whatever synchronisation object `coroutine_traits` says) on entry to the function.
3. If `async_call()` always returns a signalled future, we add the result to `total`, signal the future and return the future.
4. If `async_call()` ever returns an unsignalled future, we ask the unsignalled future to resume our detached path when it is signalled, suspend ourselves, and return our future immediately. At some point our detached path will signal the future.
All this is great if you are on Microsoft's compiler, but what about the rest of us before C++ 1z? Luckily [http://olk.github.io/libs/fiber/doc/html/ Boost has a conditionally accepted library called Boost.Fiber] which was one of the C++ 11/14 libraries reviewed above. With a few caveats, it provides good feature parity with the proposed C++ 1z coroutines at the cost of having to type out the boilerplate by hand. Boost.Fiber provides a mirror image of the STL threading library, so:

|| `std::thread` || => || `fibers::fiber` ||
|| `std::this_thread` || => || `fibers::this_fiber` ||
|| `std::mutex` || => || `fibers::mutex` ||
|| `std::condition_variable` || => || `fibers::condition_variable` ||
|| `std::future<T>` || => || `fibers::future<T>` ||

Rewriting the above example to use Boost.Fiber and Boost.Thread instead:
{{{#!c++
boost::fibers::future<int> accumulate()
{
  boost::fibers::packaged_task<int()> t([]{
    int total=0;
    for(size_t n=0; n<10; n++)
    {
      // Adapt the Boost.Thread future returned by async_call() into a
      // Boost.Fiber future, so waiting suspends this fiber, not the thread
      boost::fibers::promise<int> p;
      boost::fibers::future<int> f(p.get_future());
      async_call().then([p=std::move(p)](boost::future<int> f) mutable {
        if(f.has_exception())
          p.set_exception(f.get_exception_ptr());
        else
          p.set_value(f.get());
      });
      total+=f.get();  // suspends this fiber until the continuation fires
    }
    return total;
  });
  boost::fibers::future<int> f(t.get_future());
  boost::fibers::fiber(std::move(t)).detach();  // run the task on a detached fiber
  return f;
}
}}}
As you can see, there is an unfortunate amount of extra boilerplate to convert between Boost.Thread futures and Boost.Fiber futures, plus more boilerplate to make `accumulate()` into a resumable function -- essentially one must boilerplate out `accumulate()` as if it were a kernel thread, complete with nested lambdas. Still, the above is feature equivalent to C++ 1z coroutines, but you have it now rather than years from now. (For reference, if you want to write a generator which yields values to fibers in Boost.Fiber, simply write to some shared variable and notify a `boost::fibers::condition_variable`, followed by a `boost::this_fiber::yield()`; writing to a shared variable without locking is safe because fibers are scheduled cooperatively.)

So after all that, you might be wondering what any of this has to do with:

* Threads.
* i/o.
* Callbacks, including `std::function`.

We'll take the last first. Traditionally, if you needed to issue a callback to some user supplied `std::function` or even a C function pointer, you left that invocation inline in your code if it was guaranteed lightweight, or pushed it onto some thread pool to be executed later if it was guaranteed threadsafe, and so on. With resumable functions/coroutines or Boost.Fiber, you have a new option: ''execute the callback sometime soon after I exit on this thread''. And that opens a number of rather neato design opportunities, perhaps illustrated by this legacy design pattern taken from proposed Boost.AFIO:

{{{#!c++
struct immediate_async_ops
{
  typedef std::shared_ptr<async_io_handle> rettype;
  typedef rettype retfuncttype();
  size_t reservation;
  std::vector<enqueued_task<retfuncttype>> toexecute;

  immediate_async_ops(size_t reserve) : reservation(reserve) { }
  // Enqueue a task whose future is fulfilled when this is destructed
  void enqueue(enqueued_task<retfuncttype> task)
  {
    if(toexecute.empty())
      toexecute.reserve(reservation);
    toexecute.push_back(std::move(task));
  }
  ~immediate_async_ops()
  {
    // Execute the deferred continuations, setting their futures
    for(auto &i: toexecute)
    {
      i();
    }
  }
private:
  immediate_async_ops(const immediate_async_ops &);
  immediate_async_ops &operator=(const immediate_async_ops &);
  immediate_async_ops(immediate_async_ops &&);
  immediate_async_ops &operator=(immediate_async_ops &&);
};
}}}

What this does is let you enqueue packaged tasks (here called enqueued tasks) into an `immediate_async_ops` accumulator. On destruction, it executes those stored tasks, setting their futures to the results of those tasks. What on earth might the use case be for this? AFIO needs to chain operations onto other operations, and if an operation is still pending one appends the continuation there, but if an operation has completed the continuation needs to be executed immediately. Unfortunately, executing continuations immediately in the core dispatch loop creates race conditions, so what AFIO does is create an `immediate_async_ops` instance at the very beginning of the call tree for any operation. Deep inside the engine, inside any mutexes or locks, it sends continuations which must be executed immediately to the `immediate_async_ops` instance. Once the operation is finished and the stack is unwinding, just before the operation API returns to user mode code, it destructs the `immediate_async_ops` instance and therefore dispatches any continuations scheduled there without any locks or mutexes in the way.

This pattern is of course exactly what coroutines/fibers give us -- a way of scheduling code to run at some point not now, but soon, on the same thread. As with AFIO's `immediate_async_ops`, such a pattern can dramatically simplify a code engine implementation, and if you find your code expending much effort on error handling in a locking threaded environment where the complexity of handling all the outcome paths is exploding, you should very strongly consider a coroutine based design instead.

Finally, what does making your code resumable ready have to do with i/o or threads? If you are not familiar with WinRT, which is Microsoft's latest programming platform: under WinRT ''nothing can block'', i.e. no synchronous APIs are available whatsoever. That of course renders most existing code bases impossible to port to WinRT, at least initially, but one interesting way to work around "nothing can block" is to write emulations of synchronous functions which dispatch into a coroutine scheduler instead. Your legacy code base is now 100% async, yet is written as if it never heard of async in its life. In other words, you write code which uses synchronous blocking functions without thinking or worrying about async, but the runtime executes it as the right ordering of asynchronous operations automagically.

What does WinRT have to do with C++? Well, C++ 1z should gain both coroutines and, perhaps not long thereafter, Networking which is really ASIO, and ASIO already supports coroutines via Boost.Coroutine and Boost.Fiber. So if you are doing socket i/o you can already do "nothing can block" in C++ 1z, or shortly after the 1z release. I'm hoping that AFIO will contribute asynchronous filesystem and file i/o, and it is expected that a medium term Boost.Thread rewrite should become resumable function friendly, so one could expect in the not too distant future that if you write exclusively using Boost facilities then your synchronously written C++ program could actually be entirely asynchronous in execution, just as on WinRT. That could potentially be huge, as C++ would suddenly become capable of Erlang type tasklet behaviour and design patterns, which is very exciting.

Obviously everything I have just said should be taken with a pinch of salt, as it all depends on WG21 decisions not yet made and a lot of Boost code not yet written. But I think this vision of the future is worth considering as you write your C++ 11/14 code today.