Monday, June 21, 2021

A Coroutine Can Be An Awaitable

This is part five of a five part series on coroutines in C++.

  1. Getting Past the Names
  2. Coroutines Look Like Factories
  3. co_await is the .then of Coroutines
  4. We Never Needed Stackful Coroutines
  5. A Coroutine Can Be An Awaitable

The relationship between awaitables and coroutines is a little bit complicated.

First, an awaitable is anything that a coroutine can "run after". So an awaitable could be an object like a mutex or an IO subsystem or an object representing an operation (I "awaited" my download object and ran later once the download was done).

I suppose you could say that an awaitable is an "event source" where the event is "the wait is over" - or, turning it around, that anything which fires an event makes a natural awaitable.

A coroutine is something that at least partly "happens later" - it's the thing that does the awaiting. (You need a coroutine to do the awaiting because you need something that is code to execute; the awaitable might just be an object - a noun, not a verb.)

Where things get interesting is that coroutines can be awaitables, because coroutines (at least ones that don't infinite loop) have endings, and when they end, that's something that you can "run after". The event is "the coroutine is over."

To make your coroutine awaitable you need to do two things:

  1. Give it a co_await operator so the compiler understands how to build an awaitable object that talks to it and
  2. Come up with a reason to wake the caller back up again later.
Lewis Baker's cppcoro task is a near-canonical version of this.
  • The tasks start suspended, so they run when you co_await them.
  • They use their final_suspend awaitable to resume the coroutine that awaited them (the one that kicked them off).
Thus tasks are awaitables, they are able to await (because they are coroutines), and they can be composed indefinitely.
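
Here's a stripped-down sketch of that shape - a void-returning task with none of cppcoro's polish (no result values, no exception propagation, no copy/move plumbing; the names are mine, not the library's):

#include <coroutine>

struct task {
    struct promise_type {
        std::coroutine_handle<> continuation;   // the coroutine that awaited us

        task get_return_object() {
            return task{std::coroutine_handle<promise_type>::from_promise(*this)};
        }
        std::suspend_always initial_suspend() { return {}; }   // start suspended

        struct final_awaiter {                                  // runs when we hit the end
            bool await_ready() noexcept { return false; }
            std::coroutine_handle<> await_suspend(
                    std::coroutine_handle<promise_type> h) noexcept {
                return h.promise().continuation;                // resume whoever awaited us
            }
            void await_resume() noexcept {}
        };
        final_awaiter final_suspend() noexcept { return {}; }

        void return_void() {}
        void unhandled_exception() {}                           // a real task would stash this
    };

    explicit task(std::coroutine_handle<promise_type> h) : handle(h) {}
    ~task() { if (handle) handle.destroy(); }

    // co_awaiting the task stashes the awaiter's handle and starts the task running.
    auto operator co_await() {
        struct awaiter {
            std::coroutine_handle<promise_type> handle;
            bool await_ready() { return false; }
            std::coroutine_handle<> await_suspend(std::coroutine_handle<> parent) {
                handle.promise().continuation = parent;         // remember who to wake up
                return handle;                                  // transfer into the task
            }
            void await_resume() {}
        };
        return awaiter{handle};
    }

    std::coroutine_handle<promise_type> handle;
};

This sketch assumes the task is always co_awaited exactly once before it is destroyed; a real library enforces that properly.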

While any coroutine can be an awaitable, it doesn't have to be. I built a "fire and forget" async coroutine that cleans itself up when it runs off the end - it's meant to be a top-level coroutine that can be run from synchronous code, so it doesn't try to use coroutine machinery to signal back to its caller. The actual C++ code in the coroutine has to decide how to register its final results with the host app, maybe by calling some other non-coroutine execution mechanism.

Sunday, June 20, 2021

We Never Needed Stackful Coroutines

This is part four of a five part series on coroutines in C++.

  1. Getting Past the Names
  2. Coroutines Look Like Factories
  3. co_await is the .then of Coroutines
  4. We Never Needed Stackful Coroutines
  5. A Coroutine Can Be An Awaitable

You've probably seen libraries that provide a "cooperative threading" API, e.g. threads of execution that yield to each other at specific times, rather than when the kernel decides to: Windows Fibers, Boost Fibers, ucontext on Linux. I've been programming for a very long time, so I have written code for the classic Mac OS's cooperative "Thread Manager", back before cooperative multi-tasking was cool - at the time we were really annoyed that there was no pre-emption and that one bad worker thread would hang your whole machine. We were truly savages back then.

The gag for all of these "fiber" systems is the same - you have these 'threads' that only yield the CPU to another one via a yield API call - the yield call may name its successor or just pick any runnable non-running thread. Each thread has its own stack, thus our function (and its callstack and all state) is suspended mid-execution at the yield call.

Sounds a bit like coroutines, right?

There are two flavors of coroutines - stackful, and stackless. Fibers are stackful coroutines - the whole stack is saved to suspend the routine, allowing us to suspend at any point in execution anywhere.

C++20 coroutines are stackless, which means you can only suspend inside a coroutine itself - suspension depends on the compiler transforms of the coroutine (into a state machine object) to achieve the suspend.

When I first heard this, I thought: "well, performance is good but I wonder if I'll miss having full stack saves."

As it turns out, we don't need stack saves - here's why.

First, a coroutine is a function that has at least one potential suspend point. If there are zero potential suspend points, it's just a function, like from the 70s with the weed and the K&R syntax, and to state the obvious, we don't have to worry about a plain old function trying to suspend on us.

So if a coroutine calls a plain old function, the function can't suspend, so not being able to save the whole stack isn't important. We only need to ask: "what happens if a coroutine calls a coroutine, and the child coroutine suspends?"

Well, we don't really "call" a child coroutine - we co_await it! And co_awaiting Just Works™.

  • When our parent coroutine co_awaits the child coroutine, the child (via the awaitable mediating this transfer of execution) gets a handle to the parent coroutine. We can think of this as the "return address" of a stack-based function call, and the child coroutine stashes it somewhere in its promise storage.
  • When the child coroutine co_awaits something itself (e.g. on I/O), this works as expected - we don't return to the parent coroutine because it's already suspended.
  • When the child coroutine finishes for real, its final suspend awaitable can return the parent coroutine handle (that we stashed) to transfer control back to the parent - this is the equivalent of using a RET opcode to pop the stack and jump to the return address.
In other words, our chain of coroutines forms a synthetic, stack-like structure - the coroutine frames form a linked list (from children to parents) via the stashed handles, and each frame stores the state for that function.
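
If you have cppcoro handy, you can watch that linked list form with nothing but nesting (this sketch assumes cppcoro's task and sync_wait, which is how you bridge back to synchronous code):

#include <cppcoro/task.hpp>
#include <cppcoro/sync_wait.hpp>

cppcoro::task<int> leaf()   { co_return 1; }
cppcoro::task<int> middle() { co_return 1 + co_await leaf(); }   // leaf's promise stashes middle's handle
cppcoro::task<int> root()   { co_return 1 + co_await middle(); } // middle's promise stashes root's handle

int main()
{
    // Each co_await is the "CALL" (stash the return address, jump into the child);
    // each final suspend is the "RET" (resume the stashed handle). sync_wait just
    // blocks the calling thread until root's chain unwinds.
    return cppcoro::sync_wait(root());   // evaluates to 3
}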



Saturday, June 19, 2021

co_await is the .then of Coroutines

This is part three of a five part series on coroutines in C++.

  1. Getting Past the Names
  2. Coroutines Look Like Factories
  3. co_await is the .then of Coroutines
  4. We Never Needed Stackful Coroutines
  5. A Coroutine Can Be An Awaitable

One more intuition about coroutines: the co_await operator is to coroutines as .then (or some other continuation-queueing routine) is to a callback/continuation/closure system.

In a continuation-based system, the fundamental thing we want to express is "happens after". We don't care when our code runs as long as it is after the stuff that must run before completes, and we'd rather not block or spin a thread to ensure that. Hence:

void dump_file(string path)
{
    // hypothetical future type with a .then hook, and a hypothetical async file API
    future<string> my_data = file.async_load(path);
    my_data.then([](future<string> ready) {
        printf("File contained: %s\n", ready.get().c_str());
    });
}

The idea is that our file dumper will begin the async loading process and immediately return to the caller once we're IO bound. At some time later on some mystery thread (that's a separate rant) the IO will be done and our future will contain real data. At that point, our lambda runs and can use the data.

The ".then" API ensures that the file printing (which requires the data) happens after the file I/O.

With coroutines, co_await provides the same service.

async dump_file(string path)
{
    string my_data = co_await file.async_load(path);
    printf("File contained: %s\n", my_data.c_str());
}

Just like before, the beginning of dump_file runs on the caller's thread and kicks off the file load. Once we are I/O bound, we exit all the way back to the caller; some time later the rest of our coroutine will run (with access to the data) on a thread provided by the IO system.

Once we realize that co_await is the .then of coroutines, we can see that anything in our system with an async continuation callback could be an awaitable. A few possibilities:

  • APIs that move execution to different thread pools ("run on X thread")
  • Non-blocking I/O APIs that call a continuation once I/O is complete.
  • Objects that load their internals asynchronously and run a continuation once they are fully loaded.
  • Serial queues that guarantee only one continuation runs at a time to provide non-blocking async "mutex"-like behavior.
Given an API that takes a continuation, we can transform it into an awaitable: wrap the API in an awaitable whose continuation "kicks" the suspended coroutine - that is, resumes it.
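
A minimal sketch of that wrapping, assuming a hypothetical callback-based API called async_load (all the names here are made up for illustration):

#include <coroutine>
#include <functional>
#include <string>
#include <utility>

// The hypothetical legacy API: calls `done` on some worker thread when the load finishes.
void async_load(const std::string& path, std::function<void(std::string)> done);

struct load_awaitable {
    std::string path;
    std::string result;

    bool await_ready() { return false; }                  // always go async
    void await_suspend(std::coroutine_handle<> h) {
        async_load(path, [this, h](std::string data) {
            result = std::move(data);
            h.resume();                                    // "kick" the suspended coroutine
        });
    }
    std::string await_resume() { return std::move(result); }
};

// Inside some coroutine:  string text = co_await load_awaitable{"~/stuff.txt"};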

Awaitables are also one of two places where you get to find out who any given coroutine is - await_suspend gives you the coroutine handle of the client coroutine awaiting on you, which means you can save it, put it in a FIFO, or stash it anywhere else for resumption later. You also get to return a coroutine handle to switch execution to any other code path you want.
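
For instance, here's a sketch of a little "gate" that does nothing but collect suspended coroutines in a FIFO for someone to resume later (not thread-safe - it's only here to show where the handle shows up):

#include <coroutine>
#include <deque>

struct gate {
    std::deque<std::coroutine_handle<>> waiters;

    auto wait() {
        struct awaiter {
            gate& g;
            bool await_ready() { return false; }
            void await_suspend(std::coroutine_handle<> h) { g.waiters.push_back(h); }  // just stash it
            void await_resume() {}
        };
        return awaiter{*this};
    }

    // Whoever owns the gate lets the parked coroutines through, in order.
    void release_one() {
        if (!waiters.empty()) {
            auto h = waiters.front();
            waiters.pop_front();
            h.resume();
        }
    }
};

// Inside some coroutine:  co_await my_gate.wait();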

For example, a common pattern for a coroutine that computes something is:
  1. Require the client to co_await the coroutine to begin execution - the awaitable that does this saves the client's coroutine handle.
  2. Use a final_suspend awaitable to resume the client coroutine we stashed.
(This is what cppcoro's task does.)

The other place we can get a coroutine handle is from our promise, which can use coroutine_handle::from_promise to find the coroutine it is attached to and hand that handle to the return object, connecting the client-facing handle to its coroutine.
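
A minimal sketch of that hookup (the names are mine; here the promise grabs its handle while building the return object):

#include <coroutine>

struct job {
    struct promise_type {
        job get_return_object() {
            // from_promise finds the coroutine this promise belongs to...
            return job{std::coroutine_handle<promise_type>::from_promise(*this)};
        }
        std::suspend_always initial_suspend() { return {}; }
        std::suspend_always final_suspend() noexcept { return {}; }
        void return_void() {}
        void unhandled_exception() {}
    };

    // ...and the client-facing handle keeps it: client code can resume() or destroy() it.
    std::coroutine_handle<promise_type> handle;
};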

Friday, June 18, 2021

Coroutines Look Like Factories

This is part two of a five part series on coroutines in C++.

  1. Getting Past the Names
  2. Coroutines Look Like Factories
  3. co_await is the .then of Coroutines
  4. We Never Needed Stackful Coroutines
  5. A Coroutine Can Be An Awaitable

In my previous post I complained about the naming of the various parts of coroutines - the language design is great, but I find myself having to squint at the parts sometimes.

Before proceeding, a basic insight about how coroutines work in C++.

You write a coroutine like a function (or method or lambda), but the world uses it like a factory that returns a handle to your newly created coroutine.

One of the simplest coroutine types I was able to write was an immediately-running, immediately-returning coroutine that returns nothing to the caller - something like this:

#include <coroutine>

class async {
public:
    struct control_block {
        async get_return_object() { return {}; }                          // the (useless) handle the caller gets
        auto initial_suspend() { return std::suspend_never(); }           // start running immediately
        auto final_suspend() noexcept { return std::suspend_never(); }    // run off the end and clean up
        void return_void() {}
        void unhandled_exception() {}
    };
    using promise_type = control_block;
};

The return type "async" returns a really useless handle to the client. It's useless because the coroutine starts on its own and ends on its own - it's "fire and forget".  The idea is to let you do stuff like this:

async fetch_file(string url, string path)
{
    string raw_data = co_await http::download_url(url);
    co_await disk::write_file_to_path(path, raw_data);
}

In this example, our coroutine suspends on IO twice, first to get data from the internet, then to write it to a disk, and then it's done.  Client code can do this:

void get_files(vector<pair<string,string>> stuff)
{
    for(auto spec : stuff)
    {
        fetch_file(spec.first, spec.second);
    }
}

To the client code, fetch_file is a "factory" that will create one coroutine for each file we want to get; each coroutine will start executing synchronously on the thread that called get_files, do enough work to start downloading, and then return.  We'll queue a bunch of network ops in a row.

How does the coroutine finish? The IO systems will resume our coroutine once the data is ready. What thread is executing this code? I have no idea - that's up to the IO system's design. But it will happen after the call to fetch_file has already returned.
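
To make that concrete, here's a stand-in for "the IO system" - a hypothetical awaitable (not what http::download_url would really hand back) that parks the coroutine and resumes it later from a worker thread of its own choosing:

#include <coroutine>
#include <thread>

struct pretend_io {
    bool await_ready() { return false; }
    void await_suspend(std::coroutine_handle<> h) {
        std::thread([h] {
            // ...pretend the download or the disk write happens here...
            h.resume();        // the rest of the coroutine runs on this worker thread
        }).detach();
    }
    void await_resume() {}
};

A real IO system would use a completion queue rather than a thread per operation, but the shape is the same: it holds the handle, and it picks the thread.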

Is this fire-and-forget pattern terrible? Well, first, yes - I would say an API that performs an operation with no way to see what happened is bad.

But if legacy code is callback based, this pattern can be quite useful - code that launches callbacks typically puts the finalization of the operation in the callback and does nothing after launching it; the function is fire-and-forget because the end of the coroutine (or callback) handles the results of the operation.

Thursday, June 17, 2021

C++ Coroutines - Getting Past the Names

This is part one of a five part series on coroutines in C++.

  1. Getting Past the Names
  2. Coroutines Look Like Factories
  3. co_await is the .then of Coroutines
  4. We Never Needed Stackful Coroutines
  5. A Coroutine Can Be An Awaitable

I really like the C++20 coroutine language features - coroutines are a great way to write asynchronous code, and the language facilities give us a lot of power, with client code that should be very manageable to write.

My one gripe is with the naming of a few of the various parts of the coroutine control structures. This shouldn't matter that much because only library writers have to care, but right now we're in the library wild west (which is fine by me - I'm old enough to be very skeptical of "the standard library should have everything" and happy to roll my own) so we can't avoid them.

A coroutine's overall behavior is mediated by its declared return type, which will typically be a struct or class from library code that looks something like this:

template<typename T>
struct task {
  struct task_promise_thingie {
    auto initial_suspend() { return std::suspend_always(); }
    ...
  };
  using promise_type = task_promise_thingie;
};

The inner struct (it could be separate, but the nested structure expresses the relationship to client code in a reasonable way, I think) is called the "promise type" and ... man, do I hate that name. Partly because I consider the whole promise-future async model from C++11 to be bad, and partly because the "promise type" does a lot of things, of which the promise might be the least important.

I'm not sure what I would have called the promise type, but I'd lean toward "traits" or "control block", because the promise type does a few key things for you:

  1. It defines the beginning and end of the lifecycle of your coroutine. Does the coroutine immediately start when created, or does it immediately suspend until someone kicks it? Does the coroutine run off the end and die on its own, or wait until someone kills it off?
  2. It defines the flow control on the CPU at the beginning and end of the coroutine. Because the beginning and end of a coroutine's life are controlled by awaitables, you can launch an arbitrary coroutine at either point! For example, if the coroutine delivers its output to another coroutine, you can, at final suspend, resume that parent coroutine on whatever thread you ended on.
  3. It defines the "handle" type that client code will see when creating the coroutine.
This brings me to my second rant - the outer handle that client code gets when "calling" the coroutine (which is really constructing it) is called the "return type". This is harder to yell about because it is...a return type...but once again, I think this isn't the most important thing.

I would say the most important thing about the coroutine's return type is that it's the only access to the coroutine that client code gets. So if the client code that calls/builds the coroutine is going to have any further interaction with it, the return type is the "handle".  Left to my own byzantine naming instincts, I might have called it a "client handle" or something.