
Programming with lightweight asynchronous messages: some basic patterns

This post introduces some basic asynchronous message passing patterns that I’ve found useful while implementing real-time audio software. The patterns apply to in-process, shared-memory message passing. For the most part this post is intentionally abstract: I won’t talk too much about concrete implementations. I do assume that the implementation language is something like C/C++ and that the message queues are implemented using lock-free ring buffers or similar. For motivation for this style of programming, see my earlier post.

This post isn’t the whole story. Please keep in mind that there are various ways to combine the patterns, and you can layer additional schemes on top of them.

Meta

I wanted to enumerate these basic patterns to give me some clarity about this approach to writing concurrent software. While writing this post I started to look for a mathematical formalism that could capture the semantics of this kind of asynchronous message exchange. I looked at process algebras such as the actor model and pi-calculus but they didn’t seem to model asynchronous message exchange, distributed state and atomic state transitions the way I have done it here. To me the approach presented here seems similar to what you might use to model bureaucratic processes involving asynchronous exchange of paper forms and letters.  If you know of an existing formalism that might be used to model these patterns please let me know in the comments, I’d love to hear about it. Thanks!

Introduction

Communication between execution contexts

This post focuses on patterns of asynchronous message exchange between a pair of execution contexts.

I use the term “execution context” instead of “thread” because I’m thinking not only of threads but also of interrupt handlers, signal handlers, timer callbacks, and audio callbacks. An execution context might also be the methods of an object that only operates in one thread at a time, but may migrate between threads during the course of program execution.

Some execution contexts impose certain requirements, such as not blocking (see my earlier post for a discussion of what not to do in audio callbacks).

I assume that the contexts share the same address space, so it is possible to pass pointers to objects between contexts.

I assume that processing a message is atomic and non-re-entrant with respect to the receiving execution context.

Passing messages via queues

When I talk about asynchronous message passing I have in mind an implementation where messages are communicated between threads using queues. The sender and receiver operate asynchronously. The sender doesn’t usually block or wait when sending a message. The receiver chooses when to collect and interpret messages. The queues are light-weight data structures often implemented using lock-free ring buffers. Writing to and reading from a queue are non-blocking operations. The diagram below illustrates the scheme:

In all of the diagrams in this post the left hand side is one context, the right hand side is another context (sometimes called the sender and receiver, or source and target). The half-headed arrow indicates that a message is passed asynchronously in the direction of the arrow by writing a message into the queue from one context and reading the message from the queue in the other context at some later time.

From the perspective of each of the two contexts, code for the above diagram might look something like this:

// execution context A:
{
    // post a message:
    q.write( m );
}

// ... some time later
// execution context B:
{
    // receive and interpret pending messages
    while( !q.empty() ){ // in this case we don't block if the queue is empty
        Message m = q.read();
        interpret_message( m );
    }
    // ... continue other processing
}

I’ve shown messages being passed by value (i.e. copied into a ring buffer). For small messages and single-reader single-writer queues this usually works out well. I mention some other variations later.

To send a message in the opposite direction (from context B to context A) a second queue is needed:

For the rest of this post I’m not going to explicitly notate the contexts and queues: when you see a half-headed arrow it means that the message is being passed asynchronously from one context to another via a non-blocking queue.

Program organisation: a network of message queues and execution contexts

There are different ways to connect the various execution contexts and objects of a program with message queues. Some options, which can be used in combination, are:

  • Fixed topology of contexts and queues. A static network of queues is used to pass messages between contexts. In the extreme, contexts may “own” a queue for receiving messages and run an event loop — like the post message/event mechanisms of Win32 or Qt.
  • Objects maintain their own receive queues. Objects can migrate between contexts and a message can be sent to the object irrespective of what context it is executing in. In this situation queues are a bit like ports in the ROOM formalism.
  • Message queues or “channels” as first-class objects. E.g. a queue for returning results might be passed via a message to a worker thread — when the worker thread completes it writes the result to the queue.

A few concrete use cases are:

  • A pair of queues are used to communicate messages and results between a main (GUI) controller thread and an audio callback context. This is the primary communication architecture used in SuperCollider. It’s described in my book chapter here.
  • Asynchronous disk i/o commands are sent to an i/o scheduler thread via a queue, each command specifies where the result of the i/o operation should be enqueued. This is somewhat similar to I/O Completion Ports.
  • A thread receives audio data from a network socket and enqueues a packet record on a queue, the audio thread dequeues packet records and decodes them.

Implementation mechanisms (in brief)

There are a number of implementation details that I won’t go into here, including message implementation strategies, queue implementation, and delivery mechanisms. Maybe I’ll write about these in more detail in the future. For now I’ll give a brief sketch:

Message implementations

A message queue transports messages from one execution context to another. A message is a bundle of parameters along with some kind of selector that specifies how the message should be interpreted or executed. In principle there might be only one type of message, in which case only the parameters need to be enqueued. However, a message queue often transports a heterogeneous set of messages, so usually some kind of selector is needed: for example, something like the following struct, with an enumeration used to indicate the message type (this is similar to a Win32 MSG structure):


enum MessageCommandCode { DO_ACTION_1, DO_ACTION_2, DO_ACTION_3 };

struct Message{
    enum MessageCommandCode commandCode; // receiver interprets the command code
    int messageData1;
    bool messageData2;
};

Or a function pointer that is executed when the message is interpreted (this is the way SuperCollider does it for example, see my article about SuperCollider internals for more info):

struct Message; // forward declaration for the function pointer type
// one possible signature for the command function pointer:
typedef void (*CommandFunctionPtr)( struct Message *msg, void *state );

struct Message{
    CommandFunctionPtr commandFunc; // receiver calls the command function, passing the message and context-specific state
    int messageData1;
    bool messageData2;
};

You could do something more elaborate with C++11 lambdas, functors etc. But keep in mind that in a lock-free scenario you want to keep things simple and avoid overheads or allocations when sending messages.

An important thing to notice is that an enumerated command code can be interpreted in multiple ways (for example, by different states in a state machine) whereas a function pointer can only be executed in one way (the latter case is more like code blocks in Apple’s Grand Central Dispatch). There are pros and cons to each approach.

For efficiency and to avoid blocking I recommend that the message is moved into the queue when sending, and moved out of the queue when receiving – data is transferred by value. An optimization is to initialize/construct the message in-place in the queue buffer, rather than copying it there. Similarly, messages can be interpreted/executed directly from queue storage. The queue may support variable-length messages.

If you want your message data to be more expressive you could use a union or object with different sub-types for each message type.
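For example, a tagged union along these lines; the command codes and payload layouts here are invented for illustration:

```cpp
// A sketch of a tagged-union message, so each message type can carry
// its own parameters. Command codes and payloads are illustrative.
enum MessageCommandCode { SET_FREQUENCY, SET_GAIN, STOP_VOICE };

struct Message {
    MessageCommandCode commandCode; // selects the active union member
    union {
        struct { int voiceId; float hz; } setFrequency;
        struct { int voiceId; float gain; } setGain;
        struct { int voiceId; } stopVoice;
    } data;
};

// the receiver switches on the command code to pick the right member:
inline float interpretGain(const Message& m) {
    return (m.commandCode == SET_GAIN) ? m.data.setGain.gain : 0.0f;
}
```

The union keeps every message the same size, which suits fixed-size ring buffer slots.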

Messages could be implemented as polymorphic Command objects (cf. Command Design Pattern), although allocating and passing polymorphic objects in queues might get complicated. Depending on the number of commands involved, introducing polymorphic messages using virtual functions might just be overkill.

Messages might be timestamped and stored in a priority queue in the target context for future execution. Or messages might represent prioritized work (such as disk i/o) to be performed by the target context in priority order.

Ultimately there are many choices here. There are trade-offs between simplicity of implementation, ease of extension, and the complexity and readability of both the client code and the message passing infrastructure code.

See also: Command Pattern, Command queue, Message queue, message/command implementation in SuperCollider.

Queue implementations

This post focuses on situations where there is only ever a single-reader and single-writer to any queue. Partly this is because lock-free single-reader single-writer queues are relatively simple and efficient, and partly because they’re often all you need. That said, it is sometimes desirable or necessary to introduce multiple reader and/or multiple writer mechanisms: for example when a server thread has multiple clients sending it requests, or a single queue is used to distribute work to multiple worker threads. These cases can be implemented by using a lock at one end of a single-reader single-writer queue, or by using node-based lock-free queues such as the one by Michael and Scott.
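As a reference point, a minimal single-reader single-writer ring buffer might be sketched as follows. This is a simplified illustration (power-of-two capacity, one slot kept empty, messages copied by value), not a production implementation:

```cpp
#include <atomic>
#include <cstddef>

// Minimal lock-free single-producer single-consumer ring buffer.
// N must be a power of two; one slot stays empty so that full and
// empty states are distinguishable.
template <typename T, std::size_t N>
class SpscQueue {
    T buffer_[N];
    std::atomic<std::size_t> writeIndex_{0}; // advanced only by producer
    std::atomic<std::size_t> readIndex_{0};  // advanced only by consumer
public:
    bool write(const T& item) { // producer context only; never blocks
        std::size_t w = writeIndex_.load(std::memory_order_relaxed);
        std::size_t next = (w + 1) & (N - 1);
        if (next == readIndex_.load(std::memory_order_acquire))
            return false; // queue full
        buffer_[w] = item;
        writeIndex_.store(next, std::memory_order_release); // publish
        return true;
    }
    bool read(T& item) { // consumer context only; never blocks
        std::size_t r = readIndex_.load(std::memory_order_relaxed);
        if (r == writeIndex_.load(std::memory_order_acquire))
            return false; // queue empty
        item = buffer_[r];
        readIndex_.store((r + 1) & (N - 1), std::memory_order_release);
        return true;
    }
    bool empty() const {
        return readIndex_.load(std::memory_order_acquire) ==
               writeIndex_.load(std::memory_order_acquire);
    }
};
```

The acquire/release pairs ensure the message payload is visible to the reader before the index update that publishes it.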

Delivery mechanisms

The patterns in this post do not require or imply synchronous rendezvous or blocking/waiting when sending or receiving messages. A context might choose to block waiting for messages to arrive, or just as likely, it could poll the queue  for new messages (without blocking) at certain synchronization points. Alternatively, some other mechanism might be used to ensure that the messages get processed. Some possible mechanisms are:

  • An audio callback polls for messages at the start of each callback.
  • A worker thread blocks waiting for an event/semaphore that is signaled when new messages are available.
  • A GUI thread processes messages when it is notified that new messages are available (via a window system event)
  • A GUI thread runs a periodic timer callback to update the GUI, the callback polls for new messages from another thread (without blocking)
  • A consumer issues a get message whenever it finishes using a datum and processes received response messages whenever its local datum pool is exhausted.

Periodic polling is often criticized, and for good reason. Keep in mind, though, that in the cases above that employ polling there is already a periodic task running for other reasons, for example to generate audio or update the GUI.
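To make the first mechanism above concrete, here is a sketch of a callback that drains its queue before rendering. For brevity a plain std::deque stands in for the lock-free queue, and the Message/Synth types are invented:

```cpp
#include <deque>

// Sketch of "poll for messages at the start of each callback".
// std::deque stands in for a lock-free queue; nothing here waits.
struct Message { float gain; };

struct Synth {
    float gain = 1.0f;
    void interpretMessage(const Message& m) { gain = m.gain; }

    void audioCallback(float* output, int numFrames,
                       std::deque<Message>& q) {
        // drain pending messages first, without blocking
        while (!q.empty()) {
            interpretMessage(q.front());
            q.pop_front();
        }
        // then render audio using the (possibly updated) state
        for (int i = 0; i < numFrames; ++i)
            output[i] = gain * 0.0f; // placeholder DSP: silence
    }
};
```

Parameter changes take effect at the next callback boundary, which also gives a natural point of atomicity: all messages drained before rendering apply to the whole buffer.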

Now, on to the patterns…

The Patterns

1. Simple state management patterns

1.1 Mutate target state

One simple use for a command queue is to send messages that change (mutate) some context-local state in the receiver. In the example below, a message is sent to change the colour of node x to red:

Note that the message is asynchronous. There is no locking involved because the data structure x is local to the right hand context. The x.setColor(red) code is executed in the right hand context. For comparison, the equivalent code in a lock based system might look like this:

// alternative implementation using locks
Node x;
Mutex m; // protects x

// ... in context A:
    m.lock();
    x.setColor(red);
    m.unlock();

// ... in context B:
    m.lock();
    // ... all operations on the structure and x must be protected by the mutex ...
    m.unlock();

One way to think about it is that in a lock-based system, the data comes to the code (context A gets access to object x), whereas in the message passing case the function call and parameters are sent to the data structure x in the receiving context (the function executes on context B).  If context A and B are running on separate CPUs you can imagine different cache transfer patterns if x is accessed from context A, compared to passing a message (containing a selector, a reference to x, and the colour red) to context B where x is already in the CPU’s cache.
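The asynchronous version of the same operation might be sketched like this, with std::deque standing in for the lock-free queue (Node, Color and the message type are invented for illustration):

```cpp
#include <deque>

// Pattern 1.1 sketch: context A posts an operation plus parameters;
// context B executes x.setColor(red) locally, so x needs no lock.
enum Color { RED, GREEN, BLUE };

struct Node {
    Color color = BLUE;
    void setColor(Color c) { color = c; }
};

struct SetColorMessage { Node* target; Color color; };

// context A: post the message (target pointer + parameter by value)
void postSetColor(std::deque<SetColorMessage>& q, Node& x, Color c) {
    q.push_back(SetColorMessage{&x, c});
}

// context B: interpret pending messages in the receiving context
void processMessages(std::deque<SetColorMessage>& q) {
    while (!q.empty()) {
        SetColorMessage m = q.front(); q.pop_front();
        m.target->setColor(m.color); // executes in context B
    }
}
```

Note that between posting and processing the node still has its old colour: the mutation is asynchronous.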

1.2 Get state vs Observer / state change notifications

In principle you could implement an asynchronous state request (“get”) message to asynchronously retrieve the state. This is a “pull” model:

More likely you’d use some form of asynchronous Observer pattern where changes to x in the right hand context trigger change notification messages to be sent to the left hand context. This is a “push” model:

In the classical Observer pattern the receiver explicitly attaches/subscribes and detaches/unsubscribes from receiving notifications. For state that is constantly in flux (such as audio levels for VU meters) you might always post periodic notifications rather than requiring explicit subscription.

1.3 Synchronised state / cached state

Often, contexts operate in lock step, in the sense that a client knows (or categorically assumes) the state of the remote context. In this situation there is no need to explicitly request server state.

If there is a master/slave relationship between contexts/states the master can keep a local copy of the slave state — when the master needs to know the state of a slave it can synchronously query the local copy rather than sending an asynchronous message to the slave. Such mechanisms are akin to using a thread-local proxy cache with write-through semantics.
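A write-through cache of slave state might look something like this; the tempo example, types and the deque stand-in queue are invented for illustration:

```cpp
#include <deque>

// Pattern 1.3 sketch: the master keeps a local mirror of slave state,
// updates it synchronously, and posts the same change to the slave.
// Queries are answered locally, with no asynchronous round trip.
struct SetTempoMessage { double bpm; };

struct Master {
    double cachedTempo = 120.0;          // local mirror of slave state
    std::deque<SetTempoMessage> toSlave; // stand-in for the queue

    void setTempo(double bpm) {
        cachedTempo = bpm;                       // write through locally...
        toSlave.push_back(SetTempoMessage{bpm}); // ...and notify the slave
    }
    double tempo() const { return cachedTempo; } // synchronous query
};
```

This works because only the master mutates this piece of state; the cached copy can never be stale.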

2. Generalized object transfer patterns

We assume shared memory, so transferring objects from one context to another requires only that an object pointer be passed in the message.

The generalized object transfer patterns below (Link, Unlink and Swap) deal only with transferring the visibility/availability of objects into or out of a context. They don’t say anything about the semantics of the transfer. For example they say nothing about whether the source context still references, manages, or owns the object after it has been transferred. More on that in a minute. First the patterns:

2.1 Link

Move an object from source to target. Install in target data structure.

 

2.2 Unlink

Remove an object from a target data structure and return to source.

2.3 Swap/exchange

Replace an object in the target context with a new object. Return the old one.

A new object is sent and installed in the receiver and the old object is returned.  This mechanism can be used to implement atomic state update in O(1) time, even when the object is large or complex (such as a whole signal flow graph). It is related to using a double buffer or pointer-flip to quickly switch states.
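A sketch of the swap pattern, again with std::deque standing in for the two queues and an invented Graph type. Exchanging pointers makes the switch O(1) no matter how large the object is:

```cpp
#include <deque>

// Pattern 2.3 sketch: the source sends a new object; the target
// installs it and returns the old one for disposal.
struct Graph { int nodeCount; };

struct Receiver {
    Graph* current = nullptr;
    Graph* swap(Graph* newGraph) { // runs in the receiving context
        Graph* old = current;
        current = newGraph;
        return old;
    }
};

struct Exchange {
    std::deque<Graph*> toReceiver;   // new graphs travel this way
    std::deque<Graph*> fromReceiver; // old graphs return for deletion
};

// receiving context: install any pending graph, return the old one
void receiverProcess(Receiver& r, Exchange& x) {
    while (!x.toReceiver.empty()) {
        Graph* g = x.toReceiver.front(); x.toReceiver.pop_front();
        x.fromReceiver.push_back(r.swap(g));
    }
}
```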

Additional semantics for object transfer

It is often useful to overlay additional semantics on these object transfer operations. Potential semantics for object transfer include:

  • Transferring between contexts transfers ownership. Objects are only ever accessed from one context at a time. The context that sees the object owns it and has exclusive access to it. Such objects need not be thread-safe.
  • Transfer implies transfer of write access. Objects are only ever mutable in one context. Objects support single-writer multiple-reader semantics.
  • Transfer implies “publication” of an immutable object. Objects are only mutable prior to publication to other contexts. Const methods need to be thread-safe.
  • Transfer implies “publication” of a single-writer (possibly multiple-reader) object. Objects are only ever mutable in the source context. Objects support single-writer multiple-reader semantics.
  • Transfer makes a thread-safe object visible to another context. Thread-safe objects are “Monitors” or multiple-writer safe. Their state and operations are protected by a mutex or other mechanism. They can be used concurrently from multiple contexts.

Semantics that enforce immutability, single-context access, or single-writer access can be useful because they make it possible to avoid locks and blocking.

Note that things can get more interesting if you want to support these semantics and also allow the receiver to be destroyed prior to destroying the queue.

3. Object transfer patterns: allocation and deallocation in source context

In non-garbage-collected languages such as C and C++ you need to manage object lifetimes by explicitly allocating and deleting objects. Often this is handled by having strong ownership semantics: a specific object or process is always responsible for allocating and/or destroying particular objects.

In some contexts (such as a real-time audio callback) it is not appropriate to invoke the memory allocator at all, therefore allocation must be performed in another context.

The following allocation/deallocation pattern deals with these scenarios by allocating the new object in the source context when performing a link operation, and deallocating the result in the source context after performing an unlink operation.

3.1 Allocate in source context, link in target

3.2 Unlink in target, return to source context for destruction

This pattern is useful when:

  • Only the left hand side context can allocate memory (due to non-blocking requirements in the right hand context), AND/OR
  • Allocating the object is time consuming or unbounded in time and the right hand context has strict time constraints.

The above example shows literal allocation and destruction of the object, but other variations are possible including:

  • The object is allocated from and released to a pool that is local to the source context.
  • Instead of allocating/destroying, a reference count is incremented/decremented in the source context on behalf of the destination. This means that the destination context always owns a reference count to the object, but it ensures that the final destruction of the object is always performed in the source context where it is safe to call the memory allocator or access other destruction-time resources.
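The reference-counting variation in the last bullet might be sketched like this; the Shared type and function names are invented:

```cpp
#include <atomic>

// Sketch: the source adds a reference before linking the object into
// the target context, and releases it after the unlink message comes
// back. The destination never touches the allocator.
struct Shared {
    std::atomic<int> refCount{1}; // starts owned by the source
};

// source context, before sending the link message:
inline void acquireForDestination(Shared* s) {
    s->refCount.fetch_add(1, std::memory_order_relaxed);
}

// source context, after the unlink message returns the object:
inline void releaseForDestination(Shared* s) {
    if (s->refCount.fetch_sub(1, std::memory_order_acq_rel) == 1)
        delete s; // safe here: this context may call the allocator
}
```

Because both the increment and the decrement happen in the source context, the final `delete` can never occur in a context where allocation is forbidden.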

4. Alternative allocation patterns

4.1 Context local storage/allocator/pool

Create new object and install in target context using destination-context-local storage:

This pattern is useful when objects are always (or mostly) used locally in the receiving context, and allocation and deallocation can be implemented efficiently (e.g. using a simple freelist or O(1) allocator). There may be cache advantages to keeping objects local to a context. This pattern may also result in less complex memory management code in the receiving context if the objects are allocated and/or deallocated as the result of a number of different events, messages or state transitions.

4.2 Closed-loop object reuse

Pre-allocating all needed resources is a common strategy for real-time memory management.  Streaming systems usually have a known maximum number of buffers in-flight for a single stream. In these situations closed-loop circular buffer exchange can be used. A fixed number of objects is allocated and then cycled between contexts:
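A sketch of the closed-loop scheme, with std::deque standing in for the two queues and an invented Buffer type. All allocation happens up front; afterwards buffers only cycle between the two queues:

```cpp
#include <deque>

// Closed-loop reuse sketch: a fixed set of buffers is pre-allocated;
// the filler pops free buffers, fills them, and sends them to the
// consumer, which returns them when done. No allocation after startup.
struct Buffer { float samples[256]; };

struct Loop {
    std::deque<Buffer*> freeBuffers; // consumer -> filler direction
    std::deque<Buffer*> fullBuffers; // filler -> consumer direction

    explicit Loop(int count) {
        for (int i = 0; i < count; ++i)
            freeBuffers.push_back(new Buffer); // pre-allocate everything
    }
    ~Loop() {
        for (Buffer* b : freeBuffers) delete b;
        for (Buffer* b : fullBuffers) delete b;
    }
};
```

If the filler ever finds the free queue empty, the fixed buffer count has been exceeded, which is a tuning problem rather than a reason to allocate.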

Conclusion

That’s it for the basic patterns.

As I said at the outset, this isn’t the whole picture. There are more involved patterns such as prioritized work lists, asynchronous state mirroring, and sub-object reference counting. You can implement transactions by combining messages so that a set of state transitions is always executed atomically with respect to the receiver. You might want to limit the amount of allocated memory by only having a certain number of messages/objects in-flight at any time. Hopefully I’ll get around to writing about these mechanisms in future posts.

I hope these basic patterns and my explanations give you a new way to think about implementing asynchronous message passing communication in concurrent applications. If you know of any other patterns, references or links on this topic please share them in the comments.


Real-time audio programming 101: time waits for nothing

“The audio processing thread is stalling because the client’s implementation of some XAudio2 callback is doing things that can block the thread, such as accessing the disk, synchronizing with other threads, or calling other functions that may block. Such tasks should be performed by a lower-priority background thread that the callback can signal.”

– Microsoft XAudio2 documentation

“Your IOProc is making blocking calls to the HAL and it is calling NSLog which allocates memory and blocks in fun and unexpected ways. You absolutely cannot be making these calls from inside your IOProc. You also cannot be making calls to any ObjC or CF objects from inside your IOProc. Doing any of these will eventually cause glitching.”

– Jeff Moore, Apple Computer, on the CoreAudio mailing list

“The code in the supplied function must be suitable for real-time execution. That means that it cannot call functions that might block for a long time. This includes malloc, free, printf, pthread_mutex_lock, sleep, wait, poll, select, pthread_join, pthread_cond_wait, etc, etc.”

– JACK audio API documentation for jack_set_process_callback()

As you may have gathered from the quotes above, writing real-time audio software for general purpose operating systems requires adherence to principles that may not be obvious if you’re used to writing “normal” non real-time code. Some of these principles apply to all real-time programming, while others are specific to getting stable real-time audio behavior on systems that are not specifically designed or configured for real-time operation (i.e. most general purpose operating systems and kernels).

These principles are not platform-specific. The ideas in this post apply equally to real-time audio programming on Windows, Mac OS X, iOS, and Linux using any number of APIs including JACK, ASIO, ALSA, CoreAudio AUHAL, RemoteIO, WASAPI, or portable APIs such as SDL, PortAudio and RTAudio.

I’ll get on to the programming specifics in a moment, but first let’s review a couple of basic facts: (1) You do not want your software’s audio to glitch, and (2) real-time waits for nothing.

You do not want your software’s audio to glitch

You don’t want your software’s audio to glitch. Especially if your software is going to be used to perform to a stadium full of fans, or to watch a movie in a home theater, or to listen to a symphony in your car while you’re driving down the freeway, or anywhere really. It’s so fundamental I’ll say it again: you do not want your audio to glitch. Period.

Girl Talk

That’s Girl Talk performing using a laptop running my AudioMulch software up there (Photo by fromthephotopit on Flickr). I don’t want AudioMulch’s audio to glitch. Girl Talk doesn’t want his audio to glitch either.

Real-time waits for nothing

Digital audio works by playing a constant stream of audio samples (numbers) to the digital to analog converter (DAC) of your sound card or audio interface. The samples are played out at a constant rate known as the sampling rate. For a CD player the sampling rate is 44100Hz, that’s 44100 stereo sample frames every second. Every second at the same rate. Not faster, not slower. 44100 sample frames every second. If your sound card doesn’t have the next sample when the DAC needs it, your audio will glitch.

On general purpose operating systems such as iOS, Android, Windows, Mac OS X or Linux your software usually won’t be delivering individual samples to the DAC, it will be providing buffers of samples to the driver or an intermediate OS layer. For example it might process buffers of 256 samples at a rate of about 172.27Hz (that’s 44100 / 256). The lower levels of the system then feed the individual samples from each buffer to the DAC at 44100Hz.

It isn’t always the case, but for the purpose of this post I’ll assume that the audio API periodically requests a buffer of samples by calling a callback function in your code. In the above example your callback would have to compute each and every buffer in less than 5.8 milliseconds. No matter how your code is invoked your software has to provide those samples within 5.8ms. Each and every buffer. No exceptions. Real-time does not wait for latecomers.
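The arithmetic above is worth keeping as a helper; a sketch:

```cpp
// Buffer period in milliseconds: how long the callback has to
// compute one buffer. The callback rate is its inverse.
inline double bufferPeriodMs(int bufferSize, double sampleRate) {
    return 1000.0 * bufferSize / sampleRate;
}
```

For 256 samples at 44100Hz this gives roughly 5.8ms, matching the deadline quoted above.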

Sidebar: real-world buffer sizes and latency values

To put things into context, just to make sure we’re all on the same page here: To many users today 5ms is considered a large buffer size. ~1ms buffers (say 64 samples at 44100Hz) would be considered pretty good but not necessarily ideal. Applications where low latency is especially important are (1) interactive audio systems (such as musical instruments or DJ tools) where the UI needs to be responsive to the performer, and (2) real-time audio effects, where the system needs to process analog input (say from a guitar) and output the processed signal without noticeable delay. What is considered “noticeable delay” depends on a number of factors, but an end-to-end system response of less than 8ms including all system latencies (not just audio latency) is not an unreasonable ballpark number for interactive systems. For live audio effects processing many users would prefer latency to be much lower than this.

For the purposes of this post I consider buffer sizes somewhere in the 1-5ms range to be normal and achievable today on most computers running Windows with WDM, WASAPI or ASIO drivers, with Mac OS X’s native CoreAudio system and with ALSA or JACK on Linux. Some audio hardware now supports buffer sizes down to 16 samples (about 1/3 of a millisecond at 44.1kHz sampling rate) or even lower. I assume that you want to write low-latency audio software for one or more of these platforms. Even if you’re targeting a platform that only supports high-latency, like Android’s ~50ms or Windows’ legacy WMME API’s ~30ms the same principles apply. You don’t want to glitch.

Sources of glitches

We’ve established that you don’t want to glitch, and your buffer period is around 5ms or less. Your code has to deliver each and every buffer of audio in a time shorter than one buffer period.

All sources of audio glitches within your code boil down to doing something that takes longer than the buffer period. This could be due to “inefficient code” that’s too slow to run in real-time. That’s not the main thing we’re interested in here though. I assume that your code is efficient enough to run in real-time, or that you can profile and optimise it so it’s fast enough. If not, the internet is full of resources to help you write faster code.

The main problems I’m concerned with here are caused by code with unpredictable or unbounded execution time. That is, you’re unable to predict in advance how long a function or algorithm will take to complete. Perhaps this is because the algorithm you chose isn’t appropriate, or perhaps it’s because you don’t understand the temporal behavior of the code you’re calling. Whatever the cause, the result is the same: sooner or later your code will take longer than the buffer period and your audio will glitch.

Therefore, we can state the cardinal rule of real-time audio programming simply as follows:

If you don’t know how long it will take, don’t do it.

There are a number of things your program could do that fall into the general category of “unbounded time operations”. Many are mentioned in the quotes at the start of this post. I explore them in more detail below. As you might guess, some are more obvious than others.

“Blocking”

You may hear someone say “don’t do anything that blocks the audio callback thread.” Used like this it’s a general term. Doing anything that makes your audio code wait for something else in the system would be blocking. This could be acquiring a mutex, waiting for a resource such as a semaphore, waiting for some other thread or process to do something, waiting for data to be read from disk, waiting for a network socket. It’s pretty clear that some of this waiting could take an indeterminate, or even indefinite amount of time, and certainly longer than a few milliseconds. I discuss some of these specific types of blocking in more detail below.

Keep in mind that not only do you want to avoid directly writing code that blocks, it is critical that you avoid calling 3rd-party or operating system code that could block internally.

Poor worst-case complexity algorithms

You’ve written every line of code in your audio callback yourself to avoid blocking. No calls to any system or 3rd-party code that could block. Even so, you might still have a problem: software efficiency is often analysed in terms of average-case (amortized) complexity. For example, in many applications, an algorithm that runs super-fast 99.9% of the time but every now and then takes 1000 times as long might still be considered “the fastest algorithm available.” This is often referred to as “optimising for throughput instead of latency.” But remember: real-time waits for nothing, certainly not for that 1000-times-normal processing spike that happens every now and then. If you do something like that in your audio callback, you may get a glitch. For this reason, you should always consider the worst-case execution time of your code.

A simple example would be zeroing a delay line to reset it (say using a for-loop to zero every element, or perhaps by calling memset). You might think that using memset to clear a buffer of memory is fast, but if the delay line is large the memset call is going to take quite a long time — and if the time taken is longer than you have available you’ll glitch. It’s usually better to use an algorithm that spreads the load across many samples/callbacks, even if it ends up burning a few more cycles overall. Of course if these spikes in processor usage are small, and you know that they’ll be bounded (e.g. you know the maximum size of the delay line you’ll need to zero), you might be OK. But if your code is full of these little occasional spikes, you’d better hope you can predict how they all interact and sum together. If they all occur at the same time you could still get a glitch, and you don’t want that.
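A sketch of the amortized approach for the delay line example; the chunk size and names are illustrative:

```cpp
#include <algorithm>
#include <cstring>
#include <vector>

// Spread the cost of clearing a delay line across callbacks: clear
// at most maxSamples per callback instead of one big memset, so the
// worst-case cost per callback is bounded.
struct DelayLine {
    std::vector<float> buffer;
    size_t clearPos; // next index to clear; == buffer.size() when done

    explicit DelayLine(size_t n) : buffer(n, 1.0f), clearPos(n) {}

    void requestClear() { clearPos = 0; }

    // call once per audio callback; bounded worst-case execution time
    void clearSome(size_t maxSamples) {
        if (clearPos >= buffer.size()) return; // nothing pending
        size_t n = std::min(maxSamples, buffer.size() - clearPos);
        std::memset(&buffer[clearPos], 0, n * sizeof(float));
        clearPos += n;
    }
};
```

The trade-off is that the clear completes over several buffer periods rather than instantly, which is usually acceptable for a reset operation.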

Another thing to keep in mind here is that many operating system and library functions are implemented using average-case optimised algorithms. In C++ many STL container methods fall into this category (more on that below). General purpose memory allocation algorithms and garbage collectors also have unpredictable time behaviour, even if they don’t use locks.

Locking

It’s difficult to avoid concurrency when you have a GUI or text interface, network or disk i/o and a real-time hardware-driven audio callback. Assuming that your GUI is somehow controlling the audio process you’ll need to communicate between the GUI thread and the audio callback. Perhaps you’ll want to display graphics that reflect the state of audio processing (level meters for example). Similar requirements may arise if you need to control audio via a network socket or MIDI.

The first thing you might think of is “I need a lock or mutex” to protect concurrent access to data shared between the GUI and the audio thread. This is a common response. I remember having it too. Actually, when I first went looking for a lock, there wasn’t one available: on Mac OS 7, with the SndPlayDoubleBuffer API, there was no OS mechanism to implement a lock, since the callback could occur at interrupt time. Of course modern operating systems do provide locks (mutexes) to protect against concurrent access to shared state. You should not use them within an audio callback though. Here are three reasons why:

#1 reason to not use locks: priority inversion

Let’s say your GUI thread is holding a shared lock when the audio callback runs. In order for your audio callback to return the buffer on time it first needs to wait for your GUI thread to release the lock. Your GUI thread will be running with a much lower priority than the audio thread, so it could be interrupted by pretty much any other process on the system, and the callback will have to first wait for this other process, and then the GUI thread to finish and release the lock before the audio callback can finish computing the buffer — even though the audio thread may have the highest priority on the system. This is called priority inversion.

Real-time operating systems implement special mechanisms to avoid priority inversion. For example by temporarily elevating the priority of the lock holder to the priority of the highest thread waiting for the lock. On Linux this is available by using a patched kernel with the RT preempt patch. But if you want your code to be portable to all general purpose operating systems, then you can’t rely on real-time OS features like priority inheritance protocols. (Update: On Linux, user-space priority inheritance mutexes (PTHREAD_PRIO_INHERIT) have been available since kernel version 2.6.18 together with Glibc 2.5. Released September 19, 2006. Used in Debian 4.0 etch, 8 April 2007. Thanks to Helge for pointing this out in the comments.)
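For completeness, this is roughly how you request a priority-inheritance mutex on POSIX systems that support it — a sketch only; whether PTHREAD_PRIO_INHERIT is actually honoured depends on your kernel and C library, so check the return codes on your target platform.

```cpp
#include <pthread.h>

// Sketch: create a mutex with the priority-inheritance protocol on POSIX
// systems that support PTHREAD_PRIO_INHERIT (e.g. Linux >= 2.6.18 with
// glibc >= 2.5). While a low-priority thread holds such a mutex, it runs
// at the priority of the highest-priority waiter.
// Returns 0 on success, or an errno-style error code.
int create_pi_mutex(pthread_mutex_t* mutex)
{
    pthread_mutexattr_t attr;
    int err = pthread_mutexattr_init(&attr);
    if (err != 0)
        return err;

    err = pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
    if (err == 0)
        err = pthread_mutex_init(mutex, &attr);

    pthread_mutexattr_destroy(&attr);
    return err;
}
```

Even with priority inheritance, the audio thread still waits for the whole critical section to execute, so the unbounded-execution-time argument below applies regardless.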

Keep in mind that any code in your audio callback that waits for something to happen in another thread could be subject to priority inversion. Rolling your own “spin lock” that busy-waits polling for something to complete in another thread, in addition to being inefficient, will have the same priority inversion problem if the thread you’re waiting for has lower priority and gets preempted by another thread in the system.

#2 reason to not use locks: risk of accidentally calling code with unbounded execution time

I know you’re not going to use a lock, because I just explained that priority inversion will mess with you even if your code simply locks a mutex to set one flag and then releases the lock:

mutex.lock();
flag = true; // subject to priority inversion
mutex.unlock();

But if you’re still contemplating calling code that acquires a lock, consider this: the audio callback will have to wait for all of the code that is protected by the lock (the “critical section”) to complete before the audio callback can proceed. Effectively, in addition to the thread context switching costs, you’re executing all that code sequentially with your audio callback. Do you know how long that will take? In all cases? Remember we’re talking about worst-case time here, not average-case. Any code path inside a critical section that’s shared with the real-time audio thread would have to follow all of the rules we’re outlining here. That’s asking for a lot of discipline from you and your fellow developers. It would be easy for bugs to creep in. In C++ you wouldn’t want to do this for example:

mutex.lock();
my_data_vector.push_back( 10 ); // could allocate memory and copy mucho data
mutex.unlock();

If my_data_vector is a std::vector, calling push_back() when the vector’s internal storage is full will cause memory to be allocated and all existing elements to be copied into the new memory. That’s obviously going to cause a processing time spike. Most non-real-time code behaves like this some of the time. It’s easy for simple-looking code to have non-deterministic time behaviour. You don’t want it creeping into your critical sections.
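For contrast, the single-flag case earlier in this section doesn’t need a lock at all. A lock-free alternative, assuming C++11 atomics (plain illustrative names):

```cpp
#include <atomic>

// Sketch: a single flag shared between a GUI thread and the audio callback
// needs no mutex -- an atomic store/load is lock-free on mainstream
// platforms (worth verifying with is_lock_free() on yours).
std::atomic<bool> flag{false};

void gui_thread_set_flag()
{
    // Release ordering: writes made before this store become visible to a
    // thread that observes flag == true with acquire ordering.
    flag.store(true, std::memory_order_release);
}

bool audio_callback_check_flag()
{
    return flag.load(std::memory_order_acquire);
}
```

This is the style of atomic technique I mention towards the end of the post; it only works for data small enough to be read and written atomically.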

#3 reason to not use locks: justified scheduler paranoia

“you’re definitely opening things up to a world of hurt when the system gets stressed”

Jeff Moore to James McCartney on the CoreAudio list in 2001

Priority inversion and unbounded execution time inside critical sections aren’t the only reasons to avoid locks and other concurrency primitives. In the case of proprietary operating systems, few people know exactly how the schedulers are implemented. No matter what the OS, scheduler implementations may change with each OS release. These general purpose operating system schedulers are not required or guaranteed to exhibit real-time behavior. They’re not certified for use in avionics systems or medical equipment. There are no governments or judiciaries to hold their real-timeness to account.

My general position on this is that you should avoid any kind of interaction with the OS thread scheduler. Avoid calling any synchronisation functions in your audio callback. Schedulers employ complex and diverse algorithms and you don’t want to give them extra reasons to de-schedule your real-time audio thread.

Of course you can’t escape depending on the scheduler to invoke your audio callback on time, and with high priority. You might also consider a few other hard-core scheduler interactions, such as signalling condition variables or semaphores to wake other threads. Some low-level audio libraries such as JACK or CoreAudio use these techniques internally, but you need to be sure you know what you’re doing, that you understand your thread priorities and the exact scheduler behavior on each target operating system (and OS kernel version). Don’t extrapolate or make assumptions. For example, last I checked the pthreads condition variable implementation on Windows employed a mutex internally to give correct POSIX semantics — that’s definitely not something you want to get involved with (you could perhaps use the native Windows SetEvent API though).

Side note: trylocks

One related option that may be open to you is to use a non-blocking “try-lock” function that simply returns a failure code instead of blocking if the lock can’t be acquired (pthread_mutex_trylock() on POSIX, TryEnterCriticalSection() on Windows). The downside is that since acquisition of the lock is not guaranteed, you can’t use it to protect anything that you must access at every callback. You’re gambling that you’ll be able to acquire the lock at least some of the time — although depending on the behavior of the other lockers, in the worst case, your code may never manage to acquire the lock.
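A sketch of the try-lock pattern, using pthread_mutex_trylock() (the structure is the same with TryEnterCriticalSection() on Windows; the parameter names here are illustrative):

```cpp
#include <pthread.h>

pthread_mutex_t parameterMutex = PTHREAD_MUTEX_INITIALIZER;
float sharedCutoff = 1000.0f;     // protected by parameterMutex
float lastKnownCutoff = 1000.0f;  // audio-callback-local fallback copy

// Called from the audio callback. Never blocks: if the lock is contended,
// pthread_mutex_trylock() returns EBUSY immediately and the callback keeps
// using the last value it managed to read.
float read_cutoff_or_fallback(void)
{
    if (pthread_mutex_trylock(&parameterMutex) == 0) {
        lastKnownCutoff = sharedCutoff; // brief, bounded critical section
        pthread_mutex_unlock(&parameterMutex);
    }
    // else: gamble lost this time; fall back to the cached value
    return lastKnownCutoff;
}
```

Note that the non-real-time side must also keep its critical sections short, otherwise the callback may fail to acquire the lock for long stretches — which is exactly the worst case described above.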

Memory allocation

I’ve touched on this point above, but it’s worth re-iterating: you should not allocate memory in your audio callback. For example you shouldn’t call malloc() or free() or C++’s new or delete, or any operating-system specific memory allocator functions, or any routine that might call these functions. The three obvious reasons are:

  • The memory allocator may use a lock to protect some data it shares between threads. Aside from priority inversion, trying to lock a mutex that’s potentially contended by every other thread that allocates memory is clearly not a good idea.
  • The memory allocator may have to ask the OS for more memory. The OS may also have its own locks, or worse, it may decide to page some memory to/from disk and make you wait while it happens.
  • The memory allocator may use algorithms that take unpredictable amounts of time to decide how to allocate a block — and you know you don’t want that.

Some obvious solutions are: 

  • Pre-allocate all your data.
  • Only perform dynamic allocation in a non-real-time thread where it isn’t time-critical.
  • Pre-allocate a big chunk of memory and implement your own deterministic dynamic allocator that’s only invoked from the audio callback (and hence doesn’t need locks).
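As a sketch of the third option, here is a minimal fixed-size-block pool (names hypothetical): all memory is obtained up front, and allocation and deallocation are O(1) free-list operations. It is deliberately single-threaded — intended for use only from the audio callback — so no locks are needed.

```cpp
#include <cstddef>
#include <vector>

// Sketch: fixed-size-block pool allocator. One up-front allocation (done in
// a non-time-critical context); allocate()/deallocate() are O(1) free-list
// pops/pushes. blockSize must be a multiple of alignof(void*) and at least
// sizeof(void*). Not thread-safe: audio-callback use only.
class FixedBlockPool {
public:
    FixedBlockPool(std::size_t blockSize, std::size_t blockCount)
        : storage_(blockSize * blockCount), freeList_(nullptr)
    {
        // Thread every block onto the free list.
        for (std::size_t i = 0; i < blockCount; ++i) {
            Node* node = reinterpret_cast<Node*>(&storage_[i * blockSize]);
            node->next = freeList_;
            freeList_ = node;
        }
    }

    // O(1); returns nullptr when the pool is exhausted (the caller must
    // handle that deterministically -- e.g. drop the request).
    void* allocate()
    {
        if (freeList_ == nullptr)
            return nullptr;
        Node* node = freeList_;
        freeList_ = node->next;
        return node;
    }

    void deallocate(void* p) // O(1)
    {
        Node* node = static_cast<Node*>(p);
        node->next = freeList_;
        freeList_ = node;
    }

private:
    struct Node { Node* next; };
    std::vector<char> storage_; // single up-front allocation
    Node* freeList_;
};
```

A general-purpose real-time allocator (like the Doug Lea variant mentioned below) handles variable block sizes; the fixed-block version is just the simplest deterministic scheme.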

For example, SuperCollider has an implementation of Doug Lea’s memory allocator algorithm that is only used in the audio callback. The current version of AudioMulch uses per-thread memory pools for dynamic allocation. For tasks with unbounded execution time such as plugin loading, AudioMulch performs them in the UI thread and then sends the results to the audio callback when they’re ready — sometimes it’s OK to make the user wait, so long as the audio callback doesn’t have to.

Invisible things: garbage collection, page faults

Even if you avoid all of the above in your own code, there are still a few environmental issues. If you’re using a garbage collected language such as Java or C# and the GC kicks in while your audio thread is running you will probably be in trouble unless you can disable GC or use an environment with deterministic real-time GC. This isn’t my area of expertise so I won’t say more about this other than that there are Java implementations with real-time GC. Otherwise you may have to settle for higher latency and additional intermediate audio buffering.

Even in a non-garbage-collected language such as C, the OS virtual memory system could page out memory that you reference in your audio callback. This would cause the thread to stall waiting for memory to be paged in from disk. I’ve never had problems with this myself — usually if the memory pages are kept hot by accessing them regularly the OS will keep them in physical RAM. But if your program is pushing the limits of available RAM, uses data that’s only referenced infrequently, or you expect other memory-intensive tasks to be running in parallel with your audio processing, you may want to investigate locking your real-time data into RAM (using operating system specific mechanisms such as mlock()/munlock() on OS X and Linux, VirtualLock()/VirtualUnlock() on Windows).

Waiting for hardware or other “external” events

You’re probably not writing code that directly waits for hardware. But disk i/o often has to wait for the hard disk head to seek to the right position, and that can take a while (averages ~8ms for consumer disks). This means performing file i/o from an audio callback is a no-go, even ignoring the fact that file i/o may lock a mutex while accessing file system data structures, allocate memory, or otherwise use poor worst-case performing algorithms. Similar rules apply to other tasks like syncing with the vertical interrupt on the graphics card or performing network i/o. As I said earlier: If you don’t know how long it will take, don’t do it.

Combinations of all of the above: code that doesn’t make real-time guarantees

If you use a closed-source operating system you probably don’t know what every API call does under the hood, but it’s a safe bet that most general purpose computing code is not written with all of the above real-time considerations in mind. Even if you do have access to your operating system source code, unless the system guarantees real-time behavior, it’s hard to say for certain that code that doesn’t use a lock today won’t use one in the future, or that an implementation won’t change to one with better average-case performance but worse worst-case performance. Memory paging issues are almost unavoidable since you can’t lock every OS-managed page in RAM without disabling virtual memory entirely.

For these reasons I suggest that you assume that all system API functions could allocate memory, use locks or employ algorithms with poor worst-case time behavior. Some may perform disk i/o either directly, or indirectly by triggering page faults when there is pressure on the memory subsystem. Many functions will allocate memory — even just temporarily as scratch-space.

Paranoia is justifiable, since you really don’t want to glitch. I generally avoid OS API functions with very few exceptions and only when absolutely necessary. The only exceptions I can think of off the top of my head are QueryPerformanceCounter() on Windows, and mach_absolute_time() on OS X. Unfortunately I’m not 100% confident of the real-time behaviour of either.

Similar paranoia should be applied to 3rd-party libraries such as those for soundfile i/o. Unless code claims to have non-blocking real-time behavior, don’t assume that it does.

On OS X, Apple advises: don’t call the BSD layer from your IOProc. Objective C code (since the objective-C dispatcher may use locks) and CoreFrameworks are also out. Microsoft don’t typically document which, if any, of their APIs have real-time semantics.

Lands of confusion

The ideas in this post are not secret knowledge, although they are in part folklore. They come up periodically on all the audio development mailing lists I’ve ever subscribed to and I know they come up elsewhere. The thing is, avoiding locks goes against best-practice advice in “normal” concurrent programming. As a result, sometimes things get confused. Here are two examples of bad advice I’ve found online that illustrate the confusion:

*** DO NOT TAKE THE FOLLOWING QUOTED ADVICE ***

“you should use a mutex or other synchronization mechanism to control access to any variables shared between the application and the callback handler”

Open SL ES audio API documentation for Android

Well, it is true that you should use a synchronisation mechanism, just not a mutex. My thoughts on the above quote are on the public record here.

“Both read and write locks the buffer so it the pointers of the buffer will be maintained consistent”

JACK Wiki description of using ringbuffers

(The JACK guys do know better than this, I’ve reported the error, it’s probably fixed by now).

If these snippets were referring to normal multi-threaded code then they would be absolutely correct. Except in extremely rare circumstances, you should not access shared data without protecting it with a mutex lock. However, for the reasons I’ve explained above, in real-time audio code, on general-purpose operating systems, using a mutex is not advisable. It is poor practice and widely frowned upon. The solutions that I advocate involve certain lock-free and atomic techniques that I mention below and that I hope to describe in more detail in future posts. [Update: but see the comments for other views.]

In summary

Boiling down the above discussion into a few rules of thumb for code that executes in a real-time audio callback:

  • Don’t allocate or deallocate memory
  • Don’t lock a mutex
  • Don’t read or write to the filesystem or otherwise perform i/o. (In case there’s any doubt, this includes things like calling printf or NSLog, or GUI APIs.)
  • Don’t call OS functions that may block waiting for something
  • Don’t execute any code that has unpredictable or poor worst-case timing behavior
  • Don’t call any code that does or may do any of the above
  • Don’t call any code that you don’t trust to follow these rules
  • On Apple operating systems follow Apple’s guidelines

There are a few things you should do where possible:

  • Do use algorithms with good worst-case time complexity (ideally O(1) worst-case)
  • Do amortize computation across many audio samples to smooth out CPU usage rather than using “bursty” algorithms that occasionally have long processing times
  • Do pre-allocate or pre-compute data in a non-time-critical thread
  • Do employ non-shared, audio-callback-only data structures so you don’t need to think about sharing, concurrency and locks

Just remember: time waits for nothing and you don’t want to glitch.

Coda

But wait you say, how am I supposed to communicate between a GUI and the audio callback if I can’t do these things? How can I do file or network i/o? There are a few options, which I will explain in more detail in future posts, but for now I’ll just say that the best practice is to use lock-free FIFO queues to communicate commands and data between real-time and non-real-time contexts (see my article about SuperCollider server internals for some ideas; PortAudio has an implementation of a lock-free queue you could use as a basis for your own queuing infrastructure). Other options include other types of non-blocking lock-free data structures, atomic data access (requires great care), or trylocks as I mentioned above. For another resource, these techniques are touched upon in the notes for a workshop that Roger Dannenberg and I ran in 2005 on real-time design patterns for computer music.
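To give a flavour of the FIFO approach, here is a minimal single-producer single-consumer ring buffer in the same spirit as the PortAudio and JACK ones — a sketch, assuming C++11 atomics, with a power-of-two capacity so index arithmetic reduces to a mask:

```cpp
#include <atomic>
#include <cstddef>
#include <vector>

// Sketch: a single-producer single-consumer lock-free FIFO. Exactly one
// thread calls push() (e.g. the GUI) and exactly one thread calls pop()
// (e.g. the audio callback); neither ever blocks or allocates after
// construction. Capacity must be a power of two.
template <typename T>
class SpscQueue {
public:
    explicit SpscQueue(std::size_t capacityPowerOfTwo)
        : buffer_(capacityPowerOfTwo), mask_(capacityPowerOfTwo - 1),
          head_(0), tail_(0) {}

    bool push(const T& value) // producer thread only; false if full
    {
        std::size_t head = head_.load(std::memory_order_relaxed);
        if (head - tail_.load(std::memory_order_acquire) == buffer_.size())
            return false; // full
        buffer_[head & mask_] = value;
        head_.store(head + 1, std::memory_order_release); // publish
        return true;
    }

    bool pop(T& value) // consumer thread only; false if empty
    {
        std::size_t tail = tail_.load(std::memory_order_relaxed);
        if (head_.load(std::memory_order_acquire) == tail)
            return false; // empty
        value = buffer_[tail & mask_];
        tail_.store(tail + 1, std::memory_order_release); // release the slot
        return true;
    }

private:
    std::vector<T> buffer_;
    const std::size_t mask_;
    std::atomic<std::size_t> head_; // written only by the producer
    std::atomic<std::size_t> tail_; // written only by the consumer
};
```

The indices increase monotonically and are masked on access, so full/empty are distinguished by the head−tail difference. The single-producer single-consumer restriction is what makes this simple scheme safe; for multiple writers you need a different (and much trickier) design, or one queue per writer.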

I’m not exactly sure where I picked up all these ideas. At a minimum I need to thank Phil Burk, Roger Dannenberg, Paul Davis and James McCartney for sharing their insights on various mailing lists over the years. The quotes above reveal that Jeff Moore has also been banging on about these issues on the CoreAudio mailing list for at least a decade.

I started writing this post a few weeks ago, but just yesterday I read Tim Blechmann’s Masters thesis about his Supernova multi-core audio engine. It rehearses many of the ideas discussed here, and from there I learnt that the Linux RT Preempt patch implements priority inheritance as I mentioned above. Tim’s thesis is definitely worth a read for another angle on this material, and a lot of ideas on multi-core audio too.

Finally, if you made it to here, please, if I got something wrong, you think I’ve missed something, or you know of other activities to avoid in an audio callback please post in the comments. Thanks for reading.

Lightly updated for clarity 13 August, 2011.