Aurora is a game my friend Grace led for VGDev about managing a space colony. You play as the Aurora colony’s AI overseer, and your mission is to research the local alien ruins and unlock their secrets. Check it out! I think it’s tons of fun. My main contribution to Aurora was the code that generates its music. The game’s open source, so you can look at the music code if you’d like. If you’d prefer a written explanation of what it does, read on; this post will explain how Aurora’s music generation works.

Rhythm

The first step is selecting a time signature. A time signature says how many beats there are in a measure and how they’re split up. For example, 4/4 means each measure is four quarter notes, while 7/8 means each measure is seven eighth notes. An important thing to keep in mind is that time signatures can’t quite be simplified the way fractions can; 3/4 and 6/8 sound similar but are still very different beasts.

At the start of each new section of music, a new time signature is chosen. It’s always one of 5/8, 6/8, 7/8, 8/8, 9/8, 11/8, or 13/8. Most of these are pretty unusual—almost every song you’ll hear on the radio is in 4/4, for example. This is a game about an alien world, so it’s only natural the music would favor time signatures you don’t typically hear in pop music. (It helps that I’m a big fan of odd time signatures.)
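In code, that step amounts to picking from a short list of eighth-note counts. This isn’t Aurora’s actual code (the names are made up for illustration), but a minimal sketch looks like this:

// Possible measure lengths, counted in eighth notes.
const TIME_SIGNATURES = [5, 6, 7, 8, 9, 11, 13];

function pickTimeSignature(): number {
    const index = Math.floor(Math.random() * TIME_SIGNATURES.length);
    return TIME_SIGNATURES[index];
}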

Drums

Once we’ve determined how many eighth notes there are in a measure, we can go about splitting them up into groups of two and three. I think I got this idea from an Andrew Huang video about odd time signatures. The idea is that the smaller groupings make the whole more intelligible for both the musicians playing the music and the people listening to it. It’s easier to count 3 + 3 + 3 + 2 + 2 than it is to count 13.

Groupings are generated by shaving off chunks of two or three from the rest of the measure until there’s nothing left to shave:

// This isn't the actual code used in Aurora, but it gets the idea across.
function generateMeasureGroupings(beats: number): number[] {
    const groups: number[] = [];
    const possibleGroups = [2, 3];
    while (beats > Math.min(...possibleGroups)) {
        // exclude groups we don't have room for
        const validGroups = possibleGroups.filter(num => num <= beats);
        // Random.fromArray picks a random element of the given array
        const currentGroup = Random.fromArray(validGroups);
        groups.push(currentGroup);
        beats -= currentGroup;
    }
    // make sure we include every beat in the measure
    if (beats > 0) {
        groups.push(beats);
    }
    return groups;
}

This algorithm has a chance of producing a grouping with a trailing group of one, but it sounds fine, so I’m willing to consider that a feature.

These groupings make generating an acceptable drum groove pretty easy. The measure has been split up into smaller groups, and these groups can be considered more or less independently. A kick or snare plays on the first beat of each grouping; hi-hats play on the rest. Groups alternate between starting with kicks and starting with snares to create a sort of pseudo-backbeat feel.
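Here’s a rough sketch of that idea (again, not the code Aurora actually uses): given the groupings from above, emit a kick or snare at the start of each group and fill the rest with hi-hats.

// "K" = kick, "S" = snare, "H" = hi-hat; one entry per eighth note.
function generateDrumMeasure(groups: number[]): string[] {
    const hits: string[] = [];
    let startWithKick = true;
    for (const group of groups) {
        // kick or snare on the first beat of the grouping
        hits.push(startWithKick ? "K" : "S");
        // hi-hats on the remaining beats
        for (let i = 1; i < group; i++) {
            hits.push("H");
        }
        // alternate to get the pseudo-backbeat feel
        startWithKick = !startWithKick;
    }
    return hits;
}

// generateDrumMeasure([3, 2, 2]) could represent one measure of 7/8:
// ["K", "H", "H", "S", "H", "K", "H"]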

Scales and Chords

The harmony is only marginally more complicated than the rhythm section. Musical scales are stored as bit vectors using the method described in Ian Ring’s A Study of Scales. This representation has a lot of nice properties. For example, checking if a scale S has a note n semitones above the root is as simple as (S & (1 << n)) != 0. Aurora has a lot of similar utility methods for manipulating and analyzing scales, actually. Figuring out the right way to express some musical property in terms of these bit vectors is a fun mental exercise even if most of the resulting utility methods don’t get used.
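To make that representation concrete, here’s a small sketch of the bit-vector idea (not Aurora’s actual utility code):

// Bit n of a scale's bit vector is set if the scale contains the note
// n semitones above the root. The major scale contains semitones
// 0, 2, 4, 5, 7, 9, and 11, so its bit vector is 0b101010110101 (2741).
const MAJOR_SCALE = 0b101010110101;

// Does the scale contain a note n semitones above the root?
function hasInterval(scale: number, n: number): boolean {
    return (scale & (1 << n)) !== 0;
}

// hasInterval(MAJOR_SCALE, 4) -> true  (major third above the root)
// hasInterval(MAJOR_SCALE, 6) -> false (no tritone above the root)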

Each section of music picks a random scale from the six non-diminished modes of the major scale. (That is, it picks any mode other than Locrian.) Once we have a scale, we can build chords off of it by stacking thirds. In layman’s terms, we can make nice-sounding chords by taking every other note of the scale, three or four notes at a time. For example, if our scale is C D E F G A B, CEG and DFA (C major and D minor) would both sound pretty nice. The reasons why this works are beyond the scope of this post, but the Wikipedia article on tertian harmony is a good starting point.
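As a sketch of the stacking-thirds idea (using note names rather than Aurora’s bit vectors, purely for readability):

// Build a triad on a given scale degree by taking every other scale note.
function buildTriad(scale: string[], degree: number): string[] {
    // offsets 0, 2, 4 are the chord's root, third, and fifth
    return [0, 2, 4].map(offset => scale[(degree + offset) % scale.length]);
}

const cMajorScale = ["C", "D", "E", "F", "G", "A", "B"];
buildTriad(cMajorScale, 0); // ["C", "E", "G"] -- C major
buildTriad(cMajorScale, 1); // ["D", "F", "A"] -- D minor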

We have a set of seven different chords. The final step is determining which chords to play and in what order. The idea of functional harmony—chords having different “functions” or jobs within a key—is a good starting point. In general, tonic chords feel stable and at rest, while dominant chords are tense and “want” to resolve to stable chords. Subdominant chords set up dominant chords. The eternal cycle of tonic to subdominant to dominant to tonic again sounds nice enough to form the basis of our generative chord progressions. All we need is a table mapping functions to chords like this one:

Function      Chords
Tonic         I, VI
Subdominant   II, IV
Dominant      V, VII
Ambiguous     III

Note that the III chord (E minor in the key of C major) is listed as “ambiguous”. The III chord is kind of difficult to assign a concrete function to. 12tone made a video about the III chord if you’d like to learn more. Basically, the III chord can have either dominant or tonic function depending on what kind of chord it comes after.

With the last piece of the puzzle in place, we can generate chord progressions pretty easily. Each progression is four chords long and starts with the I chord to help establish the key. The remaining three chords can follow one of a few different patterns, but the finished progression always fits the broad pattern of tonic, then subdominant, then dominant.
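As a sketch of one such pattern (again, not Aurora’s actual code), the table above can live in a plain object, and the generator just picks one chord per function:

const CHORDS_BY_FUNCTION = {
    tonic: ["I", "VI"],
    subdominant: ["II", "IV"],
    dominant: ["V", "VII"],
};

function generateProgression(): string[] {
    const pick = (options: string[]) =>
        options[Math.floor(Math.random() * options.length)];
    // start on I to establish the key, then move tonic -> subdominant -> dominant
    return [
        "I",
        pick(CHORDS_BY_FUNCTION.tonic),
        pick(CHORDS_BY_FUNCTION.subdominant),
        pick(CHORDS_BY_FUNCTION.dominant),
    ];
}

// e.g. ["I", "VI", "II", "V"] or ["I", "I", "IV", "VII"]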

The four chord loop plays twice accompanied by the drums, then the drums play on their own for a measure to make the transition between scales less jarring. After that, all the musical parameters get randomized again. Repeat ad infinitum. Here’s what it sounds like:

May Lawver · Aurora Soundtrack

Conclusion

I’m proud of how this all turned out, but there are a lot of things like melody generation and more complicated synths that I hoped to implement but didn’t get around to. If I were to start over with this project, I’d probably use a library like Tone.js to handle the sound synthesis. Letting a library handle that would let me focus more on the musical side of things. Still, I learned a lot working on this, and it’s left me with a lot of ideas for my next foray into generative audio. It was a learning experience and sounds pretty nice, and that’s all I can really ask for.