Procedural Music

Procedural music has been defined as “composition that evolves in real time according to a specific set of rules or control logics” (Collins, 2009, p. 13). In contrast to a linear piece of music, procedural music responds dynamically to changing variables or events within a system, and is often grouped with the term ‘generative music’ (Plut & Pasquier, 2019). Within the context of video games, the audio events that interface with such a composition can be split into ‘interactive’ and ‘adaptive’ (Collins, 2009). With interactive events, control is split between the game engine, which sets the rules for playback (such as which sounds to associate with which event), and the player, who affects playback timings (Collins, 2009). Adaptive events are unaffected by the player’s direct action and are “cued by the game’s engine based on in-game parameters” (Collins, 2009, p. 6); these could include time of day, location, or similar. To remain cohesive and engaging, music must be reactive not only to the inputs these events provide but also to their absence: long gameplay sessions in areas of a game with a looping piece of music can lead to listening fatigue and player frustration (Summers, 2018). Procedural music presents the opportunity to react to both interactive and adaptive events while sustaining interest over time, addressing the increasing size and gameplay length of AAA games.

 

To distinguish between degrees of procedural process in music, Wooller et al. (2005) identify ‘generative’ and ‘transformational’ algorithms. Generative algorithms not only define how each note and sequence is played but include additional rules defining the synthesis of each musical element, contributing to both development time and data size. Transformational algorithms concern only the overall structure of the music, such as adding and removing instrument parts or modifying phrases. Most games use transformational algorithms (Collins, 2009) due to medium-specific considerations for procedural music, and their more repetitive nature has also been shown to influence ‘liking’ (Hargreaves, 1984).

 

Sound in games must fulfil functions that match the action on screen, such as anticipating action, creating emotion, and signalling reward (Collins, 2008), through cues that relate to each other, the gameplay level, the narrative, run-time parameters, and more (Collins, 2009). These translate to strict control logics, resulting in rigid generative algorithms that prioritise predictability and cohesion over creativity and variation to ensure adequate functionality. Generative algorithms also require an increased processing budget for audio (Plans & Morelli, 2012), and it has been suggested that audiences have grown accustomed to samples and orchestration of a quality that generative techniques cannot yet match, though this may change with the introduction of novel machine-learning-based techniques (Plut & Pasquier, 2019).

 

Nevertheless, the possibilities offered by generative algorithms have captured imaginations and been leveraged in certain games. Generative algorithms “fit abstract game narratives as opposed to set narratives with traditional plot points” (Plans & Morelli, 2012, p. 3), and novel approaches, such as tracking the experience-based parameters ‘frustration’, ‘challenge’, and ‘fun’, have been shown to create a more relevant emotional connection to the player (Plans & Morelli, 2012). Despite increased computational costs, generative algorithms require only text-based storage, offering “significant memory and storage savings” (Plans & Morelli, 2012, p. 1).

Procedural Music Types

Spore is “a groundbreaking evolution simulation” in which the player develops life “from its single-celled origins to its spread as a space-faring civilization” (Gladstone, 2008). Characteristic gameplay elements include choosing to be friendly or aggressive towards neighbouring species and progressing through “religion, economics, or brute force” (Gladstone, 2008).


Because the game offers such a wide range of dynamic elements and choices, the team wanted the music to reflect the creature's varied life progression. For example, creatures are synthesised with differing characteristics, such as a propensity towards aggression, before individual playing styles are factored in on top of the gameplay elements mentioned in the previous paragraph.

As the designers Jolly and McLeran discuss in their 2008 talk, they approached this with a heavily synthesised model, using designed, controlled randomised generation built primarily in Pure Data, alongside Max/MSP. They began by creating a rhythm synthesiser, which developed into a musical rhythm synthesiser; both were then layered with parameters and presets to create dynamic musical soundscapes. Events and states called from the game would then adjust which preset was used, add or subtract DSP effects, and alter, add, or remove the rhythm section and pacing of the soundscape.
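The shape of such a preset-driven system can be sketched in a few lines. This is a hypothetical illustration only, not Spore's actual code: the preset names, pitch sets, and density value are invented, and the seeded randomness stands in for the "designed, controlled" generation described above.

```python
import random

# Hypothetical presets (names and values invented for illustration):
# each maps a creature/game state to a pitch set, tempo and active DSP layers.
PRESETS = {
    "peaceful":   {"scale": [0, 2, 4, 7, 9],  "bpm": 80,  "layers": ["pad"]},
    "aggressive": {"scale": [0, 1, 5, 6, 10], "bpm": 140, "layers": ["pad", "drums", "distortion"]},
}

def generate_bar(state, beats=8, seed=None):
    """Generate one bar of note events using controlled randomness."""
    rng = random.Random(seed)            # seeded for reproducible, 'designed' randomness
    preset = PRESETS[state]
    notes = []
    for beat in range(beats):
        if rng.random() < 0.7:           # rhythmic density is itself a rule
            degree = rng.choice(preset["scale"])
            notes.append((beat, 60 + degree))  # MIDI pitch around middle C
    return {"bpm": preset["bpm"], "layers": preset["layers"], "notes": notes}

# A game event (e.g. combat starting) simply switches the preset:
bar = generate_bar("aggressive", seed=7)
```

The key point is that pitch, rhythm, and the active DSP chain are all products of rules; nothing is pre-recorded, which is what places this approach in the generative category.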

 

This is an example of Wooller et al.'s (2005) description of generative music: it not only defines how each note and sequence is played but includes additional rules defining the synthesis of each musical element, as mentioned earlier. Here, the framework of controls and algorithms is generated entirely from computerised logic with tone and rhythm inputs, and the triggers come from the dynamic game elements.


In contrast, No Man's Sky's approach to procedural generation would be defined by Wooller et al. (2005) as transformational. The scale of No Man's Sky is huge: Rambus Press (2016) describes it as a game “built around a procedurally generated deterministic open universe that contains a staggering 18.4 quintillion planets.” Creating dynamic music to cover this degree of variation is clearly a sizeable challenge.


Rather than taking a generative approach, the team recruited the band 65daysofstatic to compose a procedural soundscape for the game. Under the process decided by music leads Wolinski and Shrewsbury, they began by writing a fully composed 50-minute album. They then defined the musical purpose and points of tension within the game before breaking the compositions down into “pools and libraries of sounds” (Shrewsbury & Wolinski, 2016). These were categorised into characteristic packages per landscape; for example, Soundscape 1's characteristic world and player states were “Planet”, “Space”, and “Wanted”. Three sound packages were chosen to create each soundscape, each containing a spectrum of up to three sound types. This system was extrapolated across the game universe (Shrewsbury & Wolinski, 2016).
 

Figure 1 - Image of Soundscape packages (Shrewsbury & Wolinski, 2016)

Within the adaptive music system built into the game, they added an intensity slider. This layers additional sound packages on top of the current soundscape in a predetermined order: for each situation, a series of clusters to layer on top for added “intensity” is listed and called as the slider is raised to higher values.

Figure 2 - Image of Soundscape package layering (Shrewsbury & Wolinski, 2016)

Figure 3 - Alternate image of Soundscape package layering (Shrewsbury & Wolinski, 2016)
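The intensity-slider layering described above can be sketched as a simple lookup. This is a hypothetical illustration, not No Man's Sky's actual system: the package names and the linear mapping from slider position to layer count are assumptions.

```python
# Hypothetical sketch of intensity-based layering: pre-composed sound
# packages are stacked in a predetermined order as intensity rises.
# Package names are invented for illustration.
SOUNDSCAPE_LAYERS = ["planet_base", "drone_pad", "pulse_percussion", "lead_texture"]

def active_layers(intensity):
    """Return which pre-composed packages play at a given intensity in [0, 1]."""
    # Each step of the slider unlocks the next predetermined layer;
    # no new material is generated, only existing packages are stacked.
    count = 1 + int(intensity * (len(SOUNDSCAPE_LAYERS) - 1))
    return SOUNDSCAPE_LAYERS[:count]

print(active_layers(0.0))   # only the base package plays
print(active_layers(1.0))   # every predetermined layer is stacked
```

Because the function only selects and stacks existing material, it captures why this design counts as transformational rather than generative.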

Here, the music aligns with Wooller et al.'s (2005) definition of transformational music, as it concerns only the overall structure of the music, such as adding and removing instrument parts or modifying phrases. The music is pre-composed in a planned, randomisable way, and those packages are made interactive in the game largely by swapping out different packages, layering packages, and changing the volume of layers. The structure of the music is constantly adjusted, but the music itself is only reordered; with the generative method, by comparison, the music recomposes itself as well as reordering its structure.

Procedural Music as the Game

Toshio Iwai

Toshio Iwai is a Japanese media artist whose work is based around interactive principles (Brown, 1997). One example is his piece Musical Chess: two people sit at opposing sides of a table, taking turns to place balls into holes in its surface. Every two minutes, the table plays the array of sounds notated by the ball placement; thus, the two players work together to create melodies based on the patterns of the balls on the board (Yuasa, 2002).

Figure 4a & 4b: Several examples of Toshio Iwai's interactive media works (Mutaharu, 2011) and a picture of Musical Chess (Yuasa, 2002).


This interactive approach naturally fed into video game projects. One of the first games to use generative music was Iwai's Otocky (ASCII Corporation, 1987), a side-scrolling shoot 'em up that allows the player to shoot in eight different directions, each direction causing a quantised musical note to play, meaning that shooting can create unique melodies while progressing through the levels (Collins, 2009). This incorporates a direct musical element into traditional gameplay.
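The direction-to-note idea can be sketched as follows. This is an assumption-laden illustration, not Otocky's actual implementation: the scale, tempo, and snap-to-next-beat quantisation are invented to show how arbitrary firing can stay musical.

```python
# Hypothetical sketch of Otocky-style shot music: each of the eight shot
# directions maps to a scale degree, and the note is quantised to the next
# beat so that any firing pattern stays in time with the soundtrack.
SCALE_DEGREES = [0, 2, 4, 7, 9, 12, 14, 16]  # eight pitches, one per direction

def shot_to_note(direction, time_s, bpm=120.0):
    """Map a shot direction (0-7) and its time to a (MIDI pitch, quantised time)."""
    pitch = 60 + SCALE_DEGREES[direction % 8]
    beat_len = 60.0 / bpm                              # 0.5 s per beat at 120 bpm
    quantised = ((time_s // beat_len) + 1) * beat_len  # snap to the next beat
    return pitch, quantised

pitch, when = shot_to_note(direction=3, time_s=1.23)   # a shot fired mid-beat
```

Quantisation is what separates this from a simple sound-effect trigger: the player chooses the pitches, but the system decides when they sound, keeping the result melodic.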

Figure 5: Otocky gameplay (Ultimate History Of Video Games, 2018).


Iwai's games put procedural music techniques at the core of gameplay rather than in a role that merely enhances the soundtrack. Iwai believed (Brown, 1997) that games work well as a medium for creating music and self-expression without the need to practise and study music in depth.

 

SimTunes (Maxis, 1996) also represents this approach. Bugs represent musical instruments, and guiding them over coloured blocks laid by the player causes the bugs to play notes, each note depending on the block and the patterns laid ahead of them (Morse, 1999).

Figure 6: SimTunes Trailer/Demo (epicreviewer, 2008).


Iwai also created the Nintendo DS title Electroplankton (Nintendo, 2005), in which a series of different plankton interact with the world around them, creating musical notes and timbres that vary depending on how the player engages with the various mini-games, using the touch screen and/or microphone to alter the paths and patterns the plankton take. As the plankton create music through their movements and actions, altering their environment and/or the sound input affects the music that arises from the gameplay (Harris, 2018).

Figure 7: Electroplankton gameplay (EightBitHD, 2013). In Tracy mode, the plankton trace a line drawn with the DS stylus. The music created by the plankton as they trace the line follows several parameters: volume is affected by speed, stereo panning by screen location, pitch by line pattern, and different sounds/instruments by different plankton.


Iwai was questioned (2006) on criticism regarding the lack of a save function. He stated that this was an intentional choice to dissuade the game's use as a tool and to promote focus on the immediate, conscious interactions and decisions made by the player. Thus, the core gameplay of Electroplankton is interacting and playing with the procedural, interactive music systems set up by the different mini-games as an experience in its own right, rather than as a dedicated musical device.

 

While there is an argument that this output is less a video game experience and more an example of interactive media in general, this device of interactive music by procedural means is present in several titles today. Super Mario Wonder (Nintendo, 2023) has several levels lined with music blocks, each containing a note, which add up to a melody once hit or walked over. The game Rez (Sega, 2001) and its sequel, Rez Infinite (Sega, 2023), expand on the concept of Otocky as shooting games whose play affects the music, this time with more of a percussive element (Handcircus, 2006).


Figure 8: Rez Infinite trailer (PlayStation Australia, 2023).


References:

Brown, A. (1997) Portrait of the Artist as a Young Geek. Available at: https://www.wired.com/1997/05/ff-iwai/ [Accessed: 26 November 2024].

Collins, K. (2008) Game Sound: An Introduction to the History, Theory, and Practice of Video Game Music and Sound Design. 1st ed. Cambridge, Mass.: MIT Press.

Collins, K. (2009) An Introduction to Procedural Music in Video Games. Contemporary Music Review, 28 (1) February, pp. 5–15.

EightBitHD (2013) Electroplankton Gameplay DS. Available at: https://youtu.be/ttFoK8BTXM4 [Accessed: 26 November 2024].

Electroplankton (2005) Nintendo. Kyoto.

Epicreviewer (2008) SimTunes Official Trailer / Demo Vid. Available at: https://youtu.be/r-SS8WlREPQ [Accessed: 26 November 2024].

Gladstone, D. (2008) Spore: An innovative game with a god complex. PC World, 26 (9), p. 30.

Handcircus (2006) Ode to “Otocky” (by Toshio Iwai). Available at: https://web.archive.org/web/20070928043028/http://www.handcircus.com/2006/07/26/ode-to-otocky-by-toshio-iwai/ [Accessed: 26 November 2024].

Hargreaves, D. J. (1984) The Effects of Repetition on Liking for Music. Journal of Research in Music Education, 32 (1), pp. 35–47.

Harris, C. (2018) Electroplankton. Available at: https://www.ign.com/articles/2006/01/11/electroplankton [Accessed: 26 November 2024].

Iwai, T. (2006) C3 Exclusive Interview | Toshio Iwai on Nintendo, Wii… & Musical Fish!. Interview by Adam Riley. Available at: https://www.cubed3.com/news/5724/1/c3-exclusive-interview-toshio-iwai-on-nintendo-wiiand-musical-fish.html [Accessed: 26 November 2024].

Jolly, K. and McLeran, A. (2008) Procedural Music in SPORE. GDC Vault. Available at: https://gdcvault.com/play/323/Procedural-Music-in [Accessed: 26 November 2024].

Morse, D. (1999) ‘Pre-Cinema Toys Inspire Multimedia Artist Toshio Iwai’, Animation World, 3 (11).

Mutaharu (2011) Toshio Iwai’s Works. Available at: https://youtu.be/UYeFf5a671o [Accessed: 26 November 2024].

Otocky (1987) ASCII Corporation. Tokyo.

Plans, D. & Morelli, D. (2012) Experience-Driven Procedural Music Generation for Games. IEEE Transactions on Computational Intelligence and AI in Games, 4 (3) September, pp. 192–198.

PlayStation Australia (2023) Rez Infinite | Release Date Trailer | PS VR2. Available at: https://youtu.be/RnuwbRA9mhE [Accessed: 26 November 2024].

Plut, C. & Pasquier, P. (2019) Generative Music in Video Games: State of the Art, Challenges, and Prospects. Entertainment Computing, 33 December, p. 100337.

Rambus Press (2016) The Algorithms of No Man’s Sky. Available at: https://www.rambus.com/blogs/the-algorithms-of-no-mans-sky-2/ [Accessed: 26 November 2024].

Rez (2001) Sega. Tokyo.

Rez Infinite (2023) Sega. Tokyo.

Shrewsbury, J. and Wolinski, P. (2016) How 65daysofstatic Created the No Man's Sky Soundtrack. Available at: https://www.youtube.com/watch?v=Y3Jm8hDbPO8 [Accessed: 26 November 2024].

SimTunes (1996) Maxis. Redwood City.

Summers, T. (2018) Understanding Video Game Music. Cambridge: Cambridge University Press.

Super Mario Wonder (2023) Nintendo. Kyoto.

Ultimate History of Video Games (2018) Otocky (1987) - First Generative Music Video Game. Available at: https://youtu.be/hduWPrv_Ywo [Accessed: 26 November 2024].

Wooller, R., Brown, A. R., Miranda, E. R., Berry, R. & Diederich, J. (2005) A Framework for Comparison of Process in Algorithmic Music Systems. Creativity and Cognition, January.

Yuasa, I. (2002) Toshio Iwai. Available at: http://www.indexmagazine.com/interviews/toshio_iwai.shtml [Accessed: 26 November 2024].


© 2024 Leeds Beckett MSc

