Automatic Musical Accompaniment Using Finite State Machines

Synopsis

The aim of this project is to automatically generate musical accompaniments for a given melodic sequence. Finite state machines are used frequently in speech recognition systems to model sequential symbolic data and mappings between symbol sequences.

In an automatic speech recognition system, a finite state transducer, which maps an input sequence to an output sequence, is used to map sequences of phonemes to words. An n-gram model, represented as a finite state automaton, serves as the language model.
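To make the language-model side of this concrete, the sketch below builds a bigram model as a weighted finite state automaton: each state is the previous symbol, each arc carries the bigram probability, and scoring a sequence is a walk through the automaton. The symbols and the `<s>`/`</s>` boundary markers are illustrative, not part of the project's actual implementation.

```python
# Illustrative sketch: a bigram model as a weighted finite state
# automaton. States are the previous symbol; arc weights are
# conditional bigram probabilities estimated from counts.
from collections import defaultdict

def train_bigram_fsa(sequences):
    """Build arcs[prev][next] = P(next | prev) from training sequences."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        # Pad with start/end markers so boundaries are modeled too.
        for a, b in zip(["<s>"] + seq, seq + ["</s>"]):
            counts[a][b] += 1
    arcs = {}
    for state, nxt in counts.items():
        total = sum(nxt.values())
        arcs[state] = {sym: n / total for sym, n in nxt.items()}
    return arcs

def score(arcs, seq):
    """Probability the automaton assigns to seq (0.0 for unseen arcs)."""
    p = 1.0
    for a, b in zip(["<s>"] + seq, seq + ["</s>"]):
        p *= arcs.get(a, {}).get(b, 0.0)
    return p
```

For example, training on the toy chord sequences `["C", "G", "C"]` and `["C", "F", "G"]` and scoring `["C", "G"]` multiplies the arc weights P(C|&lt;s&gt;) · P(G|C) · P(&lt;/s&gt;|G).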

In our approach, we cast the problem of generating harmonic accompaniment in the speech recognition framework, treating melody notes as phonemes and chords as words, with a finite state machine for each model. Given an input melody sequence and an alphabet of possible chords, we estimate the most likely chord sequence.
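One standard way to realize "estimate the most likely chord sequence" in this framework is Viterbi decoding, with chords as hidden states and melody notes as observations. The sketch below is a minimal illustration under that assumption; the transition and emission probabilities, and the chord and note names, are toy values, not the project's trained models.

```python
# Illustrative sketch: Viterbi decoding of the most likely chord
# sequence for a melody. Chords are hidden states; melody notes are
# observations. All probability tables are toy values.

def viterbi(melody, chords, trans, emit, init):
    """Return the most likely chord sequence for `melody`.

    trans[a][b] = P(chord b follows chord a)
    emit[c][n]  = P(melody note n given chord c)
    init[c]     = P(the sequence starts with chord c)
    """
    # best[c] = probability of the best path ending in chord c;
    # unseen notes get a tiny floor probability instead of zero.
    best = {c: init[c] * emit[c].get(melody[0], 1e-9) for c in chords}
    back = []  # backpointers for recovering the best path
    for note in melody[1:]:
        prev, best, ptr = best, {}, {}
        for c in chords:
            p, arg = max((prev[a] * trans[a].get(c, 1e-9), a) for a in chords)
            best[c] = p * emit[c].get(note, 1e-9)
            ptr[c] = arg
        back.append(ptr)
    # Trace back from the best final chord.
    path = [max(best, key=best.get)]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))
```

With a two-chord alphabet ("C" and "G") and a melody built from C-major chord tones, the decoder returns a run of "C" chords, as one would hope.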

Code

The source code for this project can be found on GitHub.

Applications

Harmonic Accompaniment Generation App

The accompaniment generation approach was used as the core of an application that generates harmonic accompaniment to melodies created using a simple step sequencer.

The Harmonically Ecosystemic Machine; Sonic Space No. 7

The Harmonically Ecosystemic Machine; Sonic Space No. 7 is an interactive music performance system that builds on the combined work of the three contributing artist/researchers, Michael Musick, Jonathan Forsyth, and Rachel Bittner, combining Michael Musick's work on sonic ecosystems with the automatic accompaniment approach described above.

This piece invites participants to contribute musically by playing the instruments placed throughout the active space. In doing so, they join the system as collaborators and interrelated musical agents. In essence, this creates a chamber work in both senses of the term: the piece becomes an improvisation between the system and participants, and a work that activates the entire physical chamber it is installed within.

An excerpt can be heard here.