MUMT 307 Final Project: Real-time analysis and synthesis of vocal performances using biosignal and sound controls

This term, I continued to work on music performance tools involving biosensor inputs. In collaboration with Morgan Sutherland, we constructed a system for real-time synthesized embellishment of vocal improvisation.
Our purpose: to create a collaborative synthesis environment for vocal improvisation “without buttons”, and to give performers shared control of the processing through arm gestures and characteristics of their vocal output.

After months of planning, the construction came together in time for our class presentation. Watch an example of our performance to see what this was all about:

Vox Pluralis from Morgan Sutherland on Vimeo.

The setup was essentially the following:
Signal diagram of the MorgraFinn/Vox Pluralis

Components:
Two microphones
Two computers
Two signal sources
Two processing environments
Three synthesis processes

Control Data

The two control-signal sources were the biosignals and the audio signals from the mics. The biosensors were sEMG sensors placed on the palmar abduction muscles of both hands of both performers. Sensor values were passed to the MacBook Pro through a serial port at 256 Hz. A script by Bennett Smith forwarded the incoming values over UDP. The UDP stream was received in HandControl.pd for processing and conversion into values between 0 and 1 with stable endpoints.
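
Bennett Smith's script isn't reproduced here, but the serial-to-UDP bridge it performs can be sketched in a few lines of Python. This is a minimal sketch only: the port name, UDP address, and baud rate are placeholders, and it assumes one newline-terminated sensor reading per line.

```python
import socket
import serial  # pyserial

SERIAL_PORT = "/dev/tty.usbserial"   # placeholder: actual device name unknown
UDP_ADDR = ("127.0.0.1", 9000)       # placeholder: port HandControl.pd listens on

ser = serial.Serial(SERIAL_PORT, baudrate=9600, timeout=1)
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

while True:
    line = ser.readline().strip()    # one sensor reading per line, ~256 Hz
    if line:
        sock.sendto(line, UDP_ADDR)  # forward the raw value as a UDP datagram
```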

Setup on the MacBook Pro with test values being passed over UDP

The EMG signals are very jumpy and unstable, so a lot of smoothing is needed to give the performers control over the output values. In this case, the incoming number streams were converted to DSP signals. From these signals, the envelopes were extracted, and the envelope values were further averaged before constricting the range. Each performer's control values were bundled together and sent over UDP whenever there was a recorded change in value.
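
The actual smoothing lives in HandControl.pd, but the chain of steps can be sketched in Python as below. The coefficients, window length, and assumed 10-bit raw range are illustrative, not the values used in the patch.

```python
from collections import deque

class EMGSmoother:
    """Envelope follower + moving average + range constriction for one sensor."""

    def __init__(self, env_coeff=0.99, avg_len=64, lo=0.0, hi=1023.0):
        self.env = 0.0                    # envelope follower state
        self.env_coeff = env_coeff        # one-pole decay coefficient (illustrative)
        self.window = deque(maxlen=avg_len)
        self.lo, self.hi = lo, hi         # assumed 10-bit raw sensor range

    def step(self, x):
        # Rectify and follow the envelope of the jumpy EMG stream.
        self.env = max(abs(x), self.env * self.env_coeff)
        # Average the envelope over a short window to calm it further.
        self.window.append(self.env)
        avg = sum(self.window) / len(self.window)
        # Constrict to [0, 1] with stable endpoints.
        out = (avg - self.lo) / (self.hi - self.lo)
        return min(1.0, max(0.0, out))
```

In use, each smoothed value would only be bundled and sent over UDP when it differs from the previously sent value, matching the change-triggered sends described above.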

The controls derived from the audio data measured the amount of change in the spectral content of each signal and generated a ratio of signal 1's variability against signal 2's. This ratio was passed as a control to the modulator/carrier ratio in the cross-synthesis patch. The Max patch performing this evaluation was called HoldingTwo.maxpatch.
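
The exact mapping inside HoldingTwo.maxpatch isn't documented here; a plausible sketch of the ratio control, with illustrative clamping bounds and a guard against a silent second signal, might look like this:

```python
def mod_carrier_ratio(change1, change2, eps=1e-6, lo=0.5, hi=2.0):
    """Ratio of signal 1's spectral variability against signal 2's,
    clamped to a usable modulator/carrier range (bounds are illustrative)."""
    r = change1 / (change2 + eps)   # eps guards against division by near-zero
    return min(hi, max(lo, r))
```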

Max patch for comparing the variability of spectral content in two signals

Each audio input was recorded into a one-second buffer, which was read by two playheads offset by half a second. These replays were passed to [fft~] and [cartopol~] to extract the magnitude of the frequency content in each bin. The change was then measured as the difference between the two magnitude streams for each signal, and a leaky integrator kept track of the change per signal over a time interval.
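
The per-signal measurement can be sketched in Python with NumPy standing in for [fft~] and [cartopol~]. The leak coefficient is illustrative, and the function assumes equal-length frames from the current and half-second-delayed replays.

```python
import numpy as np

LEAK = 0.99  # leaky-integrator coefficient (illustrative)

def spectral_change(now_frame, past_frame, state):
    """Bin-wise magnitude difference between the current frame and the
    half-second-delayed replay, accumulated by a leaky integrator.

    now_frame, past_frame: equal-length arrays of time-domain samples.
    state: the integrator's running value; pass the return value back in.
    """
    mag_now = np.abs(np.fft.rfft(now_frame))    # magnitude per bin, like [cartopol~]
    mag_past = np.abs(np.fft.rfft(past_frame))
    diff = np.sum(np.abs(mag_now - mag_past))   # total change across all bins
    return LEAK * state + (1.0 - LEAK) * diff   # leak old change, mix in new
```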

The rest of the audio signal processing was arranged by Morgan, working with externals developed at IRCAM. Feel free to peruse his explanations of these at his website.
