In the Solo Response Project, I recorded my own responses, self-report and psychophysiological, to a couple dozen pieces of music every day for most of a month, generating a data set that lets me compare experiences as captured through these measurement systems. The data set has mostly been used behind the scenes to tune signal processing and statistics, but there is plenty to learn about the music as well, given how I reacted to these stimuli.
On the project website, there is now a complete set of stimulus-wise posts sharing plots of how I responded to these pieces of music as they played and over successive listenings. Each post includes a recording of the stimulus (more or less) and figures for each of:
Continuous felt emotion ratings,
surface electromyography of the face (zygomaticus and corrugator) and of the upper trapezius,
heart rate and respiration rate,
skin conductance and finger temperature.
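Plotting these channels side by side requires signals recorded at different sampling rates to share a time base. Here is a minimal sketch of that alignment step on synthetic stand-in data; the rates and channel names are hypothetical, not the project's actual configuration:

```python
import numpy as np

def resample_to_common(t_chan, x_chan, t_common):
    """Linearly interpolate one channel onto a shared time base (seconds)."""
    return np.interp(t_common, t_chan, x_chan)

# Synthetic stand-ins: skin conductance at 32 Hz, heart rate at 4 Hz.
t_sc = np.arange(0, 10, 1 / 32)
sc = 5 + 0.5 * np.sin(2 * np.pi * 0.1 * t_sc)
t_hr = np.arange(0, 10, 1 / 4)
hr = 70 + 3 * np.sin(2 * np.pi * 0.05 * t_hr)

# A 10 Hz shared grid puts every channel on the same axis for plotting.
t_common = np.arange(0, 10, 1 / 10)
sc_10hz = resample_to_common(t_sc, sc, t_common)
hr_10hz = resample_to_common(t_hr, hr, t_common)
```

Once aligned, each channel can be stacked into one figure against the same stimulus timeline.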
The text doesn’t explain much, but those familiar with any of these signals will find it interesting to see how a single participant’s responses can vary over time. Some highlights from the amalgam above (left to right, top to bottom):
The familiar subito fortissimo [100s] and continued thundering in O Fortuna from Carmina Burana is so effective that my skin conductance kept peaking through that final section. (At least on those days when GSR was being picked up at all.)
Some instances of respiratory phase alignment were unbelievably strong, for example to Thieving Boy by Cleo Laine [85s].
Evidence that I still can’t help but smile at the way Charles Trenet pronounces the wordplay in “Boum!” (“flic-flac-flic-flic” [60s]).
Self-reported felt emotional responses can change from listening to listening, particularly to complex stimuli like Beethoven’s String Quartet No. 14 in C-sharp minor.
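Respiratory phase alignment like the Thieving Boy example can be quantified from the analytic signal of the breathing trace. A hedged sketch on synthetic breathing data follows; the mean resultant length of the phase difference is one common alignment score, not necessarily the one used in these posts:

```python
import numpy as np
from scipy.signal import hilbert

def instantaneous_phase(resp):
    """Instantaneous respiratory phase (radians) via the analytic signal."""
    centered = resp - np.mean(resp)
    return np.angle(hilbert(centered))

# Synthetic breathing at 0.25 Hz (one breath every 4 s), sampled at 32 Hz.
fs = 32.0
t = np.arange(0, 60, 1 / fs)
phase_a = instantaneous_phase(np.sin(2 * np.pi * 0.25 * t))
phase_b = instantaneous_phase(np.sin(2 * np.pi * 0.25 * t + 0.3))

# Mean resultant length of the phase difference: 1 = perfectly locked.
alignment = np.abs(np.mean(np.exp(1j * (phase_a - phase_b))))
```

Two listenings whose breathing locks to the music should show a consistent phase difference, so this score stays near 1.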
I’ll be writing about the responses to each stimulus and the methodological decisions behind the analysis, linking to data and code repositories for anyone keen on playing with these numbers and methods, and maybe even getting some special guests to post on the responses collected.
So if you are keen, pick up the RSS feed, bookmark the blog, or follow me on Twitter, where I’ll point to interesting posts from time to time.
In the coming months, I’ll be taking results from the Solo Response Project to several conferences, and reviewer feedback has me worried about people dismissing this data because I collected it from myself. I keep getting distracted by these imaginary confrontations with suspicious researchers, so it’s time I lay down some concisely expressed arguments to appease the hypothetical skeptics.
Before I forget everything, let me get down the setup details of the experiment I ran last summer (2012). Besides selecting the 25 pieces and working out where I was going to run the experiment, there were a lot of other relevant details. The following descriptions are for the purpose of documenting the experiment’s methodology; I hope anyone interested in employing these methods will seek higher authorities for instruction in best practices.
Stephen McAdams was kind enough to let me borrow some CIRMMT equipment (Thought Technology’s ProComp Infiniti and a pile of sensors) and occupy some of his lab space for a month. Though a little casual by some standards, I made myself a cubicle out of spare sound absorption panels in a large room that was usually unoccupied while I was recording. To get data from the ProComp in a useful format, Bennet Smith helped sort out some of his old scripts that conveniently time-stamped the physiological sensor data and packaged it as UDP messages. That left me with setting up a system to run the experiment, provide a behavioural response interface, and save the recorded responses in a reliable fashion.
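I won’t reproduce Bennet’s scripts here, but the core idea, stamping each UDP datagram of sensor data with its arrival time, can be sketched as follows, with a loopback sender standing in for the ProComp stream (the message format is invented for illustration):

```python
import socket
import time

def recv_timestamped(sock, n_packets):
    """Receive n UDP datagrams, pairing each with an arrival timestamp."""
    samples = []
    for _ in range(n_packets):
        data, _addr = sock.recvfrom(4096)
        samples.append((time.time(), data))
    return samples

# Loopback demonstration: one socket sends, another receives.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))            # OS picks a free port
rx.settimeout(2.0)                   # don't hang if a packet is lost
port = rx.getsockname()[1]

tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(b"gsr,4.21", ("127.0.0.1", port))  # hypothetical payload

received = recv_timestamped(rx, 1)
tx.close()
rx.close()
```

Saving the timestamp alongside each payload is what makes later alignment with the stimulus timeline possible.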
While I am hoping to soon start blogging about responses to each stimulus used in the Solo Response Project, it will take a bit longer to get all the signals tidied and an analysis format settled. In the interim, here is a simple version of the mini talk I presented at the most recent NEMCOG meeting, with links to audio for each example.
Variability of Emotional Valence: Inconsistencies in self-report continuous emotion ratings – Finn Upham, New York University
Listeners often report feeling emotions in response to music, whether happy or sad. Empirical work on continuous reports of felt emotion is, however, often challenged by substantial variation in the emotional dynamics reported by different participants. This variability is thought to be caused in part by differences in individual listeners’ musical expertise, sensitivity, and cultural background. Working with multiple responses from a single human subject should then make it easier to explore which other factors contribute to the variation in felt emotions reported during music listening.
This summer, I collected continuous responses from myself (see the Solo Response Project post). The analysis to follow uses the two-dimensional felt emotion ratings (Arousal × Valence) as well as information from notes collected during the listening sessions.
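One simple way to look at listening-to-listening variation in such ratings is the per-timepoint spread across repetitions. A sketch on synthetic data follows; the array shapes and noise level are hypothetical, not the project's actual numbers:

```python
import numpy as np

# Hypothetical layout: ratings[listening, timepoint, dimension],
# dimension 0 = arousal, 1 = valence, on a common time grid.
rng = np.random.default_rng(0)
n_listenings, n_times = 5, 200
base = np.stack([np.linspace(-0.5, 0.5, n_times),          # drifting arousal
                 np.sin(np.linspace(0, 2 * np.pi, n_times))], axis=1)
ratings = base + 0.1 * rng.standard_normal((n_listenings, n_times, 2))

# Per-timepoint spread across listenings, one curve per dimension.
spread = ratings.std(axis=0)                 # shape (n_times, 2)
arousal_var, valence_var = spread[:, 0], spread[:, 1]
```

Peaks in these spread curves flag the moments where repeated listenings disagreed most, which is exactly where the session notes become useful.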