Activity Analysis is a descriptive statistical method for interpreting the temporal coordination of measured events across synchronously recorded time series. It was developed to investigate collections of continuous responses to music, such as ratings of tension.
I’ve been developing Activity Analysis since 2007, when I was handed a collection of continuous felt emotional intensity ratings from audience members attending a concert of orchestral Mozart music and asked to do what I could with them. Looking at the music cognition literature, I wasn’t all that pleased with the analytic options already in practice. Some were numerically dubious, others more statistically justified, but none did a very good job of telling me WHEN interesting or important stuff was happening in the responses. There are lots of questions one can put to time series data, some of them good, many of them less useful, and my master’s thesis and the continuous response analysis wiki came out of a long process of understanding what kinds of questions we have asked and could ask of temporal traces of experience.
The advantage of continuous responses for music is that time makes a nice firm causal anchor on what the listener knows and can respond to. While someone can have expectations (and memories) reaching into the future, and thoughts and feelings tied to the notes that have passed, the chronology of presentation and the present are essential to the reception of music (and many other parts of conscious experience). We have been recording continuous traces of listener experience for decades, but the temporal power of these data has mostly lain dormant.
Well, I’m not one to let sleeping dogs lie (or sleeping dragons, for that matter), so I pulled up my numerical bootstraps (har har) and tried to find a way to look at the WHEN of these responses in a way that could give a clue as to what was accident and what might be replicable. I spent a lot of time reinventing the wheel, not knowing it was just on the other side of the bookshelf. Every time an idea I’d been playing with appeared in someone else’s work, it was greatly validating: there were names for things I was doing and numerical choices I was making, which made my work seem less like the deranged efforts of a music scholar with a mathematical bent and more like a defensible statistical tool.
After six years, several bursts of insight (many via the writings of others), and the patience of a few key persons, I’ve got a set of tools which do what I needed when I first looked at continuous ratings of the Marriage of Figaro Overture. I can choose a kind of response behaviour, an event, and look at how that event is distributed across a collection of responses, the activity. I can test whether that activity is unlikely in its distribution, and I can even look moment by moment to evaluate which time points show unremarkable concurrent activity and which show surprising agreement across responses. And you can too: the activity analysis toolbox is now released to the public on GitHub.
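To make the event/activity idea concrete, here is a minimal sketch in Python. This is not the toolbox itself; the choice of event (a momentary rise in a rating), the simulated rating data, and the binomial null model are all illustrative assumptions.

```python
import numpy as np
from math import comb

# Hypothetical collection: 20 synchronous continuous ratings, 300 samples each,
# simulated here as random walks purely for illustration.
rng = np.random.default_rng(0)
ratings = np.cumsum(rng.normal(size=(20, 300)), axis=1)

# Define an "event" as a moment where a rating increases.
events = np.diff(ratings, axis=1) > 0          # shape (20, 299)

# "Activity" is the count of responses showing the event at each moment.
activity = events.sum(axis=0)                  # shape (299,), values 0..20

# A simple null: each response produces the event independently at the pooled
# base rate, so concurrent counts are roughly binomial.
n = events.shape[0]
p_event = events.mean()

def binom_sf(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Probability of seeing at least the observed concurrent activity by chance,
# moment by moment.
p_values = [binom_sf(int(a), n, p_event) for a in activity]

# Moments with surprisingly high agreement across responses.
surprising = [t for t, pv in enumerate(p_values) if pv < 0.01]
```

The moment-by-moment p-values separate unremarkable concurrent activity from points of surprising agreement, which is the distinction described above.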
Activity analysis was initially designed for collections of continuous ratings, but it has grown to be applicable to other types of time series collections, in particular psychophysiological measures of emotion and response. The approach is data-driven, making relatively few assumptions about the nature of the time series (depending on the tests), and could really be applied to any data consisting of roughly 12 to 60 synchronous time series in which the events of interest occur at a reasonable rate (what counts as a reasonable rate deserves reevaluation).
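One way a test can stay data-driven, making few assumptions about the series themselves, is to build the null distribution from the data by permutation. The sketch below is one such approach, not necessarily the one the toolbox uses: each response is circularly shifted by a random offset, which preserves its internal event timing and rate while destroying any alignment between responses.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical event matrix: 15 responses x 200 time points, True where a
# response shows the event of interest at that moment (rate is illustrative).
events = rng.random((15, 200)) < 0.1

# Observed test statistic: peak concurrent activity across the recording.
observed_peak = events.sum(axis=0).max()

def shifted_peak(events, rng):
    """Peak concurrent activity after randomly circular-shifting each response."""
    n, t = events.shape
    shifts = rng.integers(0, t, size=n)
    shifted = np.stack([np.roll(row, s) for row, s in zip(events, shifts)])
    return shifted.sum(axis=0).max()

# Null distribution of the peak under misaligned (shifted) responses.
null_peaks = np.array([shifted_peak(events, rng) for _ in range(1000)])

# One-sided permutation p-value (with the standard +1 correction).
p_value = (np.sum(null_peaks >= observed_peak) + 1) / (len(null_peaks) + 1)
```

Because the null comes from the data's own structure, nothing is assumed about the distribution of the underlying measurements, only that the series are synchronous.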
I plan on posting a version of a workshop on the toolbox that I gave to my old lab, one of these days when I find myself twiddling my thumbs. Soon.