Deviations in Quiet Breathing during Music Listening


(This post is derived from a poster presentation at the 14th International Conference on Music Perception and Cognition, held in San Francisco, CA, USA, July 4th-9th, 2016)

Music listeners often fall into quiet breathing, and yet music has been shown to influence when individual listeners inhale. Here is an explanation of how deviations in quiet breathing can be measured in the respiratory sequence, along with tests of how these deviations depend on the musical work.

Defining Quiet Breath

When we are at rest and not preparing to act or thinking about acting, our bodies generally fall into the state of quiet breathing:

  • Moderate depth
  • Short inspiration, ~1 s
  • Short, elastic expiration, ~2.2 s
  • Stable periodic cycle

Quiet breathing is efficient and discreet, a respiratory sequence that does not require attention or conscious control. Compared to breathing behaviour during physical actions, the regularity of quiet breathing suggests that it should be relatively easy to model, as the sketch below illustrates.
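To make that claim concrete, here is a minimal sketch in Python of such a model, using the timing values from the list above. The function name and the rise-then-recoil shape are my own illustrative assumptions, not part of the original analysis:

```python
# A minimal sketch of the quiet-breathing cycle described above:
# ~1 s inspiration, ~2.2 s elastic expiration, repeated at a stable period.
import numpy as np

def quiet_breathing(duration_s=30.0, t_in=1.0, t_out=2.2, fs=50.0, depth=1.0):
    """Synthesise an idealised quiet-breathing trace (volume above resting level)."""
    t_cycle = t_in + t_out                    # one full breath, ~3.2 s
    t = np.arange(0.0, duration_s, 1.0 / fs)  # sample times in seconds
    phase = np.mod(t, t_cycle)                # position within the current breath
    trace = np.where(
        phase < t_in,
        depth * phase / t_in,                             # steady rise during inspiration
        depth * np.exp(-(phase - t_in) / (t_out / 3.0)),  # elastic (passive) recoil
    )
    return t, trace

t, volume = quiet_breathing()
```

Comparing a recorded respiratory sequence against a trace like this, or simply against the expected ~3.2 s period, is one straightforward way to spot the deviations discussed here.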


Breathing in Music: Measuring and Marking Time


(This post is derived from a poster presentation at the Making Time in Music conference, hosted by the Faculty of Music of Oxford University, Sept 14th-16th, 2016)

Abstract

Our breath marks time for the entirety of our lives. Whether a period of 2 seconds or 20, we know roughly how it will continue or be adjusted to new demands, and this need for fresh air imposes an inescapable rhythm just beyond what is readily heard as metrical. We use breath to communicate with speech and affective displays, but we also monitor each other’s breathing and use this information to coordinate interactions: breathing in anti-phase when in dialogue, or together when synchronising actions. Obviously, musical activities such as singing and playing wind instruments involve exhalations and the particular physical constraints of our respiratory system. Other components of breath are used to prepare and set the timing of actions. For example, the inhalation at the beginning of a piece defines tempo and intensity for many solo performers and small ensembles, and some types of musicians are extremely practiced at picking up all that is needed to play in synch from one careful gasp. We might consider breath to be auxiliary to the actions of music making, just a means to the sound, but this biological system may play a fundamental role in our understanding of music and musical time. There is growing evidence that listening to music can engage our respiratory system, drawing us into a specific physical division of time. This coordination is not so strict as breathing with the heard performers, but rather a subtle alignment of phase at specific moments in a particular piece. For this to occur, even intermittently, our respiratory system must be engaged in the work of understanding what we hear. Voluntarily or unconsciously, breathing informs synchrony on the scale of milliseconds, seconds, and minutes, and this phasic and adaptive system promises to be powerful in defining musical time both physically and metaphorically.


About that dissertation


I’d been keeping quiet about my thesis research prior to running experiments, but an update is now long overdue.

My dissertation is on the measurement of changes in respiration during music listening, capturing when and how a (seated, attentive) listener’s breathing changes. It’s messy measurements from multiple data sets, and musical stimuli of many genres, and lots of heavy non-parametric statistics, and it makes me smile every time I work on it. Considering the respiratory cycle is not terribly difficult to track passively, the more information we can gather via this discreet signal, the better, right? There are a lot of potential uses, a lot of different types of information we might glean from this signal, but I am hesitant to write out these possibilities before I’ve completed a few more tests.
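For readers curious what passively tracking the respiratory cycle can look like, here is a minimal sketch, assuming a respiration-belt style trace sampled at fs Hz. The trough-based onset detection and the ~2 s minimum spacing are illustrative choices of mine, not the dissertation’s actual pipeline:

```python
# A hypothetical sketch of pulling breath timing out of a recorded respiration trace.
import numpy as np
from scipy.signal import find_peaks

def inhalation_onsets(trace, fs):
    """Return approximate inhalation-onset times (s) from a respiration trace."""
    trace = np.asarray(trace, dtype=float)
    # Troughs of the signal approximate the start of each inhalation; requiring
    # ~2 s between detections rules out small wobbles within a single breath.
    troughs, _ = find_peaks(-trace, distance=int(2.0 * fs))
    return troughs / fs

def breath_intervals(onsets):
    """Breath-to-breath intervals; unusually long or short ones flag departures
    from quiet breathing."""
    return np.diff(onsets)
```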

A(nother) definition of music


At last summer’s SMPC, I shared a quasi-interactive poster with my most current definition of music. The poster invited viewers to add examples or counter-examples of musical experiences via post-its wherever it seemed spatially appropriate. Since then, the poster has been in the PhD office at NYU, and a couple more edge cases have been added. Still, the definition stands.

It goes as follows:

Music is a broadcast signal enabling sustained concurrent action.

My claim is that these six terms form a necessary condition for something to be perceived as music or musical. Perception here is relevant as our processing of sensory information adapts to extract useful information from sounds and signals, and the relevance of music and its various qualities is displayed in the structure of these perception strategies. But by using our perceptual processes to define music, the associated experiences might not all fall within our culture’s delimitations on the concept.


The attached poster does the work of explaining each of the terms and their relevance, but I’ll add an important challenge to the definition.

“What about the wildebeests?”

This was asked by a fellow grad student, with a grin, but the question is reasonable. A herd of wildebeests running sounds and feels thunderous, any member of the herd would hear it as coming from its herd-mates, and this sound inspires a strong impulse to run too, an obvious instance of sustained concurrent action. So is the sound of a running herd music to a wildebeest’s ears? I would have to say maybe, conditioned on the two remaining terms: signal and enabling. For the sound to be a signal, it would have to transmit some kind of intentional herd-running, individual members falling into a special running style, with perhaps some extra regularity or heaviness to their gait. The enabling bit is a little trickier. Music doesn’t determine action; instead, it gives us some well-fitting options. For the sound of a running herd to enable a single wildebeest’s actions, said individual wildebeest should be able to resist the suggestion to join in and have some choice as to how, if the suggestion is accepted. Having no familiarity with the running habits of ungulates of any kind, I can’t be more specific.

A similar human case came to mind recently when I crossed paths with #OrangeVest, a performance art piece by Georgia Lale about the ongoing Syrian Refugee crisis. A block of some twenty adults in orange life vests were marching slowly and silently through the streets of New York, with helpers around to shoo traffic and explain the action. In an instant, I recognized the deliberateness in their movements, their aura of stillness, and I felt the tug to step in line. But instead, I waited for them to pass and looked up the project later. If you feel inspired to lend some (more) support to the cause, consider donating to MOAS, Refugee Support Network, or your preferred means of distributing humanitarian aid.

From Matlab to Python


After years of inexplicable failure, I’ve finally gotten numpy and scipy to play nice on my computer. (Anaconda finally installed properly once I moved to Yosemite; who knows what was breaking the system before…) So now I can finally start converting my Matlab toolbox for Activity Analysis to open-source Python and, in the process, share analyses online with IPython notebooks. Who’s excited? Well, I am. I think it will make it easier to follow the calculations and inferences, particularly for those who aren’t inclined to download a toolbox and run a demo on their own machines.

Music and coordinated experience in time: Back to Activity Analysis


There are two comically extreme positions on how music (or really any stimulus) affects observers. At one end is the position that all of our experiences are equivalent, dictated by the common signal; at the other, individual subjectivities make our impressions and reactions irreconcilable. In studying how people respond to music, it’s obvious that the reality lies somewhere in the middle: parts of our experience can match those of others, though differences and conflicts persist. I’ve spent years developing this thing called activity analysis to explore and grade the distance between absolute agreement and complete disarray in the responses measured across people sharing a common experience.

As people attend to a time-varying stimulus (like music), their experience develops moment by moment, with changes prompted by events in the action observed. What we have, in activity analysis, is a means of exploring and statistically assessing how strongly the shared music coordinates these changes in response. So if we are tracking smiles in an audience during a concert, we can evaluate the probability that those smiles are prompted by specific moments in the performance, and from there have some expectation of how another audience may respond.
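To give a rough sense of that logic (a simplified sketch only, not the Activity Analysis toolbox itself), one non-parametric way to pose the question is to ask whether response events from different audience members pile up in the same moments more often than randomly shifted versions of the same responses would. The event times, bin width, and function names below are illustrative assumptions:

```python
# A simplified stand-in for the question Activity Analysis asks: do listeners'
# response events cluster at the same moments more than chance would allow?
import numpy as np

rng = np.random.default_rng(0)

def peak_coincidence(event_times, duration_s, bin_s=2.0):
    """Largest number of listeners with at least one event in any single time bin."""
    bins = np.arange(0.0, duration_s + bin_s, bin_s)
    counts = np.zeros(len(bins) - 1)
    for times in event_times:          # one array of event times per listener
        hits, _ = np.histogram(times, bins=bins)
        counts += (hits > 0)           # each listener counted at most once per bin
    return counts.max()

def coordination_p_value(event_times, duration_s, n_perm=2000):
    """Non-parametric test: circularly shift each listener's events and re-count."""
    observed = peak_coincidence(event_times, duration_s)
    null = np.empty(n_perm)
    for i in range(n_perm):
        shifted = [np.mod(t + rng.uniform(0.0, duration_s), duration_s)
                   for t in event_times]
        null[i] = peak_coincidence(shifted, duration_s)
    return (np.sum(null >= observed) + 1) / (n_perm + 1)
```

Here event_times would be a list with one array of response times (in seconds) per listener; a small p-value suggests that specific moments in the shared performance, rather than chance, are prompting the responses.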

If everyone agreed with each other, this would not be necessary, and if nothing was common between listeners’ experiences, this would not be possible. Instead, empirical data appears to wander in between, and with that variation comes the opportunity to study factors nudging inter-response agreement one way or the other. We’ve seen extreme coherence, that of the crowd singing together at the top of their lungs in a stadium saturated with amplified sound, and polite but disoriented disengagement is a common response to someone else’s favourite music. We need to test the many theories on why so many different responses (and distributions of responses) arise from shared experiences, and Activity Analysis can help with that. Finally.

Here’s hoping I can get back to sharing examples of what this approach to collections of continuous responses makes possible. The data and analyses have waited too long already.