Musical prediction in the performer and the listener: evidence from eye movements, reaction time, and TMS
Hadley, Lauren Victoria
Musical engagement can take many forms, from the lone pianist rehearsing in their study, to the headphone-wielding teenager on the bus, to the orchestral musician on stage. Although much music research dissociates the performer from the listener (a differentiation starkly demonstrated in the layout of the concert hall), in this thesis I consider the performer and listener as two sides of the same coin. The thesis therefore empirically investigates musical prediction in the solo performer and the solo listener, then brings the two together by investigating musical prediction in a turn-taking musical interaction. I begin by presenting a theoretical account of musical prediction. I propose a common mechanism underlying prediction during both music performance and music listening, based on motor simulation of observed (seen or heard) music. This account develops that of Pickering and Garrod (2013), and is proposed to span communicative joint-action contexts. I then present three sets of experiments. In the first, I use eye-tracking to show that pianists incrementally process musical progressions during sight-reading. By measuring the rate of regression from an anomalous musical bar, I demonstrate that musicians look back to earlier portions of a melody more often when they read a bar forming a less common musical progression than one forming a more common progression. This effect parallels that found for anomalous word reading in language, and provides a promising new paradigm through which to investigate music processing. In the second set of experiments, I use the timing of turn-end judgements to show that non-expert music listeners use tonality cues to predict the end of a musical solo. Presenting listeners with musical turns in two styles, jazz improvisation and free improvisation, I show that the presence of a tonal framework improves the accuracy of turn-end judgements.
I confirm that this benefit is based on tonal information by filtering the extracts to either include or exclude pitch information. When pitch information is removed from the (tonal) jazz improvisations, turn-end accuracy falls. No such decrement results from removing pitch information from the (non-tonal) free improvisations, or from removing other spectral information. In the third set of experiments, I use transcranial magnetic stimulation (TMS) to investigate turn-taking. Turn-taking involves listening to a partner, predicting when they will end (and hence when to come in oneself), and entering accurately for one's own part. In my first experiment I apply TMS to the primary motor cortex and suggest that the predictability of a partner's part modulates the time course of one's own motor preparation. In my second experiment I apply TMS to the dorsal premotor cortex (dPMC), a region implicated in motor simulation, and demonstrate that when a partner's part is in one's own motor repertoire, the dPMC plays a causal role in the accuracy of one's own performance. This involvement of the dPMC is consistent with motor simulation being used to predict a partner's ending in a turn-taking context. Together, these sets of experiments explore prediction in music production and comprehension. My studies of music reading and music listening indicate that prediction is similar across comprehension domains. My studies of interaction indicate that comprehension may depend on production processes. I therefore suggest that my findings together imply that predictions made by performers and listeners are based on similar processes, and more specifically, that prediction during comprehension may involve motor simulation.