Tag Archives: prosody

The Song Is You

Long fascinated by the crossover between music and language, I was delighted to come across a dissertation by Jonathan Pearl entitled Music and Language: The Notebooks of Leoš Janáček. The Czech (or more accurately Moravian) composer was taken by the idea that character was manifest in prosody and strove to come up with melodies for his operatic characters which were true to the music of their speech.

Jonathan Pearl does a much better job of explaining it – either here in the full-length dissertation or here in a shorter version (look for Eavesdropping with a Master: Leoš Janáček and the Music of Speech). Very interesting reading!

Illustrating this idea with a single YouTube clip is tricky, so instead let me embed a clip of one of Janáček’s most famous non-operatic works – the final movement of his Sinfonietta, conducted here by Pierre Boulez. Listen out for great trumpet section work at 5:00: http://www.youtube.com/v/d5QBSMjdIFI?rel=0

Prosody Revisited

Tidying up at the end of a primary school day, I was delighted when two P5 girls helped out without being asked. For some reason best known to themselves, they burst into an animated version of Two Little Dickie Birds. Then one suggested, “Why don’t we play that song?” I replied, “We could, but I’m wondering if it’s more of a poem than a song. If we took the words away, would there be any tune left for us to play?” After a moment’s reflection, one said:

Da dada dada da,  dada dada da  –

Da da dada – ,  da da da 

Dada da dada – ,  dada da da

Da da dada – ,  da da da  –

The inflections in the voice were identical to the version with words.

So, what is the prosodic equivalent of the popular line, “I’m a poet and didn’t know it”?

Prosody

Having recently attended Music: An Explanation by a Guitar Hero, which concluded with some deliberations on prosody (the music of speech which amplifies meaning), I chanced upon an inspirational TED talk by the film critic Roger Ebert, who lost his lower jaw, and with it his speech, to cancer.

Exploring text-to-speech technology, he found that, unless he entered very time-consuming XML coding, the prosody was never quite right. Work is currently in progress with the Edinburgh-based company CereProc to refine his voice, using recorded material from Ebert’s television archive. Exploring their site, I was quite astonished at how far along the speech synthesis road things have travelled. You can hear some of their voices here, or type in your own text and choose a voice here. While CereProc finish their refinements, Ebert is using Apple’s Alex voice.
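
To give a flavour of what that hand-coding involves, here is a minimal sketch of SSML-style prosody markup – my own illustrative example, not Ebert’s actual files nor CereProc’s format – in which every pitch movement, pause and emphasis has to be spelled out by hand:

    # Illustrative only: SSML-style markup of the kind used to steer a
    # synthetic voice's prosody by hand. The tags and values below are my
    # own assumptions, not taken from Ebert's or CereProc's actual workflow.
    ssml = """<speak>
      It's not just <prosody pitch="+15%" rate="90%">what</prosody> you say,
      <break time="300ms"/>
      it's <emphasis level="strong">how</emphasis> you say it.
    </speak>"""
    print(ssml)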

It is very touching to see how Ebert responds during the talk. The words are his own but his wife and two other close friends help out with reading. Despite the fact that the oral delivery is at one remove, he gestures as though delivering the words personally.

Let me, once again, flag up some interesting lectures on prosody by Peter Roach.

Music: An Explanation by a Guitar Hero

Better late than never? Having been on holiday, I’m a little late with this short write-up of an Edinburgh International Science Festival event but, as it was so good, here goes.

Dr. Mark Lewney is a physicist and a guitarist. Last year I went to his excellent Rock Guitar in 11 Dimensions and reviewed it here. This year he presented Music: An Explanation by a Guitar Hero – a look at the physics underlying sound and music. Without wishing to spoil the show for those who may have the chance to see it later, let me say that he took us on an engaging journey: from the sine wave, through the world of harmonics (overtones) and the importance of the fundamental, 4th and 5th notes, to the short step from there to the pentatonic scale, which is used in folk musics across the world – notably in the blues.
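
As a rough illustration of that short step (my own sketch, not something shown in the talk), stacking pure fifths – the 3:2 frequency ratio thrown up by the harmonic series – above a fundamental and folding the results back into a single octave already produces a major pentatonic scale:

    # Rough sketch, not from the talk: derive a major pentatonic scale by
    # stacking pure fifths (frequency ratio 3:2) and folding into one octave.
    fundamental = 220.0  # Hz; A3, an arbitrary starting pitch

    # Five pitches a perfect fifth apart: A, E, B, F#, C#
    stacked = [fundamental * (3 / 2) ** n for n in range(5)]

    def fold_into_octave(freq, base):
        """Halve the frequency until it lies within one octave of the base."""
        while freq >= 2 * base:
            freq /= 2
        return freq

    scale = sorted(fold_into_octave(f, fundamental) for f in stacked)
    print([round(f, 1) for f in scale])  # [220.0, 247.5, 278.4, 330.0, 371.2]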

He finished the talk with some thoughts on music’s purpose in our evolution – the topic of much debate (such as from 2:24 to 7:03 in this video). One thing is clear, though: prosody (the music of speech) matters – it’s not just what you say, it’s how you say it.

This was an excellent, funny and informative presentation. This cross-curricular take on life is, I feel, at the heart of CfE.

You can see Mark Lewney in action in YouTube videos here.

My further explorations on prosody took me here, to a fascinating series of lectures by Peter Roach.

p.s. 

I forgot to mention one of the most elucidating facts of the evening – and one of the simplest. 

When non-musicians ask musicians why orchestras need conductors, there are many common answers: 

  • apart from waving the baton, the conductor is the person who has led rehearsals and is in charge of the interpretation
  • orchestral players can end up sitting many metres away from their colleagues and it’s hard to hear – conductors can ensure the overall balance and timing of the group
  • the conductor is the forerunner of the drummer and the mixing desk

However, Mark Lewney’s audience-participation illustration was far more direct and memorable. He asked the audience to clap to a beat which, having started it, he left in our hands – with our eyes shut. The timing soon began to drift. He asked us to open our eyes and sync with him. The timing improved. Closing our eyes again, the timing deteriorated. Opening them, and following his lead, we were back in sync. The reason? Light travels approximately 874,000 times faster than sound*. Relying on the sound, we had to wait for it to bounce off the back wall of the hall; syncing to his visible beat, we were exactly in time.

 * Speed of sound

  •  343.2 metres per second
  • 1,236 kilometres per hour
  • 768 miles per hour

 Speed of light 

  • 299,792,458 metres per second
  • 1,079,000,000 kilometres per hour
  • 671,000,000 miles per hour
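
For anyone who wants to sanity-check the ratio, and see why the delay is audible at the scale of a concert hall, here is a quick back-of-the-envelope calculation (the 30-metre hall depth is my own assumed figure, not one from the talk):

    # Back-of-the-envelope check of the light/sound ratio and the audible
    # delay across a hall. The 30 m hall depth is an assumed figure.
    speed_of_sound = 343.2          # metres per second (dry air, ~20 °C)
    speed_of_light = 299_792_458    # metres per second

    ratio = speed_of_light / speed_of_sound
    print(f"Light is roughly {ratio:,.0f} times faster than sound")  # ~873,521

    hall_depth = 30.0  # metres from the stage to the back of the hall
    sound_delay_ms = hall_depth / speed_of_sound * 1000
    light_delay_ms = hall_depth / speed_of_light * 1000
    print(f"Sound takes {sound_delay_ms:.0f} ms to cross {hall_depth:.0f} m")  # ~87 ms
    print(f"Light takes {light_delay_ms:.6f} ms to cross the same distance")   # ~0.0001 ms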

A question of tone

One of the themes of this blog, if such a thing could be said to exist, is the endeavour to see music in its wider setting (society, culture), through exploring links with other disciplines (language, science). In that regard, I’m always grateful to receive invitations to talks in Edinburgh University’s Institute for Music in Human and Social Development (IMHSD).

On Tuesday 2nd November, I attended a talk by Professor Bob Ladd entitled Suprasegmental phonemic distinctions in Dinka speech and song. The Dinka people form the largest ethnic grouping of Southern Sudan. Allow me to quote Professor Ladd’s own summary of Dinka song tradition:
Making and singing songs is an integral part of Dinka culture. Songs are used to chronicle all aspects of individual and communal experience: to tell stories, to insult rivals or enemies, to praise family or cattle, and so on. Songs are typically sung solo or in unison, accompanied (if at all) by clapping or simple drumming. Rhythm is generally a simple regular pulse, and song segments or phrases may be of different lengths with no overarching metrical structure. Scale is uniformly pentatonic.

For those who, like me, are interested in languages but are a little vague about the vocabulary of the science of linguistics, permit me to attempt to unpack the title of the talk – Suprasegmental phonemic distinctions in Dinka speech and song:
  • Segment – the individual sounds which make up speech
  • Phoneme – the smallest segment is known as a phoneme, e.g. the word bad has only one syllable, but three phonemes: b – a – d
  • Suprasegmental – a phenomenon can be described as suprasegmental when it takes place over two or more segments e.g. prosody, tone, stress.
Professor Ladd described to us his work as part of a wider project – Metre and melody in Dinka speech and song. Specifically, he and his colleagues are exploring how a language which relies on musical phenomena (pitch, duration, timbre) for meaning is set to music. Do the two languages intuitively come together? Is there a clash of pitch and duration imperatives? If so, which one yields, and when?
Three musical components of the prosody of Dinka (a Nilotic language) were featured:
  • Tone – there are four tone phonemes – high, low, rising, falling
  • Quantity – there are three lengths of vowel – short, medium & long
  • Voice Quality – there are two voice qualities – modal (normal voice) and breathy (somewhere along the journey from whispering to normal speaking)
The combination of these sound options, when mixed with seven possible vowel sounds, allows for 168 possibilities (4 tones × 3 lengths × 2 voice qualities × 7 vowels), most of which occur in regular usage. At first glance, it seems impossible that such a spectrum could be reduced in any way without meaning being compromised.
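Out of curiosity, here is a quick count of those combinations (my own illustration – the vowel symbols are mere placeholders, not the actual Dinka vowel inventory):

    # Count the combinations described above: 4 tones x 3 vowel lengths
    # x 2 voice qualities x 7 vowels. Vowel symbols are stand-ins only.
    from itertools import product

    tones = ["high", "low", "rising", "falling"]
    lengths = ["short", "medium", "long"]
    qualities = ["modal", "breathy"]
    vowels = ["a", "e", "i", "o", "u", "E", "O"]  # seven placeholder symbols

    combinations = list(product(tones, lengths, qualities, vowels))
    print(len(combinations))  # 168
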
One further feature essential to understanding the rhythmic aspect of setting of words to music is that most stems are monosyllabic – consonant-vowel-consonant or consonant-glide-vowel-consonant.
Here is an example of such singing: http://www.youtube.com/v/lz6aPMsdY5I?rel=0
Despite the many musical features of this language, it would seem that linguistic constraints are over-ridden by musical ones, without any obvious loss of understanding. Professor Ladd’s own parallel with this was that we can easily understand people when they whisper, despite the loss of pitch and timbre involved.
I found myself wondering whether – given the monosyllabic nature of the language, and the prevalence of the pentatonic scale – there was a tendency to align important words, e.g. verbs, with structural notes of the scale (do-mi-so) and less important words, e.g. prepositions, with the less important ones (re-la). It seems that this hasn’t (yet) been explored.
I found this a thoroughly engaging talk, not least because it made me realise how much we take for granted in the field of word setting. Possibly, this is because our culture is one which leaves word setting to experts. I look forward to discovering more about the project.

Timing is everything

I recently read something in Steven Mithen’s excellently written and thought-provoking book The Singing Neanderthals which stopped me in my tracks. The passage concerned research by Professor Willi Steinke of Queen’s University in Kingston, Canada, into the melodic recall of a subject with amusia, following a stroke at the age of 64. The subject was unable to identify many well-known instrumental themes. However, when the melodies of songs with lyrics were played, recall was normal – even though the lyrics themselves were not present! Steinke and his colleagues concluded that melody and lyrics were stored in different parts of the brain – the prosody of the lyrics helping to summon up the tune, and the rhythms of the tune aiding the reverse.

Suddenly my mind jumped back 42 years to my first piano tutor book, in which every melody featured lyrics – added after the event by the author, John W. Schaum. At the time I regarded them as a slightly annoying irrelevance because I was six years old and knew everything. Now the aspiration behind them seems clear. I began to think that, although the beginners’ materials I use have no lyrics, there may be an argument for adding some – more particularly for asking the pupils to add their own.

By an amazing coincidence of timing, this topic was brought up at our in service on Thursday, by one of my colleagues who was keen to discover similarities and differences in our approaches to teaching rhythm. Recommendations and reservations were expressed – the latter concerning examples where words had been forced to fit rhythms in an unnatural way, and possible confusion arising from the differing prosody of varying accents and dialects.

Still – it’s something interesting to think about. Any experiences, views, recommendations to offer?