Leonard Bernstein is one of the most significant composers and conductors of the 20th century, best known for his Broadway hit West Side Story. After studying music at Harvard and the Curtis Institute, Bernstein went on to assume the Charles Eliot Norton Professorship of Poetry at Harvard in 1972. The following year, he presented a Norton Lecture series called The Unanswered Question, named after a composition by Charles Ives, with the aim of “redefining” music. After searching through several fields of study, Bernstein settled on linguistics as a suitable basis for comparison, asking how linguistic principles can be used to redefine Western music.
The Unanswered Question
The lecture series sought to provide a deep yet accessible understanding of music theory, drawing on interdisciplinary influences to explain musical concepts to an audience that may not be well versed in the subject. Bernstein shows how music can be compared to language through its phonological, syntactic, and semantic properties. Can we use linguistic theories to better understand music? Is there something to be said about the similarity of the two disciplines? These are just a few of the questions Bernstein attempts to tackle.
Lecture One: Musical Phonology
Bernstein’s first lecture concerned the study of sound systems and their basic units, phonemes, and how phonological terminology can be applied to music through his idea of “musical phonology”.
Phonology deals with systems of sounds, both within a single language and across languages. Drawing on this, Bernstein shows that music has a harmonic series: when one note is played, we hear a “consistent structure” of vibrations, so that we perceive the sound as a note rather than as noise.
The Harmonic Series
Taking the example of a C note, Bernstein explains how this musical “phoneme” carries the overtones G and E, which together with the fundamental form a major triad; from these notes, Bernstein builds a wider structure that he calls syntax, which in music corresponds to a scale.
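The overtones Bernstein describes are a physical fact about vibrating bodies: the harmonic series over a fundamental consists of integer multiples of its frequency. The minimal sketch below computes the first six harmonics of a low C (C2, roughly 65.41 Hz) and rounds each to the nearest equal-tempered pitch; the note-naming helper is an illustration added here, not part of the lectures.

```python
import math

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def nearest_note(freq, a4=440.0):
    # Semitones above A4, rounded to the nearest equal-tempered pitch.
    semitones = round(12 * math.log2(freq / a4))
    # A4 is MIDI note 69; convert the MIDI number to a name and octave.
    midi = 69 + semitones
    return f"{NOTE_NAMES[midi % 12]}{midi // 12 - 1}"

f0 = 65.41  # approximate frequency of C2 in Hz
for n in range(1, 7):
    overtone = n * f0
    print(f"harmonic {n}: {overtone:7.2f} Hz ~ {nearest_note(overtone)}")
```

Running this shows the pattern Bernstein points to: harmonics 1, 2, and 4 are octaves of C, harmonics 3 and 6 land on G, and harmonic 5 lands on E, i.e. exactly the notes of the C major triad (the fifth harmonic is audibly flat of equal-tempered E, which the rounding hides).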
Alongside the harmonic series, Bernstein invokes monogenesis, the hypothesis that all spoken human languages descend from a single common ancestor. Music, he suggests, may have a similar single point of origin, out of which the harmonic series is born. One example he gives is the “musical linguistic universal” of children’s nursery rhymes, built on two distinct notes.
Monogenesis in Music
Monogenesis in linguistics argues, for example, that sounds such as “ma” are used near-universally in reference to one’s mother – suggesting that many languages descend from the same point of origin.
Because Bernstein fundamentally agrees with the monogenetic view of music, he also embraces Chomsky’s idea of universal grammar, which posits a shared system of grammar underlying all languages, one that every human has access to. Musical understanding, then, can be innate, much like grammatical competence.
Noam Chomsky, theoretical linguist, cognitive scientist, and philosopher.
Lecture Two: Musical Syntax
Having identified the phonological similarities that a musical note and a chord share with language, Bernstein builds on this through “musical syntax”. Syntax is the structural organisation of words into sentences. Bernstein likens music theory to a generative grammar: a finite set of grammatical rules whose output is all the possible sentences of a language.
A theoretical understanding of musical principles, such as major and minor chords or rhythm, can likewise be applied to generate combinations of musical sequences. This echoes how, in language, phonemes are built up into an effectively infinite set of possible utterances.
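The idea of a generative grammar can be made concrete with a toy example. The sketch below defines a tiny rewrite system over chord symbols (the rules are invented for illustration, not drawn from Bernstein’s lectures) and enumerates every “sentence” – every chord progression – the grammar can produce, just as a generative grammar of a language enumerates its well-formed sentences.

```python
import itertools

# A toy generative grammar over chord symbols: a phrase is an
# opening progression followed by a cadence. Symbols not listed
# as rules ("I", "IV", "V", "vi") are terminals.
GRAMMAR = {
    "PHRASE": [["OPENING", "CADENCE"]],
    "OPENING": [["I"], ["I", "vi"], ["I", "IV"]],
    "CADENCE": [["V", "I"], ["IV", "I"]],
}

def generate(symbol):
    """Yield every terminal sequence the grammar derives from `symbol`."""
    if symbol not in GRAMMAR:          # terminal: a chord symbol
        yield [symbol]
        return
    for production in GRAMMAR[symbol]:
        # Expand each symbol in the production, then splice the results.
        for parts in itertools.product(*(generate(s) for s in production)):
            yield [t for part in parts for t in part]

progressions = [" ".join(p) for p in generate("PHRASE")]
for p in progressions:
    print(p)
```

With three openings and two cadences, the grammar’s output is exactly six progressions, each ending on the tonic – a finite set of rules yielding every well-formed sequence, which is the analogy Bernstein draws on (add recursive rules and the output becomes infinite, as in natural language).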
Lecture Three: Musical Semantics
Lastly, he applies the term “musical semantics”, semantics being the study of meaning in language. Bernstein claims that meaning in music is metaphorical rather than “literal”: much like literary metaphors, the meaning of music can be identified through the alteration of musical material, so that it appears almost like an illusion.
Bernstein also interprets musical semantics through nonmusical – or “extramusical” – elements: whether a certain musical passage conjures up a particular image, say of a place or a person.
Bernstein draws parallels to Chomsky’s I-language theory, on which language is innate to the mind, arguing that several areas of music theory together contribute to the “semantics” of music. He also claims that grasping a musical metaphor “[doesn’t] require even that one millisecond before perceiving it”: humans understand music innately, just as they innately understand grammar, and hearing music is by itself enough for musical perception.
Is Bernstein correct to compare music and linguistics?
Whilst Bernstein rightly sidesteps the question of whether music can be considered a language – he acknowledges that the linguistic analogy is simply a tool for finding commonality between the two disciplines – his approach to linguistics itself has drawn criticism.
Chomsky (1972) himself, for example, disagreed with extending linguistic principles to other disciplines. The most notable criticism comes from Allan Keiler, who argues that, despite the captivating musical knowledge Bernstein imparts, the Norton Lectures are not a “well-conceived or rigorous contribution” to linguistic study. They fail, for instance, to account for ethnomusicological findings that foreign musical styles can be acquired rather than being innate, and Bernstein’s lack of consideration of musical styles from non-Western cultures detracts from his argument for universal musical competence.
Despite such criticisms, the Norton Lectures were never designed to contribute to the pantheon of linguistics. Rather, they offer valuable insight, to musicians and non-musicians alike, into how music functions, through an analogy with linguistics. They also provide a gentle introduction to linguistic principles and help music lovers perceive music through this fascinating lens.