Glass Vocoder Project:
Building an Interactive Learning Model of the Human Vocal Tract

You can follow the progress of this project here.

Visible Speech

In 1867, Alexander Melville Bell (father of Alexander Graham Bell) created something he called "Visible Speech": a physiologically based notation system for use in his work teaching the deaf to speak.

Alexander Graham Bell later continued his father's work, finding a colleague in the German physicist Hermann von Helmholtz, and between the two of them they developed a beautiful, experiential understanding of how the human voice produces speech.

In his lectures on Visible Speech, Alexander Graham Bell was especially keen on propagating the concept of vocal resonances--the idea that the vocal tract is a series of resonant chambers that we shape (or 'tune') to create the different complex pitches that make up speech sounds.
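The resonance idea Bell lectured on survives in modern phonetics as formants: the first two resonant peaks of the vocal tract (F1 and F2) are enough to tell most vowels apart. As a minimal sketch of that idea, here is a hypothetical snippet (not part of this project) that guesses a vowel from two measured resonance peaks, using rough textbook ballpark formant values:

```python
import math

# Approximate average formant frequencies (F1, F2) in Hz for a few
# English vowels. These are rough ballpark figures for illustration.
VOWEL_FORMANTS = {
    "ee (as in 'beet')": (270, 2290),
    "ah (as in 'father')": (730, 1090),
    "oo (as in 'boot')": (300, 870),
}

def closest_vowel(f1, f2):
    """Guess a vowel from two measured resonance peaks (F1, F2 in Hz).

    Distance is computed in log-frequency space, since pitch
    perception is roughly logarithmic.
    """
    def dist(target):
        t1, t2 = target
        return math.hypot(math.log2(f1 / t1), math.log2(f2 / t2))
    return min(VOWEL_FORMANTS, key=lambda v: dist(VOWEL_FORMANTS[v]))
```

Two bottles tuned near 270 Hz and 2290 Hz, sounded together, would on this account land closest to "ee" -- which is exactly the kind of resonance-pair thinking Bell and Helmholtz were pursuing.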

Visible Speech was more than theoretical. As people were attempting to 'draw' the act of speaking, they were also trying to understand and replicate the physical mechanisms underlying it. The 19th century was awash with fascination for automata, and in the linguistic realm many mechanical models of the vocal tract were built in order to better understand the acoustics of speech. (Graham Bell actually fashioned his first semi-successful attempt at mechanized speech as a child!)

Today there are many advanced methods for spectral analysis and speech synthesis that allow far more detail and realism, but they're often either abstracted from anatomy or purely visual (in the form of spectrograms). Something that seems to have slipped through the cracks is the idea that every person who speaks or understands an audible language is incredibly musical--they must recognize very specific frequencies, and the complex relationships between them, in order to differentiate one phoneme from another.

(I recently came across some researchers at Yale who are studying speech recognition and synthesis by creating sentences out of complex melodic structures layered on top of one another, harking back to Helmholtz and Alexander Graham Bell's idea of speech as musical, resonance-based information.)

A Proposal and Work in Progress

The Glass Vocoder Project is an attempt to better understand simple acoustic and musical representations of complex speech information. I'm currently prototyping simple ways for people to tune and play glass bottles at various pitches in order to create and explore vocal tract resonances in a hands-on, ears-on way. Eventually, as my research progresses, I will build an entirely acoustic glass speech synthesizer based on pitch relationships and melodic progressions.
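A bottle blown across its mouth behaves, to a first approximation, as a Helmholtz resonator: its pitch depends on the air volume inside and the geometry of the neck, which is what makes tuning by adding water possible. The sketch below is a back-of-the-envelope estimate only (the 1.7*r end correction is a common approximation, and real bottles deviate from the ideal resonator), not a tool from this project:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at ~20 degrees C

def helmholtz_frequency(volume_m3, neck_length_m, neck_radius_m):
    """Approximate resonant frequency (Hz) of a bottle modeled as a
    Helmholtz resonator: f = (c / 2*pi) * sqrt(A / (V * L_eff)).

    The 1.7 * r term is a commonly used end correction for the
    effective neck length; real bottles will deviate from this.
    """
    area = math.pi * neck_radius_m ** 2
    l_eff = neck_length_m + 1.7 * neck_radius_m
    return (SPEED_OF_SOUND / (2 * math.pi)) * math.sqrt(
        area / (volume_m3 * l_eff)
    )

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def nearest_note(freq_hz):
    """Name of the equal-tempered note closest to freq_hz (A4 = 440 Hz)."""
    midi = round(69 + 12 * math.log2(freq_hz / 440.0))
    return f"{NOTE_NAMES[midi % 12]}{midi // 12 - 1}"
```

For example, an empty 750 ml bottle with a 5 cm neck of 9 mm radius comes out around 124 Hz, near B2; adding water shrinks the air volume and raises the pitch, which is the tuning handle the prototypes rely on.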


A study for playing two glass bottles simultaneously.