Why Music Exists: An Exploration of the Lexicon of Sound
Patrick W Farrell

“…whilst this planet has gone cycling on according to the fixed law of gravity, from so simple a beginning endless forms most beautiful and most wonderful have been, and are being, evolved.” (Darwin, 1859)

It was with these words that Darwin concluded his treatise on the nature of existence. Human beings, a product of that existence, are the beneficiaries of many millennia of biological trial and error. As a species we have an incredibly evolved facility for sensing and sharing data. Yet despite all the options now at our disposal for sharing data, we consistently revert to a medium as old as the universe itself. That medium is music. Why? Because after all is said and done, despite our incredibly evolved faculties, we are herd animals. We seek support, we need to belong, we long for understanding. The paradox of humanity is this: the very path which brought us here, evolution, has left us stranded in vivid four-dimensional mental realities which cannot be accurately portrayed using any of the communication channels we possess. Yet we long to express ourselves and be understood because of the herd instincts (physical and metaphysical) impressed upon us by our forebears. Music has proliferated through the age of mass communication because, being of nature, it is the only true conduit of human thought. It is our greatest communicative tool for sharing our mental realities and uniting the herd. (Bannan, 1999)

The Origin of Species heralded the dawn of a new era. It was the commencement of unprecedented navel-gazing by humans seeking to understand humans. As it turned out, we were born of an incredible string of events, made plausible by this simple equation: chaos theory + lots and lots of time. This birthed an environment in which natural selection culled the progeny of that initial equation. Within the vast reality called existence, at some point, for reasons widely debated, an unassuming rock or two drifted into being. These rocks related to each other through the medium of gravity. Some of these rocks were not rocks at all, but rather gaseous bodies, the greatest of which would have the most persuasion in this dialogue. This outspoken orator of gravity would burn a light in the darkness of existence and shine the way of things to come. 149,600,000 kilometres away a noxious soup was brewing. A chemical game of chance was taking place, and the victor, by default, was that which responded to the light shining for so long in the distance. On that day, when the game was won, there was no fanfare, just an awkward little ‘plop’ as a single bubble of oxygen struggled through the ancient soup of failed experiments and escaped into the noxious atmosphere. This process of transmuting the Sun’s energy into oxygen is called photosynthesis, and it supports almost all the life on our little rock called Earth. The organisms which eventually bubbled forth from that prehistoric liquid were a parade of losers in a race to synthesize their needs with the gifts of the universe. Occasionally mutant anomalies were produced by these multiplying hopefuls, and it was often these mutants which discovered a new, more efficient way of existing in the universe. This process was later named (by another mutant) ‘Natural Selection’.

How has the nature of sound influenced us?

The universe is a persistent thing. From the human perspective it is almost eternal. Life, on the other hand, is extremely plastic (Darwin, 1859). In this way every living thing has morphed and adapted through the process of Natural Selection. However this influence goes far deeper than ‘survival of the fittest’. It defines the logic of life itself, because the perpetual data being imposed on organisms by the universe must be perceived logically by those organisms in order to prosper. Most of the data in the universe is logarithmic (Frazer, 2008). Sound and life are generated according to (essentially) consistent ratios. In turn the human devices for receiving that data, having evolved through trial and error since life began, are (mostly) logarithmic in function. Take for example sound. Our perception of sound is shaped by the phenomenon known as the Harmonic Series. When a tone is generated (by a vibrating string, for example), it instantly generates sympathetic waves oscillating at perfect proportions to its length. These sympathetic waves divide the length of the initial wave by successive integers (Wikipedia contributors, 2009). In other words, wave two is half the length of wave one (and twice its frequency); wave three is one third the length of wave one (three times its frequency) and two thirds the length of wave two. This bundle of waves can be thought of as a mass of partial waves generating one full-sounding tone, the pitch of which is the longest/strongest wave (the fundamental). The upper partials fade in intensity as they diminish in length. Depending on the acoustic space and the medium through which sound is being generated, these partials may be clearly audible or barely perceptible. However their influence on the nature of sound, and our use of it, is profound. Because humans hear sound logarithmically and not linearly, our perception of pitch correlates with frequency ratios, not frequencies themselves (Wikipedia contributors, 2009).
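This ratio-based perception can be illustrated numerically. A minimal Python sketch (the specific frequencies are arbitrary illustrative choices): equal frequency ratios are heard as equal intervals, regardless of their linear spacing in Hertz.

```python
import math

def interval_in_semitones(f1, f2):
    """Perceived interval size depends on the ratio f2/f1, not the difference f2 - f1.
    12 * log2(ratio) converts a frequency ratio into equal-tempered semitones."""
    return 12 * math.log2(f2 / f1)

# Two octave leaps with very different linear spacing (220 Hz vs 440 Hz apart)...
print(interval_in_semitones(220, 440))   # 12.0 (one octave)
print(interval_in_semitones(440, 880))   # 12.0 (exactly the same interval)

# ...while an identical linear step of 220 Hz higher up is a far smaller interval:
print(interval_in_semitones(880, 1100))  # ~3.86 (roughly a major third, ratio 5:4)
```

A linear model of pitch would treat 220→440Hz and 880→1100Hz as the same step; the logarithmic model above matches what the ear actually reports.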
Partial:      1 (full length)    2 (1/2 length)    3 (1/3 length)
Frequency:    220Hz (A)          440Hz (A)         660Hz (E)

If, for example, the table above represents a fundamental tone at 220Hz, then the second partial (1/2) will be 440Hz and the third partial (1/3) will be 660Hz. When hearing these tones the human ear recognises that the second partial (440Hz) stands in a 2:1 frequency ratio to the fundamental (A), and thus associates the two tones as the same pitch doubled in frequency. This phenomenon is known to musicians as an octave. The third partial (660Hz) stands in a 3:2 frequency ratio to the second partial (A) and is recognised as being a fifth away (E).
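The arithmetic behind these figures can be sketched in a few lines of Python (a hypothetical illustration using the same 220Hz fundamental):

```python
# Partial n of a fundamental vibrates at n times the fundamental frequency
# and at 1/n of the fundamental's wavelength.
fundamental = 220.0  # Hz, the A below middle C

partials = [n * fundamental for n in range(1, 7)]
print(partials)  # [220.0, 440.0, 660.0, 880.0, 1100.0, 1320.0]

# The ear hears the ratios between partials, not their absolute frequencies:
print(partials[1] / partials[0])  # 2.0 -> the 2:1 octave (A to A)
print(partials[2] / partials[1])  # 1.5 -> the 3:2 perfect fifth (A to E)
```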
The beautiful truth behind this correlation is that we perceive sound in exactly the same way in which it behaves. If we had linear pitch perception, the sounds of the universe would be nonsensical to us, because the universe is not linear. Of course the correlation between our minds and the universe is a natural progression of evolution. Just as the Golden Ratio has invariably shaped our visual aesthetic, the harmonic series which occurs within every tone has shaped our aural aesthetic. In fact, it is interesting to note that the first harmony generated in the harmonic series (excluding the octave) has a ratio of 3:2, which inverted can be expressed as 0.667 and approximates the Golden Ratio (0.618). (Dimond, 2008)

Ratios of harmonies, like that of the fifth mentioned above, are calculated by comparing the wavelengths of the pitches involved (Schmidt-Jones, 2009). For example, if the third partial of A is E (3:1 in the harmonic series) and the fourth partial is A (4:1), then the sound of those tones playing simultaneously is 4:3, because there are four waveforms of A in the space of every three waveforms of E. Once again, because our ears work logarithmically, this 4:3 interval sounds the same all the way through the frequency range and is expressed in Western musical notation as a ‘perfect fourth’. The more complex the ratio created between tones, the more ‘dissonant’ the sound perceived by the human ear. For example a flat nine, often cited as the most dissonant interval in the 12-tone system, produces the ratio 17:16. The depth of complexity in the harmonic system reveals itself when one considers that with every pair of notes there is a new array of sympathetic wave formations, all creating their own ratios, which in turn create their own relationships, and so on. In theory the process is endless. In addition, extremely consonant harmonies support each other in ‘real time’ by producing additional harmonics which reinforce one another. This results in full-sounding, ‘thick’ harmonies, such as those pervading the Gregorian chant tradition. For example the interval of a perfect fifth is a ‘warm’ sounding interval, not merely because it is an extremely simple ratio (3:2), but also because of the ‘interlaced’ matrix of overtones each fundamental creates:

Partial:    1    2    3    4

Fifth:      G    G    D    G

Tonic:      C    C    G    C
The texture (timbre) of the G is ‘warmer’ or ‘thicker’ because the pitch G is actually sounding four times across three octaves, even though only two notes are being played.
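The overtone coincidences described above can be checked numerically. A minimal sketch, assuming a hypothetical tonic of 100Hz (chosen for round numbers) and a just-intonation fifth at the 3:2 ratio; reducing each partial to its pitch class recovers the four soundings of G:

```python
def partials(fundamental, n=4):
    """The first n partials (integer multiples) of a fundamental frequency."""
    return [k * fundamental for k in range(1, n + 1)]

C = 100.0    # hypothetical tonic frequency, chosen for round numbers
G = 1.5 * C  # a just-intonation perfect fifth above the tonic (3:2)

def is_pitch_class(freq, reference):
    """True if freq is the reference pitch in some octave,
    i.e. the ratio between them is a power of two."""
    ratio = freq / reference
    while ratio >= 2:
        ratio /= 2
    while ratio < 1:
        ratio *= 2
    return abs(ratio - 1.0) < 1e-9

# Collect every G-class frequency sounding among both notes' first four partials:
g_soundings = [f for f in partials(C) + partials(G) if is_pitch_class(f, G)]
print(sorted(g_soundings))  # [150.0, 300.0, 300.0, 600.0]
```

The coincidence at 300Hz (the tonic’s third partial meeting the fifth’s second partial) is what the table’s third column records, and it is this doubling that thickens the timbre.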

The sound of Language vs. The Sound of Music

“… when a representation of some four-dimensional hunk of life has to be compressed into the single dimension of speech, most iconicity is necessarily squeezed out. In one-dimensional projections, an elephant is indistinguishable from a woodshed. Speech perforce is largely arbitrary; if we speakers take pride in that, it is because in 50,000 years or so of talking we have learned to make a virtue of necessity.” (Hockett, as cited in Corballis, 2008)

We are all living in the same universe, governed by the same laws of physics. This logarithmic-pitch-association/harmonic-series relationship provides all human beings with a common medium through which to convey ideas (Bannan, 1999). Unlike speech, which begins with artificial sounds (phonemes) and applies rules to those sounds (syntax), all of them entirely arbitrary, the language of music has syntax embedded in its phonology (the physics of sound) (Bernstein, 1976). This musical syntax can be defined as ‘tension and release’ or ‘dissonance vs. consonance’. The fact that the harmonic series is an array of simple ratios fading into complex ratios has influenced the evolution of the human ear, which recognises simple ratios as consonant and complex ratios as dissonant.

This concept of an inherent musical syntax should not be confused with the notion that all musical systems developed by humans deal with dissonance in exactly the same way. If soundwaves were phonemes, and their ratios relative to one another were musical syntax, then the rendering of these ratios would be ‘musical grammar’. Cultural influence may dictate that certain complexities of ratios are interpreted differently; however, the ratios involved are accurately perceived by the ears of all humanity. For example a trained musician can recognise a flat ninth interval in a Greek zembekiko, and the tension that interval creates affects the ears of an American country and western singer in exactly the same way it affects a Greek bouzouki player. The only difference is that the bouzouki player enjoys the sound of a flat ninth because of the very tension it creates. More poignantly, the Greek musician enjoys the sound of that flat ninth (17:16) resolving to a unison (1:1) after the tension has reached ‘breaking point’. It is interesting that Leonard Bernstein, in his Harvard University Norton Lecture series, observed the propensity of young children to draw a common semantic implication from the melodic shape of C, A, D (slightly flatter than equal temperament), C, A – sung with the lyrics “Nah, nah, nah-nah, nah…”

Mankind’s role in this relationship to the harmonic series and the logic (syntax) it presents is twofold:

1)    As witness to a phenomenon of mathematics occurring in ‘real time’.
2)    As student who has learnt, over some seven million years, which harmonies are ‘right’ and make ‘sense’.

The result is the same: manifest in nature is a syntax which mankind can recognise and employ to convey his ideas via the medium of sound.

The following ‘13 Design Features of Human Language’ were conceived by linguist Charles Hockett and provide a neat basis of comparison between music and language (Wikipedia contributors, 2009):

1    Vocal-auditory channel: Of significance when considered in the context of evolution. As with so many adaptations, the ability to articulate clearly using only our voice box (with no need for bodily communication, as is so often found in animal languages) coincided with the upright skeleton which gave humans the ability to freely use their upper limbs for tasks while communicating. Music is also received via the auditory channel; however, it does not need to be broadcast via the vocal tract.

2    Broadcast transmission and directional reception: Describes the fact that human speech, being an auditory signal, can be heard by anyone within hearing range, and the location of the speaker (the broadcaster) can be ascertained by way of binaural direction finding. This applies to both music and speech as they are received by the same auditory channel.

3    Rapid Fading (transitoriness): The unique property of sound signals to fade quickly from an environment, as opposed to other forms of signalling such as pheromones and smoke signals. It is this transitory nature of sound which made it such a useful tool in survival situations (Bannan, 1999). Perforce our oldest instruments, the drum and horn, were conceived as alarm devices.

4    Interchangeability: refers to the ability of humans to both receive and transmit speech. This is because speech is generated by a device common to all peoples, the descended larynx. Music can also be received by all people and all persons have the potential to create music. The complexity of their own broadcasts may vary, but they are nevertheless capable of musical signals.

5    Total Feedback: is common to both music and speech and describes the fact that humans can hear themselves while they are broadcasting. This enables them to perfect the art of speaking and playing music.

6    Specialization: Speech sounds are specialized for communication. Humans invented them to convey information. This differs from musical tones, which also communicate information, but are not artificially created for that task.

7    Semanticity: Specific signals convey specific ideas. Both music and language share this property: in music, complex ratios always sound more dissonant than simple ratios (although different cultures embrace dissonance differently), and in speech, morphemes have specific meanings.

8    Arbitrariness: In speech there is no limitation to what can be communicated about, and there is no specific or necessary connection between the sounds used and the message being sent. Music, however, is not arbitrary. The frequencies of the phonemes may indeed be irrelevant to some listeners, but once those phonemes are structured into morphemes they are immovable, as the subject to which they refer is themselves. Romantic imagery on the part of the listener may be drawn from the music created; however, the music itself can no more depict an unrelated subject than it can cease to be music. That is, inherent in the sound of a fifth is the very fact that it is indeed a 3:2 sound ratio – a fifth.

9    Discreteness: In language, phonemes can be placed in distinct categories which differentiate them from one another, such as the distinct sound of /p/ versus /b/. This is manifest in music somewhat differently. Whereas sound properties dictate the grouping of speech phonemes, it is ratio properties (musical/harmonic context) which dictate the grouping of musical phonemes. For example (in Western music) the pitch G can occur in many different categories depending on its function relative to the other phonemes (notes) being played.

10    Displacement: This refers to the ability to refer to things in space and time and communicate about things that are not currently present. More than just a communicative phenomenon, displacement is a mental capacity for abstract reasoning. There has been some controversy over the extent to which this is a uniquely human phenomenon. Much research and debate concerns the propensity (or not) of ‘dancing honey bees’ to communicate food sources absent from the site (Munz, 2005) and whether this is comparable to human displacement. Further research has tested the visual displacement capabilities of canines (Fiset & LeBlanc, 2006) and lower primates. To date there is no evidence to suggest any other animal possesses a capacity for displacement, visual or otherwise, comparable to that of humans. While the exact neural processes may vary, this ability (to refer to things absent in space and time) is what musicians exercise every time they recall a piece of music (either in their mind, on an instrument or vocally), a rhythmic figure or even an interval of two notes. In fact ‘perfect pitch’ (the ability to recognise and recall the note names of various frequencies) may not be purely a memory-related phenomenon but also an advanced display of displacement.

11    Productivity: The ability to create new and unique meanings of utterances from previously existing utterances and sounds. It is difficult to ascertain whether the musical language is a ‘productive’ one. While perception may enable individuals to draw their own conclusions from musical statements, the physical laws involved in the production of sound, which govern its syntax/phonology, dictate that it cannot be modified.

12    Traditional Transmission: This is the idea that human language is not completely innate and acquisition depends in part on the learning of a language. Conversely, music is innate and its reception/broadcasting comes naturally to humans. In Musicophilia, Oliver Sacks observes:

“There is certainly a universal and unconscious propensity to impose a rhythm even when one hears a series of identical sounds at constant intervals… we tend to hear the sound of a digital clock, for example, as ‘tick-tock, tick-tock’ – even though it is actually ‘tick, tick, tick, tick.’”

However engaging in codified music (musical grammar) does require an understanding of the grammar involved.

13    Duality of patterning: In speech, phonic segments (phonemes) are combined to make words (morphemes), which in turn are combined again to make sentences. In music the patterning is infinite and multidimensional. Unlike speech, which must be organised linearly, or rather horizontally, musical phonemes can be combined horizontally (melody) and vertically (harmony, contrapuntal melody) in the plane of time. These phonic textures will either conform to a perceived fundamental (key signature) or imply new fundamentals. The rendering of these structures in time introduces rhythmic phrasing, the strongest device in musical grammar. While the neurological relationship between rhythm and melody is beyond the scope of this paper, the fact that most dance music is based on octave ratios (1:1, 2:1, 4:1 etc.) and the fifth ratio (3:2) would suggest that our sense of rhythm is also influenced by the harmonic series (examples to the contrary are often extremely fast or slow in tempo, which creates the illusion of a simple ratio).

Musical language therefore exhibits all the design features of speech except specialization, arbitrariness and productivity. In other words, music is not artificially created, is not an unrelated representation of something else, and is a consistent, reliable medium. These three properties are of profound importance in any attempt to understand why music has proliferated since the dawn of modern man.

Why did music proliferate despite speech?

To understand why this universal, eternal language of music is so important to humanity, we need to take a brief look at that humanity, ‘warts and all’.

The last retrofit in a long line of primates, we jumped off the production line about 4 million years ago, just after chimpanzees and gorillas (Diamond, 1998). By this stage the hominid survival guide read something like this: “Hang out with everyone else… maybe then there’s less chance I’ll be eaten!” We spent the next 2.3 million years or so trying to stand up. No mean feat, since our skeletons were originally intended for swinging through the jungle and walking on all four limbs. Beginning with the Australopithecus africanus stage, followed by Homo habilis and Homo erectus, a new upright skeletal prototype emerged (Diamond, 1998). Mentally something was happening too. This new ape was trying things out, hitting nuts with rocks and eventually hitting rocks with rocks to fashion stone tools. Homo erectus’ crowning achievement was the use of fire. It was this sub-human super ape (Homo erectus) which would eventually venture beyond the confines of Africa and evolve into the Homo sapiens prototype. This is not to say that modern man had emerged. The earliest skeletal remains of proto Homo sapiens differ greatly from the skeleton of modern man. In addition, the skeletal remains found throughout Africa, Eurasia and East Asia continued to morph in diverging localised sub-sets, the most modern-looking of these found in African skeletal remains. The most extensive archaeological finds of this period are in Europe, where the much-caricatured Neanderthals lived. As Jared Diamond points out:

“Despite being depicted in innumerable cartoons as apelike brutes living in caves, Neanderthals had brains slightly larger than our own. They were also the first humans to leave behind strong evidence of burying their dead and caring for their sick.”

Caring as they may have been, the Neanderthals of Europe failed to make one last evolutionary change. In 450,000 years they produced no art, made no great advances in tool-making and never quite formed entirely modern skeletons. This triad of facts has led many scientists to believe that Neanderthal man did not possess the primary tool for effective communication, a fully evolved larynx. Unfortunately for the Neanderthals of Europe, they weren’t alone. While they went about their business (for 450,000 years), a people in East Africa were hard at work evolving. Carbon dating has placed items from these East African sites at 50,000 years old (Diamond, 1998). These items include tools with dedicated uses and also jewellery. What is incredible is the pace of what was to follow. A new wave of fully modern people swept through Europe and in the space of 10,000 years dispatched the Neanderthals to the annals of history. These people possessed fully modern skeletons and produced complex tools (harpoon throwers, bows-and-arrows), artwork, statues and musical instruments. Known to us as the Cro-Magnons, they demonstrated the kind of sophisticated society only possible with speech (Diamond, 1991). They were still ‘hanging out’ in groups.

It is interesting that with every physiological progression, the intensity of artistic expression has also evolved. It would seem art was of increasing importance to this evolving ape. But why would art proliferate with the dawn of language? If art is expression, is it not counter-intuitive that the birth of speech, which by using arbitrary symbolism can describe almost anything (Hockett, as cited in Corballis, 2008), did not render art in general redundant? In the case of music, most students would describe their art as a form of communication. Again the argument is circular: if music is communication, why bother, when we have speech for communicating? What was Beethoven communicating in his famous fifth symphony motif? Was he hungry? The key to solving this riddle is to remember that despite all the frills, a human being is still essentially a super ape (Diamond, 1991). We are herd animals and as such crave the support, comfort and protection which only relationship can bring. Phylogenetically, this search for relationship and support evolved into a search for relationship beyond visible nature. In fact the search for supernatural comfort is the most common forum of musical language. However the paradox is this: in becoming ‘super’, this ape has developed faculties which have led to an ever-increasing sense of isolation. The very path which brought us here, evolution, has stranded us in mental realities which cannot be truly permeated. In the words of Aldous Huxley, we are each an “island universe” (Huxley, 1954).

This is not merely metaphysical jargon. Human memories are defined as either procedural or declarative (Wikipedia contributors, 2009). Procedural memories are those which we store and access unconsciously, for example when performing tasks like playing a musical instrument. Conversely, declarative memory comprises those memories which we can recall, such as the lyrics to a song, which we then sing (declare). Within this latter category there are two sub-groups: semantic memories and episodic memories. Episodic memories have not been irrefutably proven to exist in other animals, and if they do, it is agreed that they are of little capacity when compared with the human faculty. Just as displacement (visual, aural and conceptual) was crucial in the hominid path to humanity, episodic memory was essential. Without it our forebears would never have out-planned and out-smarted the other species vying for food-chain dominance.
Whereas semantic memories represent known facts about the world, episodic memories represent recollections of episodes experienced in the world. For example, a semantic memory would be “I went to the beach and swam”; an episodic memory would be the vivid movie in your mind of that beach visit. This is the incredible point of departure between the mental human experience and the tools we possess for communicating that experience. Episodic memory is four-dimensional. Using it, humans can mentally travel back in time and reference vivid episodes from the past, and even create fictional future episodes. This latter ability of humans to mentally travel forward in time is the key to our survival. We are able to extrapolate multiple outcomes of given scenarios and plan ahead. More importantly for our purposes, episodic memory creates within every human a keen awareness of time, and with that an awareness of the transitory nature of life itself: mortality (Corballis, 2008).

The Lonely Ape

Music has proliferated despite speech because it is the most effective tool we human beings possess for dissolving the walls of isolation which our own mental faculties project. This isolation is exacerbated, particularly in the developed world, by the manifestation of mental design as social design. In this new social design, the individual has become a conduit of data, channelling the thoughts of a social consciousness at the expense of interpersonal connections. It could be argued that the Orwellian nightmare has materialised: thought is no longer generated by the individual but by the collective. Beyond a sense of isolation, this perpetuates a loss of self. Just as ancient man summoned the gods with song, so does modern man summon the self with song. In a struggle to reclaim a sense of individual purpose, modern man has engineered forums for social interaction such as common-interest groups (sporting clubs, homing-pigeon clubs etc.). A uniquely organic phenomenon is the birth of sub-culture groups – a culture within a culture. Often these subcultures apply a common aesthetic to musical grammar, physical appearance and mental disposition or prescribed behavioural patterns. This trend of music-oriented subcultures is not a testament to an omnipotent musical syntax; it is, however, a testament to the deeply personal way in which music enriches the life of the Lonely Ape.

Language of the Soul

The development of Music Therapy (the process of using music to help or maintain a patient’s health) is testament to the depth of musical cognition in all persons. Sufferers of Parkinson’s Disease characteristically experience stiff, rigid motor movements and changes in mental activity (Wikipedia contributors, 2009). In extreme cases patients can become completely immobile, ‘transfixed’ even when taken by the hand and guided to another location. In his book Musicophilia, Oliver Sacks describes the case of Rosalie B., a post-encephalitic patient who was completely immobile. She would sit for hours at a time with one finger lightly touching her spectacles:

“If one walked her down the hallway she would walk in a passive, wooden way, with her finger still stuck to her spectacles. But she was very musical, and loved to play the piano. As soon as she sat down on the piano bench, her stuck hand came down to the keyboard, and she would play with ease and fluency….Music liberated her from her Parkinsonism for a time – and not only playing music, but imagining it. Rosalie knew all of Chopin by heart, and we had only to say “Opus 49” to see her whole body, posture, and expression change, her Parkinsonism vanishing as the F-minor Fantasie played itself in her mind. Her EEG, too, would become normal at such times.”

Music is the most penetrating medium humans possess. As infants we can hear our mother’s heartbeat (not to mention the sound vibrations of our mother’s environment) from 30 weeks of age. Perhaps this is why music is so important in the treatment of mental disease. Somewhere deep in the recesses of their minds, those suffering patients have not forgotten that language of nature: a language as old as the universe, a language which shaped the very minds which study it, and a language which, in the age of the palm-sized supercomputer, remains our most reliable conduit of the human soul.



Bannan, N 1999, ‘Out of Africa: the evolution of the human capacity for music’, International Journal of Music Education, vol. 33, pp. 3-9.

Bernstein, L 1976, The Unanswered Question: Six Talks at Harvard (The Charles Eliot Norton Lectures, 1973), Harvard University Press.

Corballis, MC 2008, ‘Mental time travel and the shaping of language’, Experimental Brain Research, vol. 192, no. 3.

Darwin, C 1968, On The Origin of Species (by Means of Natural Selection or The Preservation of Favoured Races in the Struggle for Life), Penguin Books, London.

Diamond, J 1998, Guns, Germs and Steel: a short history of everybody for the last 13,000 years, Vintage, London.

Diamond, J 1991, The Third Chimpanzee: The Evolution and Future of the Human Animal, Hutchinson Radius, London.

Dimond, J 2008, Theory of Music: Golden Section, accessed 07/05/2009, from <http://www.jonathandimond.com/tafe/documents/Intro%20to%20Golden%20Section.pdf>

Fiset, S & LeBlanc, V 2006, ‘Invisible displacement understanding in domestic dogs (Canis familiaris): the role of visual cues in search behavior’, Animal Cognition, vol. 10, no. 2.

Frazer, PA 2008, Physical Acoustics of Tuning Systems, accessed 01/05/2009, from <http://www.midicode.com/tunings/acoustics.shtml#1.2>

Huxley, A 1954, The Doors of Perception, Chatto and Windus, London.

Munz, T 2005, ‘The Bee Battles: Karl von Frisch, Adrian Wenner and the Honey Bee Dance Language Controversy’, Journal of the History of Biology, vol. 38, pp. 535-70.

Sacks, O 2008, Musicophilia: Tales of Music and the Brain, Picador, London.

Schmidt-Jones, C 2009, Harmonic Series II: Harmonics, Intervals, and Instruments, accessed 14/05/2009, from <http://cnx.org/content/m13686/1.6/>

Wikipedia contributors 2009, Charles F. Hockett, Wikipedia, accessed 20/04/2009, <http://en.wikipedia.org/wiki/Charles_Hockett>

Wikipedia contributors 2009, Episodic memory, Wikipedia, accessed 03/05/2009, <http://en.wikipedia.org/wiki/Episodic_memory>

Wikipedia contributors 2009, Parkinson’s disease, Wikipedia, accessed 16/05/2009, <http://en.wikipedia.org/wiki/Parkinson%27s_disease>

Wikipedia contributors 2009, Harmonic series (music), Wikipedia, accessed 01/05/2009, from <http://en.wikipedia.org/wiki/Harmonic_series_(music)>

Wikipedia contributors 2009, Pitch (music), Wikipedia, accessed 02/05/2009, from <http://en.wikipedia.org/wiki/Pitch_(music)>