When humans speak, sound frequencies travel through the air and are shaped by factors such as a speaker's physical size, the configuration of the vocal organs, and other outside conditions. When producing sound and speech, humans use their vocal organs and muscles as instruments, varying the volume, timing, pitch, and timbre of each utterance (Minematsu, n.d., p. 1). In this sense, speech sounds can be described as products of spectrum modulation.
In telecommunications, content is usually received and interpreted through demodulation, which can be defined as the process of extracting the original content from a modulated carrier wave (Minematsu, n.d., p. 1). This overall account of speech production and recognition can be described as the Modulation-Demodulation Model of Speech Communication. In this model, the tongue is seen as a “flexible modulator” of the spectrum (Minematsu, n.d., p. 2). The model also suggests that good demodulators exist only in the human organs used for audition and cannot usually be found in other animals (Minematsu, n.d., p. 2).
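The paper frames demodulation in terms of the speech spectrum rather than radio signals, but the telecommunications analogy can be made concrete with a small sketch. The following minimal Python example (not drawn from the source; the sample rate, frequencies, and use of the Hilbert-transform envelope are assumptions chosen only for illustration) shows a slow “message” waveform modulating a fast carrier and then being recovered, in the same sense that a listener recovers content from a modulated speech signal.

    # Illustrative sketch of amplitude modulation and envelope demodulation.
    # All parameter choices here are assumptions for the example only.
    import numpy as np
    from scipy.signal import hilbert

    fs = 8000                        # sample rate in Hz
    t = np.arange(0, 1.0, 1 / fs)    # one second of time samples

    message = 0.5 * np.sin(2 * np.pi * 5 * t)   # slow 5 Hz content
    carrier = np.cos(2 * np.pi * 500 * t)       # fast 500 Hz carrier wave

    # Modulation: the message reshapes the carrier's amplitude over time.
    modulated = (1.0 + message) * carrier

    # Demodulation: recover the original content by tracking the envelope
    # (the magnitude of the analytic signal) of the modulated wave.
    envelope = np.abs(hilbert(modulated))
    recovered = envelope - 1.0

    print("mean recovery error:", np.mean(np.abs(recovered - message)))

The recovered waveform closely matches the original message, which is the core idea the model borrows: the carrier itself is not the content, and a suitable demodulator can extract the underlying pattern regardless of how the carrier varies.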
At a cognitive level, this model can also address questions such as “What are infants imitating when they mimic their parents?” and how infants acquire the ability to do so. Human infants often acquire spoken language skills and vocabulary by vocally imitating the sounds that their parents make (Minematsu, n.d., p. 3). In answer to the question above, it has been proposed that infants are not simply mimicking their parents’ voices but are extracting speaker-independent speech patterns called Gestalts (Minematsu, n.d., p. 3). Infants come to discover and understand these patterns by using their own mouths.
While infants have no way to be aware of it, multiple studies have shown that the cerebellum and basal ganglia become active during various reading and language tasks (Booth et al., 2006). It has been proposed repeatedly that the basal ganglia are involved in the cortical patterns of activation underlying behaviors and thoughts (Booth et al., 2006). Other studies have found that the basal ganglia are also involved in language processing and may be part of the systems that turn phonemes into actual words.
With regard to the Modulation-Demodulation Model, the basal ganglia could be the bridge that connects raw frequencies to coherently received content and information for listeners. If the basal ganglia were conclusively shown to be a key part of the brain that assists with language processing, this could explain how infants learn languages by mimicking their parents and how humans understand information presented at different volumes, timbres, and pitches. Combining this processing and decoding capability with existing social norms could also explain how speakers and listeners confirm that their speech was presented and received correctly by whomever they are communicating with.
All in all, the Modulation-Demodulation Model of Speech Communication relies on the demodulation of modulated sound waves whenever any number of speakers attempt to communicate with one another. This demodulation may be assisted by the basal ganglia, which some studies have shown to aid in language processing and in turning phonemes into full words. The model also offers an explanation for phenomena such as infant language acquisition and for which organs and muscles in the mouth play integral roles in helping humans produce sounds and speech.