The Evolution of Human Language

Charles Darwin introduced the comparative method through his observations of regional similarities among species, which culminated in the understanding that species do not retain fixed characteristics; instead, they evolve from their predecessors in response to their current environment. Darwin distilled this insight into the process known as natural selection, in which species that successfully acclimate to their natural habitat flourish, while those that fail to reproduce and adapt diminish. Therefore, in order to examine the evolution of human language, we must take into account the evidence derived by means of this principle.

Evolutionary physical characteristics and behaviors can only be studied in living species; conjectures about the habits of extinct hominids or of our ancestors 150,000 years ago will not suffice. Research on the physiology, brains, and behavior of chimpanzees, other species, and Homo sapiens therefore provides the appropriate data. From this evidence, researchers conclude that human linguistic skill appears to involve quantitative increases in primordial capacities conspicuous in living nonhuman primates and other species, in addition to derived features that must have developed since hominids diverged from the ancestral species common to humans and apes. Evolutionary biologists accordingly adopted the term primitive feature to specify attributes that can be “traced back to the common ancestor” of related species (Lieberman 3). For example, human fingers can be traced back to the common ancestor of both present-day frogs and Homo sapiens, whereas derived features, such as a horse’s hooves, distinguish particular species.

Comparative studies of apes and humans indicate that both share the primitive linguistic feature of lexical ability: the proficiency to “communicate by means of words using various nonvocal signals” (Lieberman 3). The conception of a word requires a mental connection between an arbitrary signal that has no intrinsic meaning and ideas established in life. However, the derived feature that distinguishes human language from that of the apes is vocal speech. Although the vocal signals of other species are associated with certain emotional states, humans are able to generate arbitrary sounds that signify words. Recent neurobiological studies indicate that the human neural mechanisms underlying speech are also involved in the comprehension of syntax and the development of thought. In other words, human speech, intricate syntactic structure, and intellectual thought may have evolved together from these primitive features. In this regard, Charles Darwin’s approach to evolutionary biology laid the foundation for understanding the neural infrastructure of human language and perception.

Furthermore, studies of human brains and behavior demonstrate that primitive subcortical structures in the brain such as the basal ganglia, traditionally held responsible for motor skills, support the neuronal circuits controlling the comprehension of thoughts and sentences (Lieberman 4). The basal ganglia are active in frogs, where they generate suitable motor responses to sensory information. Further comparative studies reveal a trend from the regulation of biological imperatives to distinctive behavior as we “advance up the phylogenetic scale” (Lieberman 4). Moreover, although the basal ganglia regulate innate behaviors in rodents and primates, they are also fundamentally involved in acquiring and regulating learned activities.
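The primitive/derived distinction that frames this comparative evidence can be made concrete computationally. The following is a minimal illustrative sketch, not anything from Lieberman: it encodes a toy phylogeny as parent pointers and tests whether a trait traces back to the common ancestor of two taxa (primitive) or arises in only one lineage (derived). All taxon names and the trait table are hypothetical, chosen to mirror the fingers and vocal-speech examples above.

# A toy phylogeny, encoded as parent pointers. All names below are
# hypothetical simplifications for illustration.
PARENT = {
    "human": "hominid_ancestor",
    "chimpanzee": "hominid_ancestor",
    "hominid_ancestor": "tetrapod_ancestor",
    "frog": "tetrapod_ancestor",
    "tetrapod_ancestor": None,
}

# Which lineage each trait first appears in (again, a toy assumption).
TRAIT_ORIGIN = {
    "fingers": "tetrapod_ancestor",  # shared deep ancestor -> primitive
    "vocal_speech": "human",         # unique to one lineage -> derived
}

def ancestors(taxon):
    """Return the lineage of a taxon, from itself up to the root."""
    chain = []
    while taxon is not None:
        chain.append(taxon)
        taxon = PARENT[taxon]
    return chain

def common_ancestor(a, b):
    """Most recent ancestor shared by taxa a and b."""
    lineage_a = set(ancestors(a))
    for node in ancestors(b):
        if node in lineage_a:
            return node
    return None

def is_primitive(trait, a, b):
    """True if the trait traces back to (or beyond) the common ancestor
    of a and b, i.e., it is a primitive rather than a derived feature."""
    return TRAIT_ORIGIN[trait] in ancestors(common_ancestor(a, b))

print(is_primitive("fingers", "human", "frog"))             # True
print(is_primitive("vocal_speech", "human", "chimpanzee"))  # False

Under this toy encoding, fingers come out primitive for humans and frogs because they originate at the shared tetrapod ancestor, while vocal speech comes out derived because it appears only on the human lineage.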
This evolutionary history of relationships among species becomes apparent when we analyze both the primitive characteristics a species shares with related species through its evolution and the derived characteristics that differentiate it from those related species. This suggests that language involves various biological elements, some of which have deep roots that emerge when we examine the brains and behavior of other related living species. Lexical ability and the comprehension of simple syntax can be seen in living apes, which most likely retain some characteristics of the extinct ancestor common to present-day apes and Homo sapiens. However, the distinction in lexical capacity between humans and apes “is more quantitative than qualitative” (Lieberman 5). For instance, apes can produce approximately 150 words using manual sign language or computer boards, an amount equivalent to the ability of a child between the ages of 2 and 2.5 years; similarly, chimpanzees can comprehend sentences in American Sign Language or spoken English that follow general, standard structures, roughly matching the abilities of young children. In contrast, human vocabularies typically exceed 10,000 words and sometimes reach 50,000 words. Human language is unique in that we can create sound sequences that signal words; vocal language is present today only in human beings, a derivation of innate aspects of the human brain. Thus, neurobiological research on the human brain demonstrates that the areas of the brain responsible for the regulation of speech are also implicated in distinguishing meanings conveyed through syntax and in other cognitive functions.

Basic Language Comprehension in Infants

A crucial step in language acquisition is the ability to identify what each word is. This requires the ability to distinguish the utterance of one word from another; it also demands the capacity to disregard extraneous differences in the auditory characteristics of words produced by a range of speakers at varying speaking rates. Researchers studying infant speech perception have concentrated on young infants’ ability to discriminate differing speech sounds. They found that infants as young as 1 month old could differentiate “the voicing contrast that distinguishes ba from pa in English” (Jusczyk 62). Subsequently, researchers began to examine the range of speech contrasts young infants are able to discern. The studies confirm that infants’ abilities to detect speech contrasts are not limited to the voicing contrast, but extend to place of “articulation—e.g., b-d and manner of articulation—e.g., r-l among consonants, and to various vowel distinctions—e.g., i-I” (Jusczyk 63). Furthermore, additional studies show that infants’ phonetic discriminative capabilities appear to be in place as soon as they are born.
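To give a feel for what a voicing contrast is acoustically, here is a toy sketch of my own, not from Jusczyk: it treats the ba/pa distinction as a categorical boundary on voice onset time (VOT). The roughly 25 ms boundary is a commonly cited approximation for English bilabial stops, not a figure from the source.

# A toy model of the ba/pa voicing contrast as a categorical boundary on
# voice onset time (VOT). The ~25 ms boundary is an assumed approximation.
VOT_BOUNDARY_MS = 25

def classify_stop(vot_ms):
    """Label a bilabial stop as voiced /b/ or voiceless /p/ by its VOT."""
    return "b" if vot_ms < VOT_BOUNDARY_MS else "p"

# Tokens on opposite sides of the boundary are heard as different
# categories; tokens on the same side are heard as the same category,
# mirroring the categorical discrimination reported in young infants.
for vot in (0, 15, 40, 70):
    print(f"VOT = {vot:2d} ms -> /{classify_stop(vot)}a/")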
The ability to discriminate one sound pattern from another is crucial for acquiring a vocabulary. However, it is also fundamental that the learner be able to recognize when different utterances are occurrences of the same word. Infants at six months old exhibit some ability to perceive the same vowel contrasts across different talkers. Further research expanded this finding, establishing that “2-month-olds generalized across talker differences in the production of consonant-vowel-consonant (CVC) syllables, such as ‘bug’ and ‘dug’” (Jusczyk 63). Additionally, infants in this same age group show some ability to compensate for acoustic variability due to changes in speaking rate. Thus, young infants seem to have a primitive capacity to compensate for the variability in the acoustic realization of words that arises from different talkers and varied speaking rates. As a result, young infants’ phonetic discrimination capacities and their ability to generalize across different sources of acoustic variation allow them to categorize the input and develop an understanding of the structural organization of their native language.

However, a multitude of other obstacles remains. Many infants grow up in environments where more than one language is spoken daily. Without some means of separating utterances in one language from those in another, it would be nearly impossible for infants to derive a set of generalizations about the structure of their native language; rather, generalizations about the nature of words and sentence frameworks would be based on discordant and extraneous utterances. Findings from several investigations, however, suggest that young infants have some capacity to discern utterances in one language from another. In particular, Mehler et al. (1988) found that “French newborns listening to an unfamiliar bilingual talker distinguished utterances in their mother’s native language (French) from those of another language (Russian)” (Jusczyk 63). The same research showed that American 2-month-olds can discriminate English utterances from Italian ones, and similar results have been reported for newborns and 5-month-old infants listening to Spanish and English. In the original Mehler et al. investigation, researchers found that infants used prosodic information to distinguish utterances in their native language from those in a foreign language.

More recent investigations have extended these observations. Nazzi, Bertoncini, and Mehler reported that French newborns were able to differentiate two foreign languages, English and Japanese. The rhythmic natures of Japanese and English differ: English is considered stress-timed, whereas Japanese is considered mora-timed. The mora is a rhythmic unit that can be syllabic or subsyllabic; a mora in English is roughly equivalent to a “CV consonant-vowel syllable with a short vowel (e.g., ‘the’ as opposed to ‘thee,’ which has a long vowel and thus two moras)” (Jusczyk 64). Nazzi et al. also discovered that French newborns did not discriminate utterances from two foreign languages with a similar rhythmic structure (e.g., English and Dutch), concluding that newborns initially differentiate utterances of different languages on the basis of their rhythmic systems. A crucial question then arises: what happens when another language spoken in the same environment has a rhythmic organization similar to that of the native language being learned? Bosch and Sebastian-Galles examined this issue by observing 4-month-old infants exposed to Spanish and Catalan. The infants were able to distinguish the utterances of these two languages; hence, Bosch and Sebastian-Galles suggested that the distribution of syllable types and their durations may have provided the foundation for differentiating utterances in Spanish and Catalan.
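To make the mora, the rhythmic unit invoked above, concrete, the sketch below is a simplification of my own, not from Jusczyk: it counts one mora per vowel letter, so a long vowel written with two vowel letters contributes two moras, matching the quoted “the” (one mora) versus “thee” (two moras) example. Real mora counting also weighs coda consonants and geminates, which this toy rule ignores.

# A toy mora counter: one mora per vowel letter, so a long vowel written
# with two vowel letters counts as two moras. A deliberate simplification;
# coda consonants and geminates, which also carry moras, are ignored.
VOWELS = set("aeiou")

def count_moras(word):
    """Count moras as the number of vowel letters in the word."""
    return sum(1 for ch in word.lower() if ch in VOWELS)

print(count_moras("the"))     # 1 mora (short vowel)
print(count_moras("thee"))    # 2 moras (long vowel, per the quoted example)
print(count_moras("kokoro"))  # 3 moras, as in Japanese ko-ko-ro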
Other recent research implies that differentiating languages within the same rhythmic class may “require prior experience with at least one of the languages” (Jusczyk 64). In particular, American infants at 5 months can discriminate English from Dutch, but not Dutch from German, even though all three languages share the same stress-timed rhythmic organization.

The crucial conclusion from these studies is that infants have some capability to sort utterances as belonging to different languages according to their rhythmic systems. Furthermore, infants beyond 4 months can use other kinds of information to distinguish utterances of their native language from those of a foreign language with a similar rhythmic structure. These early speech perception capacities thus provide a basis for the initial recognition of utterances, allowing learners to develop expectations about the structure and organization of their language. However, this process must undergo extensive development before it can support the kinds of speech processing found in a fluent speaker of a particular language. In time, speech perception capacities become tuned to deal most competently with the particular structure of utterances in the native language.

Cerebral Imaging of Auditory Language Processes

The ease with which most human beings comprehend and produce speech conceals the complexity of the psychological and neuronal systems that govern these processes. In the past decade, notable advances in understanding the neural basis of speech and language processing have been made thanks to the development of functional brain imaging technologies.

One of the most significant findings from over a century of research is the “identification of anterior and posterior speech areas in the left cerebral hemisphere of most right-handed individuals” (Zatorre 211). These regions comprise the left posterior temporal area specialized for speech comprehension, a section often referred to as Wernicke’s area, and Broca’s area in the left frontal region, involved in the production of speech (Fig. 8.1). Yet despite numerous studies, there is little consensus on the exact nature of the computations these regions perform. Moreover, the conventional description of comprehension deficits following injury to the posterior speech area is not entirely consistent with the “aphasia literature (see, e.g., Blumstein, 1994, for evidence that anterior lesions cause disturbances in speech perception),” nor is it validated by “cortical stimulation studies (Ojemann, 1991; Penfield & Roberts, 1959),” which report comprehension difficulties from stimulation of both the anterior and posterior speech zones (Zatorre 212).

Debates about the anatomical boundaries of the posterior temporal speech area also continue, as evidence suggests that the area extends beyond the established Wernicke’s area to include additional inferior tissue in the middle temporal gyrus. The uncertainty stems mainly from the uncontrolled nature of the brain lesions that cause aphasic disturbances, which vary widely between individuals. Other problems result from the complexity of speech comprehension itself and from the many component processes that may be involved even in simple tasks such as recognizing a basic command.
These issues have led some to question the accuracy of the standard neuroanatomical model, producing a consensus that it requires additional study and thorough revision.

Functional imaging allows visualization of the hemodynamic (relating to the forces involved in the circulation of blood) and physiological activity of the brain while the subject performs a behavioral or cognitive task. These techniques, positron emission tomography (PET) and functional magnetic resonance imaging (fMRI), are vital tools for studying many cognitive processes and have enhanced knowledge of the functional anatomy of the brain, especially with respect to language. Imaging methods cannot answer every question raised about the brain, and they face many limitations. Nevertheless, there is general consensus in the scientific community that progress in functional imaging has substantially augmented our understanding of human processes such as language at the neuronal level. As these techniques continue to advance, a complete understanding of processes as complex as language will require interdisciplinary knowledge from other scientific domains, including psycholinguistics and traditional methods of lesion analysis.

In the 1990s, Dr. Robert J. Zatorre at the Montreal Neurological Institute and his research team began studying speech comprehension through PET, a technique that measures cerebral blood flow (CBF) as an index of neuronal activity. The initial studies addressed the processing of speech in relation to the functional dissociation of primary from secondary auditory cortex. These cortical regions are distinct in their connectivity patterns and cytoarchitecture; the studies asked whether the areas could also be distinguished by their functional properties. Brain imaging studies (Binder et al., 1996; Petersen et al., 1991; Wise et al., 1991) had indicated a bilateral increase in CBF in the superior temporal gyrus (STG) when speech material was presented, as compared to a baseline with no auditory stimulus, with some indication of a left-sided bias. These experimental data suggest that the auditory cues present in speech sounds engage neural structures in both the left and right STG; however, they do not directly clarify to what extent these areas are specialized for speech processing per se, as opposed to more general classes of stimuli.

The second question