
10 Neuroscience of Comprehension


What is happening inside my head when I listen to a sentence? How do I process written words?

Our brain is no longer a black box: in this chapter, we will take a closer look at the processes in the brain concerned with language comprehension. In dealing with natural language understanding, we distinguish between the neuroscientific and the psycholinguistic approach. Since text understanding spans the broad fields of cognitive psychology, linguistics, and neuroscience, our main focus will lie on the intersection of the latter two, which is known as neurolinguistics.

Different brain areas need to be examined in order to find out how words and sentences are processed. Since there are only limited possibilities of acquiring knowledge about brain states under natural conditions, we are largely restricted to drawing conclusions from certain brain lesions about the functions of the corresponding brain areas.

Hence, much research is done on brain lesions that cause speech processing deficits (aphasias). One of the most famous examples is Broca's aphasia, which is caused by a lesion of the frontal brain region. A typical symptom is telegraphic speech without grammar, while speech understanding is largely unaffected. In contrast, persons suffering from Wernicke's aphasia, an equally well-known disruption of a more posterior brain region, speak very fluently, using neologisms (newly built words that do not belong to the language) as well as phonemic and semantic paraphasias (substitution of a phoneme or word by another one), but have heavily impaired language understanding.

To examine these brain lesions, techniques for brain imaging and ERP measurement have been established during the last 40 years, and the data they deliver have become more and more adequate. Possibly the best-established brain imaging technique is the PET (positron emission tomography) scan, introduced in the 1970s. It maps the blood flow in brain areas of interest onto colored "brain activity pictures" by measuring the activity of a radioactive tracer injected into the living person's blood. An analogous principle underlies the fMRI (functional magnetic resonance imaging) scan. This newer brain imaging method works without radioactivity and is therefore non-invasive and less injurious; it can be applied to a patient repeatedly, as might be needed for follow-up studies. The technique exploits the responses of substances in the blood to magnetic fields. EEG (electroencephalography) records the electrical activity of the brain via electrodes placed on the scalp. Unlike PET and fMRI, it does not produce images but waveforms, which precisely show the strength of activity for a given stimulus.

Scientific studies on these phenomena are generally divided into research on auditory and visual language comprehension; we will discuss both and glance at their differences and similarities. It is also not enough to examine English: to understand language processing in general, we have to look at non-Indo-European languages and other language systems such as sign language as well.

Today there are several theories about the roles of different brain regions in language understanding, and there is still a lot to do in this exciting field of current research.

Lateralization of language

Language has frequently been ascribed to a specific side of the brain, and there is a lot of evidence that each brain hemisphere has its own distinct functions. Most often, the left hemisphere is referred to as the dominant hemisphere and the right as the non-dominant one. This has led to the assumption that the right side of the brain is only important for receiving sensory information from the left side of the body and for controlling its motor movement. Yet, although the right hemisphere does not possess many language abilities and is rather necessary for spatial tasks and non-verbal processing, this does not mean that the two brain halves do not work together to achieve maximal function. On the contrary, as long as interhemispheric transfer is not hindered in some way, both halves interact effectively with one another.

Anatomical differences between left and right hemisphere

Initially we will consider the most apparent part of a differentiation between the left and right hemisphere: their differences in shape and structure. As visible to the naked eye, there is a clear asymmetry between the two halves of the human brain: the right hemisphere typically has a bigger, wider and farther extended frontal region than the left hemisphere, whereas the left hemisphere is bigger, wider and extends farther in its occipital (backward) region (M. T. Banich, "Neuropsychology", ch. 3, p. 92). A certain part of the temporal lobe's surface, called the planum temporale, is significantly larger on the left side in most human brains. It is localized near Wernicke's area and other auditory association areas, so we can already speculate that the left hemisphere is more strongly involved in processes of language and speech. In fact, such a left-laterality of language functions is evident in 97% of the population (D. Purves, "Neuroscience", ch. 26, p. 649). But the percentage of human brains in which a left dominance of the planum temporale is traceable is only 67% (D. Purves, "Neuroscience", ch. 26, p. 648). Which other factors play a role here and lead to this high proportion of human brains in which language is lateralized is simply not clear.

Functional asymmetry

A rarely performed but well-known surgical method to reduce the frequency of epileptic seizures in severe cases of epilepsy is the so-called corpus callosotomy. Here a radical cut is made through the connecting "communication bridge" between the right and left hemisphere, the corpus callosum; the result is a "split brain". For patients whose corpus callosum is cut, the risk of injury from seizures is mitigated, but the side effect is striking: due to this radical transection, the two halves are no longer able to communicate adequately. Each functions on its own, separated and disjoined. This situation provides the opportunity to study the functionality of the two hemispheres independently. The first experiments with split-brain patients were performed by Roger Sperry and his colleagues at the California Institute of Technology in the 1960s and 1970s (D. Purves, "Neuroscience", ch. 26, p. 646). They led researchers to sweeping conclusions about the laterality of speech and the organization of the human brain in general.

Split-brain experiments typically make use of the laterality of the visual system: a visual stimulus located within the left visual field projects onto the nasal (inner) part of the left eye's retina and onto the temporal (outer) part of the right eye's retina. Since images on the temporal retinal region are processed in the visual cortex of the same side of the brain (ipsilaterally), whereas nasal retinal information is mapped onto the opposite half of the brain (contralaterally), a stimulus within the left visual field arrives completely in the right visual cortex to be processed. In "healthy" brains this information furthermore reaches the left hemisphere via the corpus callosum and can be integrated there. In split-brain patients this flow of signals is interrupted; the stimulus remains "invisible" to the left hemisphere.

In such an experiment a visual stimulus is often presented to only one half of the brain (that is, within one, the opposite, half of the visual field), while the participant is instructed to name the object seen and to blindly pick it out of a set of concrete objects with the contralateral hand. It can be shown that a picture, for example the drawing of a die, which was presented only to the left hemisphere, can be named by the participant ("I saw a die"), but cannot be selected with the right hand (no idea which object to choose from the table). Conversely, the participant is unable to name the die if it was seen by the right hemisphere, but easily picks it out of the heap of objects on the table with the left hand.

These outcomes are clear evidence of the human brain's functional asymmetry. The left hemisphere seems to dominate functions of speech and language processing, but is unable to handle spatial tasks like vision-independent object recognition. The right hemisphere seems to dominate spatial functions, but is unable to process words and meaning. In a second experiment it can be shown that a split-brain patient can only follow a written command (like "get up now!") if it is presented to the left hemisphere. The right hemisphere can only "understand" pictorial instructions. The following table (D. Purves, "Neuroscience", ch. 26, p. 647) summarizes the division of functions:

Left Hemisphere
• analysis of right visual field
• language processing
• writing
• speech

Right Hemisphere
• analysis of left visual field
• spatial tasks
• visuospatial tasks
• object and face recognition

It is important to keep in mind that these distinctions describe only functional dominances, not exclusive competences. In cases of unilateral brain damage, one half of the brain often takes over tasks of the other; full effectiveness of the two hemispheres is only reached through constructive interaction of both. So it would be a fallacy to conclude that the right hemisphere has no influence at all on speech and language processing. A later section will go into this point.

Cognitive functioning is most often ascribed to the right hemisphere of the brain. When damage is done to this part of the brain, or when temporal regions of the right hemisphere are removed, cognitive-communication problems can result, such as impaired memory, attention problems, and poor reasoning (L. Cherney, 2001). Investigations lead to the conclusion that the right hemisphere processes information in a gestalt and holistic fashion, with a special emphasis on spatial relationships. This gives it an advantage in differentiating two distinct faces, because it examines things in a global manner; it also responds to lower spatial, as well as auditory, frequencies. The former point is supported by the fact that the right hemisphere is capable of reading most concrete words and can make simple grammatical comparisons (M. T. Banich, "Neuropsychology", ch. 3, p. 97). But in order to function in such a way, there must be some sort of communication between the brain halves. Since 1990, research suggests that the hemispheres do not have a single way of interacting with each other but can do so in a variety of ways. The corpus callosum, as well as some subcortical commissures, serve for interhemispheric transfer. Both can contribute to performance simultaneously, since they play complementary roles in processing.


An important issue when exploring differences in brain organization is handedness, the tendency to use the left or the right hand to perform activities. Throughout history, left-handers, who comprise only about 10% of the population, have often been considered abnormal. They were said to be evil, stubborn and defiant, and were, even until the mid-20th century, forced to write with their right hand. Examples that show their position in society are the Latin word sinistra, which means both "left" and "unlucky", or an Indian tradition in which the left hand is reserved for bathroom functions (M. T. Banich, "Neuropsychology", ch. 3, p. 117). There are many negative connotations associated with the phrase "being left-handed", e.g. being clumsy, awkward, insincere, or malicious.

The most commonly accepted idea as to how handedness affects the hemispheres is the brain hemisphere division of labor. Since both speaking and handiwork require fine motor skills, the presumption is that it is more efficient to have one brain hemisphere do both, rather than dividing the work up. Since in most people the left side of the brain controls speaking, right-handedness predominates. The theory also predicts that left-handed people have a reversed brain division of labor.

In right-handers, verbal processing is mostly done in the left hemisphere, whereas visuospatial processing is mostly done in the opposite hemisphere. Accordingly, in 95% of right-handers speech output is controlled by the left hemisphere and in only 5% by the right. Left-handers, on the other hand, have a heterogeneous brain organization: their hemispheres are either organized in the same way as right-handers', the opposite way, or even such that both hemispheres are used for verbal processing. Usually, in 70% of the cases, speech is controlled by the left hemisphere, in 15% by the right, and in 15% by either hemisphere. When the average is taken across all types of left-handedness, it appears that left-handers are less lateralized.

When damage occurs to the left hemisphere, for example, the resulting visuospatial deficit is usually more severe in left-handers than in right-handers. These dissimilarities may derive, in part, from differences in brain morphology, such as asymmetries in the planum temporale. Still, it can be assumed that left-handers have less division of labor between their two hemispheres than right-handers do and are more likely to lack neuroanatomical asymmetries (M. T. Banich, "Neuropsychology", ch. 3, p. 123).

There have been many theories as to why people are left-handed and what the consequences may be. Some researchers claim that left-handers have a shorter life span, higher accident rates, or more autoimmune disorders. According to the theory of Geschwind and Galaburda, a relation between sex hormones, the immune system, and profiles of cognitive abilities determines whether a person is left-handed or not. Many genetic models have also been proposed, yet the causes and consequences still remain a mystery (M. T. Banich, "Neuropsychology", ch. 3, p. 119).

Auditory Language Processing

To understand how language is organized neurologically and what its fundamental components are, brain lesions, namely aphasias, are examined. We will first consider the neurological perspective of work with aphasia and then turn to the psychological perspective later in the chapter.

Neurological Perspective

Broca's and Wernicke's area

One of the most well-known aphasias is Broca's aphasia, which renders patients unable to speak fluently; they have great difficulty producing words. Comprehension, however, is relatively intact in these patients. Because the symptoms do not result from motor problems of the vocal musculature, a region in the brain responsible for linguistic output must be lesioned. Broca concluded that this region, responsible for fluent speech output, must be located ventrally in the frontal lobe, anterior to the motor strip. Recent research suggests that Broca's aphasia results from damage not only to cortical tissue but also to subcortical tissue and white matter.

Another very famous aphasia, known as Wernicke's aphasia, causes the opposite symptoms. Patients suffering from Wernicke's aphasia usually speak very fluently and pronounce words correctly, but the words are combined senselessly: "word salad" is the way it is most often described. Understanding these patients is especially difficult, because they use paraphasias (substitution of a whole word in verbal paraphasia, of a word with similar meaning in semantic paraphasia, and of a phoneme in phonemic paraphasia) and neologisms. For patients with Wernicke's aphasia, comprehending even simple sentences is a very difficult task; their ability to process auditory as well as written language input is impaired. From this one can conclude that the area whose damage causes Wernicke's aphasia is situated at the junction of temporal, parietal and occipital regions, near Heschl's gyrus (the primary auditory area), because all the areas receiving and interpreting sensory information (posterior cortex), and those connecting the sensory information to meaning (parietal lobe), are likely to be involved.

Wernicke did not only detect the brain region responsible for comprehension; he also predicted that with an impairment of the region between Wernicke's and Broca's area, speech could still be comprehended and produced, but repeating just-heard sentences would not be possible, because the input received could not be forwarded to Broca's area to be reproduced. Damage to this part of the brain therefore causes what is called conduction aphasia. Research has shown that this kind of aphasia results from damage to a large nerve fibre tract, the arcuate fasciculus, which connects the two otherwise intact brain regions. That is why conduction aphasia is also regarded as a disconnection syndrome (a behavioural dysfunction caused by damage to the connection between two brain regions).

Transcortical motor aphasia, another lesion caused by a disrupted connection, is very similar to Broca's aphasia, with the difference that the ability to repeat is preserved. In fact, people with transcortical motor aphasia often suffer from echolalia, the compulsion to repeat what they just heard. Usually the damage lies outside Broca's area, sometimes more anterior and sometimes more superior. Individuals with transcortical sensory aphasia have symptoms similar to those of Wernicke's aphasia, except that they too show signs of echolalia.

Lesions in large parts of the left hemisphere lead to global aphasia, and thus to an inability both to comprehend and to produce language, because not only Broca's or Wernicke's area is damaged. (Banich, 1997, pp. 276-282)

Type of Aphasia         Spontaneous Speech   Paraphasia         Comprehension   Repetition          Naming
Broca's                 Nonfluent            Uncommon           Good            Poor                Poor
Wernicke's              Fluent               Common (verbal)    Poor            Poor                Poor
Conduction              Fluent               Common (literal)   Good            Poor                Poor
Transcortical motor     Nonfluent            Uncommon           Good            Good (echolalia)    Poor
Transcortical sensory   Fluent               Common             Poor            Good (echolalia)    Poor
Global                  Nonfluent            Variable           Poor            Poor                Poor

(Adapted from Benson, 1985, p. 32, as cited in Banich, 1997, p. 287)
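The table above can be read as a simple symptom-profile lookup. The sketch below encodes it as a Python dictionary and matches an observed profile against the typical syndromes; it is purely illustrative, not a diagnostic tool, and uses only the three columns that discriminate most clearly.

```python
# Toy encoding of the aphasia table as a dictionary mapping each
# syndrome to its typical symptom profile (adapted from the table
# above; illustrative only).
APHASIA_TABLE = {
    "Broca's":               {"speech": "nonfluent", "comprehension": "good", "repetition": "poor"},
    "Wernicke's":            {"speech": "fluent",    "comprehension": "poor", "repetition": "poor"},
    "Conduction":            {"speech": "fluent",    "comprehension": "good", "repetition": "poor"},
    "Transcortical motor":   {"speech": "nonfluent", "comprehension": "good", "repetition": "good"},
    "Transcortical sensory": {"speech": "fluent",    "comprehension": "poor", "repetition": "good"},
    "Global":                {"speech": "nonfluent", "comprehension": "poor", "repetition": "poor"},
}

def match_syndromes(speech, comprehension, repetition):
    """Return all syndromes consistent with an observed profile."""
    observed = {"speech": speech, "comprehension": comprehension,
                "repetition": repetition}
    return [name for name, profile in APHASIA_TABLE.items()
            if profile == observed]

print(match_syndromes("fluent", "good", "poor"))     # ['Conduction']
print(match_syndromes("nonfluent", "good", "good"))  # ['Transcortical motor']
```

Note how fluent speech with good comprehension but poor repetition singles out conduction aphasia, mirroring the disconnection account given above.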

Psychological Perspective

From the psychological perspective, brain lesions are used to understand which parts of the brain are responsible for the linguistic features phonology, syntax and semantics.


To examine which parts are responsible for phonetic representation, patients with Broca's and Wernicke's aphasia can be compared. The speech of patients with Broca's aphasia is non-fluent, i.e. they have problems producing the correct phonetic and phonemic representation of a sound; people with Wernicke's aphasia show no problems speaking fluently, but they too have problems producing the right phoneme. This indicates that Broca's area is mainly involved in phonological production, and also that phonemic and phonetic representation do not take place in the same part of the brain. Scientists have examined speech production on a more precise level, that of the distinctive features of phonemes, to see in which features patients with aphasia make mistakes. Results show that in fluent as well as non-fluent aphasia, patients usually mix up only one distinctive feature, not two, and that errors connected to the place of articulation are more common than those linked to voicing. Interestingly, some aphasia patients are well aware of the different features of two phonemes, yet are unable to produce the right sound: though they have great difficulty pronouncing words correctly, their comprehension of words is still quite good. This is characteristic of patients with Broca's aphasia, while those with Wernicke's aphasia show the contrary symptoms: they are able to pronounce words correctly, but cannot understand what the words mean. That is why they often utter phonologically correct neologisms that are not real words with a meaning.


Humans usually know the syntax of their mother tongue and thus notice when a word is out of order in a sentence. People with aphasia, however, often have problems with the parsing of sentences, not only in the production of language but also in the comprehension of sentences. Patients showing an inability to comprehend and produce sentences usually have some kind of anterior aphasia, also called agrammatic aphasia. This can be revealed in sentence tests: these patients cannot easily distinguish between active and passive voice if both agent and object could play the active part. For example, patients do not see a difference between "The boy saw the girl" and "The girl was seen by the boy", but they do understand both "The boy saw the apple" and "The apple was seen by the boy", because there they can draw on semantics and do not have to rely on syntax alone. Patients with posterior aphasia, for example Wernicke's aphasia, do not show these symptoms, as their speech is fluent; for them, comprehension by mere syntactic means would be possible, but the semantic aspect must be considered as well.


It has been shown that patients suffering from posterior aphasia have severe problems understanding simple texts, although their knowledge of syntax is intact. The semantic shortcoming is often examined with a Token Test, in which patients have to point to objects referred to in simple sentences. As might be guessed, people with anterior aphasia have no problems with semantics, yet they may be unable to understand longer sentences, because there knowledge of syntax is involved as well.

In general, studies with lesioned patients have shown that anterior areas are needed for speech output and posterior regions for speech comprehension. As mentioned above, anterior regions are also more important for syntactic processing, while posterior regions are involved in semantic processing. But such a strict division of the parts of the brain and their responsibilities is not possible, because posterior regions must be important for more than just sentence comprehension: patients with lesions in this area can neither comprehend nor produce speech. On the whole, these studies have revealed that the human brain is divided into two subsystems, one more important for comprehension, the other for production. (Banich, 1997, pp. 283-293)

Visual Language Processing

The question whether there is one processing unit in the brain for language as a whole is mostly answered by “no”. Reading and writing respectively rely on vision whereas spoken language is first mediated by the auditory system. Language systems responsible for written language processing have to interact with a sensory system different from the one involved in spoken language processing.

Visual language processing in general begins when the visual forms of letters ("c" or "C") are mapped onto abstract letter identities. These are then mapped onto a word form and the corresponding semantic representation (the "meaning" of the word, i.e. the concept behind it). Observations of patients who lost a language ability due to brain damage revealed disease patterns that indicate a difference between perception (reading) and production (writing) of visual language, just as is found in non-visual language processing.

Alexic patients possess the ability to write while not being able to read, whereas patients with agraphia are able to read but cannot write. Though alexia and agraphia often occur together as a result of damage to the angular gyrus, patients have been found having alexia without agraphia (e.g. Greenblatt 1973, as cited in M. T. Banich, "Neuropsychology", p. 296) or agraphia without alexia (e.g. Hécaen & Kremin, 1976, as cited in M. T. Banich, "Neuropsychology", p. 296). This is a double dissociation that suggests separate neural control systems for reading and writing.

But language production and perception themselves are thought to subdivide into separate neural circuits, since double dissociations found in phonological and surface dyslexia suggest so-called direct and phonological routes.

The phonological route

In essence, the phonological route means making use of grapheme-to-phoneme rules, which determine the phonological representation for a given grapheme. A grapheme is the smallest written unit of a word that represents a phoneme (e.g. "sh" in "shore"). A phoneme, on the other hand, is the smallest phonological unit distinguishing one word from another that otherwise sounds the same (e.g. "bat" and "cat"). People learning to read often use the phonological route to arrive at a meaning representation: they construct a phoneme for each grapheme and then combine the individual phonemes into a sound pattern that is associated with a certain meaning. An example would be:

h a t → /h/ /a/ /t/ → "hat"

individual graphemes → phonological representation → meaning representation
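This mapping can be sketched as a small lookup procedure. The rule table below is a toy assumption, not real English phonology; the function greedily matches the longest grapheme at each position so that multi-letter graphemes like "sh" are preferred over single letters.

```python
# Toy sketch of the phonological route: grapheme-to-phoneme rules
# applied left to right. The rule table is illustrative only.
G2P_RULES = {
    "sh": "/sh/",
    "h": "/h/",
    "a": "/a/",
    "t": "/t/",
    "b": "/b/",
    "c": "/k/",
}

def phonological_route(word):
    """Greedily match the longest known grapheme at each position."""
    phonemes = []
    i = 0
    while i < len(word):
        # Prefer two-letter graphemes such as "sh" over single letters.
        for length in (2, 1):
            grapheme = word[i:i + length]
            if grapheme in G2P_RULES:
                phonemes.append(G2P_RULES[grapheme])
                i += length
                break
        else:
            raise ValueError(f"no grapheme-to-phoneme rule for {word[i]!r}")
    return phonemes

print(phonological_route("hat"))  # ['/h/', '/a/', '/t/']
```

The resulting phoneme sequence would then be matched against a sound pattern associated with a meaning, as in the schematic above.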

The direct route

The direct route is supposed to work without an intermediary phonological representation, so that print is directly associated with word-meaning. A situation in which the direct route has to be taken is when reading an irregular word like “colonel”. Application of grapheme-to-phoneme rules would lead to an incorrect phonological representation.

According to Taft (1982, as referred to in M. T. Banich,“Neuropsychology“, p.297) and others, the phonological route is used by people who are learning to read or by skilled readers when encountering unknown words. The direct route is supposed to be faster since it does not make use of a “phonological detour” and is therefore said to be used for known words. However, this is just one point of view and others, like Chastain (1987, as referred to in M. T. Banich,“Neuropsychology“, p.297), postulate a reliance on the phonological route even in skilled readers.
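The interplay of the two routes can be made concrete in a toy model: known (possibly irregular) words are looked up directly in a lexicon, while unknown words and non-words fall back on grapheme-to-phoneme assembly. All entries below are illustrative assumptions, not real phonological data.

```python
# Toy dual-route sketch: a direct route (whole-word lexicon lookup)
# with a phonological fallback via grapheme-to-phoneme rules.
LEXICON = {"colonel": "/kernel/"}        # irregular word: direct route only
G2P = {"h": "h", "a": "a", "t": "t"}     # toy rule table

def read_word(word):
    if word in LEXICON:
        # Direct route: print is associated with a stored form at once.
        return LEXICON[word]
    # Phonological route: assemble the pronunciation letter by letter;
    # this also works for non-words, which have no stored entry.
    return "/" + "".join(G2P[ch] for ch in word) + "/"

print(read_word("colonel"))  # /kernel/  (direct route)
print(read_word("hat"))      # /hat/     (assembled phonologically)
```

Note that applying the fallback rules to "colonel" would produce a wrong pronunciation, which is exactly why irregular words are said to require the direct route.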

The processing of written language in reading

Several kinds of alexia could be differentiated, often depending on whether the phonological or the direct route was impaired. Patients with brain lesions participated in experiments where they had to read out words and non-words as well as irregular words. Reading of non-words for example requires access to the phonological route since there cannot be a “stored” meaning or a sound representation for this combination of letters.

Patients with a lesion in temporal structures of the left hemisphere (the exact location varies) suffer from so-called surface alexia. They show the following characteristic symptoms, which suggest a strong reliance on the phonological route: very common are regularity effects, that is, mispronunciations of words whose spelling is irregular, like "colonel" or "yacht". These words are pronounced according to grapheme-to-phoneme rules, although high-frequency irregularly spelled words may be preserved in some cases. Furthermore, the would-be pronunciation of a word is reflected in reading-comprehension errors: when asked to describe the meaning of the word "bear", people suffering from surface alexia may answer something like "a beverage", because the resulting sound pattern of "bear" is the same for them as that of "beer". This goes along with a tendency to confuse homophones (words that sound the same but are spelled differently and have different meanings). However, these people are still able to read non-words with a regular spelling, since they can apply grapheme-to-phoneme rules to them.

In contrast, phonological alexia is characterised by a disruption in the phonological route due to lesions in more posterior temporal structures of the left hemisphere. Patients can read familiar regular and irregular words by making use of stored information about the meaning associated with that particular visual form (so there is no regularity effect like in surface alexia). However, they are unable to process unknown words or non-words. Word class effects and morphological errors are common. Nouns, for example, are read better than function words and sometimes even better than verbs. Affixes which do not change the grammatical class or meaning of a word (inflectional affixes) are often substituted (e.g. “farmer” instead of “farming”). Furthermore, concrete words are read with a lower error rate than abstract ones like “freedom” (concreteness effect).

Deep Alexia shares many symptomatic features with phonological alexia such as an inability to read out non-words. Just as in phonological alexia, patients make mistakes on word inflections as well as function words and show visually based errors on abstract words (“desire” → “desert”). In addition to that, people with deep alexia misread words as different words with a strongly related meaning (“woods” instead of “forest”), a phenomenon referred to as semantic paralexia. Coltheart (as referred to in the “Handbook of Neurolinguistics”,ch.41-3, p.563) postulates that reading in deep dyslexia is mediated by the right hemisphere. He suggests that when large lesions affecting language abilities other than reading prevent access to the left hemisphere, the right-hemispheric language store is used. Lexical entries stored there are accessed and used as input to left-hemisphere output systems.

The processing of written language in spelling

Just like in reading, two separate routes, a phonological and a direct route, are thought to exist for spelling. The phonological route is supposed to make use of phoneme-to-grapheme rules, while the direct route links thought to writing without an intermediary phonetic representation. It should be noted here that phoneme-to-grapheme rules (used for spelling) differ from grapheme-to-phoneme rules in that one is not simply the reverse of the other. The most common phoneme for the grapheme "k" is /k/; the most common grapheme for the phoneme /k/, however, is "c".
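This asymmetry can be illustrated with a toy frequency table: inverting a grapheme-to-phoneme table yields a phoneme-to-grapheme table whose most common answer may differ. The counts below are invented purely for illustration.

```python
from collections import defaultdict

# Made-up frequency counts (illustrative only): how often each
# grapheme is pronounced as each phoneme.
g2p_freq = {
    "k": {"/k/": 90, "(silent)": 10},   # e.g. silent "k" in "knee"
    "c": {"/k/": 150, "/s/": 60},       # "cat" vs "city"
}

# Inverting the table gives phoneme -> grapheme frequencies.
p2g_freq = defaultdict(dict)
for grapheme, phones in g2p_freq.items():
    for phoneme, freq in phones.items():
        p2g_freq[phoneme][grapheme] = freq

# The most common phoneme for the grapheme "k" is /k/ ...
best_phoneme = max(g2p_freq["k"], key=g2p_freq["k"].get)
# ... but the most common grapheme for the phoneme /k/ is "c",
# because "c" spells /k/ more often overall.
best_grapheme = max(p2g_freq["/k/"], key=p2g_freq["/k/"].get)
print(best_phoneme, best_grapheme)  # /k/ c
```

The point is simply that the two rule systems must be learned (and can be damaged) separately, which fits the dissociation between reading and spelling disorders described in this section.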

Phonological agraphia is caused by a lesion in the left supramarginal gyrus, which is located in the parietal lobe above the posterior section of the Sylvian fissure (M. T. Banich, "Neuropsychology", p. 299). The ability to write regular and irregular words is preserved, while the ability to write non-words is not. This, together with poor retrieval of affixes (which are not stored lexically), indicates an inability to associate spoken words with their orthographic form via phoneme-to-grapheme rules. Patients rely on the direct route, which means that they use orthographic word-form representations stored in lexical memory.

Lesions at the conjunction of the posterior parietal lobe and the parieto-occipital junction cause so-called lexical agraphia, sometimes also referred to as surface agraphia. As the name indicates, it parallels surface alexia in that patients have difficulty accessing lexical-orthographic representations of words. Lexical agraphia is characterised by poor spelling of irregular words but good spelling of regular words and non-words. When asked to spell irregular words, patients often commit regularization errors, so that the word is spelled phonologically correctly (for example, "whisk" might be written as "wisque").

Evidence from Advanced Neuroscience Methods

In the past, most of the data in neurolinguistic experiments came from patients with brain lesions that caused disabling of particular linguistic functions. By evaluating which damage caused what kind of dysfunction, researchers could sketch a map of the different brain regions involved in language processing.

Measuring the functions of both normal and damaged brains has been possible since the 1970s, when the first brain imaging techniques were developed. With them, we are able to “watch the brain working” while the subject is, for example, listening to a joke. These methods (described further in chapter 4) show whether the earlier findings are correct and, when they are, how precise they were. Today, language can be localised better than ever.

Generally, imaging shows that certain functional brain regions are much smaller than estimated in brain lesion studies, and that their boundaries are more distinct (cf. Banich, p. 294). Because the exact location varies between individuals, pooling the results of many brain lesion studies previously led to overestimates of the size of functional regions. For example, stimulating brain tissue electrically (during epilepsy surgery) and observing the outcome (e.g. errors in naming tasks) led to much better knowledge of where language processing areas are located.

Left hemisphere dominance

The first thing to examine was the suspected dominance of the left hemisphere in auditory language processing. (The differences in visual processing are considered below.) Electrical stimulation studies and PET studies, as described above and in chapter 4, provided a lot of evidence for such a dominance: in right-handers, language functions were normally lateralized. The so-called Wada technique tests which hemisphere is responsible for speech output; it is usually used in epilepsy patients during surgery. It is not a brain imaging technique, but simulates a brain lesion.

[Figure: A view of the left hemisphere. Green: temporal lobe; blue: frontal lobe; yellow: parietal lobe; red: occipital lobe.]

One of the hemispheres is anesthetized by injecting a barbiturate (sodium amobarbital) into one of the patient’s carotid arteries. Then the patient is asked to name a number of items on cards. When he is not able to do that, despite the fact that he could do it an hour earlier, the anesthetized hemisphere is the one responsible for speech output. This test must be done twice, for there is a chance that the patient produces speech bilaterally. The probability for that is not very high; according to Rasmussen & Milner 1997a (as referred to in Banich, p. 293), it occurs in only 15% of left-handers and in no right-handers. (It is still unclear where these differences in left-handers’ brains come from.)

That means that in most people, only one hemisphere “produces” speech output – and in 96% of right-handers and 70% of left-handers, it is the left one. The findings of the brain lesion studies about asymmetry were confirmed here: normally (in healthy right-handers), the left hemisphere controls speech output.

Besides left-handers, brain imaging techniques have revealed other examples of bilateral language processing: according to ERP studies (by Bellugi et al. 1994 and Neville et al. 1993, as cited in E. Dabrowska, “Language, Mind and Brain”, 2004, p. 57), people with Williams syndrome (WS) also have no dominant hemisphere for language. WS patients have many physical and mental disorders but show, compared to their otherwise poor cognitive abilities, very good linguistic skills. These skills do not rely on one dominant hemisphere; both hemispheres contribute equally. So, the majority of the population has a dominant left hemisphere for language processing. But what does it mean that some individuals’ brains organise this differently? That there are different “organisation possibilities” in our brains? Dabrowska (p. 57) suggests that the organisational structure in the brain could be less innate and fixed than is commonly thought.

Different roles of posterior and anterior regions

In the left hemisphere, both anterior (towards the face) and posterior (towards the back of the head) regions contribute to language processing and are strongly connected. For example, Broca’s area is located more anteriorly and Wernicke’s area more posteriorly. It is often said, in general terms, that the difference is speech production (Broca) versus language comprehension (Wernicke). PET studies (Fiez & Petersen, 1993, as cited in Banich, p. 295) have shown that in fact both anterior and posterior regions were activated in both tasks, but with different strengths – in agreement with the lesion studies. The more active speech production an experiment requires, the more frontal the main activation is: for example, when the presented words must be repeated.

Another result (Raichle et al. 1994, as referred to in Banich, p. 295) was that the familiarity of the stimuli plays a big role. When the subjects were presented with well-known stimulus sets in well-known experimental tasks and had to repeat them, anterior regions were activated. Those regions were known to cause conduction aphasia when damaged. But when the words were new, and/or the subjects had never done a task like this before, the activation was recorded more posteriorly. That means that when you repeat an unexpected word, the hardest-working brain tissue is somewhere under your upper left ear, but when you knew beforehand that this word would be the next one to repeat, it is a bit nearer to your left eye.

Visual versus Auditory Language Processing

As indicated by the fact that the lesions responsible for aphasiae are located in other brain regions than those causing agraphia, these different types of language comprehension do not take place in the same regions. Reading, no matter whether it is words or nonsense syllables, activates an area at the posterior border of the temporal lobe, somewhat behind the ear. Hearing a word, or thinking about how a written word sounds, takes place at the upper border of the left temporal lobe, under the part of the skull that is located above the ear (Petersen et al. 1988, as cited in Banich, p. 300). That matches the findings of the lesion studies described above.

[Figure: Main activation areas: hearing and reading.]

When we look at visual language processing, many findings from PET studies (Banich, p. 301) suggest that the right hemisphere first recognizes a written word as a letter sequence, regardless of what exactly it looks like. Then the language network in the left hemisphere builds up an abstract representation of the word that does not depend on the visual form – e.g. on case, or on whether it is handwritten or bold. In reading, the brain regions for recognizing symbols and the language areas work hand in hand.

Beyond words

On the word level, the current studies are mostly consistent with each other and with findings from brain lesion studies. But when it comes to the more complex understanding of whole sentences, texts and storylines, the findings are split. According to E. C. Ferstl’s review “The Neuroanatomy of Text Comprehension. What’s the story so far?” (2004), there is evidence both for and against right hemisphere regions playing the key role in pragmatics and text comprehension. On the current state of knowledge, we cannot say exactly how and where cognitive functions like building situation models and inferencing (see next chapter) work together with “pure” language processes.

Researchers hope to find out many more facts about the functional neuroanatomy of language with these neuroimaging methods. Their goal is to verify or disprove psychological theories about comprehension. Many questions remain to be answered; for example, it is still unclear whether there is a distinct language module (one that you could cut out without affecting other brain functions) or not. As Evelyn C. Ferstl points out in her review, the next step after exploring the distinct small regions responsible for subtasks of language processing will be to find out how they work together to build up the language network.

Findings from other language systems

Most neurolinguistic research is, for obvious reasons, concerned with the production and comprehension of the English language, either written or spoken. Currently there is a clear focus on investigating how the meaning of single words is processed, and less on sentence meaning. However, looking at different language systems from a neuroscientific perspective can substantiate as well as differentiate acknowledged theories of language processing.

English, as well as the other Indo-European languages, uses an alphabetic (phonological) system: each symbol represents one phoneme, either a consonant or a vowel. Other systems include

• logographic systems, where a symbol represents a whole morpheme (such as hieroglyphs and Chinese hanzi)

• syllabic systems, such as Japanese kana

• abjads, in which only consonants are represented, such as Arabic or Hebrew

Sign language

Sign languages are a very special case among phonological systems: as we will see, they share many features of 'spoken' phonological language systems from a linguistic perspective, but still differ in some ways from a neuroscientific point of view and are hence worth some consideration.

Contrary to popular belief, there is no universal sign language but many regionally bound languages, each with a distinct grammar and lexicon. As said before, sign languages are phonological languages in that every meaningful sign is made up of several phonemes that carry no meaning as such (these used to be called cheremes (Greek χερι: hand) in sign language research until their cognitive equivalence to phonemes in spoken languages was realized). A sign in turn consists of five distinct features:

• Handshape

• Palm orientation

• Place of articulation

• Movement

• Non-manual markers (such as facial expressions)
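These five feature classes can be thought of as the slots of a feature bundle. A minimal sketch follows; the concrete feature values are hypothetical, chosen only to illustrate the structure.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Sign:
    """A sign as a bundle of the five phonemic feature classes listed above."""
    handshape: str
    palm_orientation: str
    place_of_articulation: str
    movement: str
    non_manual_marker: str  # e.g. a facial expression

# Two hypothetical signs that differ only in handshape: just as /pat/
# and /bat/ differ by a single phoneme in spoken language, changing one
# phonemic feature yields a different sign.
sign_a = Sign("flat hand", "palm down", "chest", "outward arc", "neutral")
sign_b = Sign("fist", "palm down", "chest", "outward arc", "neutral")
print(sign_a == sign_b)  # False
```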

Sign language grammar

Some neuroscientifically relevant principles of most sign languages' grammars (including American Sign Language (ASL), Deutsche Gebärdensprache (DGS) and several other major sign languages) are:

Rich morphosyntactical properties

Elaborations of the manner of actions or the properties of objects that would require many additional words in spoken languages can be expressed within the base word by different movements or non-manual markers. For example, "The flight was long and I did not enjoy it" is the single sign for "flight" with the past tense marker, moved in a way that represents the attribute "long", combined with a facial expression of disaffection. In this respect, sign language is closer to Japanese than to Indo-European languages.

Setting up of nouns

Once signed, nouns can be 'linked' to a point in space, e.g. by pointing beneath one's head. Later in the discourse, the nouns can be retrieved by pointing to that spot again. This is functionally related to pronouns in English.
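Functionally, these loci behave much like a small symbol table: a point in signing space is bound to a referent, and pointing later looks it up. A sketch, with all loci names and referents invented for illustration:

```python
# Toy sketch of noun loci in sign language discourse: nouns are bound
# to points in signing space and later retrieved by pointing, which
# works like a pronoun. All names here are invented.
loci = {}

def set_up(point, noun):
    """Sign the noun, then link it to a point in signing space."""
    loci[point] = noun

def point_to(point):
    """Pointing to the spot again retrieves the noun from the discourse."""
    return loci[point]

set_up("upper left", "my sister")
set_up("upper right", "the doctor")
print(point_to("upper left"))  # my sister
```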

Chronologically sequenced ordering

Whereas English can order events by, amongst other structures, inverted causal dependence ("I was surprised because there was a reasonable amount of money in my bank account, although I hadn't been paid for weeks"), sign language speakers usually sign events in the order in which they occurred (or sometimes in the order in which they found out about them).

Neuropsychological research of sign language

We have seen that sign languages are independent from, and fully developed to the extent of, spoken languages; now we will compare cerebral processes in speakers of both. Studies of aphasic patients have shown that damage to anterior portions of the left hemisphere (e.g. Broca's aphasia) results in a loss of fluency and an inability to correctly use syntactic markers, verb inflections and other minor features, although the signed words are chosen semantically correctly. In turn, patients with damage to posterior portions could still properly inflect verbs and set up and retrieve nouns from a discourse locus, but their sequences of signs had no meaning (Poizner, Klima & Bellugi, 1987). These findings show that the same underlying processes are responsible for the production of sign language and spoken language – or, in other words, that the brain regions dealing mainly with language are in no way restricted to spoken language, but may well be able to process any type of language system, as long as it obeys some syntax and semantics (a finding that may serve as evidence for a Universal Grammar as postulated by Chomsky, 1965).

Studies using brain imaging techniques such as fMRI corroborated these results from studies on aphasic subjects by comparing the activation patterns of speakers and signers: typical areas in the left hemisphere are activated both for native English speakers given written stimuli and for native signers given signs as stimuli. However, signers also show strong activation of the right hemisphere. This is partly due to the necessity of processing visually perceived spatial information (such as making the perceived signals invariant to the angle of perception or to individual physical attributes of the signer). However, some of those areas, like the angular gyrus, are only activated in native signers – not in hearing subjects who learned ASL after puberty. This suggests that the way of learning sign languages (and languages in general) changes with puberty: late learners' brains are unable to recruit the brain regions specialised for processing this language (Newman et al., 1998).

References & Further Reading

Books - English

• Brigitte Stemmer, Harry A. Whitaker. Handbook of Neurolinguistics. Academic Press (1998). ISBN 0126660557

• Marie T. Banich: Neuropsychology. The neural bases of mental function (1997).

• Ewa Dąbrowska: Language, Mind and Brain. Edinburgh University press Ltd.(2004)

• a review: Evelyn C. Ferstl: The functional neuroanatomy of text comprehension. What's the story so far? In: Schmalhofer, F. & Perfetti, C. A. (Eds.), Higher Level Language Processes in the Brain: Inference and Comprehension Processes. Lawrence Erlbaum (2004)

Books - German

• Müller, H.M. & Rickheit, G. (Hrsg.): Neurokognition der Sprache. Stauffenberg Verlag (2003)

• Poizner, Klima & Bellugi: What the hands reveal about the brain. MIT Press (1987)

• N. Chomsky: Aspects of the Theory of Syntax. MIT Press (1965). ISBN 0262530074

• Neville & Bavelier: Variability in the effects of experience on the development of cerebral specializations: Insights from the study of deaf individuals. Washington, D.C.: US Government Printing Office (1998)

• Newman et al.: Effects of Age of Acquisition on Cortical Organization for American Sign Language: an fMRI Study. NeuroImage, 7(4), part 2 (1998)

Links - English

• Robert A. Mason and Marcel Adam Just: How the Brain Processes Causal Inferences in


• Neal J. Pearlmutter and Aurora Alma Mendelsohn: Serial versus Parallel Sentence Comprehension

• Brain Processes of Relating a Statement to a Previously Read Text: Memory Resonance and Situational Constructions

• Clahsen, Harald: Lexical Entries and Rules of Language: A Multidisciplinary Study of German Inflection.

• Cherney, Leora (2001): Right Hemisphere Brain Damage

• Grodzinsky, Yosef (2000): The neurology of syntax: Language use without Broca's area.

• Müller, H.M. & Kutas, M. (1996). What's in a name? Electrophysiological differences between spoken nouns, proper names and one's own name. NeuroReport 8:221-225.

• Müller, H. M., King, J. W. & Kutas, M. (1997). Event-related potentials elicited by spoken relative clauses. Cognitive Brain Research 4:193-203.

Links - German

• University of Bielefeld:

• Müller, H. M., Weiss, S. & Rickheit, G. (1997). Experimentelle Neurolinguistik: Der Zusammenhang von Sprache und Gehirn. In: Bielefelder Linguistik (Hrsg.). Aisthesis-Verlag, pp. 125-128.

• Müller, H.M. & Kutas, M. (1997). Die Verarbeitung von Eigennamen und Gattungsbezeichnungen: Eine elektrophysiologische Studie. In: G. Rickheit (Hrsg.). Studien zur Klinischen Linguistik - Methoden, Modelle, Intervention. Opladen: Westdeutscher Verlag, pp. 147-169.

• Müller, H.M., King, J.W. & Kutas, M. (1998). Elektrophysiologische Analyse der Verarbeitung natürlichsprachlicher Sätze mit unterschiedlicher Belastung des Arbeitsgedächtnisses. Klinische Neurophysiologie 29: 321-330.

• Michael Schecker (1998): Neuronale "Kodierung" zentraler Sprachverarbeitungsprozesse
