Psycholinguistics, or the psychology of language, is the study of the psychological and neurobiological factors that enable humans to acquire, use, and understand language. Initial forays into psycholinguistics were largely philosophical ventures, due mainly to a lack of cohesive data on how the human brain functions. Modern research draws on biology, neuroscience, cognitive science, and information theory to study how the brain processes language. There are a number of subdisciplines; for example, as non-invasive techniques for studying the neurological workings of the brain have become more widespread, neurolinguistics has become a field in its own right.
Psycholinguistics covers the cognitive processes that make it possible to generate a grammatical and meaningful sentence out of vocabulary and grammatical structures, as well as the processes that make it possible to understand utterances, words, text, etc. Developmental psycholinguistics studies infants' and children's ability to learn language, usually with experimental or at least quantitative methods (as opposed to naturalistic observations such as those made by Jean Piaget in his research on the development of children).
Psycholinguistics is interdisciplinary in nature and is studied by people in a variety of fields, such as psychology, cognitive science, and linguistics. There are several subdivisions within psycholinguistics that are based on the components that make up human language.
Theories about how language works in the human mind attempt to account for, among other things, how we associate meaning with the sounds (or signs) of language and how we use syntax—that is, how we manage to put words in the proper order to produce and understand the strings of words we call "sentences." The first of these items—associating sound with meaning—is the least controversial and is generally held to be an area in which animal and human communication have at least some things in common (See animal communication). Syntax, on the other hand, is controversial, and is the focus of the discussion that follows.
There are essentially two schools of thought as to how we manage to create syntactic sentences: (1) syntax is an evolutionary product of increased human intelligence over time and of social factors that encouraged the development of spoken language; (2) language exists because humans possess an innate ability, an access to what has been called a "universal grammar." This view holds that the human ability for syntax is "hard-wired" in the brain. It claims, for example, that complex syntactic features such as recursion are beyond even the potential abilities of the most intelligent and social non-humans. (Recursion includes, for example, the use of relative pronouns to refer back to earlier parts of a sentence: "The girl whose car is blocking my view of the tree that I planted last year is my friend.") On the "innate" view, the ability to use syntax like that would not exist without an innate faculty that contains the underpinnings for the grammatical rules that produce recursion. Children acquiring a language thus have a vast search space to explore among possible human grammars, settling, logically, on the language(s) spoken or signed in their own community. Such syntax is, according to this second point of view, what defines human language and makes it different from even the most sophisticated forms of animal communication.
The first view was prevalent until about 1960 and is well represented by the mentalistic theories of Jean Piaget and of the empiricist Rudolf Carnap. Likewise, the school of psychology known as behaviorism (see Verbal Behavior (1957) by B.F. Skinner) held that language—syntax included—is behavior shaped by conditioned response. The second point of view—the "innate" one—can fairly be said to have begun with Noam Chomsky's highly critical review of Skinner's book, published in 1959 in the journal Language. That review started what has been termed "the cognitive revolution" in psychology.
The field of psycholinguistics since then has been defined by reactions to Chomsky, pro and con. The pro view still holds that the human ability to use syntax is qualitatively different from any sort of animal communication. That ability might have resulted from a favorable mutation (extremely unlikely) or (more likely) from an adaptation of skills evolved for other purposes. Precise syntax might, indeed, serve group needs; better linguistic expression might produce more cohesion, cooperation, and potential for survival. But precise syntax could only have developed from rudimentary—or no—syntax, which would have had no survival value and, thus, would not have evolved at all. One therefore looks for other skills whose characteristics might later have been useful for syntax. In the terminology of modern evolutionary biology, such skills would be said to be "pre-adapted" for syntax (see also "exaptation"). Just what those skills might have been is the focus of recent research—or, at least, speculation.
The con view still holds that language—syntax included—is an outgrowth of hundreds of thousands of years of increasing intelligence and tens of thousands of years of human interaction. On that view, syntax gradually increased group cohesion and potential for survival; language—syntax and all—is a cultural artifact. This view challenges the "innate" view as scientifically unfalsifiable—that is, it cannot be tested. The fact that a particular, conceivable syntactic structure does not exist in any of the world's finite repertoire of languages is an interesting observation, but it is not proof of a genetic constraint on possible forms, nor does it prove that such forms could not exist or could not be learned.
Contemporary theorists, besides Chomsky, working in the field of theories of psycholinguistics include George Lakoff, Steven Pinker, and Michael Tomasello.
Much methodology in psycholinguistics takes the form of behavioral experiments. In these types of studies, subjects are presented with some form of linguistic input and asked to perform a task (e.g. make a judgement, reproduce the stimulus, read a visually presented word aloud). Reaction times (usually on the order of milliseconds) and proportion of correct responses are the most often employed measures of performance.
Such tasks might include, for example, asking the subject to convert nouns into verbs; e.g., "book" suggests "to write," "water" suggests "to drink," and so on. Another experiment might present an active sentence such as "Bob threw the ball to Bill" and a passive equivalent, "The ball was thrown to Bill by Bob" and then ask the question, "Who threw the ball?" We might then conclude (as is the case) that active sentences are processed more easily (faster) than passive sentences. More interestingly, we might also find out (as is the case) that some people are unable to understand passive sentences; we might then make some tentative steps towards understanding certain types of language deficits (generally grouped under the broad term, aphasia).
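The kind of comparison described above can be sketched in a few lines of code. The following is a minimal illustration with entirely hypothetical per-trial data (the condition labels, reaction times, and accuracy values are invented for the example, not drawn from any actual study):

```python
# Illustrative sketch (hypothetical data): summarizing reaction times and
# accuracy for active vs. passive sentence comprehension, as in the
# behavioral experiments described above. Reaction times are in milliseconds.
from statistics import mean

# Hypothetical per-trial results: (condition, reaction_time_ms, correct)
trials = [
    ("active", 640, True), ("active", 598, True), ("active", 701, True),
    ("active", 655, False), ("passive", 812, True), ("passive", 779, True),
    ("passive", 905, False), ("passive", 850, True),
]

def summarize(condition):
    """Return (mean reaction time, proportion correct) for one condition."""
    rts = [rt for cond, rt, _ in trials if cond == condition]
    acc = [ok for cond, _, ok in trials if cond == condition]
    return mean(rts), sum(acc) / len(acc)

active_rt, active_acc = summarize("active")
passive_rt, passive_acc = summarize("passive")

# A faster mean reaction time for actives would support the claim that
# active sentences are processed more easily than passive ones.
print(f"active:  {active_rt:.0f} ms, {active_acc:.0%} correct")
print(f"passive: {passive_rt:.0f} ms, {passive_acc:.0%} correct")
```

In a real experiment, of course, such a comparison would involve many subjects and items and an appropriate statistical test rather than a simple difference of means.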
Until the recent advent of non-invasive medical techniques, brain surgery was the preferred way for language researchers to discover how language works in the brain. For example, severing the corpus callosum (the bundle of nerves that connects the two hemispheres of the brain) was at one time a treatment for some forms of epilepsy. Researchers could then study the ways in which the comprehension and production of language were affected by such drastic surgery. Where an illness made brain surgery necessary, language researchers had an opportunity to pursue their research.
Newer, non-invasive techniques now include brain imaging by positron emission tomography (PET), functional magnetic resonance imaging (fMRI), event-related potentials (ERP), and transcranial magnetic stimulation (TMS). Brain imaging techniques vary in their spatial and temporal resolutions (fMRI has a resolution of a few thousand neurons per voxel, while ERP has millisecond accuracy). Each type of methodology presents a set of advantages and disadvantages for studying a particular problem in psycholinguistics.
Computational modelling is another methodology. It refers to the practice of setting up cognitive models in the form of executable computer programs. Such programs are useful because they force theorists to be explicit in their hypotheses and because they can be used to generate accurate predictions for theoretical models that are so complex that discursive analysis becomes unreliable (e.g., the DRC model of reading and word recognition proposed by Coltheart and colleagues).
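To give a flavor of what "an executable cognitive model" means, here is a toy illustration of the dual-route idea behind models like Coltheart's DRC. This is emphatically not the actual DRC model—the lexicon entries, the rule table, and the pronunciation notation are all invented for the example—but it shows the general shape: a lexical route that looks up known words whole, and a sublexical route that assembles a pronunciation from grapheme-to-phoneme rules:

```python
# Toy sketch (not the actual DRC model) of a dual-route account of reading
# aloud: familiar words are retrieved whole from a lexicon, while unknown
# letter strings are assembled by grapheme-to-phoneme correspondence rules.

LEXICON = {"pint": "/paInt/", "cat": "/kat/"}        # hypothetical entries
GPC_RULES = {"p": "p", "i": "I", "n": "n", "t": "t",
             "c": "k", "a": "a"}                      # hypothetical rules

def lexical_route(word):
    # Succeeds only for words stored in the lexicon.
    return LEXICON.get(word)

def sublexical_route(word):
    # Applies rules one grapheme at a time; an irregular word like "pint"
    # would be regularized by this route.
    return "/" + "".join(GPC_RULES[ch] for ch in word) + "/"

def name_word(word):
    # The lexical route wins for familiar words; unknown strings fall
    # through to the rule-based route.
    return lexical_route(word) or sublexical_route(word)

print(name_word("pint"))   # known irregular word: lexical pronunciation
print(name_word("pat"))    # unknown to this toy lexicon: assembled by rules
```

Even a sketch like this forces a theorist to commit to concrete choices—what counts as a rule, how the two routes interact—which is precisely the explicitness that computational modelling is valued for.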
More recently, eye tracking has been used to study online language processing. Beginning with Rayner (1978), the informativeness of eye movements during reading was established. Tanenhaus and colleagues have performed a number of visual-world eye-tracking studies of the cognitive processes related to spoken language. Since eye movements are closely linked to the current focus of attention, language processing can be studied by monitoring eye movements while a subject is presented with linguistic input.
There are a number of unanswered questions in psycholinguistics. In part, they are suggested by some of the items mentioned in the section on "theories" (above). For example, is the human ability to use syntax based on innate mental structures or is syntactic speech the function of intelligence and interaction with other humans? Can we even design psycholinguistic experiments to find that out? Research in animal communication has much to offer here. Can some animals be taught the syntax of human language? If so, what does that mean? If not, what does that mean?
How are infants able to learn language? Almost all healthy human infants acquire language readily in the first few years of life, across cultures and societies. And what about children who do not learn language properly? A broad field of study deals with language deficits, grouped under the term aphasia. Can research in psycholinguistics ever be of therapeutic value? In addition, it is much more difficult for adults to acquire second languages than it is for infants to learn their first language (bilingual infants are able to learn both of their native languages easily). Critical periods may thus exist during which language can be learned readily. A great deal of research in psycholinguistics focuses on how this ability develops and diminishes over time. It also seems to be the case that the more languages one knows, the easier it is to learn more.
Also, recent research using new non-invasive imaging techniques seeks to shed light on just where language is located in the brain. How localized is language? How distributed is it from one hemisphere to the other? The older, traditional descriptions of the language functions of Broca's area, Wernicke's area and other areas of the brain will be refined as research continues.
Another unsolved problem in the field is how to create computer programs that can understand language as well as humans. Although this question certainly has a philosophical side to it, it is closely related to computational linguistics and artificial intelligence and has many potential practical applications.
New World Encyclopedia writers and editors rewrote and completed the Wikipedia article in accordance with New World Encyclopedia standards. This article abides by the terms of the Creative Commons CC-by-sa 3.0 License (CC-by-sa), which may be used and disseminated with proper attribution. Credit is due under the terms of this license, which can reference both the New World Encyclopedia contributors and the selfless volunteer contributors of the Wikimedia Foundation.
Note: Some restrictions may apply to use of individual images which are separately licensed.