Essentially, Richard & others are replaying the same old problems of computational explosions - see "computational complexity" in the review below of this history of cog. sci. - no?

Mechanical Mind
Gilbert Harman
Mind as Machine: A History of Cognitive Science. Margaret A. Boden. Two volumes, xlviii + 1631 pp. Oxford University Press, 2006. $225.

The term cognitive science, which gained currency in the last half of the 20th century, is used to refer to the study of cognition: cognitive structures and processes in the mind or brain, mostly in people rather than, say, rats or insects. Cognitive science in this sense has reflected a growing rejection of behaviorism in favor of the study of mind and "human information processing." The field includes the study of thinking, perception, emotion, creativity, language, consciousness and learning. Sometimes it has involved writing (or at least thinking about) computer programs that attempt to model mental processes or that provide tools such as spreadsheets, theorem provers, mathematical-equation solvers and engines for searching the Web. The programs might involve rules of inference or "productions," "mental models," connectionist "neural" networks or other sorts of parallel "constraint satisfaction" approaches. Cognitive science so understood includes cognitive neuroscience, artificial intelligence (AI), robotics and artificial life; conceptual, linguistic and moral development; and learning in humans, other animals and machines.

Among those sometimes identifying themselves as cognitive scientists are philosophers, computer scientists, psychologists, linguists, engineers, biologists, medical researchers and mathematicians. Some individual contributors to the field have had expertise in several of these more traditional disciplines. An excellent example is the philosopher, psychologist and computer scientist Margaret Boden, who founded the School of Cognitive and Computing Sciences at the University of Sussex and is the author of a number of books, including Artificial Intelligence and Natural Man (1977) and The Creative Mind (1990). Boden has been active in cognitive science pretty much from the start and has known many of the other central participants.

In her latest book, the lively and interesting Mind as Machine: A History of Cognitive Science, the relevant machine is usually a computer, and the cognitive science is usually concerned with the sort of cognition that can be exhibited by a computer. Boden does not discuss other aspects of the subject, broadly conceived, such as the "principles and parameters" approach in contemporary linguistics or the psychology of heuristics and biases. She also puts to one side such mainstream developments in computer science as data mining and statistical learning theory. In the preface she characterizes the book as an essay expressing her view of cognitive science as a whole, a "thumbnail sketch" meant to be "read entire" rather than "dipped into."

It is fortunate that Mind as Machine is highly readable, particularly because it contains 1,452 pages of text, divided into two very large volumes. Because the references and indices (which fill an additional 179 pages) are at the end of the second volume, readers will need to have it on hand as they make their way through the first. Given that together these tomes weigh more than 7 pounds, this is not light reading!

Boden's goal, she says, is to show how cognitive scientists have tried to find computational or informational answers to frequently asked questions about the mind: "what it is, what it does, how it works, how it evolved, and how it's even possible." How do our brains generate consciousness? Are animals or newborn babies conscious? Can machines be conscious? If not, why not? How is free will possible, or creativity? How are the brain and mind different? What counts as a language?

The first five chapters present the historical background of the field, delving into such topics as cybernetics and feedback, and discussing important figures such as René Descartes, Immanuel Kant, Charles Babbage, Alan Turing and John von Neumann, as well as Warren McCulloch and Walter Pitts, who in 1943 cowrote a paper on propositional calculus, Turing machines and neuronal synapses. Boden also goes into some detail about the situation in psychology and biology during the transition from behaviorism to cognitive science, which she characterizes as a revolution. The metaphor she employs is that of cognitive scientists entering the "house of Psychology," whose lodgers at the time included behaviorists, Freudians, Gestalt psychologists, Piagetians, ethologists and personality theorists.

Chapter 6 introduces the founding personalities of cognitive science from the 1950s. George A. Miller, the first information-theoretic psychologist, wrote the widely cited paper "The Magical Number Seven, Plus or Minus Two," in which he reported that, as a channel for processing information, the human mind is limited to about seven items at any given time; more information than that can be taken in only if items are grouped as "chunks." Jerome Bruner introduced a "New Look" in perception, taking it to be proactive rather than reactive. In A Study of Thinking (1956), Bruner and coauthors Jacqueline Goodnow and George Austin looked at the strategies people use to learn new concepts. Richard Gregory argued that even systems of artificial vision would be subject to visual illusions. Herbert Simon and Allen Newell developed a computer program for proving logic theorems. And Noam Chomsky provided a (very) partial generative grammar of English in Syntactic Structures (1957).

Two important meetings occurred in 1956, one lasting two months at Dartmouth and a shorter one at MIT. There was also a third meeting in 1958 in London. Soon after that, Miller, Eugene Galanter and Karl Pribram published an influential book, Plans and the Structure of Behavior (1960), and Bruner and Miller started a Center for Cognitive Studies at Harvard. These events were followed by anthologies, textbooks and journals. "Cognitive science was truly on its way."

In the remainder of Boden's treatment, individual chapters offer chronological accounts of particular aspects of the larger subject. So, chapter 7 offers an extensive discussion of computational psychology as it has evolved since 1960 in personality psychology, including emotion; in the psychology of language; in how psychologists conceive of psychological explanation; in the psychology of reasoning; in the psychology of vision; and in attitudes toward nativism. The chapter then ends with an overview of the field of computational psychology as a whole. Boden acknowledges that "we're still a very long way from a plausible understanding of the mind's architecture, never mind computer models of it," but she believes that the advent of models of artificial intelligence has been extraordinarily important for the development of psychology.

Chapter 8 discusses the very minor role of anthropology as the "missing," or "unacknowledged," discipline of cognitive science. Here Boden touches on the work of the relatively few anthropologists who do fit into cognitive science.

Chapter 9, the last in volume 1, describes Noam Chomsky's early impact on cognitive science, discussing his famous review of B. F. Skinner's book Verbal Behavior, his characterization of a hierarchy of formal grammars, his development of transformational generative grammar and his defense of nativism and universal grammar. Boden notes that psychologists, including Miller, lost interest in transformational grammar after realizing that the relevant transformations were ways of characterizing linguistic structure and not psychological operations.

As Boden mentions, many people, including me, raised objections in the 1960s to Chomsky's so-called nativism, his view that certain principles of language are innate to a language faculty. She seems unaware that Chomsky's reasons for this view became clearer as time went on and formed the basis for the current, standard principles-and-parameters view, which explains otherwise obscure patterns of differences between languages.

Perhaps the heart of Boden's story is her account of the development of artificial intelligence, broadly construed. There were two sorts of artificial intelligence at the beginning: One treated beliefs and goals using explicit languagelike "propositional" representations, whereas the other, the connectionist approach, took beliefs and goals to be implicitly represented in the distribution of excitation or connection strengths in a neural network.

The proposition-based approach, outlined in chapter 10, initially developed programs for proving theorems and playing board games. These were followed by studies of planning, puzzle problem solving, and expert systems designed to provide medical or other advice. Special programming languages were devised, including LISP, PROLOG, PLANNER and CONNIVER. Systems were developed for default reasoning: For instance, given that something is a bird, assume it flies (in the absence of some reason to think it does not fly); given that it is a penguin, assume it does not fly (in the absence of some reason to think it does fly).
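
To make the default-reasoning idea concrete, here is a minimal sketch in Python, with specificity ordering standing in for the "absence of some reason" proviso. It illustrates the idea only; the function and fact names are mine, not code from PROLOG, PLANNER or any system Boden discusses.

# Defeasible inference, sketched: more specific defaults defeat
# more general ones, and explicit reasons defeat both.
def can_fly(facts):
    if "wings_broken" in facts:       # an explicit reason not to fly
        return False
    if "penguin" in facts:            # penguins default to flightless
        return False
    if "bird" in facts:               # birds default to flying
        return True
    return False                      # no applicable default

print(can_fly({"bird"}))              # True
print(can_fly({"bird", "penguin"}))   # False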

There were difficulties. One was "computational complexity": almost all methods that worked in small "toy" domains did not work for more realistic cases, because of exponential explosions; operating in even slightly more complex domains took much longer and used many more resources. Another issue was whether "frame" assumptions (such as that chess pieces remain in the same position until captured or moved) should be built into the architecture of the problem or should be stated explicitly. This became a pressing issue in thinking about general commonsense reasoning: Is it even possible to explicitly formulate all relevant frame assumptions?
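
The scale of these explosions is easy to demonstrate with a back-of-the-envelope sketch (the branching factor of 35 is a rough textbook figure for chess, my assumption rather than a number from the book):

# Naive game-tree search: the number of move sequences to examine
# grows as b**d for branching factor b and depth d.
b = 35
for depth in range(1, 9):
    print(depth, b ** depth)
# Depth 4 is already about 1.5 million sequences; depth 8 is about
# 2.3 trillion. Enumeration that works in "toy" domains stops scaling.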

On the other side was the connectionist neural-net approach, considered in chapter 12, which seeks to model such psychological capacities as perception, memory, creativity, language and learning, using interconnected networks of simple units. Connectionism was especially concerned with rapidly recognizing and classifying items given their observed characteristics, without having to go through a long, complicated chain of reasoning.

In the simplest case of a single artificial perceptron, several real-number inputs represent the values of selected aspects of the observed scene, and an output value (the activation of the perceptron in question), possibly 1 or 0, indicates yes or no. The perceptron takes a weighted sum of the input values and outputs 1, or yes, if the sum is greater than some threshold value; if not, the output is 0. Perceptrons can be arranged in feed-forward networks, so that the output of the first layer goes to perceptrons in the second layer, whose outputs are inputs to a third layer, and so on until a decision is made by a final threshold unit. Given appropriate weights and enough units, a three-layer network can approximate almost any desired way of classifying inputs. Relevant weights do not need to be determined ahead of time by the programmer. Instead, the network can be "trained" to give desired outputs, by making small corrections when the network's response is incorrect.
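
That description translates almost line for line into code. Here is a minimal sketch of a single perceptron with the classic error-correction rule (a generic textbook formulation, not a reconstruction of any particular historical system):

# A single perceptron: weighted sum, threshold, and training by
# small corrections whenever the response is incorrect.
def predict(weights, bias, inputs):
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if total > 0 else 0

def train(samples, n_inputs, rate=0.1, epochs=25):
    weights, bias = [0.0] * n_inputs, 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            error = target - predict(weights, bias, inputs)
            weights = [w + rate * error * x
                       for w, x in zip(weights, inputs)]
            bias += rate * error
    return weights, bias

# Learn the (linearly separable) AND classification.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train(data, 2)
print([predict(w, b, x) for x, _ in data])   # [0, 0, 0, 1]

The training loop makes "small corrections when the network's response is incorrect" literal: each error nudges every weight by a fixed fraction of its input.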

There are other kinds of connectionist networks. For example, in certain sorts of recurrent networks, the activations of the units settle into a more or less steady state.
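
A minimal sketch of that settling behavior, Hopfield-style (my choice of illustration; the chapter covers several recurrent architectures): store one pattern in Hebbian weights, corrupt two units, and let repeated threshold updates pull the state back.

# Hopfield-style settling: a corrupted state relaxes back to the
# stored pattern, a more or less steady state of the network.
pattern = [1, -1, 1, -1, 1, -1]
n = len(pattern)
W = [[pattern[i] * pattern[j] if i != j else 0 for j in range(n)]
     for i in range(n)]                      # Hebbian weights

state = pattern[:]
state[0], state[3] = -state[0], -state[3]    # corrupt two units

for _ in range(5):                           # a few full update sweeps
    for i in range(n):
        total = sum(W[i][j] * state[j] for j in range(n))
        state[i] = 1 if total >= 0 else -1

print(state == pattern)                      # True: settled to the memory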

Boden describes these developments in loving detail, along with bitter disputes between proponents of proposition-based research and those who favored the connectionist approach. The disagreements were fueled by abrupt changes in U.S. government funding, which are noted in chapter 11. Much of the government money available was provided in the expectation that artificial intelligence would prove to be militarily useful. In the 1980s, funders decided to switch their support from proposition-based artificial intelligence to connectionism. They did so both because of perceived stagnation in the proposition-based approach (mainly due to the difficulties mentioned above), and because connectionism became more attractive with the discovery (or rediscovery) of back-propagation algorithms for training multilayer networks.

More recent developments are described in chapter 13. These include virtual-reality systems, attempts to construct societies of artificial agents that interact socially, and CYC, a project aimed at explicitly representing enough of the commonsense background to enable an artificial system to learn more by reading dictionaries, textbooks, encyclopedias and newspapers. Chapter 14 is a rich account of computational and cognitive neuroscience. Topics touched on include challenges to the computational approach, theories of consciousness and philosophy of mind. In chapter 15, Boden describes the origins of artificial life and then discusses reaction-diffusion equations, self-replicating automata, evolutionary networks, computational neuro-ethology (computational interpretation of the neural mechanisms that underlie the behavior of an animal in its habitat) and work on complex systems. Chapter 16 reviews philosophical thinking about mind as machine. Is there a mind-body problem? If a robot simulation of a person were developed, would it be conscious? Would it suffer from a mind-body problem? Would it be alive? A very brief final chapter lists promising areas for further research.

This is, as far as I know, the first full-scale history of cognitive science. I am sure that knowledgeable readers will have various quibbles about one or another aspect of this history (like my own objection above to the discussion of Chomsky's work in linguistics). But I doubt that many, or in fact any, readers will have the detailed firsthand knowledge that Boden has of so much of cognitive science. Future histories of the subject will have to build on this one.

Reviewer Information
Gilbert Harman is Stuart Professor of Philosophy at Princeton University, where in the past he was chair of the Program in Cognitive Studies and codirector of the Cognitive Science Laboratory. He is coauthor with Sanjeev Kulkarni of Reliable Reasoning: Induction and Statistical Learning Theory (The MIT Press, 2007).

Source: American Scientist
http://www.americanscientist.org/BookReviewTypeDetail/assetid/56418

Posted by
Robert Karl Stonjek

9. Book Review: Young Minds in Social Worlds - Experience, Meaning, and Memory
Posted by: "Robert Karl Stonjek" [EMAIL PROTECTED]   r_karl_s
Tue Dec 11, 2007 3:03 am (PST)
Constructing Cognition
Ethan Remmel
Young Minds in Social Worlds: Experience, Meaning, and Memory. Katherine Nelson. xiv + 315 pp. Harvard University Press, 2007. $49.95.

The two patron saints of the study of cognitive development (which involves how thinking and knowledge change with age) are the Swiss psychologist Jean Piaget and the Russian psychologist Lev Vygotsky. Both were born in 1896, but Piaget lived and wrote into his 80s, whereas Vygotsky died of tuberculosis at age 37. Both advocated forms of constructivism, the theory that children actively construct knowledge, rather than being passively molded by experience (as in behaviorism) or programmed by biology (as in nativism). However, Piaget viewed cognitive development as a product of the individual mind, achieved through observation and experimentation, whereas Vygotsky viewed it as a social process, achieved through interaction with more knowledgeable members of the culture.

Developmental psychologist Katherine Nelson is an apostle of the Vygotskian approach, known as social constructivism. She is opposed to the currently popular descendant of the Piagetian approach known as theory theory, which holds that children construct causal conceptualizations of different domains of knowledge ("folk theories") using the same cognitive processes that scientists use to construct scientific theories (that is, children think like little scientists).

Nelson contends that theory theorists such as Alison Gopnik commit the psychologist's fallacy (attributing one's own thought processes to others) and thereby forsake one of constructivism's central insights: that children's cognition is qualitatively different from that of adults, not simply quantitatively inferior. In other words, children don't just know less, they think differently.

In her new book, Young Minds in Social Worlds, Nelson also argues that computational theories of mind (based on the premise that the mind operates like a computer) are inadequate, because the ultimate function of human cognition is not to process abstract information, but to interpret experience and share cultural meanings with others. She criticizes evolutionary cognitive psychologists such as Steven Pinker and nativist developmental psychologists such as Elizabeth Spelke for allegedly overemphasizing genetic influences on development and underemphasizing the importance of social and cultural influences. And Nelson draws on developmental systems theory to describe development as a complex, interactive, dynamic process in which cognitive structure gradually emerges, rather than being built into the genes or copied from the environment.

The bulk of the book reviews cognitive and language development during the first five years of life, seen from Nelson's theoretical perspective. Her target audience appears to be other developmental psychologists, as she refers to many theories and studies without giving much explanation for readers not already familiar with them. Developmentalists, however, will not learn much factual information because she presents no new research. Instead, what they may gain is an appreciation of Nelson's theoretical perspective on development, which she calls experiential (to emphasize the importance of the child's individual experience).

Nelson suggests that a child's consciousness develops in levels that correspond to the stages proposed by cognitive neuroscientist Merlin Donald for the evolution of the human mind. Nelson presented this model in more detail in Language in Cognitive Development (1996), so those who have read that book will find it familiar.

In Young Minds in Social Worlds, the early chapters on cognitive development in the infant and toddler ages are uncontroversial and, frankly, unmemorable. The middle chapters on the acquisition of language and symbols are much more interesting, perhaps because here Nelson can draw more on her own research. She is critical of current theories about word learning that assume that children have the same conceptual structures as adults and simply need to map the words that they hear onto the concepts that they already have. Nelson argues convincingly that children gradually construct concepts through linguistic interaction. In other words, children do not simply match labels to pre-existing categories, but rather use linguistic cues to bring their conceptual boundaries in line with those shared by their linguistic community.

A corollary of this position is that production may sometimes precede comprehension (a reversal of the usually assumed sequence): Children may imitate the use of a word in a particular context and only gradually acquire a context-independent understanding of the meaning of the word. In Nelson's view, which is indebted to the psycholinguist Dan Slobin, language is less than a necessary ingredient for thought itself (that is, nonlinguistic thought is possible), but more than a neutral vehicle for the expression of thought. Language is needed for sharing thoughts, and those thoughts are inevitably shaped by the particular language used.

Nelson raises the interesting question of why children seem to understand symbolic representations (for example, words, which do not resemble their referents-onomatopoeia excepted) before they understand iconic representations (for example, scale models, which do resemble their referents), even though the former relation, being arbitrary, would seem harder to comprehend. Although elsewhere she criticizes nativist explanations ("it's built in"), here she suggests that perhaps humans have a specific evolved capacity to understand symbols but must learn to interpret icons. Perhaps, but I think it's more likely that icons are challenging because, as Judy DeLoache has observed, they have a dual nature, being both representations and objects in their own right; a scale model, for example, can be interpreted as a toy. This conflict creates greater demands on executive functions than words do, because words have no meaning except as representations.

Nelson stumbles a few times in the later chapters on cognitive development in preschoolers. She writes that "young children tend to remember the gist of a complex situation and lose track of details." But in fact, research based on Charles Brainerd and Valerie Reyna's fuzzy-trace theory finds exactly the opposite: Children rely more on memory for verbatim details, and adults rely more on memory for gist. For example, when asked to remember a list of words that cluster around a theme, adults are more likely to erroneously include a word that is semantically related to the words supplied, whereas children are more likely to incorrectly include a word that rhymes with a word actually presented.

Nelson criticizes research on children's theory of mind (their understanding of mental states) for overemphasizing one particular development during the preschool years: understanding of beliefs (that people can believe different things, including things that are false). But Nelson then does the same thing herself, saying, for example, that "competence in representational language is essential for entering the community of minds (and for solving theory of mind problems)." Nelson means that the maturation of representational language during the preschool years enables children to understand beliefs. However, she neglects the fact that children solve some other theory-of-mind problems much earlier. For example, 18-month-olds understand desires (that people want different things).

Nelson also argues that children's understanding of mind is not based on some sort of underlying theory, because "nothing guarantees that one person can correctly interpret another." But her argument misses the mark, because nobody claims that theories are guaranteed to produce correct interpretations. If theories did, science would be a lot easier! And despite her antipathy toward theory theory, she writes that "children's questions indicate that they are mentally instigating investigations of the causal structure of aspects of the world, both psychological and physical." That sure sounds to me like they're theorizing.

Nelson concludes by maintaining that cognitive development is not a process in which the individual internalizes objective knowledge, but rather a collaborative interpretation of subjective experience. However, she eschews the solipsism of postmodern relativism. She sometimes presents the obvious as profound, in such statements as "this view reflects the position that cognitive processes are an integral part of memory itself" and "this view resolves the issue of why thinking in a different language is different from thinking in a first language; it is different because the languages are different." She aims for a pragmatic and balanced account of development, which unfortunately sometimes feels vague and bland. She may be right, but her book is less fun to read than those of more provocative writers such as Steven Pinker. For a more engaging exposition of a similar sociocultural perspective, I recommend two books by Michael Tomasello: The Cultural Origins of Human Cognition (1999) and Constructing a Language: A Usage-Based Theory of Language Acquisition (2003).

Reviewer Information
Ethan Remmel is a cognitive developmental psychologist at Western Washington University in Bellingham. His research focus is the relationship between language experience and children's understanding of the mind.

Source: American Scientist
http://www.americanscientist.org/BookReviewTypeDetail/assetid/56401

Posted by
Robert Karl Stonjek

10. Book review: Language, Consciousness, Culture - Essays on Mental Structure
Posted by: "Robert Karl Stonjek" [EMAIL PROTECTED]   r_karl_s
Tue Dec 11, 2007 3:04 am (PST)
The Functionalist's Dilemma
George Lakoff
Language, Consciousness, Culture: Essays on Mental Structure. Ray Jackendoff. xxvi + 403 pp. The MIT Press, 2007. $36.

Science, as Thomas Kuhn famously observed, does not progress linearly. Old paradigms remain as new ones begin to supplant them. And science is very much a product of the times.

The symbol-manipulation paradigm for the mind spread like wildfire in the late 1950s. Formal logic in the tradition of Bertrand Russell dominated Anglo-American philosophy, with W. V. O. Quine as the dominant figure in America. Formalism reigned in mathematics, fueled by the Bourbaki tradition in France. Great excitement was generated by the Church-Turing thesis that Turing machines, formal logic, recursive functions and Emil Post's formal languages are equivalent. The question naturally arose: Could thought be characterized as a symbol-manipulation system?

The idea of artificial intelligence developed out of an attempt to answer that question, as did the information-processing approach to cognitive psychology of the 1960s. The mind was seen as computer software, with the brain as hardware. The software was what mattered. Any hardware would do: a digital computer or the brain, which was called wetware and seen (incorrectly) as a general-purpose processor. The corresponding philosophy of mind, called functionalism, claimed that you could adequately study the mind independently of the brain by focusing on the mind's functions as carried out by the manipulation of abstract symbols.

The time was ripe for Noam Chomsky to adapt the symbol-manipulation paradigm to linguistics. Chomsky's metaphor was simple: A sentence was a string of symbols. A language was a set of such strings. A grammar was a set of recursive procedures for generating such sets. Language was syntacticized: placed mathematically within a Post system, with abstract symbols manipulated in algorithmic fashion by precise formal rules. Because the rules could not look outside the system, language had to be "autonomous," independent of the rest of the mind. Meaning and communication could play no role in the structure of language. The brain was irrelevant. This approach was called generative linguistics, and it continues to have adherents in many linguistics departments in the United States.
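
To see what "a set of recursive procedures for generating such sets" amounts to, here is a toy generative grammar in Python (my illustration of the formal idea, not a serious grammar of English). Note that nothing in it refers to meaning:

import random

# Rewrite rules: each symbol expands to one of its alternatives.
rules = {
    "S":  [["NP", "VP"]],
    "NP": [["the", "N"]],
    "VP": [["V", "NP"], ["V"]],
    "N":  [["linguist"], ["grammar"]],
    "V":  [["studies"], ["sleeps"]],
}

def generate(symbol):
    if symbol not in rules:                  # terminal: an actual word
        return [symbol]
    expansion = random.choice(rules[symbol])
    return [word for s in expansion for word in generate(s)]

print(" ".join(generate("S")))               # e.g. "the grammar sleeps"

Whether the output means anything is invisible to the procedure, which is the sense in which such a grammar is "autonomous."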

In the mid-1970s, another paradigm shift occurred. Neuroscience burst onto the intellectual stage. Cognitive science expanded beyond formalist cognitive psychology to include neural models. And cognitive linguistics emerged, whose proponents (including me) see language and thought not as an abstract symbol-manipulation system but as physically embodied and reflecting both the specific properties and the limitations of our brains and bodies. Cognitive linguistics has been steadily developing into a rigorously formulated neural theory of language based on neural-computation theory and actual developments in neuroscience.

Ray Jackendoff's new book, Language, Consciousness, Culture, is set solidly within the old generative-linguistics paradigm. In it, Jackendoff staunchly defends functionalism and the symbol-manipulation paradigm. "Some neuroscientists say we are beyond this stage of inquiry, that we don't need to talk about 'symbols in the head' anymore. I firmly disagree," he notes. He goes on to argue that the symbolic representations given by linguists are simply right, and he takes the brain to be irrelevant. Interestingly, he does not cite the major work arguing the opposite, Jerome Feldman's 2006 book, From Molecule to Metaphor. Feldman shows how the analyses of language and thought done by cognitive linguists can be characterized in terms of neural computation. But, as Jackendoff says, "Cognitive Grammarians . . . have been steadfastly ignored by mainstream generative linguistics." Just as Kuhn would have predicted.

All this creates a dilemma for Jackendoff. He sees the limitations of the functionalist paradigm and rails correctly against Chomsky's syntacticization of meaning, but he stays with a version of symbolic logic, in which meaning is also syntacticized by a formal logical syntax.

Jackendoff has read widely in cognitive science and neuroscience, while "steadfastly ignoring" the literature of cognitive and neural theories of language, which answers many of the questions he raises, although in a paradigm he refuses to consider. He sees correctly that the cognitive and brain sciences ought to be taken seriously by philosophers and social scientists, but his forays into social, moral and political ideas are limited by his functionalist approach.

Take the question of meaning. In 1963, I proposed a theory of generative semantics in which a version of formal logic became an input to generative grammars. I was later joined in this enterprise by James D. McCawley and John Robert Ross, two of Chomsky's best-known students. Among our tenets were that conceptual structure is generative, that it is prior to and independent of language, and that it is inaccessible to consciousness. Jackendoff argued strongly against this position at the time, but in this book, only 40 years later, he accepts these tenets, while keeping Chomsky's idea that syntactic structure is independent of meaning. Jackendoff adopts a parallel-structure theory in which he holds both ideas at once. As we did then, he now declares that Chomsky's syntactocentrism is a "scientific mistake." Yet, as a Chomskyan syntactician, he has to keep a version of the "scientific mistake"-an autonomous syntax for grammar alongside his autonomous syntax for meaning (a kind of symbolic logic).

In the 1960s, Charles J. Fillmore proposed a theory of "case grammar" in which there were semantic roles (agent, patient, experiencer and so on) and principles mapping these roles to grammar. This idea was accepted in cognitive linguistics and has been developed over the past 40 years by Fillmore and many others in the theory of grammatical constructions, in which semantics is directly paired with syntactic form. Jackendoff adopts a version of this theory without mentioning Fillmore. Laudable, if a little late.

In 1975, Fillmore began the development of "frame semantics," expanding the notion in great detail over the next three decades. Conceptual framing has become central in cognitive linguistics worldwide and is widely applied, as in my work on political analysis over the past decade. Jackendoff accepts a much less precise and less worked-out version of frames set forth by Erving Goffman and Marvin Minsky in the mid-1970s, but he "steadfastly ignores" Fillmore's elaborate research and its widespread application.

In 1997, Srini Narayanan, in his dissertation at the University of California, Berkeley, worked out a neural computational account of actions and events, which generalizes to the semantics of aspect (event structure) in linguistics and actually computes the logic of aspect. In Language, Consciousness, Culture, Jackendoff tries to adapt Chomsky's syntactic structures to action structure, which Patricia Greenfield of UCLA first attempted in the 1960s. Jackendoff's account, coming a decade after Narayanan's, doesn't characterize actions nearly as well, does not compute the logic of actions, does not characterize the semantics of aspect and does not fit the theory of neural computation. But it is gratifying to see Jackendoff trying to link motor actions to linguistics (as Chomsky never would), in an attempt to break out of the functionalist mold without leaving it.

Jackendoff is asking questions well beyond the Chomskyan enterprise, and in some cases he approaches what cognitive linguists have achieved. But one place he gets it very wrong is conceptual metaphor.

Mark Johnson and I wrote Metaphors We Live By (1980) almost three decades ago. Since then hundreds of researchers have developed a whole field of study around the subject. In our 1999 book Philosophy in the Flesh, Johnson and I elaborated in great detail on Narayanan's neural computational theory of metaphor.

In the neural theory, conceptual metaphor arises in childhood when experiences regularly occur together, activating different brain regions. Activation repeatedly spreads along neural pathways, progressively strengthening synapses in pathways between those brain regions until new circuitry is formed linking them. The new circuitry physically constitutes the metaphor, carrying out a neural mapping between frame circuitry in the regions and permitting new inferences. The conceptual metaphor MORE IS UP (as in "prices rose," "the temperature fell") is learned because brain regions for quantity and verticality are both activated whenever you pour liquid into a glass or build any pile. AFFECTION IS WARMTH (as in "She's a warm person" or "She's an ice queen") is learned because when you are held affectionately as a child by your parents, you feel physical warmth. Hundreds of such primary metaphors are learned early in life. Complex metaphors are formed by neural bindings of these primary metaphors. And metaphorical language expresses both primary and complex metaphors.

Because we first experience governance within the family, one widespread primary metaphor is A GOVERNING INSTITUTION IS A FAMILY, with authority based on parental authority. Within the literal Family Frame, rights and obligations arise from what is allowed and required, given the desires and responsibilities of parents and children. Children want to be fed and taken care of, and parents are required to provide for them. Children have other desires that may be allowed or forbidden. Parents may require certain things of children.

Under the metaphor A GOVERNING INSTITUTION IS A FAMILY, what is required by an authority is called an obligation, and what is allowed or has to be provided by an authority is called a right. The metaphor applies at various levels, so there are higher governing institutions, such as societies, nature or the universe, and metaphorical authorities, such as social or moral norms, natural laws and God. At each level, the logic of family-based authority is metaphorically duplicated for rights and obligations, with authorities at a lower level subjected to authority at a higher level. No special metaphors unique to rights and obligations are needed. Other independently existing primary metaphors flesh out the complexities: Because ACHIEVING A DESIRED PURPOSE IS GETTING A DESIRED OBJECT, rights are seen as metaphorical possessions, which can be given to you, held onto or lost. Because requirements can be difficult and DIFFICULTIES ARE BURDENS, we speak of "taking on" or "undertaking" obligations.

You would never know any of this from reading Jackendoff's brief discussion of whether rights and obligations are understood metaphorically. He reaches the conclusion he has to reach: that no conceptual metaphor at all is used in understanding rights and obligations. This is not surprising, because typically he has largely ignored the cognitive linguistics literature.

Had the discussion of rights and obligations in Language, Consciousness, Culture appeared in the late 1960s, it would have been seen as excellent. But coming out nearly 40 years later, it is inadequate, because it fails to explain why we reason about rights and obligations as we do, both in the West and elsewhere in the world. The neural-metaphorical understanding gives a correct account of the data plus an explanation grounded in biology. Such explanations are lacking throughout the book because Jackendoff still holds to functionalism.

For a cognitive linguist like myself, reading Jackendoff's book is both painful and hopeful: painful because he keeps trying to do interesting and important intellectual work while being stuck in a paradigm that won't allow it, and hopeful because he may help the transition from a brain-ignoring symbol-manipulation paradigm to a brain-based neural theory of thought and language. I wish that other linguists, both generative and cognitive, had his scope and intellectual ambition.

Reviewer Information
George Lakoff is Richard and Rhoda Goldman Distinguished Professor of Cognitive Science and Linguistics at the University of California, Berkeley, and Senior Fellow at the Rockridge Institute. His recent books include Moral Politics (University of Chicago Press, 1996 and 2002), Don't Think of an Elephant! (Chelsea Green Publishing, 2004), Whose Freedom? (Picador, 2006), and Thinking Points (Farrar, Straus & Giroux, 2006), with the Rockridge Institute. The Political Mind will appear from Viking/Penguin in 2008.

Source: American Scientist
http://www.americanscientist.org/BookReviewTypeDetail/assetid/56419

Posted by
Robert Karl Stonjek

11a. Book Review: The First Word - The Search for the Origins of Language
Posted by: "Robert Karl Stonjek" [EMAIL PROTECTED]   r_karl_s
Tue Dec 11, 2007 3:04 am (PST)
Not the Last Word
Michael C. Corballis
The First Word: The Search for the Origins of Language. Christine Kenneally. x + 357 pp. Viking, 2007. $26.95.

In 1866, the Linguistic Society of Paris famously banned all discussion of the origins of language. The London Philological Society followed suit in 1872. Speculation about the evolution of language remained stifled for a century, and it was only in the 1970s that muted discussion began to emerge, often with an air of apology. Eventually, though, the floodgates opened, and the past two decades have seen a deluge of articles, books and conferences on the topic. The current state of the field is largely one of chaos, to the point that some observers might be tempted to think the ban should be reinstated. Most agree that language is in essence uniquely human, so that evidence as to its evolution remains indirect, and speculation can run wild. Nevertheless, recent advances in genetics, archeology, neurophysiology and computer modeling have provided powerful if sometimes conflicting leads.

Christine Kenneally reviews the current state of the field in her new book. An experienced science journalist with a Ph.D. in linguistics, she is well qualified for the task. The focal point for her discussion is a high-profile symposium on the evolution of language held in 2005 at Stony Brook, New York, where many of the leading players met and gave talks, but her reading and interviews range more widely. The First Word is almost certainly not the last word, but it does provide a lucid, readable, comprehensive account of the different ideas that are now current.

One figure who continues to exert a major if not always benign influence is Noam Chomsky, the dominant linguist of the past half-century, who made a rare appearance at the symposium. It might be said that Chomsky actually helped prolong the ban, because he has long argued that one simply cannot know how language evolved and has even suggested that language may not be the product of natural selection. This position was first explicitly questioned by Steven Pinker and Paul Bloom (who were not present at the symposium but are appropriately included in Kenneally's story) in a classic article published in 1990, in which they retained much of the Chomskyan stance but made a strong case for the incremental evolution of language through natural selection. Much of the subsequent development has been in more direct opposition to Chomsky and seems set to redefine the nature of language itself. As the book relates, Chomsky appeared at the symposium only to give a public address, which I and many of the others in attendance found largely incomprehensible; he arrived and departed without engaging with any of the other speakers.

Chomsky first achieved prominence with his 1957 book Syntactic Structures, which argued that syntax could not be explained in terms of associations. However, theorists such as Simon Kirby and Morten Christiansen have made considerable progress toward developing connectionist theories; language may after all depend on learning principles, and not on some innate language-acquisition device. Sue Savage-Rumbaugh is accorded her rightful place as a pioneer in the effort to discover the linguistic abilities of apes, but the notion of a continuity between ape language and human language remains implacably opposed by those who retain at least vestiges of Chomskyan theory, a group that includes Steven Pinker, Ray Jackendoff and Derek Bickerton.

Kenneally also describes the idiosyncratic work of Luc Steels, who has established artificial robot-inhabited worlds in which languagelike structures arise spontaneously. As Jim Hurford, another prominent symposiast, remarked, we may be witnessing the demise of the Chomskyan notion of universal grammar, the supposedly innate structure, unique to humans and peculiar to language itself, that is said to underlie all languages. Instead, language may depend on more general cognitive abilities.

One complaint I have is that most of the experts discussed in the book seem to equate language with speech. (Kenneally herself writes that speech "is crucial to language.") A partial exception is Michael Arbib, who grounds language evolution in manual gestures, building a scenario in which an intermediate form of communication, which he calls proto-sign, forms the scaffold for the incorporation of vocalization in an ascending spiral toward full syntactic speech. Arbib draws on so-called mirror neurons, first discovered in the monkey brain, which respond both when the subject makes grasping movements and when it observes the same movements made by others. These neurons are found in areas homologous to speech areas in the human brain and seem to provide a natural platform for the evolution of language. Mirror neurons, though, may be an overworked commodity in modern evolutionary and cognitive theory, providing convenient explanations for anything from language to imitation to theory of mind.

Yet language can consist of a combination of hand gestures and facial expressions rather than vocalizations. Even Arbib appears not to recognize that signed languages are indeed true languages; he seemingly confuses them with pantomime. My own view, which I presented at the symposium and which is mentioned in the book, is in fact similar to Arbib's but allows that language might well have evolved as a sophisticated form of ritualized and grammaticalized gesture before the eventual takeover by vocalization; even now manual gestures are woven into our speech. Arbib is quoted as saying "It's hard to build up a rich tradition just through gesture," but a visit to Gallaudet University in Washington, D.C., where the language of instruction is American Sign Language, might persuade him otherwise. Missing from The First Word are sign-language experts such as Ursula Bellugi, Karen Emmorey, David F. Armstrong, Sherman Wilcox and the late William C. Stokoe.

Given the book's breadth of coverage, such an omission is all the more surprising. But that is perhaps a minor quibble. Kenneally's dilemma is that, although she found herself excited by the ideas she encountered, many of them are mutually incompatible, so no clear pattern emerges. Nevertheless, she writes in an engaging, chatty style, and readers will gain a broad understanding of what language is about and how it might have evolved. She ends in a rather gimmicky fashion by asking various researchers their opinion as to whether and how language would evolve in a boatload of babies shipwrecked on the Galápagos Islands but provided with the sustenance to survive and thrive. My response? They might well develop language, but they'd surely all have different theories as to how it happened.

Reviewer Information
Michael C. Corballis is professor of psychology at the University of Auckland and is the author of, among other books, From Hand to Mouth: The Origins of Language (Princeton University Press, 2002) and The Lopsided Ape: Evolution of the Generative Mind (Oxford University Press, 1991).

Source: American Scientist
http://www.americanscientist.org/BookReviewTypeDetail/assetid/56421


