If there is a WedTech on this thread I would also certainly attend. So I vote that Dave gets busy and leads us toward the light.

Jenny Quillien


On 6/10/2017 8:24 PM, Prof David West wrote:
Hi Nick, hope you are enjoying the east.

The contrast class for "conceptual metaphor" is "embedded metaphor," à la Lakoff et al. An example: "the future is in front of us." Unless, of course, you speak Aymara, in which case "the future is behind us."

Steve, I do not regularly attend WedTech, but if this thread becomes a featured topic, I certainly would be there.

davew



On Sat, Jun 10, 2017, at 07:35 PM, Nick Thompson wrote:

Hi, Dave,


Thanks for taking the time to lay this out. I wonder what you would call the present status of “natural selection” as a metaphor. In this case, the analogies between the natural situation and the pigeon coop remain strong, but most users of the theory have become ignorant of the salient features of the breeding situation. So the metaphor hasn’t died, exactly; it’s been sucked dry of its meaning by the ignorance of its practitioners.


I balk at the idea of a “conceptual metaphor”. It’s one of those terms that smothers its object with love. What is the contrast class? How could a metaphor be other than conceptual? I think the term subtly makes a case for vague metaphors. In my own ‘umble view, metaphors should be as specific as possible. Brain/mind is a case in which two things we know almost nothing about are used as metaphors for one another, resulting in the vast promulgation of gibberish. Metaphors should sort knowledge into three categories: stuff we know that is consistent with the metaphor, stuff we know that is INconsistent with the metaphor, and stuff we don’t know that is implied by the metaphor. This last is the heuristic “wet edge” of the metaphor. The vaguer a metaphor, the more difficult it is to distinguish among these three categories, and the less useful the metaphor is. Dawkins’s “selfish gene” metaphor, with all its phony reductionist panache, would not have survived thirty seconds if anybody had bothered to think carefully about what selfishness is and how it works. See https://www.researchgate.net/publication/311767990_On_the_use_of_mental_terms_in_behavioral_ecology_and_sociobiology


This is why it is so important to have something quite specific in mind when one talks of layers. Only if you are specific will you know when you are wrong.


I once got into a wonderful tangle with some meteorologists concerning “Elevated Mixed Layers.” The meteorologists insisted that air masses of different characteristics DO NOT MIX. It turns out that we had wildly different models of “mixing”. They were thinking of it as a spontaneous process, as when sugar dissolves into water; I was thinking of it as including active processes, as when one substance is stirred into another. They would say, “Oil and water don’t mix.” I would say, “Bloody hell, they do, too, mix. They mix every time I make pancakes.” The argument drove me nuts for several years because any fool, watching hard-edged thunderheads rise over the Jemez, can plainly see both that the atmosphere is being stirred AND that the moist air in the thunderhead is not readily diffusing into the drier descending air around it. From my point of view, convection is something the atmosphere does, like mixing; from their point of view, convection is something that is DONE TO the atmosphere, like stirring. You get to that distinction only by thinking of very specific examples of mixing as you deploy the metaphor.


Nick


Nicholas S. Thompson

Emeritus Professor of Psychology and Biology

Clark University

http://home.earthlink.net/~nickthompson/naturaldesigns/


From: Friam [mailto:friam-boun...@redfish.com] On Behalf Of Prof David West
Sent: Saturday, June 10, 2017 11:36 AM
To: friam@redfish.com
Subject: Re: [FRIAM] Model, Metaphor, Analogy


Long, long ago, my master's thesis in computer science and my PhD dissertation in cognitive anthropology dealt extensively with the issue of metaphor and model, specifically in the area of artificial intelligence and cognitive models of "mind." The very first academic papers I published dealt with this issue. (They were in AI Magazine, the 'journal of record' in the field at the time.)


My own musings were deeply informed by the work of Earl R. MacCormac: /A Cognitive Theory of Metaphor/ and /Metaphor and Myth in Science and Religion/.


MacCormac argues that metaphor 'evolves' from "epiphor," the first suggestion that something is like something else, to either "dead metaphor" or "lexical term," depending on the extent to which referents suggested by the first 'something' are confirmed to correlate with similar referents in the second 'something.' E.g., "an atom is like a solar system" suggests that a nucleus is like the sun and electrons are like planets, plus orbits are at specific intervals and electrons can be moved from one orbit to another by adding energy (acceleration), just like any other satellite. As referents like these were confirmed, the epiphor became a productive metaphor and a model, i.e., the Bohr model. Eventually, our increasing knowledge of atoms and particles/waves made it clear that the model/metaphor was 'wrong' in nearly every respect, and the metaphor died. Its use in beginning chemistry suggests that it is still a useful tool for metaphorical thinking, modified to "what might you infer/reason if you looked at an atom _as if_ it were a tiny solar system?"
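
To make the "specific intervals" referent concrete (a sketch using standard hydrogen numbers, nothing specific to MacCormac): in the Bohr model the allowed energies are E_n = -13.6 eV / n^2 for n = 1, 2, 3, ..., so moving an electron from orbit n = 1 to n = 2 takes E_2 - E_1 = (-3.4 eV) - (-13.6 eV) = 10.2 eV, and only photons carrying exactly such differences are absorbed or emitted. That is roughly the point at which the epiphor has become a model one can check against spectra.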


In the case of AI, the joint epiphors (the computer is like a mind, the mind is like a computer) should have rapidly become dead metaphors. Instead they became models ("physical symbol system"), and most in the community insisted that they were lexical terms (notably Pylyshyn, Newell, and Simon). To explain this, I added the idea of a "paraphor" to MacCormac's evolutionary sequence: a metaphor so ingrained in a paradigm that those thinking within that paradigm cannot perceive the obvious failures of the metaphor.


MacCormac's second book argues for the pervasiveness of the use and misuse of metaphor, and its relationship to models (mathematical and illustrative), in both science and religion. The "Scientific Method," the process of doing science, is itself a metaphor (at best) that should have become a dead metaphor, as there is abundant evidence that 'science' is not done 'that way'; it is only described, after the fact, as if it had been done that way. In an Ouroborosian twist, even MacCormac's theory of metaphor is itself a metaphor.


If this thread attracts interest, I think the work of MacCormac would provide a rich mine of potential ideas and a framework for the discussion. Unfortunately, it mostly seems to be behind paywalls: the books themselves, and JSTOR or its ilk.


dave west




On Fri, Jun 9, 2017, at 03:11 PM, Steven A Smith wrote:

    I meant to spawn a fresh proto-thread here, sorry.


        Given that we have been splitting hairs on terminology, I
        wanted to at least OPEN the topic that has been grazed over
        and over, and that is the distinction between Model,
        Metaphor, and Analogy.


        I specifically mean


         1. Mathematical Model
            <https://en.wikipedia.org/wiki/Mathematical_model>
         2. Conceptual Metaphor
            <https://en.wikipedia.org/wiki/Conceptual_metaphor>
         3. Formal Analogy <https://en.wikipedia.org/wiki/Analogy>

        I don't know if this narrows it down enough to discuss, but I
        think these three terms have been bandied about loosely and
        widely enough lately to deserve a little more explication?

        I could rattle on for pages about my own
        usage/opinions/distinctions but trust that would just pollute
        a thread before it had a chance to start, if start it can.

        A brief Google search gave me THIS reference, which looks
        promising, but as usual, I'm not willing to go past a paywall
        or beg a colleague/institution for access (I know LANL's
        reference library will probably get this for me if I go in
        there!).

        
        http://www.blackwellreference.com/public/tocnode?id=g9780631221081_chunk_g97806312210818











============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/ by Dr. Strangelove
