[Fis] a short survey on the “mental models”
Dear FIS Colleagues,

Plato’s allegory of the prisoners in the cave is (maybe!) one of the first attempts to pay attention to models of consciousness [Plato, 2002, Book VII, p. 373]. Let us remember that the best candidate for such a prisoner is the brain, including the brains of all kinds of Infoses. (To avoid misunderstandings with the concepts Subject, agent, animal, human, society, humanity, living creature, etc., we use the abstract concept “INFOS” to denote any of them, as well as all artificial creatures with features similar to the former [Markov et al, 2007].)

There are at least two types of models created by and in the Infos’ consciousness: models isomorphic to (corresponding to) the structure of the input from the sensors (called “mental models” in cognitive science [Johnson-Laird, 1983]), and non-isomorphic models (textual, in any language; called “deductive, analytic, or logical models” [Wittgenstein, 1922]).

Both types are very important, but the second (deductive) type exists only in highly and very complexly organized Infoses (humans, societies, humanity). Deductive modeling needs a language as a tool. Maybe some animals have some language abilities, but these are not enough for deductive modeling.

Now I shall continue with a short survey on the “mental models”. In the next post I shall discuss the deductive models.

For humans, mental models are psychological representations of real, hypothetical, or imaginary situations. The mental model theory was established by Philip Johnson-Laird in [Johnson-Laird, 1983] and has proven extremely powerful in predicting and explaining higher-level cognition in humans [MMRW, 2018].

For other types of Infoses, mental models correspond to the level of organization of their consciousness; for instance, art is a kind of “social mental model”.
In 1896, the American philosopher Charles Sanders Peirce postulated that reasoning is a process by which a human: “examines the state of things asserted in the premises, forms a diagram of that state of things, perceives in the parts of the diagram relations not explicitly mentioned in the premises, satisfies itself by mental experiments upon the diagram that these relations would always subsist, or at least would do so in a certain proportion of cases, and concludes their necessary, or probable, truth.” [Peirce, 1896].

In Wittgenstein’s “picture” theory of the meaning of language, mental models have a structure that corresponds to the structure of what they represent [Wittgenstein, 1922]. They are accordingly akin to architects’ models of buildings, to molecular biologists’ models of complex molecules, and to physicists’ diagrams of particle interactions.

In 1943, the Scottish psychologist Kenneth Craik proposed a similar idea: “... human thought has a definite function; it provides a convenient small-scale model of a process so that we can, for instance, design a bridge in our minds and know that it will bear a train passing over it instead of having to conduct a number of full-scale experiments; and the thinking of animals represents on a more restricted scale the ability to represent, say, danger before it comes and leads to avoidance instead of repeated bitter experience” [Craik, 1943, page 59].

“If the organism carries a 'small-scale model' of external reality and of its own possible actions within its head, it is able to try out various alternatives, conclude which is the best of them, react to future situations before they arise, utilize the knowledge of past events in dealing with the present and future, and in every way to react in a much fuller, safer, and more competent manner to the emergencies which face it” [Craik, 1943, page 61].
Since Craik’s insight, cognitive scientists have argued that the mind constructs mental models as a result of perception, imagination, knowledge, and the comprehension of discourse. They study how children develop such models, how to design artifacts and computer systems for which it is easy to acquire a model, how a model of one domain may serve as an analogy for another domain, and how models engender thoughts, inferences, and feelings [MMRW, 2018].

To be continued...

Friendly greetings
Krassimir

References

[Craik, 1943] Kenneth James Williams Craik. The Nature of Explanation. Cambridge: Cambridge University Press, 1943. Reprinted October 1967, ISBN 9780521094450. 136 pages. http://www.cambridge.org/us/academic/subjects/psychology/cognition/nature-explanation?format=PB&isbn=9780521094450#cM4ptICCc6vUTlK0.97

[Johnson-Laird, 1983] Philip N. Johnson-Laird. Mental Models. Cambridge: Cambridge University Press; Cambridge, Mass.: Harvard University Press, 1983. Italian translation by Alberto Mazzocco, Il Mulino, 1988. Japanese translation, Japan UNI Agency, 1989.

[Johnson-Laird, 1995] Philip N. Johnson-Laird. Mental models, deductive reasoning, and the brain. (1995) In Gazzaniga
Re: [Fis] a short survey on the “mental models”
Dear Krassimir,

I agree with you. In our framework, your second (deductive) type exists only at the high DIMENSIONAL level of the brain. When I see a three-dimensional cat, my mind adds to the 3D picture other features (we call them dimensions): I start to think that its name is Jack, that it is a feline, that it is nice and tender, and so on. The only difference from your account is that, according to our framework, we can physically quantify such higher dimensions (leaving aside human language, which is something more subjective). We have tried to demonstrate how such a process is feasible: https://link.springer.com/article/10.1007/s11571-017-9428-2

Ciao!

> On 17 March 2018 at 17:59, Krassimir Markov wrote: