On Saturday, September 22, 2012 11:55:35 AM UTC-4, Bruno Marchal wrote:
>
> On 22 Sep 2012, at 17:08, Craig Weinberg wrote:
>
> On Saturday, September 22, 2012 9:10:30 AM UTC-4, Bruno Marchal wrote:
>>
>> On 21 Sep 2012, at 22:48, Craig Weinberg wrote:
>>
>> Post from my blog:
>>
>> Simple as that, really. From psychological discoveries of the subconscious and unconscious, to cognitive bias and logical fallacies, to quasi-religious faith in artificial intelligence, we seem to have a mental blind spot for emotional realities.
>>
>> What could be more human than making emotional mistakes, or having one’s judgment cloud over because of favoritism or prejudice? Yet when it comes to assessing the feasibility of a sentient being composed of programmed functions, we tend to miss this little detail entirely: personal preference. Opinion. Bias. It doesn’t bother us that machines completely lack this dimension and in all cases exhibit nothing but impersonal computation. This tends to lead the feel-blind intellect to unknowingly bond with the computer. The consistency of an automaton’s function is comforting to our cognitive self, who longs to be free of emotional bias, so much so that it is able to hide that longing from itself and project the clean lines of perfect consequences outward onto a program.
>>
>> It’s not that machines aren’t biased too - of course they are incredibly biased toward the most literal interpretations possible. But they are all biased in the same exact way, so that it seems to us a decent tradeoff. The rootless consciousness of the prefrontal cortex thinks that is a small price to pay, and one which will inevitably be mitigated by improvements in technology.
>> In its crossword puzzle universe of Boolean games, something like a lack of personhood or feeling is a minor glitch, an aesthetic ‘to be continued’ which need only be set aside for now while the more important problems of function can be solved.
>>
>> It seems that the ocean of feelings and dreams which was tapped into by Freud, Jung, and others in the 20th century has been entirely dismissed in favor of a more instrumental approach. Simulation of behaviors. Turing machine emulation. This approach has the fatal flaw of drawing the mind upside down, with intellect and logic at the base, building up to complex mimicry of mood and inflection. The mind has an ego and doesn’t know it. Thinking has promoted itself to a cause of feeling and experience rather than a highly specialized and esoteric elaboration of personhood.
>>
>> We can see this, of course, in developmental psychology and anthropology. Babies don’t come out of the womb with a flashing cursor, ready to accept programming passively. Primitive societies don’t begin with impersonal state bureaucracies and progress to chiefdoms. We seem to have to learn this lesson again and again: our humanity is not a product of strategy and programming, but of authenticity and direct participation.
>>
>> When people talk about building advanced robots and computers which will be indistinguishable from or far surpass human beings, they always seem to project a human agenda onto them. We define intelligence outside of ourselves as that which serves a function for us, not for the being itself. This again suggests to me the reflective quality of the mind, of being blinded by the reflection of our own eyes in our sunglasses. Thoughts have a hard time assessing the feeling behind themselves, and an even harder time admitting that it matters.
>>
>> I think we see this more and more in all areas of our lives - an overconfidence in theoretical approaches and a continuous disconnection from the results. We keep hoping that it will work this time, even though we probably know that it never will. It’s as if our collective psyche is waiting for our deluded minds to catch up. Waiting for us to figure out that, in spite of the graphs and tests and retooling, the machine is really not working any better.
>>
>> You are right. We have very often dismissed emotion, feelings and consciousness in humans.
>>
>> Unfortunately, dismissing emotion, feelings and consciousness in machines will not help.
>>
>> Bruno
>
> You don't see a connection between the two? There is no chance of machine feelings being a psychological projection?
>
> There is. But as far as the "emotion dismissing" problem is concerned, projecting emotion onto machines when they behave in some way is less dismissive of emotion than attributing puppetness to them by decision.

Why would it be any less dismissive? You just have the opposite of the problem in Chalmers' paper: spontaneously present and advancing qualia. If someone writes a program that draws Bugs Bunny, then as that program is improved to respond to other drawings of Elmer Fudd and Daffy Duck, and to talk like Bugs Bunny, feelings and thoughts would have to begin to appear and gradually become more real. Bugs Bunny would have to feel himself and his world as the faintest hint of non-zombie, with sudden infusions of realism and phenomenology coinciding with each software upgrade.