Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-17 Thread Mike Tintner
Colin: right or wrong...I have a working physical model for consciousness. Just so. Serious scientific study of consciousness entails *models* not verbal definitions. The latter are quite hopeless. Richard opined that there is a precise definition of the hard problem of consciousness. There

[agi] The New World Order

2008-11-17 Thread Mike Tintner
Comment on Marketwatch forum today: Lots of talk about the New World Order (NWO)... what really bothers me about the NWO is that there are bound to be lots of robots involved. I hate robots.

Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-17 Thread Matt Mahoney
--- On Sun, 11/16/08, Mark Waser [EMAIL PROTECTED] wrote: I wrote: I think the reason that the hard question is interesting at all is that it would presumably be OK to torture a zombie because it doesn't actually experience pain, even though it would react exactly like a human being

Re: [agi] The New World Order

2008-11-17 Thread Bob Mottram
2008/11/17 Mike Tintner [EMAIL PROTECTED]: Comment on Marketwatch forum today: Lots of talk about the New World Order (NWO)... what really bothers me about the NWO is that there are bound to be lots of robots involved. I hate robots. The way I look at it, once we have robots with

Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-17 Thread Mark Waser
How do you propose grounding ethics? Ethics is building and maintaining healthy relationships for the betterment of all. Evolution has equipped us all with a good solid moral sense that frequently we don't/can't even override with our short-sighted selfish desires (that, more frequently

Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-17 Thread Richard Loosemore
John G. Rose wrote: From: Richard Loosemore [mailto:[EMAIL PROTECTED] Three things. First, David Chalmers is considered one of the world's foremost researchers in the consciousness field (he is certainly now the most celebrated). He has read the argument presented in my paper, and he has

Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-17 Thread Richard Loosemore
Ben Goertzel wrote: Sorry to be negative, but no, my proposal is not in any way a modernization of Peirce's metaphysical analysis of awareness. Could you elaborate the difference? It seems very similar to me. You're saying that consciousness has to do with the bottoming-out of

Zombies, Autism and Consciousness [WAS Re: [agi] A paper that actually does solve the problem of consciousness]

2008-11-17 Thread Richard Loosemore
Trent Waddington wrote: Richard, After reading your paper and contemplating the implications, I believe you have done a good job at describing the intuitive notion of consciousness that many lay-people use the word to refer to. I don't think your explanation is fleshed out enough for those

Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-17 Thread Richard Loosemore
Benjamin Johnston wrote: I completed the first draft of a technical paper on consciousness the other day. It is intended for the AGI-09 conference, and it can be found at: Hi Richard, I don't have any comments yet about what you have written, because I'm not sure I fully understand

Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-17 Thread Richard Loosemore
Colin Hales wrote: Dear Richard, I have an issue with the 'falsifiable predictions' being used as evidence of your theory. The problem is that right or wrong...I have a working physical model for consciousness. Predictions 1-3 are something that my hardware can do easily. In fact that kind

Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-17 Thread Matt Mahoney
--- On Mon, 11/17/08, Mark Waser [EMAIL PROTECTED] wrote: How do you propose testing whether a model is correct or not? By determining whether it is useful and predictive -- just like what we always do when we're practicing science (as opposed to spouting BS). An ethical model tells you

Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-17 Thread Matt Mahoney
--- On Mon, 11/17/08, Richard Loosemore [EMAIL PROTECTED] wrote: What I am claiming (and I will make this explicit in a revision of the paper) is that these notions of explanation, meaning, solution to the problem, etc., are pushed to their breaking point by the problem of consciousness. So

Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction

2008-11-17 Thread Matt Mahoney
--- On Mon, 11/17/08, Ed Porter [EMAIL PROTECTED] wrote: For example, in fifty years, I think it is quite possible we will be able to say with some confidence if certain machine intelligences we design are conscious or not, and whether their pain is as real as the pain of another type of

Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-17 Thread Matt Mahoney
--- On Mon, 11/17/08, Richard Loosemore [EMAIL PROTECTED] wrote: Okay, let me phrase it like this: I specifically say (or rather I should have done... this is another thing I need to make more explicit!) that the predictions are about making alterations at EXACTLY the boundary of the analysis

Dan Dennett [WAS Re: [agi] A paper that actually does solve the problem of consciousness]

2008-11-17 Thread Richard Loosemore
Ben Goertzel wrote: Ed, BTW on this topic my view seems closer to Richard's than yours, though not anywhere near identical to his either. Maybe I'll write a blog post on consciousness to clarify, it's too much for an email... I am very familiar with Dennett's position on consciousness, as

Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-17 Thread Mark Waser
I have no doubt that if you did the experiments you describe, that the brains would be rearranged consistently with your predictions. But what does that say about consciousness? What are you asking about consciousness? - Original Message - From: Matt Mahoney [EMAIL PROTECTED] To:

Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-17 Thread Harry Chesley
On 11/14/2008 9:27 AM, Richard Loosemore wrote: I completed the first draft of a technical paper on consciousness the other day. It is intended for the AGI-09 conference, and it can be found at: http://susaro.com/wp-content/uploads/2008/11/draft_consciousness_rpwl.pdf Good paper. A

Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-17 Thread Richard Loosemore
Matt Mahoney wrote: --- On Mon, 11/17/08, Richard Loosemore [EMAIL PROTECTED] wrote: Okay, let me phrase it like this: I specifically say (or rather I should have done... this is another thing I need to make more explicit!) that the predictions are about making alterations at EXACTLY the

Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-17 Thread Richard Loosemore
Harry Chesley wrote: On 11/14/2008 9:27 AM, Richard Loosemore wrote: I completed the first draft of a technical paper on consciousness the other day. It is intended for the AGI-09 conference, and it can be found at:

Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction

2008-11-17 Thread Matt Mahoney
--- On Mon, 11/17/08, Mark Waser [EMAIL PROTECTED] wrote: No it won't, because people are free to decide what makes pain real. What? You've got to be kidding . . . . What makes pain real is how the sufferer reacts to it -- not some abstract wishful thinking that we use to justify our

RE: FW: [agi] A paper that actually does solve the problem of consciousness--correction

2008-11-17 Thread Ed Porter
Matt, First, it is not clear people are free to decide what makes pain real, at least subjectively real. If I zap you with a horrible electric shock of the type Saddam Hussein might have used when he was the chief interrogator/torturer of Iraq's Baathist party, it is not clear exactly how much

Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-17 Thread Mark Waser
An excellent question from Harry . . . . So when I don't remember anything about those towns, from a few minutes ago on my road trip, is it because (a) the attentional mechanism did not bother to lay down any episodic memory traces, so I cannot bring back the memories and analyze them, or (b)

Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-17 Thread Ben Goertzel
Thanks Richard ... I will re-read the paper with this clarification in mind. On the face of it, I tend to agree that the concept of explanation is fuzzy and messy and probably is not, in its standard form, useful for dealing with consciousness. However, I'm still uncertain as to whether your

RE: FW: [agi] A paper that actually does solve the problem of consciousness--correction

2008-11-17 Thread Matt Mahoney
--- On Mon, 11/17/08, Ed Porter [EMAIL PROTECTED] wrote: First, it is not clear people are free to decide what makes pain real, at least subjectively real. I mean that people are free to decide if others feel pain. For example, a scientist may decide that a mouse does not feel pain when it is

Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction

2008-11-17 Thread Trent Waddington
On Tue, Nov 18, 2008 at 7:44 AM, Matt Mahoney [EMAIL PROTECTED] wrote: I mean that people are free to decide if others feel pain. For example, a scientist may decide that a mouse does not feel pain when it is stuck in the eye with a needle (the standard way to draw blood) even though it

Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction

2008-11-17 Thread Eric Burton
There are procedures in place for experimenting on humans. And the biologies of people and animals are orthogonal! Much of this will be simulated soon On 11/17/08, Trent Waddington [EMAIL PROTECTED] wrote: On Tue, Nov 18, 2008 at 7:44 AM, Matt Mahoney [EMAIL PROTECTED] wrote: I mean that

Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction

2008-11-17 Thread Matt Mahoney
--- On Mon, 11/17/08, Mark Waser [EMAIL PROTECTED] wrote: Autobliss responds to pain by changing its behavior to make it less likely. Please explain how this is different from human suffering. And don't tell me its because one is human and the other is a simple program, because... Why
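For readers following the autobliss exchange above, here is a minimal sketch, in Python, of a program that "responds to pain by changing its behavior to make it less likely". It is not Mahoney's actual autobliss code; the class name, reward scheme, and parameters are illustrative assumptions only.

# Minimal pain-avoidance learner (illustrative sketch, NOT Mahoney's autobliss).
import random

class PainAvoider:
    def __init__(self, actions, learning_rate=0.1):
        self.q = {a: 0.0 for a in actions}   # learned value of each action
        self.lr = learning_rate

    def choose(self):
        # Greedy choice: prefer actions with the highest learned value.
        best = max(self.q.values())
        return random.choice([a for a, v in self.q.items() if v == best])

    def feel(self, action, pain):
        # Negative reinforcement: an action followed by "pain" is devalued,
        # so it becomes less likely to be chosen again.
        target = -1.0 if pain else 1.0
        self.q[action] += self.lr * (target - self.q[action])

if __name__ == "__main__":
    agent = PainAvoider(actions=["left", "right"])
    for _ in range(100):
        a = agent.choose()
        agent.feel(a, pain=(a == "left"))   # "left" is always punished
    print(agent.q)                          # "left" ends up strongly avoided

Whether this kind of behavioral adaptation amounts to suffering is exactly what Waser and Mahoney are disputing above; the sketch only shows how little machinery the behavioral part requires.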

RE: FW: [agi] A paper that actually does solve the problem of consciousness--correction

2008-11-17 Thread Ed Porter
Matt, With regard to your first point I largely agree with you. I would, however, qualify it with the fact that many of us find it hard not to sympathize with people or animals, such as a dog, under certain circumstances when we directly sense outward manifestations that they are experiencing

Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction

2008-11-17 Thread Matt Mahoney
--- On Mon, 11/17/08, Trent Waddington [EMAIL PROTECTED] wrote: On Tue, Nov 18, 2008 at 7:44 AM, Matt Mahoney [EMAIL PROTECTED] wrote: I mean that people are free to decide if others feel pain. For example, a scientist may decide that a mouse does not feel pain when it is stuck in the eye

Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction

2008-11-17 Thread Matt Mahoney
--- On Mon, 11/17/08, Eric Burton [EMAIL PROTECTED] wrote: There are procedures in place for experimenting on humans. And the biologies of people and animals are orthogonal! Much of this will be simulated soon When we start simulating people, there will be ethical debates about that. And

RE: FW: [agi] A paper that actually does solve the problem of consciousness--correction

2008-11-17 Thread Matt Mahoney
Before you can start searching for consciousness, you need to describe precisely what you are looking for. -- Matt Mahoney, [EMAIL PROTECTED] --- On Mon, 11/17/08, Ed Porter [EMAIL PROTECTED] wrote: From: Ed Porter [EMAIL PROTECTED] Subject: RE: FW: [agi] A paper that actually does solve

Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-17 Thread Harry Chesley
Richard Loosemore wrote: Harry Chesley wrote: A related question: How do you explain the fact that we sometimes are aware of qualia and sometimes not? You can perform the same actions paying attention or on auto pilot. In one case, qualia manifest, while in the other they do not. Why is that?

Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-17 Thread Harry Chesley
Richard Loosemore wrote: I completed the first draft of a technical paper on consciousness the other day. It is intended for the AGI-09 conference, and it can be found at: http://susaro.com/wp-content/uploads/2008/11/draft_consciousness_rpwl.pdf One other point: Although this is a

RE: FW: [agi] A paper that actually does solve the problem of consciousness--correction

2008-11-17 Thread Ed Porter
Matt, Although different people (or even the same people at different times) define consciousness differently, there is a considerable degree of overlap. I think a good enough definition to get started with is that which we humans feel our minds are directly aware of, including

Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction

2008-11-17 Thread Trent Waddington
On Tue, Nov 18, 2008 at 9:03 AM, Ed Porter [EMAIL PROTECTED] wrote: I think a good enough definition to get started with is that which we humans feel our minds are directly aware of, including awareness of senses, emotions, perceptions, and thoughts. (This would include much of what Richard

Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction

2008-11-17 Thread Mike Tintner
[so who's near Berkeley to report back?]: UC Berkeley Cognitive Science Students Association presents: Pain and the Brain Wednesday, November 19th 5101 Tolman Hall 6 pm - 8 pm UCSF neuroscientist Dr. Howard Fields and Berkeley philosopher John Searle represent some of the most

Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-17 Thread Richard Loosemore
Harry Chesley wrote: Richard Loosemore wrote: I completed the first draft of a technical paper on consciousness the other day. It is intended for the AGI-09 conference, and it can be found at: http://susaro.com/wp-content/uploads/2008/11/draft_consciousness_rpwl.pdf One other point:

Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-17 Thread Richard Loosemore
Mark Waser wrote: An excellent question from Harry . . . . So when I don't remember anything about those towns, from a few minutes ago on my road trip, is it because (a) the attentional mechanism did not bother to lay down any episodic memory traces, so I cannot bring back the memories and

RE: FW: [agi] A paper that actually does solve the problem of consciousness--correction

2008-11-17 Thread Ed Porter
Trent, No, it is not easy to implement. I am talking about the type of awareness that we humans have when we say we are conscious of something. Some of the studies we have on the neural correlates of consciousness indicate humans only report being consciously aware of things that receive

Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction

2008-11-17 Thread Trent Waddington
On Tue, Nov 18, 2008 at 10:21 AM, Ed Porter [EMAIL PROTECTED] wrote: I am talking about the type of awareness that we humans have when we say we are conscious of something. You must talk to different humans to me. I've not had anyone use the word conscious around me in decades... and usually

RE: FW: [agi] A paper that actually does solve the problem of consciousness--correction

2008-11-17 Thread Matt Mahoney
--- On Mon, 11/17/08, Ed Porter [EMAIL PROTECTED] wrote: I think a good enough definition to get started with is that which we humans feel our minds are directly aware of, including awareness of senses, emotions, perceptions, and thoughts. You are describing episodic memory, the ability to recall

Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-17 Thread Colin Hales
Richard Loosemore wrote: Colin Hales wrote: Dear Richard, I have an issue with the 'falsifiable predictions' being used as evidence of your theory. The problem is that right or wrong...I have a working physical model for consciousness. Predictions 1-3 are something that my hardware can

[agi] How much/little qualia do you need to be conscious

2008-11-17 Thread Robert Swaine
Richard, This is probably covered elsewhere, but help me on this, just some thoughts at the end. Many humans don't share the full complement of sensory apparatus: blind, deaf, cannot feel pain, taste, vestibular sense of motion, body sensation, etc.; either through damage or congenitally. So

RE: FW: [agi] A paper that actually does solve the problem of consciousness--correction

2008-11-17 Thread Ed Porter
This is a subject on which I have done a lot of talking to myself, since as Richard's paper implies, our own subjective experiences are among the most real things to us. And we have the most direct access to our own consciousness, and its sense of richness, simultaneity, and meaning. I am also

RE: FW: [agi] A paper that actually does solve the problem of consciousness--correction

2008-11-17 Thread Ed Porter
See the post I just sent to Matt Mahoney. You have a much greater access to your own memory than just high-level episodic memory. Although your memories of such experiences are more limited than the actual experiences, you can remember qualities about them that include their sense of richness,

Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-17 Thread Richard Loosemore
Sorry for the late reply. Got interrupted. Vladimir Nesov wrote: (I'm sorry that I make some unclear statements on semantics/meaning, I'll probably get to the description of this perspective later on the blog (or maybe it'll become obsolete before that), but it's a long story, and writing

Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-17 Thread Richard Loosemore
Colin Hales wrote: Richard Loosemore wrote: Colin Hales wrote: Dear Richard, I have an issue with the 'falsifiable predictions' being used as evidence of your theory. The problem is that right or wrong...I have a working physical model for consciousness. Predictions 1-3 are something

Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-17 Thread Mike Tintner
Colin: Qualia generation has been highly localised into specific regions in cranial brain material already. Qualia are not in the periphery. Qualia are not in the spinal CNS. Qualia are not in the cranial periphery, e.g. eyes or lips. Colin, this is to a great extent nonsense. Which

[agi] Now hear this: Human qualia are generated in the human cranial CNS and no place else

2008-11-17 Thread Colin Hales
Mike Tintner wrote: Colin: Qualia generation has been highly localised into specific regions in *cranial* brain material already. Qualia are not in the periphery. Qualia are not in the spinal CNS. Qualia are not in the cranial periphery, e.g. eyes or lips. Colin, this is to a great extent

Re: [agi] Now hear this: Human qualia are generated in the human cranial CNS and no place else

2008-11-17 Thread Mike Tintner
Colin: YES. Brains don't have their own sensors or self-represent with a perceptual field. So what? That's got nothing whatever to do with the matter at hand. CUT cortex and you can kill off 'what it is like' percepts out there in the body (although in confusing ways). Touch appropriate exposed

Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-17 Thread Colin Hales
Richard Loosemore wrote: Colin Hales wrote: Richard Loosemore wrote: Colin Hales wrote: Dear Richard, I have an issue with the 'falsifiable predictions' being used as evidence of your theory. The problem is that right or wrong...I have a working physical model for consciousness.

Re: [agi] Now hear this: Human qualia are generated in the human cranial CNS and no place else

2008-11-17 Thread Trent Waddington
On Tue, Nov 18, 2008 at 2:50 PM, Mike Tintner [EMAIL PROTECTED] wrote: Intelligence was clearly at first *distributed* through a proto-nervous system throughout the body. Watch a sea anemone wait and then grab, and then devour a fish that approaches it and you will be convinced of that. The