On 13 Jan 2014, at 20:27, meekerdb wrote:

On 1/13/2014 7:17 AM, Gabriel Bodeen wrote:


On Friday, January 10, 2014 8:17:13 PM UTC-6, Brent wrote:
On 1/10/2014 10:49 AM, Gabriel Bodeen wrote:
On Tuesday, December 31, 2013 4:25:04 PM UTC-6, Brent wrote:
As you've explained it above, your theory makes a rock just as conscious as a brain. I'm sure you must have a more subtle theory than that, so I'll ask you the same thing I asked Bruno: if I make a robot, what do I have to do to make it conscious or not conscious?

Brent

Did you receive any interesting answers?

Hm, should I take that as a negative answer, or merely as a skipped question?

I didn't get any answer from Mr. Owen. Bruno's answer is that the robot has to be Lobian, i.e. one that can do proofs by transfinite induction.

Normal induction is enough.

You can even limit induction to the decidable (sigma_0) formulas, but then you have to add the exponentiation axioms:

x^0 = 1
x^s(y) = x * (x^y)

Those exponentiation axioms are not provable from the axioms for addition and multiplication without induction on at least all semi-decidable (RE, sigma_1) formulas.
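(One standard way to write this out, for readers who want the schema explicitly: the two exponentiation axioms, together with the induction schema restricted to a formula class Gamma, with Gamma being sigma_0 or sigma_1 as above.)

\[
  x^{0} = 1 \qquad\qquad x^{s(y)} = x \cdot x^{y}
\]
and, for each formula \(\varphi(y)\) in the class \(\Gamma\),
\[
  \bigl(\varphi(0) \,\wedge\, \forall y\,(\varphi(y) \rightarrow \varphi(s(y)))\bigr) \;\rightarrow\; \forall y\,\varphi(y).
\]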






I have an adequate background in neuroscience but I'm mostly ignorant of AI math, robotics work, and philosophy of mind, so excuse my rampant speculation. This is what I'd try in the design of a robotic brain to switch consciousness on and off and test for its presence: First, I'd give the robot brain modules to interpret its sensory inputs in an associative manner, analogous to human sensory associative regions. All these sensory inputs would feed into the decision-making module (DMM). One of the first steps taken by the DMM is determining how important each sensory signal is for its current objectives. It decides to pay attention to a subset of those signals.
So is it conscious of those signals?  How does it decide?

1: As described in the next two sentences of the original paragraph, no. 2: The choice of function used to select the subset is unimportant to the experiment, but if we were aiming for biomimicry then each sensory module would report a degree of stimulation, and the attention function would block all signals but the 1 to 7 most stimulated.
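For concreteness, here is a minimal sketch (in Python, with every name and the cap of 7 chosen purely for illustration) of the kind of attention function I mean:

# Hypothetical sketch: each sensory module reports a degree of stimulation,
# and the attention step keeps only the few most stimulated signals.

from dataclasses import dataclass
from typing import Any, List

@dataclass
class SensorySignal:
    modality: str        # e.g. "vision", "audition", "touch"
    content: Any         # whatever the associative module produced
    stimulation: float   # how strongly the module is being driven

def attend(signals: List[SensorySignal], max_attended: int = 7) -> List[SensorySignal]:
    """Block all signals except the 1 to 7 most stimulated (the cap is arbitrary)."""
    ranked = sorted(signals, key=lambda s: s.stimulation, reverse=True)
    return ranked[:max_attended]

# The DMM would then reason only over attend(current_signals).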
Second, I'd put a switch on another input to make it part of the attention subset or not:
What other input would you put a switch on? What inputs are there besides sensory? I think you've assumed "conscious" = "self-aware". Is one conscious when one is "lost in thought"?

1: The switch would go on the signals described in the second half of the sentence that you hastily cut in half. :D
2: Inputs besides sensory associations are important to a functioning robot but not, I predict, to a robot designed only to test for consciousness.
3: I chose to address the specific matter of qualia rather than all of what people mean by "conscious", as described in the "I predict this because..." sentence of the original paragraph. :D
4: I suspect that the human experience of being lost in thought differs between specific cases. Most times that I'd call "lost in thought" I can still operate (drive, walk, eat) on "auto-pilot", which undoubtedly requires my senses to be engaged, but afterwards the only things I can recall experiencing are the thoughts I was lost in. Introspective evidence and memory being as unreliable as they are, that shouldn't be taken as a necessarily correct description. But if it is a correct description, then by my definitions in the original paragraph, I'd say that I was conscious. However, if what you mean by "conscious" includes awareness of surroundings, then no, I was not conscious under that definition.

Yes, it seems there are different levels and kinds of consciousness: perception of the external world, perception of one's body, modeling one's place in the external world, being aware of one's thoughts (although I think this is overrated), feelings of empathy,...


the attention's choice of signals would also be an input to the DMM, and I could turn on or off whether that attentional choice was itself let pass through to the next processing stages. I would predict that, with the switch turned off, the robot would not be conscious (i.e. it would have no experience of qualia), but that with the switch turned on, the robot would be conscious (i.e. it would experience qualia corresponding to the signals it is paying attention to). I predict this because it seems to me that the experience of qualia can be described as being simultaneously aware of a sensory datum and (recursively) aware of being aware of it. If the robot AI were sufficiently advanced that we could program it to talk about its experiences, the test of my prediction would be that, with the switch off, the robot would talk about what it sees and hears, and that with the switch on, the robot would also talk about the fact that it knew it was seeing and hearing things.
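A toy version of the switch, again purely illustrative and self-contained: with the switch off the DMM sees only the attended signals; with it on, the attentional choice itself is fed back in as a further input, which is the "aware of being aware" relationship I'm predicting matters.

# Hypothetical sketch of the switch. `signals` is a list of objects with
# .modality and .stimulation attributes, as in the attention sketch above.

def dmm_inputs(signals, switch_on, max_attended=7):
    attended = sorted(signals, key=lambda s: s.stimulation, reverse=True)[:max_attended]
    inputs = {"attended_signals": attended}
    if switch_on:
        # Second-order input: a report of which signals are being attended to.
        inputs["attention_report"] = [s.modality for s in attended]
    return inputs

def robot_talk(inputs):
    # What the talking robot would say, on my prediction.
    lines = ["I detect a %s signal." % s.modality for s in inputs["attended_signals"]]
    if "attention_report" in inputs:
        lines += ["I am aware that I am attending to %s." % m
                  for m in inputs["attention_report"]]
    return lines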

So is a Mars Rover conscious because it processes video from its camera to send to JPL, AND it senses that its camera is powered and working and that its transmitter is working, AND it reports those internal status variables to JPL too?

If there are two separate inputs to the transmitter, "the video feed" and "the camera is functional", then this does not satisfy the relationship I described and consequently I would predict no consciousness (of the video feed by the Mars Rover). However, that should be possible to change. The Mars Rover is, I think, semi-autonomous, meaning it is programmed to make certain decisions on its own. I'll suppose a scenario in which JPL instructs the Rover to advance toward a nifty-looking rock, but leaves the details of that operation to the Rover's programming. Then the Rover examines the video feed, identifies the pertinent rock in the video feed, and advances toward it. As it does so, it uses the video feed and the part of the video image identified as rock to continually recalculate and adjust which part of the video feed it is identifying as the rock. That scenario matches the one I described previously so I would predict that the Rover would then be conscious (of the rock).
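A stripped-down sketch of that closed loop (every function name here is invented for illustration; this is not the actual Rover software):

# Hypothetical closed loop: the previous identification of the rock is
# itself an input to the next identification, giving the recursive
# relationship described above.

def drive_toward_rock(get_frame, find_rock, steer, close_enough):
    rock_region = None                    # no identification yet
    while True:
        frame = get_frame()               # current video frame
        # The previous identification feeds back into the new one.
        rock_region = find_rock(frame, prior=rock_region)
        if close_enough(rock_region):
            break
        steer(toward=rock_region)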

I think that's the way it works, and it is also aware of a lot of parameters describing itself, e.g. its position on the surface, temperature of modules, power supply voltages and currents,...which one could equate to a feeling of health.


The Rover would still not be self-conscious (i.e. conscious of its self) in that scenario. If we wanted to build that kind of consciousness, then I predict we'd need a different set-up. A robot programmed to move so as to prevent anything from touching its robot body would need to be given a definition of what counts as its body. Then I think it would count as self-conscious. However, if you want something still deeper, like psychological self-consciousness (i.e. consciousness of its own psychological state), then you might have to build a robot and program it using quining or something like that -- I'm not sure, as this ventures far enough into AI math that I know my intuitions are a very bad guide.
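(For what "quining" refers to here: the standard toy example is a program whose only data is a description of itself, which it uses to reproduce, or more generally reason about, its own source. A classic Python version, just as an illustration of the idea and not a robot design:)

# A classic Python quine: the string s describes the whole program,
# and printing s % s reproduces the source exactly.
s = 's = %r\nprint(s %% s)'
print(s % s)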

That's where Bruno would invoke the ability to prove that not all true statements are provable.


Hmm... OK. (for now :)

Bruno




http://iridia.ulb.ac.be/~marchal/



