On 26 Oct 2012, at 21:14, meekerdb wrote:

On 10/26/2012 5:57 AM, Bruno Marchal wrote:


On 25 Oct 2012, at 07:10, meekerdb wrote:

On 10/24/2012 9:23 PM, Craig Weinberg wrote:

Or what if we don't care? We don't care about slaughtering cattle, which are pretty smart as computers go. We manage not to think about starving children in Africa, and they *are* humans. And we ignore the looming disasters of oil depletion, water pollution, and global warming which will beset humans who are our children.

Sure, yeah, I wouldn't expect mainstream society to care, except maybe some people. I am mainly focused on what seems like an astronomically unlikely prospect: that we will someday find it possible to make a person out of a program, but won't be able to just make the program itself with no person attached.

Right. John McCarthy (inventor of LISP) worried and wrote about that problem decades ago. He cautioned that we should not make robots conscious, with emotions like humans, because then it would be unethical to use them like robots.

I doubt we will have any choice in the matter. I think that intelligence is a purely emotional state,

I don't know what a 'purely emotional' state would be. One with affect but no content?

It has an implicit content, like a sort of acceptance of dying or being defeated. Stupidity usually denies this, unconsciously. The emotion involved is a kind of fear related to the apprehension of existence/non-existence. Anyone can become intelligent in one second, or stupid in one second, and intelligence is what can change competence; there is a sort of derivative relation between competence and intelligence.
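(One hedged way to read that last remark literally: if C(t) is competence and I(t) is intelligence, the claim is roughly dC/dt ∝ I(t), i.e. intelligence is the rate at which competence can change, not a stock of competence itself. This gloss is only illustrative; the thread itself offers no formula.)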




and that we can't separate it from the other emotions. They will be conscious and have emotions, for economic reasons only. Not human emotions, but the emotions of humans' slaves.

Isn't that what I said McCarthy warned about? If we make a robot too intelligent, e.g. with human-like intelligence, it will necessarily have feelings that we should ethically take into account.

Yes.
And then there is Minsky's warning, which is that we must be happy if the machines will still use us as pets. I don't think we will be able to control anything about this. As with drugs, prohibition will only accelerate things, with less control, in the underground.



No reason to worry: it will take some time, in our branches of histories.




Especially given that we have never made a computer program that can do anything whatsoever other than reconfigure whatever materials are able to execute the program, I find it implausible that there will be a magical line of code which cannot be executed without an experience happening to someone.

So it's a non-problem for you. You think that only man-born-of-woman or wetware can be conscious and have qualia. Or are you concerned that we are inadvertently offending atoms all the time?

No matter how hard we try, we can never just make a drawing of these functions to check our math without invoking the power of life and death. It's really silly. It's not even good sci-fi; it's just too lame.

I think we can, because although I like Bruno's theory I think the MGA (Movie Graph Argument) is wrong, or at least incomplete.

OK. Thanks for making this clear. What is missing?



I think the simulated intelligence needs a simulated environment, essentially another world, in which to *be* intelligent.

But in arithmetic you have all possible simulations. The UD (Universal Dovetailer), for example, does simulate all the solutions of QM+GR, even though the "real QM+GR" emerges from all computations. So you have the simulated beings in their simulated environment (and we have to explain why something like GR+QM wins the "universal machines battle").
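(For readers unfamiliar with the UD, here is a minimal sketch in Python of what "dovetailing" means. The step_counter programs are hypothetical stand-ins for an enumeration of all programs of a universal machine; the point is only the interleaving, which guarantees that every program, halting or not, receives unboundedly many execution steps.)

    from itertools import count

    def step_counter(n):
        # Toy "program" n: never halts, just emits its successive steps.
        # Stand-in for the n-th program of a universal machine.
        for step in count():
            yield (n, step)

    def dovetail():
        # At stage k, admit program k, then run one step of every
        # program admitted so far. No non-halting program can block
        # the others, and each gets arbitrarily many steps.
        running = []
        for k in count():
            running.append(step_counter(k))
            for prog in running:
                yield next(prog)

    # First ten interleaved steps:
    # (0,0) (0,1) (1,0) (0,2) (1,1) (2,0) (0,3) (1,2) (2,1) (3,0)
    dv = dovetail()
    for _ in range(10):
        print(next(dv))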

I agree. But the MGA is used in a misleading way, to imply that the environment is merely physics and isn't needed, whereas I think it actually implies that all (or a lot) of physics is needed and must be part of the simulation. This is related to Saibal's view that all the counterfactuals are present in the wave function of the universe.

But it is present in arithmetic too, and we have to explain the apparent physics from that. I am not sure where the MGA is misused, as the whole thing insists that physics must be present, and yet that we cannot postulate it, insofar as the goal is to solve the mind-body problem (and not to take a vacation in Spain, or make a cup of coffee).

Bruno



Brent





And that's where your chalkboard consciousness fails. It needs to be able to interact within a chalkboard world. So it's not just a question of going to a low enough level; it's also a question of going to a high enough level.

OK (as a reply to Craig's point).

Bruno



Brent
The person I was when I was 3 years old is dead. He died because too much new information was added to his brain.
    -- Saibal Mitra

--
http://iridia.ulb.ac.be/~marchal/