Stathis Papaioannou wrote:
> On 04/06/07, [EMAIL PROTECTED] wrote:
>     I see you haven't understood my definitions.  It may be my fault due to
>     the way I worded things.  You are of course quite right that: 'it's
>     possible to correctly reason about cognitive systems at least well
>     enough to predict their behaviour to a useful degree and yet not care
>     at all about what happens to them'.  But this is only pattern
>     recognition and symbolic intelligence, *not* fully reflective
>     intelligence.  Reflective intelligence involves additional
>     representations enabling a system to *integrate* the aforementioned
>     abstract knowledge (and experience it directly as qualia).    Without
>     this ability an AI would be unable to maintain a stable goal structure
>     under recursive self improvement and therefore would remain limited.
> Are you saying that a system which has reflective intelligence would be 
> able to in a sense emulate the system it is studying, and thus 
> experience a very strong form of empathy? That's an interesting idea, 
> and it could be that very advanced AI would have this ability; after 
> all, humans have the ability for abstract reasoning which other animals 
> almost completely lack, so why couldn't there be a qualitative (or 
> nearly so) rather than just a quantitative difference between us and 
> super-intelligent beings?
> However, what would be wrong with a super AI that just had large amounts 
> of pattern recognition and symbolic reasoning intelligence, but no 
> emotions at all? 

Taken strictly, I think this idea is incoherent.  Essential to intelligence is 
taking some things as more important than others.  That's the difference 
between data collecting and theorizing.  It is a fallacy to suppose that 
emotion can be divorced from reason - emotion is part of reason.  An 
interesting example comes from attempts at mathematical AI.  Theorem-proving 
programs have been written and turned loose on axiom systems - but the result 
is a long list of theorems that mathematicians judge to be worthless and trivial.
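The failure mode described here can be sketched with a toy forward-chaining prover (a minimal illustration in Python, with hypothetical axioms and rules, not any particular historical system): turned loose on its axioms, it derives everything reachable by modus ponens, and nothing in the procedure distinguishes an interesting theorem from a trivial one.

```python
# Toy forward-chaining prover over propositional Horn clauses.
# The axioms and rules below are hypothetical, chosen only to show
# that exhaustive derivation has no built-in sense of importance.

def forward_chain(facts, rules, max_rounds=10):
    """Derive every fact reachable from the axioms by modus ponens."""
    known = set(facts)
    for _ in range(max_rounds):
        new = {head for body, head in rules
               if set(body) <= known and head not in known}
        if not new:
            break
        known |= new
    return known

# Hypothetical axiom system: two facts and a few implication rules.
facts = {"p", "q"}
rules = [
    (("p",), "p_or_anything"),   # trivial weakening
    (("p", "q"), "p_and_q"),
    (("p_and_q",), "q_and_p"),   # trivial commutation
    (("q",), "q_or_anything"),
]

theorems = forward_chain(facts, rules)
# The prover derives all six reachable statements with equal enthusiasm;
# ranking them by interest would require something outside the logic.
```

The point of the sketch: adding a valuation over the derived set (i.e., taking some results as more important than others) is exactly the emotion-like component the paragraph argues cannot be divorced from reason.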

Otherwise I entirely agree with Stathis.

> It could work as the ideal disinterested scientist, 
> doing theoretical physics without regard for its own or anyone else's 
> feelings. You would still have to say that it was super-intelligent, 
> even though it is an idiot from the reflective intelligence 
> perspective. It also would pose no threat to anyone because all it wants 
> to do and all it is able to do is solve abstract problems, and in fact I 
> would feel much safer around this sort of AI than one that has real 
> power and thinks it has my best interests at heart.
> Secondly, I don't see how the ability to fully empathise would help the 
> AI improve itself or maintain a stable goal structure. Adding memory and 
> processing power would bring about self-improvement, perhaps even 
> recursive self-improvement if it can figure out how to do this more 
> effectively with every cycle, and yet it doesn't seem that this would 
> require the presence of any other sentient beings in the universe at 
> all, let alone the ability to empathise with them.
> Finally, the majority of evil in the world is not done by psychopaths, 
> but by "normal" people who are aware that they are causing hurt, may 
> feel guilty about causing hurt, but do it anyway because there is a 
> competing interest that outweighs the negative emotions.

Or they may feel proud of their actions because they have supported those close 
to them against competition from those distant from them.  To suppose that 
empathy and reflection can eliminate all competition for limited resources 
strikes me as Pollyannaish.

Brent Meeker

You received this message because you are subscribed to the Google Groups 
"Everything List" group.