On Tue, Jan 19, 2010 at 1:49 PM, Brent Meeker <meeke...@dslextreme.com> wrote:

> silky wrote:
>
>> On Tue, Jan 19, 2010 at 10:30 AM, Stathis Papaioannou
>> <stath...@gmail.com> wrote:
>>
>>
>>> 2010/1/19 silky <michaelsli...@gmail.com>:
>>>
>>>
>>>> On Tue, Jan 19, 2010 at 1:24 AM, Stathis Papaioannou <
>>>> stath...@gmail.com> wrote:
>>>>
>>>>
>>>>> 2010/1/18 silky <michaelsli...@gmail.com>:
>>>>>
>>>>>
>>>>>> It would be my (naive) assumption that this is arguably trivial to
>>>>>> do. We can design a program that has a desire to 'live', a desire to
>>>>>> find mates, and to otherwise entertain itself. In this way, with some
>>>>>> other properties, we can easily model simple pets.
>>>>>>
>>>>>>
>>>>> Brent's reasons are valid,
>>>>>
>>>>>
>>>> Where it falls down for me is the idea that the programmer should ever
>>>> feel guilt. I don't see how I could feel guilty for ending a program
>>>> when I know exactly how it will operate (what paths it will take); even
>>>> if I can't be completely sure of the specific decisions (due to some
>>>> randomisation or whatever), I don't see how I could ever think "No, you
>>>> can't harm X". But what I find very interesting is that even if I knew
>>>> *exactly* how a cat operated, I could never kill one.
>>>>
>>>>
>>> That's not being rational then, is it?
>>>
>>>
>>
>> Exactly my point! I'm trying to discover why I wouldn't be so rational
>> there. Would you? Do you think that knowing all there is to know about
>> a cat is impractical to the point of being impossible *forever*, or do
>> you believe that once we do know, we will simply "end" them freely,
>> when they get in our way? I think at some point we *will* know all
>> there is to know about them, and even then, we won't end them easily.
>> Why not? Is it the emotional projection that Brent suggests? Possibly.
>>
>>
>>
>>
>>>>> but I don't think making an artificial
>>>>> animal is as simple as you say.
>>>>>
>>>>>
>>>> So is it a complexity issue? That you only start to care about the
>>>> entity when it's significantly complex? But exactly how complex? Or is
>>>> it about the unknowingness: that the project is so large you only
>>>> work on a small part, and thus don't fully know its workings, and
>>>> that is where the guilt comes in?
>>>>
>>>>
>>> Obviously intelligence and the ability to have feelings and desires
>>> have something to do with complexity. It would be easy enough to write
>>> a computer program that pleads with you to do something, but you don't
>>> feel bad about disappointing it, because you know it lacks the full
>>> richness of human intelligence and consciousness.
>>>
>>>
>>
>> Indeed; so part of the question is: what level of complexity
>> constitutes this? Is it simply any level that we don't understand? Or
>> is there a level that we *can* understand that still makes us feel
>> that way? I think it's more complicated than just any level we don't
>> understand (because clearly, I "understand" that if I twist your arm,
>> it will hurt you, and I know exactly why, but I don't do it).
>>
>>
>
> I don't think you know exactly why, unless you've solved the problem of
> connecting qualia (pain) to physics (afferent nerve transmission) - but I
> agree that you know it heuristically.
>
> For my $0.02, I think that not understanding is significant because it
> leaves a lacuna which we tend to fill by projecting ourselves.  When people
> didn't understand atmospheric physics, they projected super-humans who
> produced the weather.  If you let some Afghan peasants interact with a
> fairly simple AI program, such as those used in the Loebner competition,
> they might well conclude you had created an artificial person, even though
> it wouldn't fool anyone computer literate.
>
> But even for an AI that we could in principle understand, if it is complex
> enough and acts enough like an animal, I think we would feel ethical
> concerns for it.  I think a more difficult case is an intelligence so alien
> to us that we can't project our feelings onto its behavior.  Stanislaw Lem
> has written stories on this theme: "Solaris", "His Master's Voice", "Return
> from the Stars", "Fiasco".
>
> There doesn't seem to be much recognition of this possibility on this list.
> There's generally an implicit assumption that we know what consciousness is,
> we have it, and that's the only possible kind of consciousness.  All OMs
> (observer moments) are human OMs.  I think that's one interesting thing
> about Bruno's theory; it is
> definite enough (if I understand it) that it could elucidate different kinds
> of consciousness.  For example, I think Searle's Chinese room is conscious -
> but in a different way than we are.
>

I'll have to look into these things, but I do agree with you in general; I
don't think ours is the only type of consciousness at all. I do find the
point about incomplete understanding interesting, though, because it
suggests that a "god" should actually not particularly care what happens to
us, since to them it's all predictable. (And obviously, the idea of moral
obligations to computer programs is interesting in its own right.)
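
To make concrete what I claimed earlier about a "pet" being arguably
trivial to model, here's the sort of thing I have in mind, as a rough
Python sketch (every name and number below is a placeholder of my own
invention, not a real design): an agent whose entire "mind" is three
hand-coded drives, so every path it can take is inspectable,
randomisation aside.

import random

class World:
    """A toy environment; all the numbers are arbitrary placeholders."""
    def __init__(self):
        self.food = [3, 3, 3]   # parcels of energy lying around
        self.mates = True       # a mate is always available in this toy world
        self.offspring = 0

class Pet:
    """An agent whose entire 'mind' is three hand-coded drives."""
    def __init__(self):
        self.energy = 10    # the 'desire to live' is keeping this above zero
        self.boredom = 0    # the 'entertain itself' drive keeps this low

    def step(self, world):
        self.energy -= 1
        self.boredom += 1
        if self.energy < 5 and world.food:
            self.energy += world.food.pop()          # eat: self-preservation
        elif world.mates and random.random() < 0.5:
            world.offspring += 1                     # reproduce: 'find mates'
        else:
            self.boredom = 0                         # play: self-entertainment

world, pet = World(), Pet()
for _ in range(20):
    pet.step(world)
print(pet.energy, world.offspring)   # every branch above is fully inspectable

Nobody would feel the slightest guilt hitting Ctrl-C on that, which is
exactly the puzzle: it's not obvious what would have to be added before
someone would.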

> Brent
>


-- 
silky
 http://www.mirios.com.au/
 http://island.mirios.com.au/t/rigby+random+20

REGIMENTATION-passe ENCLOSURE. SPEECHLESSNESS.