David Nyman writes:

> > They're not just simulating us, are they? They might have just slapped
> > together a virtual universe in an idle moment to see how it turns out.
> > Maybe they're more interested in star formation, or bacteria or something.
> > Is an E. coli in your gut justified in thinking God made the universe,
> > including human guts, just for its benefit?
> Stathis
> I see what you mean, of course. However, it's not really what I was
> trying to elicit by my original post. If I were to try to justify my
> actions to you in the sort of way you describe above, I don't think
> you'd be very accepting of this, nor would much of the rest of society.
> I don't mean to say that there isn't a great deal of hypocrisy and
> deviation from ethical conduct in the real world, but unless one is
> prepared to discard the project of working together to make things
> better rather than worse, I believe that we should take ethical
> dialogue seriously. My sense is that much more advanced civilisations
> would have developed in this area too, not just technologically - for
> one thing, they have presumably found ways to live in harmony and not
> self-destruct.  So at the least these issues would have meaning for
> them.
> That's why I feel that your dismissal of the issues isn't very
> illuminating. BTW, I don't intend this as a complaint, I'm just
> clarifying what I had in mind in my original questions - that it would
> be interesting to explore the ethical dimensions of possible simulators
> and their simulations. I think you're saying that we can't know and
> shouldn't care, which I don't find very interesting.
> As a challenge to your view, might I suggest that in your example re
> the E. coli - if we knew that the E. coli was conscious and had
> feelings, we might be more concerned about it. Do you think it's a
> reasonable assumption that technologists capable enough to include us
> in their simulation, regardless of their 'ultimate purpose', would a)
> not know we had consciousness and feelings, or b) not care, and if so,
> on what justification? Or is this simply unfathomable? I'm not asking
> rhetorically, I'm really interested.
> David

I am proposing, entirely seriously, that it would only be by incredible luck 
that entities vastly superior to us, or even just vastly different, would 
be able to empathise with us on any level, or share our ethical standards 
or any of our other cultural attributes. Do you *really* think that if we 
somehow discovered E. coli had a rudimentary consciousness we would 
behave any differently towards them? Or even if we discovered that each 
E. coli contained an entire universe full of zillions of intelligent beings, 
so that we commit genocide on a scale beyond comprehension every time 
we boil water? I think we would all just say "too bad, we have to look after 
our own interests first".

Sentience and intelligence are *not* synonymous with what we think of as 
ethical behaviour. There is no logical contradiction in an intelligent being 
who, for example, does not feel physical or psychological pain, or who does but 
devotes his life to wiping out every other sentient being that might compete 
with him. Perhaps we could expect that if we were in a simulation our 
creators might empathise with us if we had been deliberately made in their 
image, but there is no evidence that this is so, any more than there is 
evidence for an omnipotent, personal God. The universe was set in motion 
with the fundamental laws of physics (or whatever), and humans are just a 
tiny and insignificant part of the result, here and gone in an eyeblink. There 
isn't even any evidence that human-level intelligence is an especially useful 
evolutionary strategy.

Stathis Papaioannou
You received this message because you are subscribed to the Google Groups 
"Everything List" group.