> > Rational self-interest does not stop us from knocking down forests to
> > build cities, in spite of all the ants and squirrels that are rendered
> > homeless or dead as a consequence.
>
> My point being that maybe it should. Our destruction of the environment
> can be seen as not just ethically suspect, but as an irrational
> destruction of a living system that we depend on for far more than
> manufacturing raw materials.
This is a subtle issue, of course... It's environmentally destructive modern culture that has brought us to the point where we can send e-mails and build AGIs. And yet, surely, it should have been possible to get us to this point technologically without wreaking so much destruction. At least my (optimistic) intuition says so.

> Perhaps what I am trying to say is that what AGI needs to aim at is not
> so much an independent entity, but a symbiotic partnership between carbon
> and silicon intelligences. This gets around the problem of just how you
> finesse the process of "internalizing" vs. "hard-wiring" benevolence in
> our future AGI masters.

This is a very deep point, which gets at the "global brain" idea as discussed extensively in my book "Creating Internet Intelligence" and in other works cited there...

However, it brings up serious issues of control. If we are incorporated in some sense into a superintelligent superorganism, how much freedom do we lose? Or do we not lose any freedom at all, in the same sense that a liver cell is just as "free" as an amoeba?

These issues were extensively discussed on the global brain discussion group e-mail list a few years back.

-- Ben Goertzel

-------
To unsubscribe, change your address, or temporarily deactivate your subscription, please go to http://v2.listbox.com/member/?[EMAIL PROTECTED]
