On 10/27/07, Stefan Pernar [EMAIL PROTECTED] wrote:
Thanks Ben.
As the foundation of my AI friendliness theory I tried to figure out why we
believe what is good or bad, and came to the conclusion that humans, animals
and even plants have evolved to perceive as good what is encoded into their
This is from a leading political columnist with no apparent link to
transhumanism:
http://www.nytimes.com/2007/10/26/opinion/26brooks.html
It's funny that he has reached conclusions like "I have melded my mind with
the heavens, communed with the universal consciousness" with respect to his
Matt Mahoney wrote:
--- Richard Loosemore [EMAIL PROTECTED] wrote:
Matt Mahoney wrote:
Suppose that the collective memories of all the humans make up only one
billionth of your total memory, like one second of memory out of your
human lifetime. Would it make much difference if it was erased?
Interesting how people see in the same data what they are predisposed to
see. I ride the tongue-in-cheek mood of the essay until his last three
paragraphs, then focus on the ironic conclusions, like where he equates
narrowness to individualism and subtly bemoans the attack on autonomy.
--- Richard Loosemore [EMAIL PROTECTED] wrote:
My assumption is friendly AI under the CEV model. Currently, FAI is
unsolved.
CEV only defines the problem of friendliness, not a solution. As I
understand it, CEV defines AI as friendly if on average it gives humans
what they want in the
On 10/28/07, Benjamin Goertzel [EMAIL PROTECTED] wrote:
On 10/27/07, Stefan Pernar [EMAIL PROTECTED] wrote:
Thanks Ben.
As the foundation of my AI friendliness theory I tried to figure out why we
believe what is good or bad, and came to the conclusion that humans, animals
and even plants
I'd be interested in reading a thoughtful point of view on the topics
below, written for someone with a decent grasp of philosophy and science
(like myself) but without subject matter expertise. I'd be less
interested in an anthology of what others say and more interested in
reading the