Robert has some good, sound, fundamental points here.

Rand seems not to understand that a psychopath has no value with
respect to humanity; it is a mutation that occurs, but in a social
community it has no value, in the sense that human values have to be
both selfish and altruistic to have 'value'. Value is a kind of
focusing, and that focus will change form slightly as a person is
confronted with issues, but it will be governed by the previous
success of experience in value. There is a way of recognizing when one
is in focus.
A machine is designed for a mission. It can perhaps be applied to a
number of missions.
But these are assigned by humans. Software is a kind of machine as
well, and might be applied to any number of missions. It could be
designed to do completely unrelated jobs.

Being born is the most selfish act for biological entities. Taking and
giving without annihilation becomes the currency of human values.
Finally, our death is totally altruistic, so we must be selfish about
our values, because they have value. We have survived some terrible
history so far. Birth to death; selfish to altruistic. There is
objectivity, but it does not exist for us as totally proper, because
it can mean we do not exist. We know it is there; we see it out there
in the night sky.

Will our machines take a place next to us, or be put away like all our
appliances? Are they an appliance? Or are they us? Slavery has made
men into appliances. We have institutions today that still demand that
we behave as appliances, appliances being more altruistic and less
selfish.
Certainly, if you relinquish selfishness completely, you will also
release your values and your value. Selfish design must be part of AI,
just as in software we see that a computer virus is possible. It can
be trouble and is looked upon as something that takes, but because it
exists, it allows some to give in response. The struggle for focus,
that is, to survive with some dignity, perhaps goes on in everything
we do. Why not apply this to AI? Human consciousness is based on our
selfish needs, which are part and parcel of sustaining our existence
as biological entities.

What would stop a runaway AI existing on the nanoscale from using
matter itself, as it exists around us, and the minuscule collisions of
energy released at those levels, as a source of energy? It would not
have to manifest itself to us at all. There would be no territory or
property it could not inhabit, because it could come to exist in
everything, including our own bodies and minds.

Boundaries of some kind. Putting the intelligence in a grain of sand
means boundaries by its very description. There are limits. What is
causing the limits in this format? Values?

Objectivism is just another mask applied for psychopathic expression.
It has no value in humanity. There is no loss of understanding or
comprehension in including value in our explorations. It is natural
for us to do so. It is who we are. Science will not suffer from the
presence of value.

What I have seen of some current philosophers is that they are not
doing their job. They are bought and paid for, so to speak. They are
supposed to be giving direction to science, which they have not done.
Because of the current 'awe' of it all? Or for money?

Men are subjective from the start until death. We cannot be objective
without dying.

We have been slaves historically, and still are. Could it be that AI
is our desire to be free?
Free of our parasitic institutions, free of our biology?

I think it could be. I think that it is possible that there is in us
an old hate for this inherited condition of slavery. "More work, more
production". The drums pound. To what end?

Look at galaxies with their black hole centers sucking in like drains
in whirling tubs of stars. Do you see more work there? More
production? Who's in control?

Could it be that a black hole is somebody's AI that got away? Where
selfishness and altruism inhabit the same space, there is no value. We
need value in our AI. On and off switches. All our vehicles have
brakes; why not AI? AI without brakes could have real consequences
that real people could suffer for. It goes back to: because you can
does not mean you should. I suspect that AI will police itself,
because there are those who will code for human survival.
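The on/off switches and brakes mentioned above can be sketched in
software as a dead man's switch: the system keeps working only while a
human operator keeps renewing a permission lease, and it halts by
default when attention lapses. This is a minimal hypothetical sketch;
the class and all names in it are invented for illustration, not any
real AI-safety API.

```python
import time

class DeadMansSwitch:
    """Hypothetical 'brake' for an autonomous process: work is allowed
    only while a human operator keeps renewing a permission lease."""

    def __init__(self, lease_seconds):
        self.lease_seconds = lease_seconds
        self.expires_at = 0.0  # starts in the braked (engaged) state

    def renew(self):
        # The operator presses the pedal: extend the lease from now.
        self.expires_at = time.monotonic() + self.lease_seconds

    def engaged(self):
        # The brake engages automatically once the lease lapses.
        return time.monotonic() >= self.expires_at

switch = DeadMansSwitch(lease_seconds=0.05)
assert switch.engaged()        # safe by default: braked until renewed
switch.renew()
assert not switch.engaged()    # operator attention present: may run
time.sleep(0.06)
assert switch.engaged()        # attention lapsed: brake re-engages
```

The design choice here is that the safe state is the default: the
switch starts engaged, and it takes continual human action to keep it
released, rather than human action to stop a runaway process.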

I wonder if some of this hype is really meant to sustain attention.
Without attention, money does not flow. Could it be that too? It could
be.


Pat McKown

On 7/26/07, Robert Wensman <[EMAIL PROTECTED]> wrote:
>  What worries me is that the founder of this company subscribes to the
> philosophy of Objectivism, and the implications this might have for the
> company's prospects of achieving friendly AI. I do not know about the rest
> of their team, but some of them use the word "rational" a lot, which could
> be a hint.
>
>
>
> I am well aware that Ayn Rand, the founder of Objectivism, uses slightly
> non-standard meanings for words like "selfishness" and "altruism", but
> her main point is that altruism is the source of all evil in the world, and
> selfishness ought to be the main virtue of all mankind. Instead of altruism
> she often also uses the word "selflessness" which better explains her
> seemingly odd position. What she essentially means is that all evil of the
> world stems from people who "give up their values, and their self" and
> thereby become mindless evildoers that respect others as little as they
> respect themselves. While this psychological statement in isolation could be
> worth noting, and might help understand some collective madness, especially
> from the last century, I still feel her philosophy is dangerous because she
> mixes up her very specific concept of "selflessness" with the
> commonly understood concept of altruism, in the sense of valuing the well
> being and happiness of others. Is this mix-up accidental or intended? In her
> novel The Fountainhead you even get the impression that she doesn't think it
> is possible to combine altruism with creativity and originality, as all
> "altruistic" characters of her book are incompetent copycats who just
> imitate others.
>
>
>
> Her view of the world also seems to completely ignore another category of
> potential evil-doers: Selfish people who just do not see any problem with
> using whatever means they see fit, including violence, to achieve their
> goals. People who just do not see "any problem" in killing or
> torturing others. Why does she ignore this group of people? Because she does
> not think they exist?
>
>
>
> My personal opinion is that Objectivism is a case of what could be called
> "the werewolf fallacy". For example, I could make a case for the following
> philosophy: "Werewolves as described in literature would be bad for
> humanity, and if we encounter werewolves, we should try to fight them with
> whatever means we see fit!". This statement is in itself completely true and
> coherent, and it would be possible to write books on the subject that could
> seem to make sense. The only problem is of course that there are no
> werewolves, and there are other much more important things to do than to go
> around preparing to fight werewolves! Similarly I do not think that all
> these "selfless people" who Ayn Rand describe exist in any large numbers, or
> at least they are certainly not the main source of evil in the world.
>
>
>
> How Objectivism could feel like "home" I cannot understand personally. If a
> person is less capable of understanding other people, I guess it could make
> some sense. I guess social life could be hard for such a person; they would
> often hurt other people by mistake, make others annoyed or angry and
> frequently bring enemies upon themselves. Ayn Rand gives them a very
> comfortable answer, namely that it is OK, even virtuous, to not understand
> others as long as you are not physically aggressive. An agenda for peaceful
> psychopathy if you like. So far so good, I don't expect everyone to be
> empathetic, and to motivate the need for respect rationally by the benefits
> of cooperation seems like a reasonable trade-off. But Ayn Rand goes a step
> too far when she outright attacks altruism and people who value the well
> being of others! She definitely crosses a line there!
>
>
>
> As a general intelligence theoretician I would also say Ayn Rand's notion of
> "selflessness" is outright bizarre if interpreted literally. An intelligent
> being cannot choose to "give up its values", since all its choices are
> already based upon them. Her conclusions are therefore confusing.
>
>
>
> So because this philosophy is controversial, it raises some interesting
> questions about Adaptive AI's plans for friendly AI. *What values an
> objectivist would give to an AGI seems like a complete paradox to me.* Would
> he make an AGI that is only obedient to its master and creator, or would he
> make an AGI system that cares only about protecting and sustaining its
> own life? In the first case, the AGI would truly become a
> selfless, and therefore evil, soul in Ayn Rand's very meaning, an evil soul
> that is also super intelligent.
>
>
>
> On the other hand, I cannot understand what selfish interest the objectivist
> AGI designer could find in creating a selfish super intelligent AGI system
> that would likely become a superior competitor. Maybe such an AGI system
> would decide, much like the fictional Skynet, that humans are the most
> imminent threat to its survival, and make us its enemy?
>
>
>
> I bet a strong enough AGI system could kill us even without the use
> of offensive violence in the sense Ayn Rand uses the word. I guess it just
> needs to obtain exclusive legal ownership of all the land that we need to
> live on, all the food we need to eat, and all the air we need to
> breathe. Then it could just kill us in self-defence because we trespass on
> its property. I know even Ayn Rand sees no moral problem in using defensive
> violence to defend material property that is being stolen.
>
>
>
> Well, let me just say that I would be concerned if someone creates a selfish
> super intelligent AGI system that does not value the well being of me and
> the rest of us humans, except for when it can see benefits for its own
> survival. Out of fear for my own life, and the life of my descendants, I
> would not support your AGI initiative! Even a sentimental and altruistic
> person like me has that much sense of self-defence! :-)
>
>
>
> That said, I think Adaptive AI's definition of general intelligence seems
> pretty reasonable, and their plans for development seem well thought out. I
> also found some thoughts on evolution and AGI noteworthy. But my feelings
> are mixed about their strength in numbers and the hopes for progress it
> gives. To me altruistic AGI just seems a lot safer than selfish AGI!
>
>
>
> /Robert Wensman
>

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&id_secret=28656017-28bbd1
