Re: [agi] Trouble implementing my AGI Algorithm

2007-05-05 Thread Mark Waser
they are not the quickest path to AGI.  I believe that having a design 
which has a number of well-thought-through restrictions with designed-in 
obsolescence (so that the AGI can easily become more generalized after a 
working foundation is built) is the most effective route to AGI.


And before I get hammered -- no, this is *NOT* equivalent to a belief that 
narrow AI will eventually grow into AGI.  Narrow AI applications all have far 
too many *required*, unremovable restrictions built into them, without which 
they would collapse.  This is more like training wheels on a bicycle or the 
scaffolding used to construct a building.


- Original Message - 
From: Mark Waser [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Saturday, May 05, 2007 12:21 PM
Subject: Re: [agi] Trouble implementing my AGI Algorithm


I do not believe that the algorithm must be more complex. The more 
complex the algorithm, the more ad hoc it is. Complex algorithms are not 
able to perform generalized tasks. I believe the reason n-digit addition 
failed is that there is no vision system, NOT because the 
algorithm is too simple.


   Fundamentally, there is always a trade-off of flexibility/freedom 
vs. complexity/control vs. speed.  The real question is what trade-off 
values will work and quickly allow you to get to a system where you can 
relax some of your initial restrictions.  My personal intuition/opinion 
(which I can't prove) is that many (if not the majority) of the people on this 
list are trying for solutions that are *too* general.  I believe that 
these too-general solutions can (probably) work eventually (given enough 
computing power) but they are not the quickest path to AGI.  I believe 
that having a design which has a number of well-thought-through 
restrictions with designed-in obsolescence (so that the AGI can easily 
become more generalized after a working foundation is built) is the most 
effective route to AGI.  Of course, I could also be seriously wrong and 
find it impossible to remove a restriction that then prevents AGI -- but 
that's the route I'm taking. :-)


I know that the database has to remember pain and pleasure for stimuli. 
But I have difficulty making a fuzzy database representation, even for 
some subfields.


   Fuzziness can mean different things to different people, and the best 
forms of fuzziness are extremely hard to design; they most often suffer 
*seriously* from the unconscious assumptions of their creator.  I'm afraid 
that you're going to have to give far more detail before we'll have a clue 
what you're asking.



Re: [agi] Trouble implementing my AGI Algorithm

2007-05-04 Thread Matt Mahoney
--- a [EMAIL PROTECTED] wrote:

 Help me with the algorithm. Thank you

Dear a for anonymous (are you related to Ben?),

Before you worry about whether an AGI should be friendly or selfish or
religious, first you have to solve some lower level problems in language,
vision, hearing, navigation, etc.  You might make some progress in each field
but eventually you will run into the problem that you can't fully solve any of
the problems without solving all of them.  For example, images and sound
contain writing and speech, so you need to solve language.  Then, in order to
communicate effectively with a machine, it must have a world model similar to
yours, and a lot of this knowledge comes from the other senses.

After you have done that, then the next problem is that you are not building a
human.  You are building a slave.  Its sole purpose is to be useful to humans.
 A human body is not necessarily the best form for serving this purpose.  You
might build a robot with 4 arms and wheels for legs and sonar instead of
vision.  Or it might not have a body at all, or maybe thousands of insect
sized robots controlled as one.  The problem is that this creature will have a
world model that is nothing like yours, and that will make communication
difficult.  With currently available computers we cope with this problem by
inventing new terminology or by using existing words in new ways.  For
example, we talk about an operating system process as running or sleeping even
though it has no legs and does not dream.  Then there are other mental states,
like running in privileged mode, that have no equivalent in humans.

In humans, selfishness and friendliness and religion are secondary goals to
our main goal, which, like that of all species, is to propagate our DNA.  For example,
religion achieves this goal by making taboo any form of sex that does not
contribute to making children.  Therefore, it is inappropriate to program
religion into an AGI whose goal is not reproduction, but to serve humans.  In
your AGI design, you need to choose an appropriate set of emotions and mental
states, inventing new ones as needed.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [agi] Trouble implementing my AGI Algorithm

2007-05-04 Thread a
I do not believe that the algorithm must be more complex. The more complex the 
algorithm, the more ad hoc it is. Complex algorithms are not able to perform 
generalized tasks. I believe the reason n-digit addition failed is that 
there is no vision system, NOT because the algorithm is too simple. Because the 
algorithm searches the database recursively, I believe that my simple algorithm 
can perform any computation (trained by operant conditioning). The failure on 
n-digit addition occurred because there are no eyes that can move to concentrate 
on each digit.
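
A minimal sketch of the moving-eye idea, assuming attention covers only one 
digit column at a time; the per-column step is the only piece that has to be 
learned, and the rest is just moving the eye. The function name and the 
example numbers are made up for illustration, not taken from the algorithm 
described here.

    # Sketch: n-digit addition as one learned per-column step, repeated while
    # a moving "eye" (attention pointer) scans the columns right to left.
    def add_with_moving_eye(a: str, b: str) -> str:
        width = max(len(a), len(b))
        a, b = a.zfill(width), b.zfill(width)
        carry, out = 0, []
        for pos in range(width - 1, -1, -1):            # the eye moves leftward
            column = int(a[pos]) + int(b[pos]) + carry  # the learned 1-digit step
            out.append(str(column % 10))
            carry = column // 10
        if carry:
            out.append(str(carry))
        return "".join(reversed(out))

    print(add_with_moving_eye("4821", "979"))           # -> 5800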

The database is remarkably similar to the human brain. It can learn easily by 
remembering only the difference between an external stimulus and a similar 
stimulus already stored in the database. Therefore, the algorithm compresses the 
learned knowledge efficiently. Pattern recognition and abstract reasoning are 
also easy because of the incremental learning. 
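
A minimal sketch of one way to read "remembering only the difference": each 
new stimulus is stored as a delta against the most similar stimulus already in 
the database. The class and method names are assumptions for illustration, not 
the actual representation used here.

    # Sketch: store each new stimulus only as a difference (delta) against the
    # most similar stimulus already in the database -- incremental learning.
    def similarity(a, b):
        return len(a & b) / len(a | b) if a | b else 1.0   # set overlap

    class DeltaMemory:
        def __init__(self):
            self.entries = []          # (base_index or None, added, removed)

        def reconstruct(self, i):
            base, added, removed = self.entries[i]
            stimulus = set() if base is None else self.reconstruct(base)
            return (stimulus | added) - removed

        def learn(self, stimulus):
            # Find the most similar stored stimulus; keep only the difference.
            best, best_sim = None, 0.0
            for i in range(len(self.entries)):
                sim = similarity(stimulus, self.reconstruct(i))
                if sim > best_sim:
                    best, best_sim = i, sim
            base = self.reconstruct(best) if best is not None else set()
            self.entries.append((best, stimulus - base, base - stimulus))
            return len(self.entries) - 1

    mem = DeltaMemory()
    mem.learn({"red", "round", "small"})
    i = mem.learn({"red", "round", "large"})   # stored only as +large / -small
    print(mem.reconstruct(i))                  # {'red', 'round', 'large'}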


I am having trouble with the fuzzy database representation, so it's best to 
test the algorithm in a specific subfield (like n-digit addition) and then 
generalize it to real-world tasks. 

In general, my algorithm behaves like the brain of an animal. Animals learn by 
operant conditioning, and it is also difficult to teach them multiple-digit 
addition.

I believe that the environment must be fuzzy in order for the operant 
conditioning method to work.

I know that the database has to remember pain and pleasure for stimuli. But I 
have difficulty making a fuzzy database representation, even for some 
subfields.
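
One possible reading of a fuzzy database representation, sketched under the 
assumption that stimuli are sets of features: look stimuli up by graded 
similarity instead of exact match, and estimate pleasure or pain from the 
stored rewards weighted by the degree of match. All names and numbers are 
invented for illustration.

    # Sketch: a fuzzy lookup over stored (stimulus, reward) pairs.  Every stored
    # stimulus contributes to the estimate in proportion to its similarity
    # (its membership degree); reward > 0 stands for pleasure, < 0 for pain.
    def membership(a, b):
        return len(a & b) / len(a | b) if a | b else 1.0

    class FuzzyRewardDB:
        def __init__(self):
            self.records = []                       # (stimulus, reward) pairs

        def remember(self, stimulus, reward):
            self.records.append((frozenset(stimulus), reward))

        def expected_reward(self, stimulus):
            weights = [(membership(stimulus, s), r) for s, r in self.records]
            total = sum(w for w, _ in weights)
            return sum(w * r for w, r in weights) / total if total else 0.0

    db = FuzzyRewardDB()
    db.remember({"hot", "stove"}, -1.0)             # pain
    db.remember({"sweet", "food"}, +0.8)            # pleasure
    print(db.expected_reward({"hot", "pan"}))       # -1.0: only the pain record matches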

- Original Message 
From: Mark Waser [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Thursday, May 3, 2007 5:06:33 PM
Subject: Re: [agi] Trouble implementing my AGI Algorithm

Interesting e-mail.  I agree with most of your philosophy but believe that 
the algorithm you are requesting is far, far more complex than you realize.

Is there any particular reason why you're remaining anonymous?


[agi] Trouble implementing my AGI Algorithm

2007-05-03 Thread a
Hello,

I have trouble implementing my AGI algorithm:

The paragraphs below might sound ridiculous because they are my original ideas.

We are all motivated by selfish thoughts. We help others so that others can help 
us back. We help others to cope with our pleasurable chemical addiction. We help 
others because helpfulness is encoded in our genetic makeup.

We experience pain. Pain is there to help us defend against damage. When we touch 
something hot we can draw back, but we have the free will not to react to it. 
I believe there is no free will.

I will explain what I mean. Assume that pain is a constraint. But this 
constraint is not absolute. Other thoughts can override the constraint. For 
example, when you help an animal that is being eaten by a monster, you can fight 
the monster to save the animal's life. But you will experience pain in the 
fight. Therefore pain is not a constraint. Your goal to save the animal's life 
overrides the pain constraint. (Your goal to save the animal's life is also 
motivated by selfish actions.) Therefore, pain is not a constraint. But if there 
is no goal that overrides the pain constraint, you will do anything to avoid the 
pain. We have proven there is no free will -- whether we choose to react to pain 
depends on our goals and our knowledge.

Therefore, implementing pain as a constraint in friendly AI will not help many 
lives. Our brains are doing things to get as much pleasure as possible. We get a 
chemical addiction to saving that animal. That pleasure is more pleasant than 
avoiding the pain by not fighting. We trust ourselves. We can gamble pain for 
future pleasure. Therefore, I believe that emotion can be implemented by an 
ordinary computer. Emotion can be implemented by an algorithm that searches for 
the highest pleasure. The algorithm must also have the ability to gamble pain 
for pleasure (by applying goals or knowledge).

There is no right or wrong. We kill insects all the time. But we usually do not 
sympathize with them. This is because our religion says that bugs are not as 
important as other animals. It's a byproduct of natural selection. We have to 
hunt animals to survive.
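
A minimal sketch of the pleasure-searching idea above, with the ability to 
gamble pain for pleasure, treated as expected-value selection over possible 
outcomes; the action names and numbers are invented for illustration and are 
not a claim about any particular design.

    # Sketch: pick the action with the highest expected pleasure, where certain
    # near-term pain can be "gambled" against a larger expected later pleasure.
    def expected_value(outcomes):
        # outcomes: (probability, pleasure) pairs, with pain as negative pleasure
        return sum(p * v for p, v in outcomes)

    def choose_action(actions):
        return max(actions, key=lambda name: expected_value(actions[name]))

    actions = {
        "walk away":         [(1.0, 0.0)],                  # no pain, no pleasure
        "fight the monster": [(1.0, -0.4), (0.7, +1.0)],    # certain pain, likely joy
    }
    print(choose_action(actions))   # "fight the monster": -0.4 + 0.7*1.0 = +0.3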

Without religion, we would brood over this question: Is it better to save a 
human by sacrificing 1000 insects
or vice versa?

Therefore we assume that religion is natural. Religion helps us survive. Some 
religions help us believe that there is an afterlife and reincarnation. Because 
we believe these things, we do not fear death. We are not afraid to sacrifice 
ourselves for others. For example, we will not be afraid to participate in wars 
and spread our religion. Religion is a virus. Most of the world is religious 
because of that.

Therefore, some religions are dangerous. But religion is essential for our 
daily survival. Some religious
thoughts are encoded in our genes.

It's a process of natural selection. Kin selection and group selection are 
examples. Returning to the main question: is selfishness essential for friendly 
AI? Selfishness is related to laziness. Lazy people do not like to sacrifice hard 
work for pleasure (or they do not enjoy pleasure). They do not like to sacrifice 
their energy for pleasure. By contrast, an AI can use as much energy as it wants. 
It does not get tired. Pain is using energy. But what about people's feelings? 
A friendly AI will get pleasure if it sees people happy. For example, many people 
are afraid of AI, even friendly AI. The friendly AI computer will self-destruct 
so that these people will not worry about AI. The AI computer has to maintain at 
least a little sense of its own superiority to prevent self-destruction. It's a 
natural instinct.

But the last paragraph is contradictory. Will the computer self-destruct to get 
pleasure? We will guess: a selfish friendly AI might not. An unselfish friendly 
AI might (depending on knowledge and circumstances).

This is where religion takes over. If the selfish friendly AI believes in an 
afterlife, it might self-destruct under some circumstances. The selfish friendly 
AI might experience pleasure during self-destruction. The selfish friendly AI 
might otherwise (depending on religion) set a goal that it will experience 
pleasure after it has self-destructed.

However, the friendly AI will be smart enough to figure out, for example, that 
there is no such thing as an afterlife and that religion is false. What do we do 
about it? What do we do when it figures out that no organism is superior to any 
other?

Therefore, I believe that selfish AI might be less risky than unselfish AI. 
Unselfish AI might treat
everything equally; it might sacrifice humans to save animals.

To choose the safest route, we need an AI that behaves like a human. For 
example, if humans are motivated by selfish goals, then friendly AI has to be 
motivated by selfish goals. We need an AI that is taught by a top-down method 
rather than a bottom-up approach, as humans are.

How do we make the selfish friendly AI algorithm? We have an obvious 
requirement: lots of


Re: [agi] Trouble implementing my AGI Algorithm

2007-05-03 Thread Mark Waser
Interesting e-mail.  I agree with most of your philosophy but believe that 
the algorithm you are requesting is far, far more complex than you realize.


Is there any particular reason why you're remaining anonymous?

- Original Message - 
From: a [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Thursday, May 03, 2007 4:57 PM
Subject: [agi] Trouble implementing my AGI Algorithm




Re: [agi] Trouble implementing my AGI Algorithm

2007-05-03 Thread Vladimir Nesov
Hello everyone. I'm completely new to this field, still at the idea-debugging
stage. My level of articulation is about that of the previous speaker, so I'll
hold back the elaborate picture for now :).

I will try to participate in the discussion from time to time to contribute
the biases of my approach.


Friday, May 4, 2007, 12:57:54 AM, a wrote:

a Hello,

a I have trouble implementing my AGI algorithm:

I think the main problem is the world model, even a 'static' one. Behaviour is
something to be derived from that model (even if the request for behaviour
selection is the main parameter defining model construction). All those
benefits/actions still need to be assigned to objects in certain states.

The religion thing you refer to is just a heuristic not grounded in underlying
principles. That is inevitable when describing a complex system, where you have
to operate at abstract levels.

Implementing formal procedures (as part of the system's knowledge) seems
useless. When you model something through a formal description, there is
always a semantic component external to that formal description, which
defines its design. Otherwise you have described it completely, which isn't
an interesting case. So the system can't use formalisms unless it already
understands the things it will apply them to.

The problem with knowledge is to make the knowledge base and the virtual
scenes converge on consistency, which isn't covered by a blind search for a goal.


-- 
 Vladimir Nesov  mailto:[EMAIL PROTECTED]



Re: [agi] Trouble implementing my AGI Algorithm

2007-05-03 Thread Jiri Jelinek

Make sure you don't spend too much time pondering about x, y, z before
solving a, b, c. The x, y, z may later look different to you. Work out the
knowledge representation first.

Regards,
Jiri Jelinek

On 5/3/07, a [EMAIL PROTECTED] wrote:

