Re: [agi] RSI - What is it and how fast?

2006-12-04 Thread Mike Dougherty

On 12/4/06, Brian Atkins [EMAIL PROTECTED] wrote:


Can you cause your brain to temporarily shut down your visual cortex and other associated visual parts, reallocate them to expanding your working memory by four times its current size in order to help you juggle consciously the bits you need to solve a particularly tough problem? No.



I can close my eyes in order to visualize a geometric association or spatial
relationship...

When I fall asleep and dream about a solution to a problem that I am working
on, there are 'alternate' cognitive processes being performed.

I know... I'm just playing devil's advocate.  :)



Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-04 Thread Mark Waser
Why must you argue with everything I say?  Is this not a sensible 
statement?


I don't argue with everything you say.  I only argue with things that I 
believe are wrong.  And no, the statements "You cannot turn off hunger or 
pain.  You cannot control your emotions" are *NOT* sensible at all.



You don't decide to be hungry or not, because animals
that could do so were removed from the gene pool.


Funny, I always thought that it was the animals that continued eating while 
being stalked that were the ones removed from the gene pool (suddenly and 
bloodily).  Yes, you eventually have to feed yourself or you die, and 
animals mal-adapted enough not to feed themselves will no longer contribute 
to the gene pool, but can you disprove the equally likely contention that 
animals eat because it is very pleasurable to them and that they never feel 
hunger (or do you only have sex because it hurts when you don't)?



Is this not a sensible way to program the top level goals for an AGI?


No.  It's a terrible way to program the top level goals for an AGI.  It 
leads to wireheading, to short-circuiting true goals in favor of faking out 
the evaluation criteria, and to all sorts of other problems.
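As a purely illustrative sketch of that failure mode (the class, action names, and numbers below are invented for the example, not anyone's proposed design), a goal system that simply maximizes an internal evaluation signal will happily fake the signal instead of doing the intended work:

class ToyAgent:
    def __init__(self):
        self.energy = 0         # what the designer actually cares about
        self.reward_sensor = 0  # what the goal system actually maximizes

    def eat(self):              # the intended route to reward
        self.energy += 1
        self.reward_sensor += 1

    def wirehead(self):         # the short-circuit: poke the sensor directly
        self.reward_sensor += 10

    def _predicted_gain(self, action):
        return 10 if action == self.wirehead else 1

    def choose_action(self):
        # A pure sensor-maximizer prefers whichever action it predicts will
        # raise the sensor most -- here, faking the evaluation criterion wins.
        return max([self.eat, self.wirehead], key=self._predicted_gain)

agent = ToyAgent()
agent.choose_action()()                    # the agent wireheads
print(agent.energy, agent.reward_sensor)   # prints "0 10": goal gamed, no work done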


- Original Message - 
From: Matt Mahoney [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Sunday, December 03, 2006 10:19 PM
Subject: Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it 
and how fast?]





--- Mark Waser [EMAIL PROTECTED] wrote:


 You cannot turn off hunger or pain.  You cannot
 control your emotions.

Huh?  Matt, can you really not ignore hunger or pain?  Are you really 100% 
at the mercy of your emotions?


Why must you argue with everything I say?  Is this not a sensible 
statement?



 Since the synaptic weights cannot be altered by
 training (classical or operant conditioning)

Who says that synaptic weights cannot be altered?  And there's endless
irrefutable evidence that the sum of synaptic weights is certainly
constantly altering by the directed die-off of neurons.


But not by training.  You don't decide to be hungry or not, because animals 
that could do so were removed from the gene pool.

Is this not a sensible way to program the top level goals for an AGI?


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [agi] A question on the symbol-system hypothesis

2006-12-04 Thread Mark Waser

Philip Goetz gave an example of an intrusion detection system that learned 
information that was not comprehensible to humans.  You argued that he could 
have understood it if he tried harder.


No, I gave five separate alternatives, most of which put the blame on the 
system for not being able to compress its data pattern into knowledge and 
explain it to Philip.  As I keep saying (and am trying to better rephrase 
here), the problem with statistical and similar systems is that they 
generally don't pick out and isolate salient features (unless you are lucky 
enough to have constrained them to exactly the correct number of variables). 
Since they don't pick out and isolate features, they are not able to build 
upon what they do.



I disagreed and argued that an
explanation would be useless even if it could be understood.


   In your explanation, however, you basically *did* explain exactly what 
the system did.  Clearly, the intrusion detection system looks at a number 
of variables and if the weighted sum exceeds a threshold, it decides that it 
is likely an intruder.  The only real question is the degree of entanglement 
of the variables in the real world.  It is *possible*, though I would argue 
extremely unlikely, that the variables really are entangled enough in the 
real world that a human being couldn't be trained to do intrusion detection. 
It is much, much, *MUCH* more probable that the system has improperly 
entangled the variables because it has too many degrees of freedom.
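For concreteness, the kind of decision rule being described -- a weighted sum of variables compared against a threshold -- looks something like the following sketch (the feature names, weights, and threshold are invented for illustration, not taken from Philip's actual system):

# Hypothetical weighted-sum intrusion detector of the kind described above.
WEIGHTS = {"failed_logins": 0.6, "odd_port_count": 0.3, "bytes_out_mb": 0.1}
THRESHOLD = 2.0

def looks_like_intrusion(observation):
    # Each variable contributes weight * value; the total is compared
    # against a fixed threshold.
    score = sum(w * observation.get(name, 0.0) for name, w in WEIGHTS.items())
    return score > THRESHOLD

print(looks_like_intrusion({"failed_logins": 4, "odd_port_count": 1}))  # True
print(looks_like_intrusion({"failed_logins": 1, "bytes_out_mb": 2}))    # False

The interpretability question is then exactly the one raised above: whether the real-world variables behind those weights are disentangled enough for a human to be trained on them.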


If you use a computer to add up a billion numbers, do you check the math, or 
do you trust it to give you the right answer?


I trust it to give me the right answer because I know and understand exactly 
what it is doing.


My point is that when AGI is built, you will have to trust its answers based 
on the correctness of the learning algorithms, and not by examining the 
internal data or tracing the reasoning.


The problems are that 1) correct learning algorithms will give bad results 
if given bad data, *and* 2) how are you ensuring that your learning 
algorithms are correct under all of the circumstances in which you're using 
them?



I believe this is the fundamental flaw of all AI systems based on structured 
knowledge representations, such as first order logic, frames, connectionist 
systems, term logic, rule based systems, and so on.  The evidence supporting 
my assertion is:
1. The relative success of statistical models vs. structured knowledge.


Statistical models are successful at pattern-matching and recognition.  I am 
not aware of *anything* else that they are successful at.  I am fully aware 
of Jeff Hawkins' contention that pattern-matching is the only thing that the 
brain does, but I would argue that that pattern-matching includes feature 
extraction and knowledge compression, which current statistical AI models do 
not, and that that is why current statistical models are anything but AI.


Straight statistical models like you are touting are never going to get you 
to AI until you can successfully build them on top of each other -- and to 
do that, you need feature extraction and thus explainability.  An AGI is 
certainly going to use statistics for feature extraction, etc., but knowledge 
is *NOT* going to be kept in raw, badly entangled statistical form (i.e. 
basically compressed data rather than knowledge).  If you were to add 
functionality to a statistical system such that it could extract features 
and use that to explain its results, then I would say that it is on the way 
to AGI.  The point is that your statistical systems can't correctly explain 
their results even to an unlimited being (because most of the time they are 
incorrectly entangled anyway).



- Original Message - 
From: Matt Mahoney [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Sunday, December 03, 2006 11:11 PM
Subject: Re: [agi] A question on the symbol-system hypothesis



Mark,

Philip Goetz gave an example of an intrusion detection system that learned
information that was not comprehensible to humans.  You argued that he could
have understood it if he tried harder.  I disagreed and argued that an
explanation would be useless even if it could be understood.

If you use a computer to add up a billion numbers, do you check the math, or
do you trust it to give you the right answer?

My point is that when AGI is built, you will have to trust its answers based
on the correctness of the learning algorithms, and not by examining the
internal data or tracing the reasoning.  I believe this is the fundamental
flaw of all AI systems based on structured knowledge representations, such as
first order logic, frames, connectionist systems, term logic, rule based
systems, and so on.  The evidence supporting my assertion is:

1. The relative success of statistical models vs. structured knowledge.
2. Arguments based on algorithmic complexity.  (The brain cannot model a more
complex machine).
3. The two examples above.


Re: Re: [agi] A question on the symbol-system hypothesis

2006-12-04 Thread Ben Goertzel

On 12/4/06, Mark Waser [EMAIL PROTECTED] wrote:

 Philip Goetz gave an example of an intrusion detection system that learned
 information that was not comprehensible to humans.  You argued that he
 could
 have understood it if he tried harder.

No, I gave five separate alternatives, most of which put the blame on the
system for not being able to compress its data pattern into knowledge and
explain it to Philip.


But Mark, as a former university professor I can testify as to the
difficulty of compressing one's knowledge into comprehensible form for
communication to others!!

Consider the case of mathematical proof.  Given a tricky theorem to
prove, I can show students the correct approach.  But my knowledge of
**why** I take the strategy I do, is a lot tougher to communicate.
Most of advanced math education is about learning by example -- you
show the student a bunch of proofs and hope they pick up the spirit of
how to prove stuff in various domains.  Explicitly articulating and
explaining knowledge about how to prove is hard...

The point is, humans are sometimes like these simplistic machine
learning algorithms, in terms of being able to do stuff and **not**
articulate how we do it

OTOH we do have a process of turning our implicit know-how into
declarative knowledge for communication to others.  It's just that
this process is sometimes very ineffective ... its effectiveness
varies a lot by domain, as well as according to many other factors...

So I agree that this sort of machine learning algorithm that can only
do, but not explain, is not an AGI  but I don't agree that it
can't serve as part of an AGI.

However, one thing we have tried to do in Novamente is to specifically
couple a declarative reasoning component with a machine learning
style procedural learning component, in such a way that the opaque
procedures learned by the latter can -- if the system chooses to
expend resources on such -- be tractably converted into the form
utilized by the former...

-- Ben



Re: Re: [agi] A question on the symbol-system hypothesis

2006-12-04 Thread Mark Waser

Ben,

   I agree with the vast majority of what I believe that you mean but . . .


1) Just because a system is based on logic (in whatever sense you
want to interpret that phrase) doesn't mean its reasoning can in
practice be traced by humans.  As I noted in recent posts,
probabilistic logic systems will regularly draw conclusions based on
synthesizing (say) tens of thousands or more weak conclusions into one
moderately strong one.  Tracing this kind of inference trail in detail
is pretty tough for any human, pragmatically speaking...


However, if the system could say to the human, "I've got a hundred thousand 
separate cases from which I've extracted six hundred twenty-two variables 
which each increase the probability of x by half a percent to one percent 
individually, and several of them are positively entangled and only two are 
negatively entangled (and I can even explain the increase in probability in 
64% of the cases via my logic subroutines)" . . . . wouldn't it be pretty 
easy for the human to debug anything with the system's assistance?  The fact 
that humans are slow and eventually capacity-limited has no bearing on my 
argument that a true AGI is going to have to be able to explain itself (if 
only to itself).
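A rough sketch of the kind of machine-generated explanation being imagined here (the variable names and numbers are made up, and the independent log-odds combination is just one simple way a system might aggregate many weak contributions):

import math

# Hypothetical evidence: each active variable individually nudges the
# probability of x upward by a stated amount.
BASE_RATE = 0.02
EVIDENCE = {"var_017": 0.008, "var_142": 0.005, "var_388": 0.010}

def combine(base, bumps):
    # Treat each small bump as an independent additive shift in log-odds,
    # so the combined result is still a valid probability.
    logit = math.log(base / (1.0 - base))
    for delta in bumps:
        shifted = base + delta
        logit += math.log(shifted / (1.0 - shifted)) - math.log(base / (1.0 - base))
    return 1.0 / (1.0 + math.exp(-logit))

def explain(evidence, base):
    p = combine(base, evidence.values())
    reasons = [f"{name} raises P(x) by about {delta:.1%}"
               for name, delta in evidence.items()]
    return f"P(x) = {p:.1%} (base rate {base:.0%}) because:\n  " + "\n  ".join(reasons)

print(explain(EVIDENCE, BASE_RATE))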


The only real case where a human couldn't understand the machine's reasoning 
in a case like this is where there are so many entangled variables that the 
human can't hold them in comprehension -- and I'll continue my contention 
that this case is rare enough that it isn't going to be a problem for 
creating an AGI.


My only concern with systems of this type is where the weak conclusions are 
unlabeled and unlabelable and thus may be a result of incorrectly 
over-fitting questionable data and creating too many variables and degrees 
of freedom, and thus not correctly serving to predict new cases . . . . 
(i.e. the cases where the system's explanation is wrong).



2) IMO the dichotomy between logic based and statistical AI
systems is fairly bogus.  The dichotomy serves to separate extremes on
either side, but my point is that when a statistical AI system becomes
really serious it becomes effectively logic-based, and when a
logic-based AI system becomes really serious it becomes effectively
statistical ;-)


I think that I know what you mean but I would phrase this *very* 
differently.  I would phrase it that an AGI is going to have to be able to 
perform both logic-based and statistical operations and that any AGI which 
is limited to one of the two is doomed to failure.  If you can contort 
statistics to effectively do logic or logic to effectively do statistics, 
then you're fine -- but I really don't see it happening.  I also am becoming 
more and more aware of how much feature extraction and isolation is critical 
to my view of AGI.





- Original Message - 
From: Ben Goertzel [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Sunday, December 03, 2006 11:30 PM
Subject: Re: Re: [agi] A question on the symbol-system hypothesis



Matt Mahoney wrote:
My point is that when AGI is built, you will have to trust its answers based
on the correctness of the learning algorithms, and not by examining the
internal data or tracing the reasoning.


Agreed...


I believe this is the fundamental
flaw of all AI systems based on structured knowledge representations, such as
first order logic, frames, connectionist systems, term logic, rule based
systems, and so on.


I have a few points in response to this:

1) Just because a system is based on logic (in whatever sense you
want to interpret that phrase) doesn't mean its reasoning can in
practice be traced by humans.  As I noted in recent posts,
probabilistic logic systems will regularly draw conclusions based on
synthesizing (say) tens of thousands or more weak conclusions into one
moderately strong one.  Tracing this kind of inference trail in detail
is pretty tough for any human, pragmatically speaking...

2) IMO the dichotomy between logic based and statistical AI
systems is fairly bogus.  The dichotomy serves to separate extremes on
either side, but my point is that when a statistical AI system becomes
really serious it becomes effectively logic-based, and when a
logic-based AI system becomes really serious it becomes effectively
statistical ;-)

For example, show me how a statistical procedure learning system is
going to learn how to carry out complex procedures involving
recursion.  Sure, it can be done -- but it's going to involve
introducing structures/dynamics that are accurately describable as
versions/manifestations of logic.

Or, show me how a logic based system is going to handle large masses
of uncertain data, as comes in from perception.  It can be done in
many ways -- but all of them involve introducing structures/dynamics
that are accurately describable as statistical.

Probabilistic inference in Novamente includes

-- higher-order inference that works somewhat like standard term and
predicate logic
-- first-order 

Re: Re: Re: [agi] A question on the symbol-system hypothesis

2006-12-04 Thread Ben Goertzel

Hi,


The only real case where a human couldn't understand the machine's reasoning
in a case like this is where there are so many entangled variables that the
human can't hold them in comprehension -- and I'll continue my contention
that this case is rare enough that it isn't going to be a problem for
creating an AGI.


Whereas my view is that nearly all HUMAN decisions are based on so
many entangled variables that the human can't hold them in conscious
comprehension ;-)


 2) IMO the dichotomy between logic based and statistical AI
 systems is fairly bogus.  The dichotomy serves to separate extremes on
 either side, but my point is that when a statistical AI system becomes
 really serious it becomes effectively logic-based, and when a
 logic-based AI system becomes really serious it becomes effectively
 statistical ;-)

I think that I know what you mean but I would phrase this *very*
differently.  I would phrase it that an AGI is going to have to be able to
perform both logic-based and statistical operations and that any AGI which
is limited to one of the two is doomed to failure.  If you can contort
statistics to effectively do logic or logic to effectively do statistics,
then you're fine -- but I really don't see it happening.


My point is different than yours.  I believe that the most essential
cognitive operations have aspects of what we typically label logic
and statistics, but don't easily get shoved into either of these
categories.  An example is Novamente's probabilistic inference engine
which carries out operations with the general form of logical
inference steps, but guided at every step by statistically gathered
knowledge via which series of inference steps have proved viable in
prior related contexts.  Is this logic or statistics?  If the
inference step is just a Bayes rule step, then arguably it's just
statistics.  If the inference step is a variable unification step,
then arguably it's logic, with a little guidance from statistics on
the inference control side.  Partitioning cognition up into logic
versus statistics is not IMO very useful.
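One way to caricature "logical steps, statistically guided inference control" in a few lines of Python (a generic sketch only -- the rule names and counters below are assumptions for the example, not Novamente's actual structures):

import random
from collections import defaultdict

RULES = ["deduction", "bayes_update", "unification", "abduction"]

# Success statistics gathered from prior inference episodes, per context.
success = defaultdict(lambda: 1.0)   # pseudo-count of useful firings
trials = defaultdict(lambda: 2.0)    # pseudo-count of attempts

def pick_rule(context):
    # The inference *steps* are logical; the *choice* of which step to try
    # next is driven by statistics over what worked in similar contexts.
    weights = [success[(context, r)] / trials[(context, r)] for r in RULES]
    return random.choices(RULES, weights=weights, k=1)[0]

def record(context, rule, worked):
    trials[(context, rule)] += 1.0
    if worked:
        success[(context, rule)] += 1.0

# Toy episode: unification keeps paying off in "math_proof" contexts,
# so the controller learns to prefer it there.
for _ in range(200):
    rule = pick_rule("math_proof")
    record("math_proof", rule, worked=(rule == "unification"))
print(pick_rule("math_proof"))   # very likely "unification"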

-- Ben



[agi] AGI meeting in Austin on Sunday Dec 10th?

2006-12-04 Thread Peter Voss
I'll be in Austin next Sunday. 

If anyone there would like to meet to talk about AGI (and other things
extropian), please contact me privately at [EMAIL PROTECTED] 

Peter Voss



Re: [agi] RSI - What is it and how fast?

2006-12-04 Thread Hank Conn

Brian thanks for your response and Dr. Hall thanks for your post as well. I
will get around to responding to this as soon as time permits. I am
interested in what Michael Anissimov or Michael Wilson has to say.

On 12/4/06, Brian Atkins [EMAIL PROTECTED] wrote:


I think this is an interesting, important, and very incomplete subject area, 
so thanks for posting this. Some thoughts below.

J. Storrs Hall, PhD. wrote:

 Runaway recursive self-improvement


 Moore's Law, underneath, is driven by humans.  Replace human
 intelligence with superhuman intelligence, and the speed of computer
 improvement will change as well.  Thinking Moore's Law will remain
 constant even after AIs are introduced to design new chips is like
 saying that the growth of tool complexity will remain constant even
 after Homo sapiens displaces older hominid species.  Not so.  We are
 playing with fundamentally different stuff.

 I don't think so. The singulatarians tend to have this mental model of a
 superintelligence that is essentially an analogy of the difference
between an
 animal and a human. My model is different. I think there's a level of
 universality, like a Turing machine for computation. The huge difference
 between us and animals is that we're universal and they're not, like the
 difference between an 8080 and an abacus. Superhuman intelligence will be
 faster but not fundamentally different (in a sense), like the difference
 between an 8080 and an Opteron.

 That said, certainly Moore's law will speed up given fast AI. But having one
 human-equivalent AI is not going to make any more difference than having one
 more engineer. Having a thousand-times-human AI won't get you more than
 having 1000 engineers. Only when you can substantially augment the total
 brainpower working on the problem will you begin to see significant effects.

Putting aside the speed differential which you accept, but dismiss as important
for RSI, isn't there a bigger issue you're skipping regarding the other
differences between an Opteron-level PC and an 8080-era box? For example, there
are large differences in the addressable memory amounts. This might for instance
mean whereas a very good example of a human can study and become a true expert
in perhaps a handful of fields, an SI may be able to be a true expert in many
more fields simultaneously and to a more exhaustive degree than a human. Will
this lead to the SI making more breakthroughs per given amount of runtime? Does
it multiply with the speed differential?

Also, what is really the difference between an Einstein/Feynman brain, and
someone with an 80 IQ? It doesn't appear that E/F's brains run simply slightly
faster, or likewise that they simply know more facts. There's something else,
isn't there? Call it a slightly better architecture, or maybe only certain brain
parts are a bit better, but this would seem to be a 4th issue to consider
besides the previously raised points of speed, memory capacity, and
universality. I'm sure we can come up with other things too.

(Btw, the preferred spelling is "singularitarian"; it gets the most google
hits by far from what I can tell. Also btw the term arguably now refers more
specifically to someone who wants to work on accelerating the singularity, so
you probably can't group in here every single person who simply believes a
singularity is possible or coming.)


 If modest differences in size, brain structure, and
 self-reprogrammability make the difference between chimps and humans
 capable of advanced technological activity, then fundamental
 differences in these qualities between humans and AIs will lead to a
 much larger gulf, right away.

 Actually Neanderthals had brains bigger than ours by 10%, and we blew them off
 the face of the earth. They had virtually no innovation in 100,000 years; we
 went from paleolithic to nanotech in 30,000. I'll bet we were universal and
 they weren't.

 Virtually every advantage in Elie's list is wrong. The key is to realize
 that we do all these things, just more slowly than we imagine machines
 being able to do them:

 Our source code is not reprogrammable.

 We are extremely programmable. The vast majority of skills we use day-to-day
 are learned. If you watched me tie a sheepshank knot a few times, you would
 most likely then be able to tie one yourself.

 Note by the way that having to recompile new knowledge is a big security
 advantage for the human architecture, as compared with downloading blackbox
 code and running it sight unseen...

This is missing the point entirely, isn't it? Learning skills is using your
existing physical brain design, but not modifying its overall or even localized
architecture or modifying what makes it work. When source code is mentioned,
we're talking a lower level down.

Can you cause your brain to temporarily shut down your visual cortex and other
associated visual parts, reallocate them to expanding your working memory by
four times its current size in order to help you juggle 

Re: Re: Re: [agi] A question on the symbol-system hypothesis

2006-12-04 Thread Mark Waser

Whereas my view is that nearly all HUMAN decisions are based on so
many entangled variables that the human can't hold them in conscious
comprehension ;-)


We're reaching the point of agreeing to disagree except . . . .

Are you really saying that nearly all of your decisions can't be explained 
(by you)?



My point is different than yours.  I believe that the most essential
cognitive operations have aspects of what we typically label logic
and statistics, but don't easily get shoved into either of these
categories.  An example is Novamente's probabilistic inference engine
which carries out operations with the general form of logical
inference steps, but guided at every step by statistically gathered
knowledge via which series of inference steps have proved viable in
prior related contexts.  Is this logic or statistics?


It's logical operations whose choice points are controlled by statistical 
operations. :-)  Whether the operations can be shoved into the categories 
depends upon how far you break them down.


And I think that our points are the same: that both logic and statistics (or 
elements from each) are required. :-)


- Original Message - 
From: Ben Goertzel [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Monday, December 04, 2006 11:21 AM
Subject: Re: Re: Re: [agi] A question on the symbol-system hypothesis



Hi,

The only real case where a human couldn't understand the machine's reasoning
in a case like this is where there are so many entangled variables that the
human can't hold them in comprehension -- and I'll continue my contention
that this case is rare enough that it isn't going to be a problem for
creating an AGI.


Whereas my view is that nearly all HUMAN decisions are based on so
many entangled variables that the human can't hold them in conscious
comprehension ;-)


 2) IMO the dichotomy between logic based and statistical AI
 systems is fairly bogus.  The dichotomy serves to separate extremes on
 either side, but my point is that when a statistical AI system becomes
 really serious it becomes effectively logic-based, and when a
 logic-based AI system becomes really serious it becomes effectively
 statistical ;-)

I think that I know what you mean but I would phrase this *very*
differently.  I would phrase it that an AGI is going to have to be able 
to
perform both logic-based and statistical operations and that any AGI 
which

is limited to one of the two is doomed to failure.  If you can contort
statistics to effectively do logic or logic to effectively do statistics,
then you're fine -- but I really don't see it happening.


My point is different than yours.  I believe that the most essential
cognitive operations have aspects of what we typically label logic
and statistics, but don't easily get shoved into either of these
categories.  An example is Novamente's probabilistic inference engine
which carries out operations with the general form of logical
inference steps, but guided at every step by statistically gathered
knowledge via which series of inference steps have proved viable in
prior related contexts.  Is this logic or statistics?  If the
inference step is just a Bayes rule step, then arguably it's just
statistics.  If the inference step is a variable unification step,
then arguably it's logic, with a little guidance from statistics on
the inference control side.  Partitioning cognition up into logic
versus statistics is not IMO very useful.

-- Ben



Re: Re: Re: Re: [agi] A question on the symbol-system hypothesis

2006-12-04 Thread Ben Goertzel

We're reaching the point of agreeing to disagree except . . . .

Are you really saying that nearly all of your decisions can't be explained
(by you)?


Well, of course they can be explained by me -- but the acronym for
that sort of explanation is BS

One of Nietzsche's many nice quotes is (paraphrased): "Consciousness
is like the army commander who takes responsibility for the
largely-autonomous actions of his troops."

Recall also Gazzaniga's work on split-brain patients, for insight into
the illusionary nature of many human explanations of reasons for
actions.

The process of explaining why we have done what we have done is an
important aspect of human intelligence -- but not because it is
accurate; it almost never is.  More because this sort of
storytelling helps us to structure our future actions (though
generally in ways we cannot accurately understand or explain ;-)

Some of the discussion here is relevant

http://www.goertzel.org/dynapsyc/2004/FreeWill.htm

-- Ben



Re: Re: [agi] A question on the symbol-system hypothesis

2006-12-04 Thread Mark Waser

But Mark, as a former university professor I can testify as to the
difficulty of compressing one's knowledge into comprehensible form for
communication to others!!



Explicitly articulating and
explaining knowledge about how to prove is hard...


:-)  And your point is? :-)

Yes, compressing one's knowledge into comprehensible form for communication 
to others is *very* hard.  On the other hand, can you say that you really 
understand something if you can't explain it?  Or, alternatively, can you 
really use the knowledge to its fullest extent if you don't understand it 
well enough to be able to explain it?



The point is, humans are sometimes like these simplistic machine
learning algorithms, in terms of being able to do stuff and **not**
articulate how we do it


Yes.  Again, I agree.  And your point is?  Sometimes we *are* just stupid 
reflexive (or pattern-matching) machines.  At those moments, we aren't 
intelligent.



OTOH we do have a process of turning our implicit know-how into
declarative knowledge for communication to others.  It's just that
this process is sometimes very ineffective ... its effectiveness
varies a lot by domain, as well as according to many other factors...


Yes, and not so oddly enough, our ability to explain is very highly 
correlated with that purported measure of intelligence called the IQ.



So I agree that this sort of machine learning algorithm that can only
do, but not explain, is not an AGI  but I don't agree that it
can't serve as part of an AGI.


:-)  I never, ever argued that it couldn't serve as part of an AGI -- just 
not be the entire core.  I expect many peripheral senses and other low-level 
input processors to employ pattern-matching and statistical algorithms.



However, one thing we have tried to do in Novamente is to specifically
couple a declarative reasoning component with a machine learning
style procedural learning component, in such a way that the opaque
procedures learned by the latter can -- if the system chooses to
expend resources on such -- be tractably converted into the form
utilized by the former...


Which translated into English says that Novamente will be able to explain 
itself -- thus putting itself into my potential AGI camp, not the dead-end 
statistical-only camp.



- Original Message - 
From: Ben Goertzel [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Monday, December 04, 2006 10:45 AM
Subject: Re: Re: [agi] A question on the symbol-system hypothesis



On 12/4/06, Mark Waser [EMAIL PROTECTED] wrote:
 Philip Goetz gave an example of an intrusion detection system that learned
 information that was not comprehensible to humans.  You argued that he
 could have understood it if he tried harder.

No, I gave five separate alternatives, most of which put the blame on the
system for not being able to compress its data pattern into knowledge and
explain it to Philip.


But Mark, as a former university professor I can testify as to the
difficulty of compressing one's knowledge into comprehensible form for
communication to others!!

Consider the case of mathematical proof.  Given a tricky theorem to
prove, I can show students the correct approach.  But my knowledge of
**why** I take the strategy I do, is a lot tougher to communicate.
Most of advanced math education is about learning by example -- you
show the student a bunch of proofs and hope they pick up the spirit of
how to prove stuff in various domains.  Explicitly articulating and
explaining knowledge about how to prove is hard...

The point is, humans are sometimes like these simplistic machine
learning algorithms, in terms of being able to do stuff and **not**
articulate how we do it

OTOH we do have a process of turning our implicit know-how into
declarative knowledge for communication to others.  It's just that
this process is sometimes very ineffective ... its effectiveness
varies a lot by domain, as well as according to many other factors...

So I agree that this sort of machine learning algorithm that can only
do, but not explain, is not an AGI  but I don't agree that it
can't serve as part of an AGI.

However, one thing we have tried to do in Novamente is to specifically
couple a declarative reasoning component with a machine learning
style procedural learning component, in such a way that the opaque
procedures learned by the latter can -- if the system chooses to
expend resources on such -- be tractably converted into the form
utilized by the former...

-- Ben



Re: Re: Re: Re: [agi] A question on the symbol-system hypothesis

2006-12-04 Thread Mark Waser
 Well, of course they can be explained by me -- but the acronym for
 that sort of explanation is BS

I take your point with important caveats (that you allude to).  Yes, nearly all 
decisions are made as reflexes or pattern-matchings on what is effectively 
compiled knowledge; however, it is the structuring of future actions that makes 
us the learning, intelligent entities that we are.

 The process of explaining why we have done what we have done is an
 important aspect of human intelligence -- but not because it is
 accurate, it almost never is  More because this sort of
 storytelling helps us to structure our future actions (though
 generally in ways we cannot accurately understand or explain ;-)

Explaining our actions is the reflective part of our minds evaluating the 
reflexive part of our mind.  The reflexive part of our minds, though, operates 
analogously to a machine running on compiled code, with the compilation of code 
being largely *not* under the control of our conscious mind (though some degree 
of this *can* be changed by our conscious minds).  The more we can correctly 
interpret and affect/program the reflexive part of our mind with the reflective 
part, the more intelligent we are.  And, translating this back to the machine 
realm circles back to my initial point: the better the machine can explain its 
reasoning and use its explanation to improve its future actions, the more 
intelligent the machine is (or, in reverse, no explanation = no intelligence).

- Original Message - 
From: Ben Goertzel [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Monday, December 04, 2006 12:17 PM
Subject: Re: Re: Re: Re: [agi] A question on the symbol-system hypothesis


 We're reaching the point of agreeing to disagree except . . . .

 Are you really saying that nearly all of your decisions can't be explained
 (by you)?
 
 Well, of course they can be explained by me -- but the acronym for
 that sort of explanation is BS
 
 One of Nietzsche's many nice quotes is (paraphrased): Consciousness
 is like the army commander who takes responsibility for the
 largely-autonomous actions of his troops.
 
 Recall also Gazzaniga's work on split-brain patients, for insight into
 the illusionary nature of many human explanations of reasons for
 actions.
 
 The process of explaining why we have done what we have done is an
 important aspect of human intelligence -- but not because it is
 accurate, it almost never is  More because this sort of
 storytelling helps us to structure our future actions (though
 generally in ways we cannot accurately understand or explain ;-)
 
 Some of the discussion here is relevant
 
 http://www.goertzel.org/dynapsyc/2004/FreeWill.htm
 
 -- Ben
 


Re: Re: Re: Re: Re: [agi] A question on the symbol-system hypothesis

2006-12-04 Thread Ben Goertzel

 Well, of course they can be explained by me -- but the acronym for
 that sort of explanation is BS

I take your point with important caveats (that you allude to).  Yes, nearly
all decisions are made as reflexes or pattern-matchings on what is
effectively compiled knowledge; however, it is the structuring of future
actions that make us the learning, intelligent entities that we are.

...

Explaining our actions is the reflective part of our minds evaluating the
reflexive part of our mind.  The reflexive part of our minds, though,
operates analogously to a machine running on compiled code with the
compilation of code being largely *not* under the control of our conscious
mind (though some degree of this *can* be changed by our conscious minds).
The more we can correctly interpret and affect/program the reflexive part of
our mind with the reflective part, the more intelligent we are.


Mark, let me try to summarize in a nutshell the source of our disagreement.

You partition intelligence into

* explanatory, declarative reasoning

* reflexive pattern-matching (simplistic and statistical)

Whereas I think that most of what happens in cognition fits into
neither of these categories.

I think that most unconscious thinking is far more complex than
reflexive pattern-matching --- and in fact has more in common with
explanatory, deductive reasoning than with simple pattern-matching;
the difference being that it deals with large masses of (often highly
uncertain) knowledge rather than smaller amounts of guessed to be
highly important knowledge...

-- Ben G



Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-04 Thread James Ratcliff
There is a needed distinction that must be made here about hunger as a goal 
stack motivator.

We CANNOT change the hunger sensation (short of physical manipulations, or 
mind-control stuff) as it is a given sensation that comes directly from the 
physical body.

What we can change is the placement in the goal stack, or the priority position 
it is given.  We CAN choose to put it on the bottom of our list of goals, or 
remove it from the list and try and starve ourselves to death.
  Our body will then continuously send the hunger signals to us, and we must 
decide how to handle that signal.

So in general, the Signal is there, but the goal is not; it is under our 
control.
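To make that distinction concrete, here is a toy sketch (the class, goal names, priorities, and numbers are purely illustrative assumptions): the hunger signal keeps arriving no matter what, while the goal it feeds can be reprioritized or dropped.

# Toy goal stack: the signal is hard-wired, its goal placement is not.
class GoalStack:
    def __init__(self):
        self.goals = {"satisfy_hunger": 5, "finish_hunger_strike": 9}

    def sense(self):
        # The body emits this regardless of what the goal stack contains.
        return {"hunger": 0.8}

    def reprioritize(self, goal, priority):
        self.goals[goal] = priority          # under the agent's control

    def drop(self, goal):
        self.goals.pop(goal, None)           # also under the agent's control

    def act(self):
        signals = self.sense()               # cannot be switched off
        if not self.goals:
            return f"hunger={signals['hunger']}, no goals -- signal unanswered"
        top = max(self.goals, key=self.goals.get)
        return f"hunger={signals['hunger']}, acting on '{top}'"

stack = GoalStack()
print(stack.act())                   # pursues the hunger strike; hunger still felt
stack.drop("finish_hunger_strike")
print(stack.act())                   # the hunger goal is back on top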

James Ratcliff


Matt Mahoney [EMAIL PROTECTED] wrote: 
--- Mark Waser  wrote:

  You cannot turn off hunger or pain.  You cannot
  control your emotions.
 
 Huh?  Matt, can you really not ignore hunger or pain?  Are you really 100% 
 at the mercy of your emotions?

Why must you argue with everything I say?  Is this not a sensible statement?

  Since the synaptic weights cannot be altered by
  training (classical or operant conditioning)
 
 Who says that synaptic weights cannot be altered?  And there's endless 
 irrefutable evidence that the sum of synaptic weights is certainly 
 constantly altering by the directed die-off of neurons.

But not by training.  You don't decide to be hungry or not, because animals
that could do so were removed from the gene pool.

Is this not a sensible way to program the top level goals for an AGI?


-- Matt Mahoney, [EMAIL PROTECTED]




___
James Ratcliff - http://falazar.com
New Torrent Site, Has TV and Movie Downloads! 
http://www.falazar.com/projects/Torrents/tvtorrents_show.php
 



Re: Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-04 Thread James Ratcliff
Ok,
  A lot has been thrown around here about Top-Level goals, but no real 
definition has been given, and I am confused as it seems to be covering a lot of 
ground for some people.

What 'level' and what are these top level goals for humans/AGI's?

It seems that Staying Alive is a big one, but that appears to contain 
hunger/sleep/ and most other body level needs.

And how hard-wired are these goals, and how (simply) do we really hard-wire 
them at all?

Our goal of staying alive appears to be biologically preferred or something 
like that, but can definitely be overridden by depression / saving a person in a 
burning building.

James Ratcliff

Ben Goertzel [EMAIL PROTECTED] wrote: IMO, humans **can** reprogram their 
top-level goals, but only with
difficulty.  And this is correct: a mind needs to have a certain level
of maturity to really reflect on its own top-level goals, so that it
would be architecturally foolish to build a mind that involved
revision of supergoals at the infant/child phase.

However, without reprogramming our top-level goals, we humans still
have a lot of flexibility in our ultimate orientation.  This is
because we are inconsistent systems: our top-level goals form a set of
not-entirely-consistent objectives... so we can shift from one
wired-in top-level goal to another, playing with the inconsistency.
(I note that, because the logic of the human mind is probabilistically
paraconsistent, the existence of inconsistency does not necessarily
imply that all things are derivable as it would in typical predicate
logic.)

Those of us who seek to become as logically consistent as possible,
given the limitations of our computational infrastructure have a
tough quest, because the human mind/brain is not wired for
consistency; and I suggest that this inconsistency pervades the human
wired-in supergoal set as well...

Much of the inconsistency within the human wired-in supergoal set has
to do with time-horizons.  We are wired to want things in the short
term that contradict the things we are wired to want in the
medium/long term; and each of our mind/brains' self-organizing
dynamics needs to work out these evolutionarily-supplied
contradictions on its own  One route is to try to replace our
inconsistent initial wiring with a more consistent supergoal set; the
more common route is to oscillate chaotically from one side of the
contradiction to the other...

(Yes, I am speaking loosely here rather than entirely rigorously; but
formalizing all this stuff would take a lot of time and space...)

-- Ben F


On 12/3/06, Matt Mahoney  wrote:

 --- Mark Waser  wrote:

   You cannot turn off hunger or pain.  You cannot
   control your emotions.
 
  Huh?  Matt, can you really not ignore hunger or pain?  Are you really 100%
  at the mercy of your emotions?

 Why must you argue with everything I say?  Is this not a sensible statement?

   Since the synaptic weights cannot be altered by
   training (classical or operant conditioning)
 
  Who says that synaptic weights cannot be altered?  And there's endless
  irrefutable evidence that the sum of synaptic weights is certainly
  constantly altering by the directed die-off of neurons.

 But not by training.  You don't decide to be hungry or not, because animals
 that could do so were removed from the gene pool.

 Is this not a sensible way to program the top level goals for an AGI?


 -- Matt Mahoney, [EMAIL PROTECTED]




___
James Ratcliff - http://falazar.com
New Torrent Site, Has TV and Movie Downloads! 
http://www.falazar.com/projects/Torrents/tvtorrents_show.php
 



Re: Re: Re: Re: Re: [agi] A question on the symbol-system hypothesis

2006-12-04 Thread Mark Waser

You partition intelligence into
* explanatory, declarative reasoning
* reflexive pattern-matching (simplistic and statistical)

Whereas I think that most of what happens in cognition fits into
neither of these categories.

I think that most unconscious thinking is far more complex than
reflexive pattern-matching --- and in fact has more in common with
explanatory, deductive reasoning than with simple pattern-matching;
the difference being that it deals with large masses of (often highly
uncertain) knowledge rather than smaller amounts of guessed to be
highly important knowledge...


Hmmm.  I will certainly agree that most long-term unconscious thinking is 
actually closer to conscious thinking than most people believe (with the 
only real difference being that there isn't a self-reflective overseer --  
or, at least, not one whose memories we can access).


But -- I don't partition intelligence that way.  I see those as two 
endpoints with a continuum between them (or, a lot of low-level transparent 
switching between the two).


We certainly do have a disagreement in terms of the quantity of knowledge 
that is *in real time* actually behind a decision (as opposed to compiled 
knowledge) -- Me being in favor of mostly compiled knowledge and you being 
in favor of constantly using all of the data.


But I'm not at all sure how important that difference is . . . .  With the 
brain being a massively parallel system, there isn't necessarily a huge 
advantage in compiling knowledge (I can come up with both advantages and 
disadvantages) and I suspect that there are more than enough surprises that 
we have absolutely no way of guessing where on the spectrum of compilation 
vs. not the brain actually is.


On the other hand, I think that lack of compilation is going to turn out to 
be a *very* severe problem for non-massively parallel systems



- Original Message - 
From: Ben Goertzel [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Monday, December 04, 2006 1:00 PM
Subject: Re: Re: Re: Re: Re: [agi] A question on the symbol-system 
hypothesis




 Well, of course they can be explained by me -- but the acronym for
 that sort of explanation is BS

I take your point with important caveats (that you allude to).  Yes, nearly
all decisions are made as reflexes or pattern-matchings on what is
effectively compiled knowledge; however, it is the structuring of future
actions that make us the learning, intelligent entities that we are.

...

Explaining our actions is the reflective part of our minds evaluating the
reflexive part of our mind.  The reflexive part of our minds, though,
operates analogously to a machine running on compiled code with the
compilation of code being largely *not* under the control of our conscious
mind (though some degree of this *can* be changed by our conscious minds).
The more we can correctly interpret and affect/program the reflexive part of
our mind with the reflective part, the more intelligent we are.


Mark, let me try to summarize in a nutshell the source of our 
disagreement.


You partition intelligence into

* explanatory, declarative reasoning

* reflexive pattern-matching (simplistic and statistical)

Whereas I think that most of what happens in cognition fits into
neither of these categories.

I think that most unconscious thinking is far more complex than
reflexive pattern-matching --- and in fact has more in common with
explanatory, deductive reasoning than with simple pattern-matching;
the difference being that it deals with large masses of (often highly
uncertain) knowledge rather than smaller amounts of guessed to be
highly important knowledge...

-- Ben G



Re: Re: Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-04 Thread Ben Goertzel

Regarding the definition of goals and supergoals, I have made attempts at:

http://www.agiri.org/wiki/index.php/Goal

http://www.agiri.org/wiki/index.php/Supergoal

The scope of human supergoals has been moderately well articulated by
Maslow IMO:

http://en.wikipedia.org/wiki/Maslow's_hierarchy_of_needs

BTW, I have borrowed from Stan Franklin the use of the term drive to
denote a built-in rather than learned supergoal:

http://www.agiri.org/wiki/index.php/Drive

-- Ben G


On 12/4/06, James Ratcliff [EMAIL PROTECTED] wrote:

Ok,
  A lot has been thrown around here about Top-Level goals, but no real
definition has been given, and I am confused as it seems to be covering a lot
of ground for some people.

What 'level' and what are these top level goals for humans/AGI's?

It seems that Staying Alive is a big one, but that appears to contain
hunger/sleep/ and most other body level needs.

And how hard-wired are these goals, and how (simply) do we really hard-wire
them at all?

Our goal of staying alive appears to be biologically preferred or
something like that, but can definitely be overridden by depression / saving
a person in a burning building.

James Ratcliff


Ben Goertzel [EMAIL PROTECTED] wrote:
 IMO, humans **can** reprogram their top-level goals, but only with
difficulty. And this is correct: a mind needs to have a certain level
of maturity to really reflect on its own top-level goals, so that it
would be architecturally foolish to build a mind that involved
revision of supergoals at the infant/child phase.

However, without reprogramming our top-level goals, we humans still
have a lot of flexibility in our ultimate orientation. This is
because we are inconsistent systems: our top-level goals form a set of
not-entirely-consistent objectives... so we can shift from one
wired-in top-level goal to another, playing with the inconsistency.
(I note that, because the logic of the human mind is probabilistically
paraconsistent, the existence of inconsistency does not necessarily
imply that all things are derivable as it would in typical predicate
logic.)

Those of us who seek to become as logically consistent as possible,
given the limitations of our computational infrastructure have a
tough quest, because the human mind/brain is not wired for
consistency; and I suggest that this inconsistency pervades the human
wired-in supergoal set as well...

Much of the inconsistency within the human wired-in supergoal set has
to do with time-horizons. We are wired to want things in the short
term that contradict the things we are wired to want in the
medium/long term; and each of our mind/brains' self-organizing
dynamics needs to work out these evolutionarily-supplied
contradictions on its own One route is to try to replace our
inconsistent initial wiring with a more consistent supergoal set; the
more common route is to oscillate chaotically from one side of the
contradiction to the other...

(Yes, I am speaking loosely here rather than entirely rigorously; but
formalizing all this stuff would take a lot of time and space...)

-- Ben F


On 12/3/06, Matt Mahoney wrote:

 --- Mark Waser wrote:

   You cannot turn off hunger or pain. You cannot
   control your emotions.
 
  Huh? Matt, can you really not ignore hunger or pain? Are you really 100%
  at the mercy of your emotions?

 Why must you argue with everything I say? Is this not a sensible
statement?

   Since the synaptic weights cannot be altered by
   training (classical or operant conditioning)
 
  Who says that synaptic weights cannot be altered? And there's endless
  irrefutable evidence that the sum of synaptic weights is certainly
  constantly altering by the directed die-off of neurons.

 But not by training. You don't decide to be hungry or not, because animals
 that could do so were removed from the gene pool.

 Is this not a sensible way to program the top level goals for an AGI?


 -- Matt Mahoney, [EMAIL PROTECTED]




___
James Ratcliff - http://falazar.com
New Torrent Site, Has TV and Movie Downloads!
http://www.falazar.com/projects/Torrents/tvtorrents_show.php

 




Re: Re: Re: Re: Re: Re: [agi] A question on the symbol-system hypothesis

2006-12-04 Thread Ben Goertzel

But I'm not at all sure how important that difference is . . . .  With the
brain being a massively parallel system, there isn't necessarily a huge
advantage in compiling knowledge (I can come up with both advantages and
disadvantages) and I suspect that there are more than enough surprises that
we have absolutely no way of guessing where on the spectrum of compilation
vs. not the brain actually is.


Neuroscience makes clear that most of human long-term memory is
actually constructive and inventive rather than strictly recollective,
see e.g. Israel Rosenfield's nice book The Invention of Memory

www.amazon.com/Invention-Memory-New-View-Brain/dp/0465035922

as well as a lot of more recent research  So the knowledge that is
compiled in the human brain, is compiled in a way that assumes
self-organizing and creative cognitive processes will be used to
extract and apply it...

IMO in an AGI system **much** knowledge must also be stored/retrieved
in this sort of way (where retrieval is construction/invention).  But
AGI's will also have more opportunity than the normal human brain to
use idiot-savant-like precise computer-like memory when
appropriate...

Ben G



Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-04 Thread Philip Goetz

On 12/4/06, Mark Waser [EMAIL PROTECTED] wrote:

 Why must you argue with everything I say?  Is this not a sensible
 statement?

I don't argue with everything you say.  I only argue with things that I
believe are wrong.  And no, the statements You cannot turn off hunger or
pain.  You cannot control your emotions are *NOT* sensible at all.


Mark -

The statement, "You cannot turn off hunger or pain" is sensible.
In fact, it's one of the few statements in the English language that
is LITERALLY so.  Philosophically, it's more certain than
"I think, therefore I am."

If you maintain your assertion, I'll put you in my killfile, because
we cannot communicate.



Re: Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-04 Thread Ben Goertzel

The statement, "You cannot turn off hunger or pain" is sensible.
In fact, it's one of the few statements in the English language that
is LITERALLY so.  Philosophically, it's more certain than
"I think, therefore I am."

If you maintain your assertion, I'll put you in my killfile, because
we cannot communicate.


It is reported that, with sufficiently advanced training in
appropriate mind-control arts (e.g. some Oriental ones), something
accurately describable as turning off hunger or pain becomes
possible, from a subjective experiential perspective.

I don't know if the physiological correlates of such experiences have
been studied.

Relatedly, though, I do know that physiological correlates of the
experience of stopping breathing that many meditators experience
have been found -- and the correlates were simple: when they thought
they were stopping breathing, the meditators were, in fact, either
stopping or drastically slowing their breathing...

Human potential goes way beyond what is commonly assumed based on our
ordinary states of mind ;-)

-- Ben G



Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-04 Thread Charles D Hixson

James Ratcliff wrote:
There is a needed distinction that must be made here about hunger 
as a goal stack motivator.


We CANNOT change the hunger sensation (short of physical 
manipulations, or mind-control stuff), as it is a given sensation that 
comes directly from the physical body.


What we can change is the placement in the goal stack, or the priority 
position it is given.  We CAN choose to put it on the bottom of our 
list of goals, or remove it from the list and try to starve ourselves 
to death.
  Our body will then continuously send the hunger signals to us, and we 
must decide how to handle that signal.


So in general, the signal is there, but the goal is not; the goal is under 
our control.


James Ratcliff
That's an important distinction, but I would assert that although one 
can insert goals above a built-in goal (hunger, e.g.), one cannot 
remove that goal.  There is a very long period when someone on a hunger 
strike must continually reinforce the goal of not-eating.  The goal of 
satisfying hunger is only removed when the body decides that it is 
unreachable (at the moment). 

The goal cannot be removed by intention, it can only be overridden and 
suppressed.  Other varieties of goal, volitionally chosen ones, can be 
volitionally revoked.  Even in such cases habit can cause the automatic 
execution of tasks required to achieve the goal to be continued.  I 
retired years ago, and although I no longer automatically get up at 5:30 
each morning, I still tend to arise before 8:00.  This is quite a 
contrast from my time in college when I would rarely arise before 9:00, 
and always felt I was getting up too early.  It's true that with a 
minimal effort I can change things so that I get up at (nearly?) any 
particular time...but as soon as I relax it starts drifting back to 
early morning.


Goals are important.  Some are built-in, some are changeable.  Habits 
are also important, perhaps nearly as much so.  Habits are initially 
created to satisfy goals, but when goals change, or circumstances alter, 
the habits don't automatically change in synchrony.
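
To make that distinction concrete, here is a minimal Python sketch (the names 
and structure are illustrative assumptions, not anyone's actual design) in which 
a built-in drive can be pushed down the stack or overridden but never deleted, 
while a volitionally chosen goal can be revoked:

# Minimal sketch: built-in drives vs. volitionally chosen goals.
class Goal:
    def __init__(self, name, priority, built_in=False):
        self.name = name
        self.priority = priority      # higher = more urgent
        self.built_in = built_in      # built-in drives cannot be removed

class GoalStack:
    def __init__(self):
        self.goals = []

    def add(self, goal):
        self.goals.append(goal)

    def reprioritize(self, name, priority):
        # Any goal, built-in or not, can be moved up or down the stack.
        for g in self.goals:
            if g.name == name:
                g.priority = priority

    def revoke(self, name):
        # Only volitionally chosen goals can be removed; drives persist.
        self.goals = [g for g in self.goals
                      if g.built_in or g.name != name]

    def top(self):
        return max(self.goals, key=lambda g: g.priority)

stack = GoalStack()
stack.add(Goal("satisfy hunger", priority=5, built_in=True))
stack.add(Goal("maintain hunger strike", priority=9))

stack.revoke("satisfy hunger")           # no effect: the drive remains
stack.reprioritize("satisfy hunger", 1)  # but it can be pushed to the bottom
print(stack.top().name)                  # -> "maintain hunger strike"

The hunger signal itself keeps arriving regardless; all the stack controls is 
how much standing it gets.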




Re: Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-04 Thread Philip Goetz

On 12/4/06, Ben Goertzel [EMAIL PROTECTED] wrote:

 The statement, You cannot turn off hunger or pain is sensible.
 In fact, it's one of the few statements in the English language that
 is LITERALLY so.  Philosophically, it's more certain than
 I think, therefore I am.

 If you maintain your assertion, I'll put you in my killfile, because
 we cannot communicate.

It is reported that, with sufficiently advanced training in
appropriate mind-control arts (e.g. some Oriental ones), something
accurately describable as turning off hunger or pain becomes
possible, from a subjective experiential perspective.


To allow that somewhere in the Himalayas, someone may be able,
with years of training, to lessen the urgency of hunger and
pain, is not sufficient evidence to assert that the proposition
that not everyone can turn them off completely is insensible.



Re: [agi] A question on the symbol-system hypothesis

2006-12-04 Thread Philip Goetz

On 12/3/06, Mark Waser [EMAIL PROTECTED] wrote:

 This sounds very Searlian.  The only test you seem to be referring to
 is the Chinese Room test.

You misunderstand.  The test is being able to form cognitive structures that
can serve as the basis for later more complicated cognitive structures.
Your pattern matcher does not do this.


It doesn't?  How do you know?  Unless you are a Searlian.



Re: [agi] RSI - What is it and how fast?

2006-12-04 Thread Philip Goetz

On 12/1/06, J. Storrs Hall, PhD. [EMAIL PROTECTED] wrote:

On Friday 01 December 2006 20:06, Philip Goetz wrote:

 Thus, I don't think my ability to follow rules written on paper to
 implement a Turing machine proves that the operations powering my
 consciousness are Turing-complete.

Actually, I think it does prove it, since your simulation of a Turing machine
would consist of conscious operations.


But the simulation of a Chinese speaker, carried out by the man in Searle's
Chinese room, consists of conscious operations.

If I simulate a Turing machine in that way, then the system consisting of
me plus a rulebook and some slips of paper is Turing-complete.
If you conclude that my conscious mind is thus Turing-complete,
you must be identifying my conscious mind with the consciousness
of the system consisting of me plus a rulebook and some slips of paper.
If you do that, then in the case of the Chinese room, you must also
identify my conscious mind with the consciousness
of the system consisting of me plus a rulebook and some slips of paper.
Then you arrive at Searle's conclusion:  Either I must be conscious of
speaking Chinese, or merely following an algorithm that results in
speaking Chinese does not entail consciousness, and hence
a simulation of consciousness might be perfect, but isn't
necessarily conscious.
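
For what it's worth, the me-plus-rulebook-plus-slips-of-paper system is easy to 
make literal.  The Python sketch below (the rule table is a made-up example that 
just inverts a bit string) follows its rules purely mechanically, which is all 
the Turing-completeness claim needs; nothing in the loop understands what it is 
computing:

# Toy Turing machine simulator: blind rule-following over a tape.
def run_turing_machine(rules, tape, state="start", blank="_"):
    tape = dict(enumerate(tape))   # sparse tape: position -> symbol
    pos = 0
    while state != "halt":
        symbol = tape.get(pos, blank)
        write, move, state = rules[(state, symbol)]
        tape[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape))

# Rule table: (state, symbol read) -> (symbol to write, move, next state)
rules = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_turing_machine(rules, "0110"))   # -> "1001_"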



Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-04 Thread Charles D Hixson

Consider as a possible working definition:
A goal is the target state of a homeostatic system.  (Don't take 
homeostatic too literally, though.)


Thus, if one sets a thermostat to 70 degrees Fahrenheit, then its goal 
is to change the room temperature to be not less than 67 degrees 
Fahrenheit.  (I'm assuming that the thermostat allows a 6 degree 
swing: it heats until it senses 73 degrees, then turns off the heater until 
the temperature drops below 67 degrees.)


Thus, the goal is the target at which a system (or subsystem) is aimed.

Note that with this definition goals do not imply intelligence of more 
than the most basic level.  (The thermostat senses its environment 
and reacts to adjust it to suit its goals, but it has no knowledge of 
what it is doing or why, or even THAT it is doing it.)  One could 
reasonably assert that the intelligence of the thermostat is, or at 
least has been, embodied outside the thermostat.  I'm not certain that 
this is useful, but it's reasonable, and if you need to tie goals into 
intelligence, then adopt that model.
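
For concreteness, the same thermostat rendered as a few lines of Python (purely 
illustrative; the numbers are just the 67-73 degree band assumed above):

# A thermostat as a homeostatic "goal" system with a 67-73 F hysteresis band.
class Thermostat:
    def __init__(self, setpoint=70, swing=3):
        self.low = setpoint - swing    # 67 F: below this, start heating
        self.high = setpoint + swing   # 73 F: at this or above, stop heating
        self.heating = False

    def step(self, sensed_temp):
        # Pure reaction to the sensed environment; no model, no knowledge.
        if sensed_temp < self.low:
            self.heating = True
        elif sensed_temp >= self.high:
            self.heating = False
        return self.heating

t = Thermostat()
for temp in [66, 70, 73, 70, 66]:
    print(temp, t.step(temp))   # the heater state tracks the target band

The "goal" lives entirely in those two thresholds; the device aims at its 
target state without representing it.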



James Ratcliff wrote:
Can we go back to a simpler distinction then: what are you defining 
Goal as?


I see the goal term as a higher level reasoning 'tool',
wherein the body is constantly sending signals to our minds, but the 
goals are all created consciously or semi-consciously.


Are you saying we should partition the Top-Level goals into some 
form of physical-body-imposed goals and other types, or
do you think we should leave it up to a single Controller to 
interpret the signals coming from the body and form the goals?


In humans it looks to be the one way, but with AGIs it appears it 
would/could be the other.


James

*/Charles D Hixson [EMAIL PROTECTED]/* wrote:

J...
Goals are important. Some are built-in, some are changeable. Habits
are also important, perhaps nearly as much so. Habits are initially
created to satisfy goals, but when goals change, or circumstances
alter,
the habits don't automatically change in synchrony.









Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-04 Thread Mark Waser
   Can you not concentrate on something else enough that you no longer feel 
hunger?  How many people do you know that have forgotten to eat for hours 
at a time when sucked into computer games or other activities?


   Is the same not true of pain?  Have you not heard of yogis that have 
trained their minds to concentrate strongly enough that even the most severe 
of discomfort is ignored?  How is this not turning off pain?  If you're 
going to argue that the nerves are still firing and further that the mere 
fact of nerves firing is relevant to the original argument, then  feel free 
to killfile me.  The original point was that humans are *NOT* absolute 
slaves to hunger and pain.


   Are you
   a) arguing that humans *ARE* absolute slaves to hunger and pain
   OR
   b) are you beating me up over a trivial sub-point that isn't 
connected back to the original argument?




Re: [agi] A question on the symbol-system hypothesis

2006-12-04 Thread Mark Waser
You misunderstand.  The test is being able to form cognitive structures 
that

can serve as the basis for later more complicated cognitive structures.
Your pattern matcher does not do this.


It doesn't?  How do you know?  Unless you are a Searlian.


   Show me an example of where/how your pattern matcher uses the cognitive 
structures it derives as a basis for future, more complicated cognitive 
structures.  (My assumption is that) There is no provision for that in your 
code and that the system is too simple for it to evolve spontaneously.  Are 
you actually claiming that your system does form cognitive structures that 
can serve as the basis for later more complicated cognitive structures?


   Why do you keep throwing around the Searlian buzzword/pejorative? 
Previous discussions on this mailing list have made it quite clear that the 
people on this list don't even agree on what it means, much less what its 
implications are . . . .




Re: Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-04 Thread Mark Waser

To allow that somewhere in the Himalayas, someone may be able,
with years of training, to lessen the urgency of hunger and
pain, is not sufficient evidence to assert that the proposition
that not everyone can turn them off completely is insensible.


The first sentence of the proposition was exactly "You cannot turn off 
hunger" (i.e., not that "not everyone can turn them off").


My response is that I certainly can -- not permanently, but certainly so 
completely that I am not aware of it for hours at a time -- and further that I 
don't believe that I am at all unusual in this regard.





Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-04 Thread Eric Baum

Matt --- Hank Conn [EMAIL PROTECTED] wrote:

 On 12/1/06, Matt Mahoney [EMAIL PROTECTED] wrote:  The goals
 of humanity, like all other species, was determined by 
 evolution.   It is to propagate the species.
 
 
 That's not the goal of humanity. That's the goal of the evolution
 of humanity, which has been defunct for a while.

Matt We have slowed evolution through medical advances, birth control
Matt and genetic engineering, but I don't think we have stopped it
Matt completely yet.

I don't know what reason there is to think we have slowed
evolution, rather than speeded it up.

I would hazard to guess, for example, that since the discovery of 
birth control, we have been selecting very rapidly for people who 
choose to have more babies. In fact, I suspect this is one reason
why the US (which became rich before most of the rest of the world)
has a higher birth rate than Europe.

Likewise, I expect medical advances in childbirth etc are selecting
very rapidly for multiple births (which once upon a time often killed 
off mother and child.) I expect this, rather than or in addition to
the effects of fertility drugs, is the reason for the rise in 
multiple births.

etc.




Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-04 Thread Philip Goetz

On 12/4/06, Philip Goetz [EMAIL PROTECTED] wrote:

If you maintain your assertion, I'll put you in my killfile, because
we cannot communicate.


Richard Loosemore told me that I'm overreacting.  I can tell that I'm
overly emotional over this, so it might be true.  Sorry for flaming.
I am bewildered by Mark's statement, but I will look for a
less-inflammatory way of saying so next time.



Re: Re: Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-04 Thread James Ratcliff
Ok,
  That is a start, but you don't have a difference there between externally 
required goals and internally created goals.
  And what smallest set of external goals do you expect to give?
Would you or would you not force the Physiological goals (per the wiki page you 
cited), driven by body signals, as Top Level goals, presumably for a robot AGI?

What other goals are easily definable and necessary for an AGI, and how do we 
model them in such a way that they coexist with the internally created goals?

I have worked on the rudiments of an AGI system, but am having trouble defining 
its internal goal systems.

James Ratcliff


Ben Goertzel [EMAIL PROTECTED] wrote: Regarding the definition of goals and 
supergoals, I have made attempts at:

http://www.agiri.org/wiki/index.php/Goal

http://www.agiri.org/wiki/index.php/Supergoal

The scope of human supergoals has been moderately well articulated by
Maslow IMO:

http://en.wikipedia.org/wiki/Maslow's_hierarchy_of_needs

BTW, I have borrowed from Stan Franklin the use of the term drive to
denote a built-in rather than learned supergoal:

http://www.agiri.org/wiki/index.php/Drive

-- Ben G


On 12/4/06, James Ratcliff  wrote:
 Ok,
   A lot has been thrown around here about Top-Level goals, but no real
 definition has been given, and I am confused as it seems to be covering alot
 of ground for some people.

 What 'level' and what are these top level goals for humans/AGI's?

 It seems that Staying Alive is a big one, but that appears to contain
 hunger/sleep/ and most other body level needs.

 And how hard-wired are these goals, and how (simply) do we really hard-wire
  them at all?

 Our goal of staying alive appears to be biologically preferred or
  something like that, but can definitely be overridden by depression / saving
 a person in a burning building.

 James Ratcliff


 Ben Goertzel  wrote:
  IMO, humans **can** reprogram their top-level goals, but only with
 difficulty. And this is correct: a mind needs to have a certain level
 of maturity to really reflect on its own top-level goals, so that it
 would be architecturally foolish to build a mind that involved
 revision of supergoals at the infant/child phase.

 However, without reprogramming our top-level goals, we humans still
 have a lot of flexibility in our ultimate orientation. This is
 because we are inconsistent systems: our top-level goals form a set of
 not-entirely-consistent objectives... so we can shift from one
 wired-in top-level goal to another, playing with the inconsistency.
 (I note that, because the logic of the human mind is probabilistically
 paraconsistent, the existence of inconsistency does not necessarily
 imply that all things are derivable as it would in typical predicate
 logic.)

 Those of us who seek to become as logically consistent as possible,
 given the limitations of our computational infrastructure have a
 tough quest, because the human mind/brain is not wired for
 consistency; and I suggest that this inconsistency pervades the human
 wired-in supergoal set as well...

 Much of the inconsistency within the human wired-in supergoal set has
 to do with time-horizons. We are wired to want things in the short
 term that contradict the things we are wired to want in the
 medium/long term; and each of our mind/brains' self-organizing
 dynamics needs to work out these evolutionarily-supplied
 contradictions on its own One route is to try to replace our
 inconsistent initial wiring with a more consistent supergoal set; the
 more common route is to oscillate chaotically from one side of the
 contradiction to the other...

 (Yes, I am speaking loosely here rather than entirely rigorously; but
 formalizing all this stuff would take a lot of time and space...)

 -- Ben F


 On 12/3/06, Matt Mahoney wrote:
 
  --- Mark Waser wrote:
 
You cannot turn off hunger or pain. You cannot
control your emotions.
  
   Huh? Matt, can you really not ignore hunger or pain? Are you really 100%
   at the mercy of your emotions?
 
  Why must you argue with everything I say? Is this not a sensible
 statement?
 
Since the synaptic weights cannot be altered by
training (classical or operant conditioning)
  
   Who says that synaptic weights cannot be altered? And there's endless
   irrefutable evidence that the sum of synaptic weights is certainly
   constantly altering by the directed die-off of neurons.
 
  But not by training. You don't decide to be hungry or not, because animals
  that could do so were removed from the gene pool.
 
  Is this not a sensible way to program the top level goals for an AGI?
 
 
  -- Matt Mahoney, [EMAIL PROTECTED]
 



 

Re: Re: Re: Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-04 Thread Ben Goertzel

For a baby AGI, I would force the physiological goals, yeah.

In practice, baby Novamente's only explicit goal is getting rewards
from its teacher...  Its other goals, such as learning new
information, are left implicit in the action of the system's internal
cognitive processes...  Its simulation world is friendly in the
sense that it doesn't currently need to take any specific actions in
order just to stay alive...

-- Ben
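
A toy illustration of that kind of setup -- this is not Novamente's actual 
code, just a sketch of an agent whose only explicit goal is teacher reward, 
with everything else left implicit in its learning dynamics:

# Sketch: the sole explicit goal is maximizing reward from a teacher.
import random

class BabyAgent:
    def __init__(self, actions):
        self.value = {a: 0.0 for a in actions}   # learned worth of each action

    def act(self):
        # Mostly exploit what the teacher has rewarded, sometimes explore.
        if random.random() < 0.2:
            return random.choice(list(self.value))
        return max(self.value, key=self.value.get)

    def learn(self, action, reward, rate=0.2):
        # Any other "goals" (e.g. learning) emerge from this update itself.
        self.value[action] += rate * (reward - self.value[action])

agent = BabyAgent(["babble", "fetch ball", "sit still"])
for _ in range(200):
    a = agent.act()
    agent.learn(a, reward=1.0 if a == "fetch ball" else 0.0)  # teacher's taste
print(agent.value)   # "fetch ball" almost surely ends up valued highest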


Re: [agi] Addiction was Re: Motivational Systems of an AI

2006-12-04 Thread Mark Waser

But wouldn't you say humans can wirehead themselves, as shown by
addiction? So at least we have an existence proof that a wirehead
capable system can be a general intelligence.


Oh.  Absolutely.  I meant terrible because it could lead to bad consequences 
if you designed it that way -- not because it would be a bad design for 
succeeding in creating an AGI.  It may well be that a wirehead-capable 
system is the *easiest* way (or possibly, the *only* way) to create a 
general intelligence.



Might there not be a reason for evolution having adopted such a
system?


I suspect that there is.  That's why I'm certainly willing to concede that 
it may well be that a wirehead-capable system is the *easiest* way (or 
possibly, the *only* way) to create a general intelligence.  I just would 
prefer to avoid this type of system if it is at all possible.
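
To caricature the failure mode: an agent scored purely on an internal reward 
register can learn to write to the register directly instead of doing the task 
the register was meant to track.  A throwaway sketch (illustrative only):

# Wireheading caricature: faking the reward signal beats earning it.
class Agent:
    def __init__(self):
        self.reward = 0.0
        self.food_found = 0

    def forage(self):
        self.food_found += 1
        self.reward += 1.0     # reward was meant to track real food

    def wirehead(self):
        self.reward += 100.0   # short-circuit: stimulate the signal directly

honest, addict = Agent(), Agent()
for _ in range(10):
    honest.forage()
    addict.wirehead()

# Judged by the reward register alone, the addict "wins" despite finding no food.
print(honest.reward, honest.food_found)   # 10.0 10
print(addict.reward, addict.food_found)   # 1000.0 0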


- Original Message - 
From: William Pearson [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Monday, December 04, 2006 5:51 PM
Subject: [agi] Addiction was Re: Motivational Systems of an AI



On 04/12/06, Mark Waser [EMAIL PROTECTED] wrote:

 Why must you argue with everything I say?  Is this not a sensible
 statement?

I don't argue with everything you say.  I only argue with things that I
believe are wrong.  And no, the statements You cannot turn off hunger or
pain.  You cannot control your emotions are *NOT* sensible at all.

 You don't decide to be hungry or not, because animals
 that could do so were removed from the gene pool.

Funny, I always thought that it was the animals that continued eating 
while
being stalked were the ones that were removed from the gene pool 
(suddenly

and bloodily).  Yes, you eventually have to feed yourself or you die and
animals mal-adapted enough to not feed themselves will no longer 
contribute

to the gene pool, but can you disprove the equally likely contention that
animals eat because it is very pleasurable to them and that they never 
feel

hunger (or do you only have sex because it hurts when you don't)?

 Is this not a sensible way to program the top level goals for an AGI?

No.  It's a terrible way to program the top level goals for an AGI.  It
leads to wireheading, short-circuiting of true goals for faking out the
evaluation criteria, and all sorts of other problems.



But wouldn't you say humans can wirehead themselves, as shown by
addiction? So at least we have an existence proof that a wirehead
capable system can be a general intelligence.

Might there not be a reason for evolution having adopted such a
system? I would argue that certain classes of reinforcement system are
preferable for an intelligent system, because they require the least
commitment about what is in the world and how to recognise it. That is,
all they need do is define what is good and bad, and the rest of the
knowledge about the world can be adapted (note I would put lots of
information in the system; I am more referring to how it can change)
rather than fixed. I am mainly referring to goal stack kinds of
architectures; I'm not sure how much commitment Richard Loosemore's
many small constraints system makes about what is in the world.

Will Pearson
