Re: [agi] A question on the symbol-system hypothesis

2006-12-26 Thread Philip Goetz

On 12/2/06, Matt Mahoney [EMAIL PROTECTED] wrote:


I know a little about network intrusion anomaly detection (it was my
dissertation topic), and yes, it is an important lesson.

The reason such anomalies occur is
because when attackers craft exploits, they follow enough of the protocol to
make it work but often don't care about the undocumented conventions followed
by normal servers and clients.  For example, they may use lower case commands
where most software uses upper case, or they may put unusual but legal values
in the TCP or IP-ID fields or a hundred other things that make the attack
stand out.


Yes, that's what I eventually concluded - but I concluded it by studying
the input data, not by studying the system's internal data.
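A minimal sketch, in Python, of the kind of convention check described above; the field names, the "normal" conventions, and the example requests are invented for illustration, not taken from any real detector:

def anomaly_score(request):
    """Count departures from conventions that normal clients follow
    even though the protocol itself does not require them."""
    score = 0
    # Most legitimate clients send protocol commands in upper case.
    if request.get("command", "").islower():
        score += 1
    # An IP-ID of zero is legal but unusual in normal traffic.
    if request.get("ip_id") == 0:
        score += 1
    # Mainstream clients typically include this optional field.
    if "user_agent" not in request:
        score += 1
    return score

crafted = {"command": "helo", "ip_id": 0}                        # exploit-style
normal = {"command": "HELO", "ip_id": 31337, "user_agent": "stock-client"}
print(anomaly_score(crafted), anomaly_score(normal))             # 3 0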



Re: Marvin and The Emotion Machine [WAS Re: [agi] A question on the symbol-system hypothesis]

2006-12-14 Thread Ricardo Barreira

On 12/13/06, Philip Goetz [EMAIL PROTECTED] wrote:

On 12/5/06, BillK [EMAIL PROTECTED] wrote:

It is a little annoying that he doesn't mention Damasio at all, when
Damasio has been pushing this same thesis for nearly 20 years, and
even popularized it in Descartes' Error.

(Disclaimer: I didn't read The Emotion Machine; my computer read it for me.)


He does mention António Damásio in chapter 7:

http://web.media.mit.edu/~minsky/E7/eb7.html

Search for Damasio there. It's just a brief mention of one of the
examples given in Descartes' Error...

Ricardo



Re: Marvin and The Emotion Machine [WAS Re: [agi] A question on the symbol-system hypothesis]

2006-12-13 Thread Philip Goetz

On 12/5/06, BillK [EMAIL PROTECTED] wrote:

The good news is that Minsky appears to be making the book available
online at present on his web site. *Download quick!*

http://web.media.mit.edu/~minsky/
See under publications, chapters 1 to 9.
The Emotion Machine 9/6/2006( 1 2 3 4 5 6 7 8 9 )


It is a little annoying that he doesn't mention Damasio at all, when
Damasio has been pushing this same thesis for nearly 20 years, and
even popularized it in Descartes' Error.

(Disclaimer: I didn't read The Emotion Machine; my computer read it for me.)



Re: Re: Re: Re: [agi] A question on the symbol-system hypothesis

2006-12-05 Thread BillK

On 12/4/06, Mark Waser  wrote:


Explaining our actions is the reflective part of our minds evaluating the
reflexive part of our mind.  The reflexive part of our minds, though,
operates analogously to a machine running on compiled code with the
compilation of code being largely *not* under the control of our conscious
mind (though some degree of this *can* be changed by our conscious minds).
The more we can correctly interpret and affect/program the reflexive part of
our mind with the reflective part, the more intelligent we are.  And,
translating this back to the machine realm circles back to my initial point,
the better the machine can explain its reasoning and use its explanation
to improve its future actions, the more intelligent the machine is (or, in
reverse, no explanation = no intelligence).



Your reasoning is getting surreal.

As Ben tried to explain to you, 'explaining our actions' is our
consciousness dreaming up excuses for what we want to do anyway.  Are
you saying that the more excuses we can think up, the more intelligent
we are? (Actually there might be something in that!).

You seem to have a real difficulty in admitting that humans behave
irrationally for a lot (most?) of the time. Don't you read newspapers?
You can redefine rationality if you like to say that all the crazy
people are behaving rationally within their limited scope, but what's
the point? Just admit their behaviour is not rational.

Every time someone (subconsciously) decides to do something, their
brain presents a list of reasons to go ahead. The reasons against are
ignored, or weighted down to be less preferred. This applies to
everything from deciding to get a new job to deciding to sleep with
your best friend's wife. Sometimes a case arises when you really,
really want to do something that you *know* is going to end in
disaster, ruined lives, ruined career, etc. and it is impossible to
think of good reasons to proceed. But you still go ahead anyway,
saying that maybe it won't be so bad, maybe nobody will find out, it's
not all my fault anyway, and so on.

Human decisions and activities are mostly emotional and irrational.
That's the way life is. Because life is uncertain and unpredictable,
human decisions are based on best guesses, gambles and basic
subconscious desires.

An AGI will have to cope with this mess. Basing an AGI on iron logic
and 'rationality' alone will lead to what we call 'inhuman'
ruthlessness.
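A minimal sketch, in Python, of the weighting just described, where the reasons against an already-desired action get discounted; the reasons and numbers are invented for illustration:

reasons_for = {"really want to": 0.9, "maybe nobody will find out": 0.3}
reasons_against = {"ruined career": 0.9, "ruined lives": 0.8}

def decide(discount_against):
    # discount_against < 1.0 models down-weighting the inconvenient reasons
    # once the action is already (subconsciously) desired.
    pro = sum(reasons_for.values())
    con = discount_against * sum(reasons_against.values())
    return "go ahead" if pro > con else "hold back"

print(decide(1.0))   # hold back (1.2 vs 1.7)
print(decide(0.5))   # go ahead  (1.2 vs 0.85)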


BillK



Re: Re: Re: Re: [agi] A question on the symbol-system hypothesis

2006-12-05 Thread Mike Dougherty

On 12/5/06, BillK [EMAIL PROTECTED] wrote:


Your reasoning is getting surreal.

You seem to have a real difficulty in admitting that humans behave
irrationally for a lot (most?) of the time. Don't you read newspapers?
You can redefine rationality if you like to say that all the crazy
people are behaving rationally within their limited scope, but what's
the point? Just admit their behaviour is not rational.

Human decisions and activities are mostly emotional and irrational.
That's the way life is. Because life is uncertain and unpredictable,
human decisions are based on best guesses, gambles and basic
subconscious desires.

What's the point? -- I think that's an even better question than defining
degrees of local rationality (good) vs. irrationality (bad).  The whole notion
of arbitrarily defining subjective terms as good or better or bad seems
foolish.

If we're going to talk about evolutionary psychology as a motivator for
actions and attribute reactions to stimuli or environmental pressures, then
it seems egocentric to apply labels like rational to any of the
observations.

Within the scope of these discussions, we put ourselves in a superior,
non-human point of view from which we can discuss human decisions like
animals in a zoo.  For some threads it is useful to approach the subject
that way.  For most, it illustrates a particular trait of the biased
selection of the humans who participate in this list.

hmm...  just an observation...



Re: [agi] A question on the symbol-system hypothesis

2006-12-05 Thread Mark Waser

Are
you saying that the more excuses we can think up, the more intelligent
we are? (Actually there might be something in that!).


Sure.  Absolutely.  I'm perfectly willing to contend that it takes 
intelligence to come up with excuses and that more intelligent people can 
come up with more and better excuses.  Do you really want to contend the 
opposite?



You seem to have a real difficulty in admitting that humans behave
irrationally for a lot (most?) of the time.


You're reading something into my statements that I certainly don't mean to 
be there.  Humans behave irrationally a lot of the time.  I consider this 
fact a defect or shortcoming in their intelligence (or make-up).  Just 
because humans have a shortcoming doesn't mean that another intelligence 
will necessarily have the same shortcoming.



Every time someone (subconsciously) decides to do something, their
brain presents a list of reasons to go ahead. The reasons against are
ignored, or weighted down to be less preferred. This applies to
everything from deciding to get a new job to deciding to sleep with
your best friend's wife. Sometimes a case arises when you really,
really want to do something that you *know* is going to end in
disaster, ruined lives, ruined career, etc. and it is impossible to
think of good reasons to proceed. But you still go ahead anyway,
saying that maybe it won't be so bad, maybe nobody will find out, it's
not all my fault anyway, and so on.


Yup.  Humans are not as intelligent as they could be.  Generally, they place
way too much weight on near-term effects and not enough weight on long-term
effects.  Actually, though, I'm not sure whether you'd classify that as
intelligence or wisdom.  Many bright people *do* know all of what
you're saying and still go ahead.  This is certainly some form of
defect; I'm not sure where you'd classify it, though.



Human decisions and activities are mostly emotional and irrational.


I think that this depends upon the person.  For the majority of humans,
maybe -- but I'm not willing to accept that this applies to each individual
human, that their decisions and activities are mostly emotional and
irrational.  I believe that there are some humans for whom this is not the
case.



That's the way life is. Because life is uncertain and unpredictable,
human decisions are based on best guesses, gambles and basic
subconscious desires.


Yup, we've evolved to be at least minimally functional though not optimal.


An AGI will have to cope with this mess.


Yes, so far I'm in total agreement with everything you've said . . . .


Basing an AGI on iron logic
and 'rationality' alone will lead to what we call 'inhuman'
ruthlessness.


. . . until now, where you make an unsupported blanket statement that doesn't
appear to me to be related to any of the above (and which may be entirely
accurate or inaccurate depending upon what you mean by ruthless -- but I believe
that it would take a very contorted definition of ruthless to make it
accurate -- though inhuman should obviously be accurate).


Part of the problem is that 'rationality' is a very emotion-laden term with 
a very slippery meaning.  Is doing something because you really, really want 
to despite the fact that it most probably will have bad consequences really 
irrational?  It's not a wise choice but irrational is a very strong term . . 
. . (and, as I pointed out previously, such a decision *is* rationally made 
if you have bad weighting in your algorithm -- which is effectively what 
humans have -- or not, since it apparently has been evolutionarily selected 
for).


And logic isn't necessarily so iron if the AGI has built-in biases for
conversation and relationships (both of which are rationally derivable from
its own self-interest).


I think that you've been watching too much Star Trek where logic and 
rationality are the opposite of emotion.  That just isn't the case.  Emotion 
can be (and is most often noted when it is) contrary to logic and 
rationality -- but it is equally likely to be congruent with them (and even 
more so in well-balanced and happy individuals).





Re: [agi] A question on the symbol-system hypothesis

2006-12-05 Thread Mark Waser
Talk about fortuitous timing . . . . here's a link on Marvin Minsky's latest 
about emotions and rational thought

http://www.boston.com/news/globe/health_science/articles/2006/12/04/minsky_talks_about_life_love_in_the_age_of_artificial_intelligence/

The most relevant line to our conversation is: "Called The Emotion Machine, it
argues that, contrary to popular conception, emotions aren't distinct from
rational thought; rather, they are simply another way of thinking, one that
computers could perform."


Re: Re: Re: Re: [agi] A question on the symbol-system hypothesis

2006-12-05 Thread James Ratcliff


BillK [EMAIL PROTECTED] wrote: On 12/4/06, Mark Waser  wrote:

 Explaining our actions is the reflective part of our minds evaluating the
 reflexive part of our mind.  The reflexive part of our minds, though,
 operates analogously to a machine running on compiled code with the
 compilation of code being largely *not* under the control of our conscious
 mind (though some degree of this *can* be changed by our conscious minds).
 The more we can correctly interpret and affect/program the reflexive part of
 our mind with the reflective part, the more intelligent we are.  And,
 translating this back to the machine realm circles back to my initial point,
 the better the machine can explain its reasoning and use its explanation
 to improve its future actions, the more intelligent the machine is (or, in
 reverse, no explanation = no intelligence).


Your reasoning is getting surreal.

As Ben tried to explain to you, 'explaining our actions' is our
consciousness dreaming up excuses for what we want to do anyway.  Are
you saying that the more excuses we can think up, the more intelligent
we are? (Actually there might be something in that!).

You seem to have a real difficulty in admitting that humans behave
irrationally for a lot (most?) of the time. Don't you read newspapers?
You can redefine rationality if you like to say that all the crazy
people are behaving rationally within their limited scope, but what's
the point? Just admit their behaviour is not rational.

Every time someone (subconsciously) decides to do something, their
brain presents a list of reasons to go ahead. The reasons against are
ignored, or weighted down to be less preferred. This applies to
everything from deciding to get a new job to deciding to sleep with
your best friend's wife. Sometimes a case arises when you really,
really want to do something that you *know* is going to end in
disaster, ruined lives, ruined career, etc. and it is impossible to
think of good reasons to proceed. But you still go ahead anyway,
saying that maybe it won't be so bad, maybe nobody will find out, it's
not all my fault anyway, and so on.

Human decisions and activities are mostly emotional and irrational.
That's the way life is. Because life is uncertain and unpredictable,
human decisions are based on best guesses, gambles and basic
subconscious desires.

An AGI will have to cope with this mess. Basing an AGI on iron logic
and 'rationality' alone will lead to what we call 'inhuman'
ruthlessness.


BillK

You just rationalized the reasons for human choice in your above argument
yourself :}
MOST humans act rationally MOST of the time.  They may not make 'good'
decisions, but they are rational ones: if you decide to sleep with your best
friend's wife, you do so because you are attracted to her and want her, and you
rationalize that you will probably not get caught.  You have stated the reasons,
and you move ahead with that plan.
  Vague stuff you can't easily rationalize is why you like the appearance of
someone's face, or why you like this flavor of ice cream.  Those are hard to
rationalize, but much of our behaviour is easier.
  Now about building a rational vs. non-rational AGI: how would you go about
modeling a non-rational part of it?  Short of a random number generator?

  For the most part we Do want a rational AGI, and it DOES need to explain
itself.  One of the first tasks of AGI will be to replace all of the current
expert systems in fields like medicine.
  For these it is not merely good enough to say (as a Doctor AGI), I think he
has this cancer, and you should treat him with this strange procedure.  There
must be an accounting that it can present to other doctors and say, yes, I
noticed a correlation between these factors that led me to believe this, with
this certainty.  An early AI must also prove its merit by explaining what it
is doing, to build up a level of trust.
   Further, it is important in another fashion, in that we can turn around and
use these smart AIs to further train other doctors or specialists with the
AGI's explanations.

Now for some tasks it will not be able to do this, or not within a small amount
of data and explanations.  The level to which it is able to generalize this
information will reflect its usefulness and possibly its intelligence.

In the Halo experiment for the Chemistry AP exam, they were graded not only on
correct answers but also on their explanations of how they got to those answers.
Some of the explanations were short, concise, and well reasoned; some of them,
though, went down to a very basic level of detail and lasted for a couple of
pages.

If you are flying to Austin and asking an AGI to plan your route, and it
chooses an airline that sounds dodgy and that you have never heard of, mainly
because it was cheap or for some other reason, you definitely want to know why
it chose that, and to tell it not to weight that feature as highly.
  For many decisions I believe a small feature set is required, with the larger
possible features being so lowly weighted as to not have much impact.
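A minimal sketch, in Python, of that kind of explainable, weighted choice; the feature names, weights, and airline data are invented for illustration:

weights = {"price": 0.7, "reputation": 0.2, "duration": 0.1}

flights = {
    "DodgyAir": {"price": 0.9, "reputation": 0.2, "duration": 0.6},
    "MajorAir": {"price": 0.5, "reputation": 0.9, "duration": 0.7},
}

def score(option):
    # Return the total score plus the per-feature contributions, which serve
    # as the explanation of why an option won or lost.
    parts = {f: weights[f] * v for f, v in flights[option].items()}
    return sum(parts.values()), parts

for name in flights:
    total, parts = score(name)
    print(name, round(total, 2), parts)

# DodgyAir wins here mainly through the heavily weighted "price" feature;
# lowering weights["price"], as the user asks above, flips the choice.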

Re: [agi] A question on the symbol-system hypothesis

2006-12-05 Thread Mark Waser
 Now about building a rational vs non-rational AGI, how would you go about 
 modeling a non-rational part of it?  Short of a random number generator?

Why would you want to build a non-rational AGI?  It seems like a *really* bad 
idea.  I think I'm missing your point here.

 For the most part we Do want a rational AGI, and it DOES need to explain
 itself.  One of the first tasks of AGI will be to replace all of the current
 expert systems in fields like medicine.

Yep.  That's my argument and you expand it well.

 Now for some tasks it will not be able to do this, or not within a small
 amount of data and explanations.  The level to which it is able to generalize
 this information will reflect its usefulness and possibly its intelligence.

Yep.  You're saying exactly what I'm thinking.

  For many decisions I believe a small feature set is required, with the 
 larger possible features being so lowly weighted as to not have much impact.

This is where Ben and I are sort of having a debate.  I agree with him that the 
brain may well be using the larger number since it is massively parallel and it 
therefore can.  I think that we differ on whether or not the larger number is
required for AGI (Me = No, Ben = Yes) -- which reminds me . . .

Hey Ben, if the larger number IS required for AGI, how do you intend to do this 
in a computationally feasible way in a non-massively-parallel system?






Re: [agi] A question on the symbol-system hypothesis

2006-12-05 Thread James Ratcliff
Mark Waser [EMAIL PROTECTED] wrote:  Are
 you saying that the more excuses we can think up, the more intelligent
 we are? (Actually there might be something in that!).

Sure.  Absolutely.  I'm perfectly willing to contend that it takes 
intelligence to come up with excuses and that more intelligent people can 
come up with more and better excuses.  Do you really want to contend the 
opposite?

 You seem to have a real difficulty in admitting that humans behave
 irrationally for a lot (most?) of the time.

You're reading something into my statements that I certainly don't mean to 
be there.  Humans behave irrationally a lot of the time.  I consider this 
fact a defect or shortcoming in their intelligence (or make-up).  Just 
because humans have a shortcoming doesn't mean that another intelligence 
will necessarily have the same shortcoming.

 Every time someone (subconsciously) decides to do something, their
 brain presents a list of reasons to go ahead. The reasons against are
 ignored, or weighted down to be less preferred. This applies to
 everything from deciding to get a new job to deciding to sleep with
 your best friend's wife. Sometimes a case arises when you really,
 really want to do something that you *know* is going to end in
 disaster, ruined lives, ruined career, etc. and it is impossible to
 think of good reasons to proceed. But you still go ahead anyway,
 saying that maybe it won't be so bad, maybe nobody will find out, it's
 not all my fault anyway, and so on.

Yup.  Humans are not as intelligent as they could be.  Generally, they place 
way too much weight on near-term effect and not enough weight on long-term 
effects.  Actually, though, I'm not sure whether you classify that as 
intelligence or wisdom.  For many bright people, they *do* know all of what 
you're saying and they still go ahead.  This is certainly some form of 
defect, I'm not sure where you'd classify it though.

 Human decisions and activities are mostly emotional and irrational.

I think that this depends upon the person.  For the majority of humans, 
maybe -- but I'm not willing to accept this as applying to each individual 
human that their decisions and activities are mostly emotional and 
irrational.  I believe that there are some humans where this is not the 
case.

 That's the way life is. Because life is uncertain and unpredictable,
 human decisions are based on best guesses, gambles and basic
 subconscious desires.

Yup, we've evolved to be at least minimally functional though not optimal.

 An AGI will have to cope with this mess.

Yes, so far I'm in total agreement with everything you've said . . . .

 Basing an AGI on iron logic
 and 'rationality' alone will lead to what we call 'inhuman'
 ruthlessness.

. . . until now where you make an unsupported blanket statement that doesn't 
appear to me at all related to any of the above (and which may be entirely 
accurate or inaccurate based upon what you mean by ruthless -- but I believe 
that it would take a very contorted definition of ruthless to make it 
accurate -- though inhuman should obviously be accurate).

Part of the problem is that 'rationality' is a very emotion-laden term with 
a very slippery meaning.  Is doing something because you really, really want 
to despite the fact that it most probably will have bad consequences really 
irrational?  It's not a wise choice but irrational is a very strong term . . 
. . (and, as I pointed out previously, such a decision *is* rationally made 
if you have bad weighting in your algorithm -- which is effectively what 
humans have -- or not, since it apparently has been evolutionarily selected 
for).

And logic isn't necessarily so iron if the AGI has built-in biases for 
conversation and relationships (both of which are rationally derivable from 
it's own self-interest).
its own self-interest).

I think that you've been watching too much Star Trek where logic and 
rationality are the opposite of emotion.  That just isn't the case.  Emotion 
can be (and is most often noted when it is) contrary to logic and 
rationality -- but it is equally likely to be congruent with them (and even 
more so in well-balanced and happy individuals).



You have hinted around it, but I would go one step further and say that Emotion
is NOT contrary to logic.  In any way, really, they can't be compared like that.
Logic even 'uses' emotion as input.  The decisions we make are based on rules
and facts we know, and on our emotions, but still logically.
  What emotions often contradict is our actual ability to make good decisions /
plans.
  If we do something stupid because of our anger or emotions, there is still a
causal, logical explanation for it.
  So humans and AGIs may be irrational, but hopefully not illogical.  If a
decision is illogical, that implies it was made without any logical reasoning,
so possibly at random.  An AGI will need some level of randomness, but not for
general things.

James Ratcliff


___
James 

Re: [agi] A question on the symbol-system hypothesis

2006-12-05 Thread Mark Waser
 You have hinted around it, but I would go one step further and say that 
 Emotion is NOT contrary to logic.

:-) I thought that my last statement that emotion is equally likely to be 
congruent with logic and reason was a lot more than a hint (unless congruent 
doesn't mean not contrary like I think/thought it did  :-)

I liked your distinction between illogical and irrational -- though I'm not 
sure that others would agree with your using irrational that way.

Re: [agi] A question on the symbol-system hypothesis

2006-12-05 Thread James Ratcliff
Yes, I could not find a decent definition of irrational at first:
Amending my statements now...

Using the Wiki basis below: the term is used to describe thinking and actions 
which are, or appear to be, less useful or logical than the rational 
alternatives.

I would remove the 'logical' portion of this, because of the examples given
below: emotions, fads, stock markets.
These decisions are all made using logic, with emotions contributing to a
choice, or a choice being made because we see others wearing the same clothes,
or based on our (possibly incorrect) beliefs about what the stock market may do.
  The other possibility is to actually use the knowledge incorrectly.  If I
have all the rules about a stock that would point to it going down, but I still
purchase it and believe it will go up, I am using the logic incorrectly.

  So possibly irrationality could be amended to be something like: basing a 
decision on faulty information, or incorrectly using logic to arrive at a 
choice.

So for my AGI application, I would indeed then model the irrationality in the
form of emotions / fads etc., as logical components, and it would implicitly be
irrational because it could have faulty information.  And incorrectly using the
logic it has would only happen if there were an error.
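A minimal sketch, in Python, of that amended definition: the inference rule is sound, and the irrationality enters only through a faulty belief; the stock probabilities and threshold are invented for illustration:

def decide(believed_p_up, threshold=0.6):
    # Sound rule: buy only if the believed chance of the price rising
    # clears the threshold.  The belief itself may be wrong.
    return "buy" if believed_p_up > threshold else "hold"

true_p_up = 0.2        # what the indicators actually suggest
faulty_belief = 0.8    # what the over-optimistic trader believes

print(decide(faulty_belief))  # buy  -- logically derived, irrational in effect
print(decide(true_p_up))      # hold -- same logic, correct belief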

James
Theories of irrational behavior include:
 
   people's actual interests differ from what they believe to be their interests
This is still logical, though, just based on beliefs that diverge from one's
actual interests.


From Wiki: http://en.wikipedia.org/wiki/Irrationality
Irrationality is talking or acting without regard of rationality. Usually 
pejorative, the term is used to describe thinking and actions which are, or 
appear to be, less useful or logical than the rational alternatives. These 
actions tend to be regarded as emotion-driven. There is a clear tendency to 
view our own thoughts, words, and actions as rational and to see those who 
disagree as irrational.
 Types of behavior which are often described as irrational include:
 
   fads and fashions
   crowd behavior
   offense or anger at a situation that has not yet occurred
   unrealistic expectations
   falling victim to confidence tricks
   belief in the supernatural without evidence
   stock-market bubbles
   irrationality caused by mental illness, such as obsessive-compulsive 
disorder, major depressive disorder, and paranoia.


Re: Marvin and The Emotion Machine [WAS Re: [agi] A question on the symbol-system hypothesis]

2006-12-05 Thread BillK

On 12/5/06, Richard Loosemore wrote:


There are so few people who speak up against the conventional attitude
to the [rational AI/irrational humans] idea, it is such a relief to hear
any of them speak out.

I don't know yet if I buy everything Minsky says, but I know I agree
with the spirit of it.

Minsky and Hofstadter are the two AI thinkers I most respect.




The customer reviews on Amazon are rather critical of Minsky's new book.
They seem to be complaining that the book is more of a general
discussion than a set of detailed specifications for building
an AI engine.  :)
http://www.amazon.com/gp/product/customer-reviews/0743276639/ref=cm_cr_dp_pt/102-3984994-3498561?ie=UTF8n=283155s=books


The good news is that Minsky appears to be making the book available
online at present on his web site. *Download quick!*

http://web.media.mit.edu/~minsky/
See under publications, chapters 1 to 9.
The Emotion Machine 9/6/2006( 1 2 3 4 5 6 7 8 9 )


I like very much Minsky's summing up from the end of the book:


-
All of these kinds of inventiveness, combined with our unique
expressiveness, have empowered our communities to deal with huge
classes of new situations. The previous chapters discussed many
aspects of what gives people so much resourcefulness:

We have multiple ways to describe many things—and can quickly switch
among those different perspectives.
We make memory-records of what we've done—so that later we can reflect on them.
We learn multiple ways to think so that when one of them fails, we can
switch to another.
We split hard problems into smaller parts, and use goal-trees, plans,
and context stacks to help us keep making progress.
We develop ways to control our minds with all sorts of incentives,
threats, and bribes.
We have many different ways to learn and can also learn new ways to learn.
We can often postpone a dangerous action and imagine, instead, what
its outcome might be in some Virtual World.


Our language and culture accumulates vast stores of ideas that were
discovered by our ancestors. We represent these in multiple realms,
with metaphors interconnecting them.

Most every process in the brain is linked to some other processes. So,
while any particular process may have some deficiencies, there will
frequently be other parts that can intervene to compensate.

Nevertheless, our minds still have bugs. For, as our human brains
evolved, each seeming improvement also exposed us to the dangers of
making new types of mistakes. Thus, at present, our wonderful powers
to make abstractions also cause us to construct generalizations that
are too broad, fail to deal with exceptions to rules, accumulate
useless or incorrect information, and to believe things because our
imprimers do. We also make superstitious credit assignments, in which
we confuse real things with ones that we merely imagine; then we become
obsessed with unachievable goals, and set out on unbalanced, fanatical
searches and quests. Some persons become so unwilling to acknowledge a
serious failure or a great loss that they try to relive their lives of
the past. Also, of course, many people suffer from mental disorders
that range from minor incapacities to dangerous states of dismal
depression or mania.

We cannot expect our species to evolve ways to escape from all such
bugs because, as every engineer knows, most
every change in a large complex system will introduce yet other
mistakes that won't show up till the system moves to a different
environment. Furthermore, we also face an additional problem: each
human brain differs from the next because, first, it is built by pairs
of inherited genes, each chosen by chance from one of its parent's
such pairs. Then, during the early development of each brain, many
other smaller details depend on other, small accidental events. An
engineer might wonder how such machines could possibly work, in spite
of so many possible variations.

To explain how such large systems could function reliably, quite a few
thinkers have suggested that our brains must be based on some
not-yet-understood 'holistic' principles, according to which every
fragment of process or knowledge is 'distributed' (in some unknown
global way) so that the system still could function well in spite of
the loss of any part of it because such systems act as though they
were more than the sums of all their parts. However, the arguments
in this book suggest that we do not need to look for any such magical
tricks—because we have so many ways to accomplish each job that we can
tolerate the failure of many particular parts, simply by switching to
using alternative ones. (In other words, we function well because we
can perform with far less than the sum of all of our parts.)

Furthermore, it makes sense to suppose that many of the parts of our
brains are involved with helping to correct or suppress the effects of
defects and bugs in other parts. This means that we will find it hard
to 

Re: [agi] A question on the symbol-system hypothesis

2006-12-05 Thread Charles D Hixson

BillK wrote:

...

Every time someone (subconsciously) decides to do something, their
brain presents a list of reasons to go ahead. The reasons against are
ignored, or weighted down to be less preferred. This applies to
everything from deciding to get a new job to deciding to sleep with
your best friend's wife. Sometimes a case arises when you really,
really want to do something that you *know* is going to end in
disaster, ruined lives, ruined career, etc. and it is impossible to
think of good reasons to proceed. But you still go ahead anyway,
saying that maybe it won't be so bad, maybe nobody will find out, it's
not all my fault anyway, and so on.
...

BillK
I think you've got a time inversion here.  The list of reasons to go 
ahead is frequently, or even usually, created AFTER the action has been 
done.  If the list is being created BEFORE the decision, the list of 
reasons not to go ahead isn't ignored.  Both lists are weighed, a 
decision is made, and AFTER the decision is made the reasons decided 
against have their weights reduced.  If, OTOH, the decision is made 
BEFORE the list of reasons is created, then the list doesn't *get* 
created until one starts trying to justify the action, and for 
justification obviously reasons not to have done the thing are 
useless...except as a layer of whitewash to prove that all 
eventualities were considered.


For most decisions one never bothers to verbalize why it was, or was 
not, done.



P.S.:  ...and AFTER the decision is made the reasons decided against 
have their weights reduced.  ...:  This is to reinforce a consistent 
self-image.  If, eventually, the decision turns our to have been the 
wrong one, then this must be revoked, and the alternative list 
reinforced.  At which point one's self-image changes and one says things 
like "I don't know WHY I would have done that," because the modified 
self-image would not have decided in that way.
P.P.S:  THIS IS FABULATION.  I'm explaining what I think happens, but I 
have no actual evidence of the truth of my assertions.




Re: [agi] A question on the symbol-system hypothesis

2006-12-05 Thread BillK

On 12/5/06, Charles D Hixson wrote:

BillK wrote:
 ...

 Every time someone (subconsciously) decides to do something, their
 brain presents a list of reasons to go ahead. The reasons against are
 ignored, or weighted down to be less preferred. This applies to
 everything from deciding to get a new job to deciding to sleep with
 your best friend's wife. Sometimes a case arises when you really,
 really want to do something that you *know* is going to end in
 disaster, ruined lives, ruined career, etc. and it is impossible to
 think of good reasons to proceed. But you still go ahead anyway,
 saying that maybe it won't be so bad, maybe nobody will find out, it's
 not all my fault anyway, and so on.
 ...

 BillK
I think you've got a time inversion here.  The list of reasons to go
ahead is frequently, or even usually, created AFTER the action has been
done.  If the list is being created BEFORE the decision, the list of
reasons not to go ahead isn't ignored.  Both lists are weighed, a
decision is made, and AFTER the decision is made the reasons decided
against have their weights reduced.  If, OTOH, the decision is made
BEFORE the list of reasons is created, then the list doesn't *get*
created until one starts trying to justify the action, and for
justification obviously reasons not to have done the thing are
useless...except as a layer of whitewash to prove that all
eventualities were considered.

For most decisions one never bothers to verbalize why it was, or was
not, done.



No time inversion intended. What I intended to say was that most
(all?) decisions are made subconsciously before the conscious mind
starts its reason / excuse generation process. The conscious mind
pretending to weigh various reasons is just a human conceit. This
feature was necessary in early evolution for survival. When danger
threatened, immediate action was required. Flee or fight!  No time to
consider options with the new-fangled consciousness brain mechanism
that evolution was developing.

With the luxury of having plenty of time to reason about decisions,
our consciousness can now play its reasoning games to justify what
subconsciously has already been decided.

NOTE: This is probably an exaggeration / simplification. ;)


BillK



Re: [agi] A question on the symbol-system hypothesis

2006-12-05 Thread Charles D Hixson

BillK wrote:

On 12/5/06, Charles D Hixson wrote:

BillK wrote:
 ...
 


No time inversion intended. What I intended to say was that most
(all?) decisions are made subconsciously before the conscious mind
starts its reason / excuse generation process. The conscious mind
pretending to weigh various reasons is just a human conceit. This
feature was necessary in early evolution for survival. When danger
threatened, immediate action was required. Flee or fight!  No time to
consider options with the new-fangled consciousness brain mechanism
that evolution was developing.

With the luxury of having plenty of time to reason about decisions,
our consciousness can now play its reasoning games to justify what
subconsciously has already been decided.

NOTE: This is probably an exaggeration / simplification. ;)


BillK
I would say that all decisions are made subconsciously, but that the 
conscious mind can focus attention onto various parts of the problem and 
possibly affect the weighings of the factors.


I would also make a distinction between the conscious mind and the 
verbalized elements, which are merely the story that the conscious mind 
is telling.  (And assert that ALL of the stories that we tell ourselves 
are human conceits, i.e., abstractions of parts deemed significant out 
of a much more complex underlying process.)


I've started reading What is Thought by Eric Baum.  So far I'm only 
into the second chapter, but it seems quite promising.




Re: Re: [agi] A question on the symbol-system hypothesis

2006-12-05 Thread Matt Mahoney

--- Ben Goertzel [EMAIL PROTECTED] wrote:

 Matt Mahoney wrote:
  My point is that when AGI is built, you will have to trust its answers
 based
  on the correctness of the learning algorithms, and not by examining the
  internal data or tracing the reasoning.
 
 Agreed...
 
 I believe this is the fundamental
  flaw of all AI systems based on structured knowledge representations, such
 as
  first order logic, frames, connectionist systems, term logic, rule based
  systems, and so on.
 
 I have a few points in response to this:
 
 1) Just because a system is based on logic (in whatever sense you
 want to interpret that phrase) doesn't mean its reasoning can in
 practice be traced by humans.  As I noted in recent posts,
 probabilistic logic systems will regularly draw conclusions based on
 synthesizing (say) tens of thousands or more weak conclusions into one
 moderately strong one.  Tracing this kind of inference trail in detail
 is pretty tough for any human, pragmatically speaking...
 
 2) IMO the dichotomy between logic based and statistical AI
 systems is fairly bogus.  The dichotomy serves to separate extremes on
 either side, but my point is that when a statistical AI system becomes
 really serious it becomes effectively logic-based, and when a
 logic-based AI system becomes really serious it becomes effectively
 statistical ;-)
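A minimal sketch, in Python, of the kind of synthesis described in point 1 above, combining tens of thousands of individually weak indications by summing log-odds under a crude independence assumption; the numbers are invented for illustration:

import math
import random

random.seed(0)

def log_odds(p):
    return math.log(p / (1.0 - p))

# 30,000 weak, conflicting indications, each barely different from chance.
evidence = [random.uniform(0.48, 0.53) for _ in range(30000)]

# Under independence, the log-odds of the pieces simply add.
total = sum(log_odds(p) for p in evidence)
posterior = 1.0 / (1.0 + math.exp(-total))

print(round(posterior, 3))
# The combined conclusion is far more decided than any single piece of
# evidence, and tracing which of the 30,000 terms drove it is impractical.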

I see your point that there is no sharp boundary between structured knowledge
and statistical approaches.  What I mean is that the normal software
engineering practice of breaking down a hard problem into components with well
defined interfaces does not work for AGI.  We usually try things like:

input text -- parser -- semantic extraction -- inference engine -- output text.

The fallacy is believing that the intermediate representation would be more
comprehensible than the input or output.  That isn't possible because of the
huge amount of data.  In a toy system you might have 100 facts that you can
compress down to a diagram that fits on a sheet of paper.  In reality you
might have a gigabyte of text that you can compress down to 10^9 bits. 
Whatever form this takes can't be more comprehensible than the input or output
text.

I think it is actually liberating to remove the requirement for transparency
that was typical of GOFAI.  For example, your knowledge representation could
still be any of the existing forms but it could also be a huge matrix with
billions of elements.  But it will require a different approach to build, not
so much engineering, but more of an experimental science, where you test
different learning algorithms at the inputs and outputs only.
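A minimal sketch, in Python, of testing learning algorithms at the inputs and outputs only, treating each learner as a black box and comparing held-out accuracy; the toy task and both learners are invented for illustration:

import random

random.seed(1)

# Toy task: y = 1 if x > 0.5, with 10% label noise.
xs = [random.random() for _ in range(1000)]
data = [(x, int(x > 0.5) ^ int(random.random() < 0.1)) for x in xs]
train, test = data[:800], data[800:]

def learner_threshold(train):
    # Pick the cut point with the fewest training errors.
    best_t = min((sum((x > t) != bool(y) for x, y in train), t)
                 for t in [i / 100 for i in range(101)])[1]
    return lambda x: int(x > best_t)

def learner_majority(train):
    # Always predict the majority training label, ignoring the input.
    majority = int(sum(y for _, y in train) * 2 > len(train))
    return lambda x: majority

# The internals are never inspected -- only input/output behaviour is scored.
for name, make in (("threshold", learner_threshold), ("majority", learner_majority)):
    model = make(train)
    accuracy = sum(model(x) == y for x, y in test) / len(test)
    print(name, round(accuracy, 2))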


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [agi] A question on the symbol-system hypothesis

2006-12-04 Thread Mark Waser

Philip Goetz gave an example of an intrusion detection system that learned
information that was not comprehensible to humans.  You argued that he could
have understood it if he tried harder.


   No, I gave five separate alternatives most of which put the blame on the 
system for not being able to compress its data pattern into knowledge and 
explain it to Philip.  As I keep saying (and am trying to better rephrase 
here), the problem with statistical and similar systems is that they 
generally don't pick out and isolate salient features (unless you are lucky 
enough to have constrained them to exactly the correct number of variables). 
Since they don't pick out and isolate features, they are not able to build 
upon what they do.



I disagreed and argued that an
explanation would be useless even if it could be understood.


   In your explanation, however, you basically *did* explain exactly what 
the system did.  Clearly, the intrusion detection system looks at a number 
of variables and if the weighted sum exceeds a threshold, it decides that it 
is likely an intruder.  The only real question is the degree of entanglement 
of the variables in the real world.  It is *possible*, though I would argue 
extremely unlikely, that the variables really are entangled enough in the 
real world that a human being couldn't be trained to do intrusion detection. 
It is much, much, *MUCH* more probable that the system has improperly 
entangled the variables because it has too many degrees of freedom.
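
To make the "weighted sum exceeds a threshold" reading concrete, here is a toy
sketch; the feature names, weights and threshold are invented for illustration
only, not taken from any real detector:

# Toy illustration of an intrusion detector read as a weighted sum over a
# threshold, plus the kind of explanation the thread argues it should give.
FEATURES = {
    "lowercase_commands": 2.0,    # invented feature names and weights
    "unusual_ip_id": 1.5,
    "fragmented_after_get": 3.0,
    "rare_option_combo": 1.0,
}
THRESHOLD = 3.5

def score(connection):
    """Sum the weights of the features present in this connection."""
    return sum(w for name, w in FEATURES.items() if connection.get(name))

def explain(connection):
    """List the features that contributed to the score."""
    return [name for name in FEATURES if connection.get(name)]

conn = {"lowercase_commands": True, "fragmented_after_get": True}
s = score(conn)
print("score:", s, "alert:", s > THRESHOLD, "because:", explain(conn))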


If you use a computer to add up a billion numbers, do you check the math, 
or

do you trust it to give you the right answer?


I trust it to give me the right answer because I know and understand exactly 
what it is doing.


My point is that when AGI is built, you will have to trust its answers 
based

on the correctness of the learning algorithms, and not by examining the
internal data or tracing the reasoning.


The problems are that 1) correct learning algorithms will give bad results 
if given bad data *and* 2) how are you ensuring that your learning 
algorithms are correct under all of the circumstances that you're using 
them?



I believe this is the fundamental
flaw of all AI systems based on structured knowledge representations, such 
as

first order logic, frames, connectionist systems, term logic, rule based
systems, and so on.  The evidence supporting my assertion is:
1. The relative success of statistical models vs. structured knowledge.


Statistical models are successful at pattern-matching and recognition.  I am 
not aware of *anything* else that they are successful at.  I am fully aware 
of Jeff Hawkins' contention that pattern-matching is the only thing that the 
brain does, but I would argue that that pattern-matching includes feature 
extraction and knowledge compression, which current statistical AI models do 
not, and that that is why current statistical models are anything but AI.


Straight statistical models like you are touting are never going to get you 
to AI until you can successfully build them on top of each other -- and to 
do that, you need feature extraction and thus explainability.  An AGI is 
certainly going to use statistics for feature extraction, etc. but knowledge 
is *NOT* going to be kept in raw, badly entangled statistical form (i.e. 
basically compressed data rather than knowledge).  If you were to add 
functionality to a statistical system such that it could extract features 
and use that to explain its results, then I would say that it is on the way 
to AGI.  The point is that your statistical systems can't correctly explain 
their results even to an unlimited being (because most of the time they are 
incorrectly entangled anyway).



- Original Message - 
From: Matt Mahoney [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Sunday, December 03, 2006 11:11 PM
Subject: Re: [agi] A question on the symbol-system hypothesis



Mark,

Philip Goetz gave an example of an intrusion detection system that learned
information that was not comprehensible to humans.  You argued that he 
could

have understood it if he tried harder.  I disagreed and argued that an
explanation would be useless even if it could be understood.

If you use a computer to add up a billion numbers, do you check the math, 
or

do you trust it to give you the right answer?

My point is that when AGI is built, you will have to trust its answers 
based

on the correctness of the learning algorithms, and not by examining the
internal data or tracing the reasoning.  I believe this is the fundamental
flaw of all AI systems based on structured knowledge representations, such 
as

first order logic, frames, connectionist systems, term logic, rule based
systems, and so on.  The evidence supporting my assertion is:

1. The relative success of statistical models vs. structured knowledge.
2. Arguments based on algorithmic complexity.  (The brain cannot model a 
more

complex machine).
3. The two examples above

Re: Re: [agi] A question on the symbol-system hypothesis

2006-12-04 Thread Ben Goertzel

On 12/4/06, Mark Waser [EMAIL PROTECTED] wrote:

 Philip Goetz gave an example of an intrusion detection system that learned
 information that was not comprehensible to humans.  You argued that he
 could
 have understood it if he tried harder.

No, I gave five separate alternatives most of which put the blame on the
system for not being able to compress its data pattern into knowledge and
explain it to Philip.


But Mark, as a former university professor I can testify as to the
difficulty of compressing one's knowledge into comprehensible form for
communication to others!!

Consider the case of mathematical proof.  Given a tricky theorem to
prove, I can show students the correct approach.  But my knowledge of
**why** I take the strategy I do, is a lot tougher to communicate.
Most of advanced math education is about learning by example -- you
show the student a bunch of proofs and hope they pick up the spirit of
how to prove stuff in various domains.  Explicitly articulating and
explaining knowledge about how to prove is hard...

The point is, humans are sometimes like these simplistic machine
learning algorithms, in terms of being able to do stuff and **not**
articulate how we do it ...

OTOH we do have a process of turning our implicit know-how into
declarative knowledge for communication to others.  It's just that
this process is sometimes very ineffective ... its effectiveness
varies a lot by domain, as well as according to many other factors...

So I agree that this sort of machine learning algorithm that can only
do, but not explain, is not an AGI ... but I don't agree that it
can't serve as part of an AGI.

However, one thing we have tried to do in Novamente is to specifically
couple a declarative reasoning component with a machine learning
style procedural learning component, in such a way that the opaque
procedures learned by the latter can -- if the system chooses to
expend resources on such -- be tractably converted into the form
utilized by the former...

-- Ben

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Re: [agi] A question on the symbol-system hypothesis

2006-12-04 Thread Mark Waser

Ben,

   I agree with the vast majority of what I believe that you mean but . . .


1) Just because a system is based on logic (in whatever sense you
want to interpret that phrase) doesn't mean its reasoning can in
practice be traced by humans.  As I noted in recent posts,
probabilistic logic systems will regularly draw conclusions based on
synthesizing (say) tens of thousands or more weak conclusions into one
moderately strong one.  Tracing this kind of inference trail in detail
is pretty tough for any human, pragmatically speaking...


However, if the system could say to the human, "I've got a hundred thousand 
separate cases from which I've extracted six hundred twenty-two variables 
which each increase the probability of x by half a percent to one percent 
individually, and several of them are positively entangled and only two are 
negatively entangled (and I can even explain the increase in probability in 
64% of the cases via my logic subroutines)" . . . . wouldn't it be pretty 
easy for the human to debug anything with the system's assistance?  The fact 
that humans are slow and eventually capacity-limited has no bearing on my 
argument that a true AGI is going to have to be able to explain itself (if 
only to itself).
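
As an illustration of how that kind of explanation can hold together
arithmetically, here is a sketch that combines many weak, individually
labelled pieces of evidence in log-odds space, assuming independence
(naive-Bayes style); every number below is invented:

# Sketch: many weak pieces of evidence combined in log-odds space.
# For simplicity each "variable" is treated as a ~0.75% multiplicative lift
# in the odds of x (the thread speaks of probability; the idea is the same).
import math

prior = 0.01                 # assumed base rate of the hypothesis x
lift_per_feature = 1.0075    # each weak feature multiplies the odds by this
fired = 200                  # suppose 200 of the 622 weak features are present

log_odds = math.log(prior / (1 - prior))
log_odds += fired * math.log(lift_per_feature)

posterior = 1 / (1 + math.exp(-log_odds))
print(f"posterior P(x) after {fired} weak features: {posterior:.3f}")

Each contribution stays labelled, so the combined conclusion remains traceable
even though no single feature is strong on its own.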


The only real case where a human couldn't understand the machine's reasoning 
in a case like this is where there are so many entangled variables that the 
human can't hold them in comprehension -- and I'll continue my contention 
that this case is rare enough that it isn't going to be a problem for 
creating an AGI.


My only concern with systems of this type is where the weak conclusions are 
unlabeled and unlabelable and thus may be a result of incorrectly 
over-fitting questionable data and creating too many variables and degrees 
of freedom and thus not correctly serving to predict new cases . . . . 
(i.e. the cases where the system's explanation is wrong).



2) IMO the dichotomy between logic based and statistical AI
systems is fairly bogus.  The dichotomy serves to separate extremes on
either side, but my point is that when a statistical AI system becomes
really serious it becomes effectively logic-based, and when a
logic-based AI system becomes really serious it becomes effectively
statistical ;-)


I think that I know what you mean but I would phrase this *very* 
differently.  I would phrase it that an AGI is going to have to be able to 
perform both logic-based and statistical operations and that any AGI which 
is limited to one of the two is doomed to failure.  If you can contort 
statistics to effectively do logic or logic to effectively do statistics, 
then you're fine -- but I really don't see it happening.  I also am becoming 
more and more aware of how much feature extraction and isolation is critical 
to my view of AGI.





- Original Message - 
From: Ben Goertzel [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Sunday, December 03, 2006 11:30 PM
Subject: Re: Re: [agi] A question on the symbol-system hypothesis



Matt Mahoney wrote:
My point is that when AGI is built, you will have to trust its answers 
based

on the correctness of the learning algorithms, and not by examining the
internal data or tracing the reasoning.


Agreed...


I believe this is the fundamental
flaw of all AI systems based on structured knowledge representations, 
such as

first order logic, frames, connectionist systems, term logic, rule based
systems, and so on.


I have a few points in response to this:

1) Just because a system is based on logic (in whatever sense you
want to interpret that phrase) doesn't mean its reasoning can in
practice be traced by humans.  As I noted in recent posts,
probabilistic logic systems will regularly draw conclusions based on
synthesizing (say) tens of thousands or more weak conclusions into one
moderately strong one.  Tracing this kind of inference trail in detail
is pretty tough for any human, pragmatically speaking...

2) IMO the dichotomy between logic based and statistical AI
systems is fairly bogus.  The dichotomy serves to separate extremes on
either side, but my point is that when a statistical AI system becomes
really serious it becomes effectively logic-based, and when a
logic-based AI system becomes really serious it becomes effectively
statistical ;-)

For example, show me how a statistical procedure learning system is
going to learn how to carry out complex procedures involving
recursion.  Sure, it can be done -- but it's going to involve
introducing structures/dynamics that are accurately describable as
versions/manifestations of logic.

Or, show me how a logic based system is going to handle large masses
of uncertain data, as comes in from perception.  It can be done in
many ways -- but all of them involve introducing structures/dynamics
that are accurately describable as statistical.

Probabilistic inference in Novamente includes

-- higher-order inference that works somewhat like standard term and
predicate logic
-- first-order

Re: Re: Re: [agi] A question on the symbol-system hypothesis

2006-12-04 Thread Ben Goertzel

Hi,


The only real case where a human couldn't understand the machine's reasoning
in a case like this is where there are so many entangled variables that the
human can't hold them in comprehension -- and I'll continue my contention
that this case is rare enough that it isn't going to be a problem for
creating an AGI.


Whereas my view is that nearly all HUMAN decisions are based on so
many entangled variables that the human can't hold them in conscious
comprehension ;-)


 2) IMO the dichotomy between logic based and statistical AI
 systems is fairly bogus.  The dichotomy serves to separate extremes on
 either side, but my point is that when a statistical AI system becomes
 really serious it becomes effectively logic-based, and when a
 logic-based AI system becomes really serious it becomes effectively
 statistical ;-)

I think that I know what you mean but I would phrase this *very*
differently.  I would phrase it that an AGI is going to have to be able to
perform both logic-based and statistical operations and that any AGI which
is limited to one of the two is doomed to failure.  If you can contort
statistics to effectively do logic or logic to effectively do statistics,
then you're fine -- but I really don't see it happening.


My point is different than yours.  I believe that the most essential
cognitive operations have aspects of what we typically label logic
and statistics, but don't easily get shoved into either of these
categories.  An example is Novamente's probabilistic inference engine
which carries out operations with the general form of logical
inference steps, but guided at every step by statistically gathered
knowledge via which series of inference steps have proved viable in
prior related contexts.  Is this logic or statistics?  If the
inference step is just a Bayes rule step, then arguably it's just
statistics.  If the inference step is a variable unification step,
then arguably it's logic, with a little guidance from statistics on
the inference control side.  Partitioning cognition up into logic
versus statistics is not IMO very useful.
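
For readers who want the two named operations side by side, here is a generic
illustration (emphatically not Novamente's actual machinery): the same kind of
inference step written once as a textbook Bayes-rule update and once as a toy
unification; both functions and all values are invented for the example:

# Generic illustration only.  One step viewed as "statistics" (Bayes rule)
# and one viewed as "logic" (variable unification against a ground fact).
def bayes_step(prior_h, p_e_given_h, p_e_given_not_h):
    """One Bayes-rule update: P(H|E) from P(H), P(E|H) and P(E|~H)."""
    p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)
    return p_e_given_h * prior_h / p_e

def unify(pattern, fact):
    """Bind variables (strings starting with '?') in pattern against fact,
    or return None on mismatch."""
    if len(pattern) != len(fact):
        return None
    bindings = {}
    for p, f in zip(pattern, fact):
        if isinstance(p, str) and p.startswith("?"):
            bindings[p] = f
        elif p != f:
            return None
    return bindings

print(bayes_step(0.2, 0.9, 0.1))                                  # statistical reading
print(unify(("parent", "?x", "sam"), ("parent", "kim", "sam")))   # logical reading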

-- Ben

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Re: Re: [agi] A question on the symbol-system hypothesis

2006-12-04 Thread Mark Waser

Whereas my view is that nearly all HUMAN decisions are based on so
many entangled variables that the human can't hold them in conscious
comprehension ;-)


We're reaching the point of agreeing to disagree except . . . .

Are you really saying that nearly all of your decisions can't be explained 
(by you)?



My point is different than yours.  I believe that the most essential
cognitive operations have aspects of what we typically label logic
and statistics, but don't easily get shoved into either of these
categories.  An example is Novamente's probabilistic inference engine
which carries out operations with the general form of logical
inference steps, but guided at every step by statistically gathered
knowledge via which series of inference steps have proved viable in
prior related contexts.  Is this logic or statistics?


It's logical operations whose choice points are controlled by statistical 
operations.:-)  Whether the operations can be shoved into the categories 
depends upon how far you break them down.


And I think that our point is the same, that both logic and statistics (or 
elements from each) are required.:-)


- Original Message - 
From: Ben Goertzel [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Monday, December 04, 2006 11:21 AM
Subject: Re: Re: Re: [agi] A question on the symbol-system hypothesis



Hi,

The only real case where a human couldn't understand the machine's 
reasoning
in a case like this is where there are so many entangled variables that 
the

human can't hold them in comprehension -- and I'll continue my contention
that this case is rare enough that it isn't going to be a problem for
creating an AGI.


Whereas my view is that nearly all HUMAN decisions are based on so
many entangled variables that the human can't hold them in conscious
comprehension ;-)


 2) IMO the dichotomy between logic based and statistical AI
 systems is fairly bogus.  The dichotomy serves to separate extremes on
 either side, but my point is that when a statistical AI system becomes
 really serious it becomes effectively logic-based, and when a
 logic-based AI system becomes really serious it becomes effectively
 statistical ;-)

I think that I know what you mean but I would phrase this *very*
differently.  I would phrase it that an AGI is going to have to be able 
to
perform both logic-based and statistical operations and that any AGI 
which

is limited to one of the two is doomed to failure.  If you can contort
statistics to effectively do logic or logic to effectively do statistics,
then you're fine -- but I really don't see it happening.


My point is different than yours.  I believe that the most essential
cognitive operations have aspects of what we typically label logic
and statistics, but don't easily get shoved into either of these
categories.  An example is Novamente's probabilistic inference engine
which carries out operations with the general form of logical
inference steps, but guided at every step by statistically gathered
knowledge via which series of inference steps have proved viable in
prior related contexts.  Is this logic or statistics?  If the
inference step is just a Bayes rule step, then arguably it's just
statistics.  If the inference step is a variable unification step,
then arguably it's logic, with a little guidance from statistics on
the inference control side.  Partitioning cognition up into logic
versus statistics is not IMO very useful.

-- Ben





-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Re: Re: Re: [agi] A question on the symbol-system hypothesis

2006-12-04 Thread Ben Goertzel

We're reaching the point of agreeing to disagree except . . . .

Are you really saying that nearly all of your decisions can't be explained
(by you)?


Well, of course they can be explained by me -- but the acronym for
that sort of explanation is BS

One of Nietzsche's many nice quotes is (paraphrased): Consciousness
is like the army commander who takes responsibility for the
largely-autonomous actions of his troops.

Recall also Gazzaniga's work on split-brain patients, for insight into
the illusionary nature of many human explanations of reasons for
actions.

The process of explaining why we have done what we have done is an
important aspect of human intelligence -- but not because it is
accurate, it almost never is ...  More because this sort of
storytelling helps us to structure our future actions (though
generally in ways we cannot accurately understand or explain ;-)

Some of the discussion here is relevant

http://www.goertzel.org/dynapsyc/2004/FreeWill.htm

-- Ben

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Re: [agi] A question on the symbol-system hypothesis

2006-12-04 Thread Mark Waser

But Mark, as a former university professor I can testify as to the
difficulty of compressing one's knowledge into comprehensible form for
communication to others!!



Explicitly articulating and
explaining knowledge about how to prove is hard...


:-)  And your point is?:-)

Yes, compressing one's knowledge into comprehensible form for communication 
to others is *very* hard.  On the other hand, can you say that you really 
understand something if you can't explain it?  Or, alternatively, can you 
really use the knowledge to its fullest extent, if you don't understand it 
well enough to be able to explain it.



The point is, humans are sometimes like these simplistic machine
learning algorithms, in terms of being able to do stuff and **not**
articulate how we do it


Yes.  Again, I agree.  And your point is?  Sometimes we *are* just stupid 
reflexive (or pattern-matching) machines.  At those moments, we aren't 
intelligent.



OTOH we do have a process of turning our implicit know-how into
declarative knowledge for communication to others.  It's just that
this process is sometimes very ineffective ... its effectiveness
varies a lot by domain, as well as according to many other factors...


Yes, and not so oddly enough, our ability to explain is very highly 
correlated with that purported measure of intelligence called the IQ.



So I agree that this sort of machine learning algorithm that can only
do, but not explain, is not an AGI  but I don't agree that it
can't serve as part of an AGI.


:-)  I never, ever argued that it couldn't serve as part of an AGI -- just 
not be the entire core.  I expect many peripheral senses and other low-level 
input processors to employ pattern-matching and statistical algorithms.



However, one thing we have tried to do in Novamente is to specifically
couple a declarative reasoning component with a machine learning
style procedural learning component, in such a way that the opaque
procedures learned by the latter can -- if the system chooses to
expend resources on such -- be tractably converted into the form
utilized by the former...


Which translated into English says that Novamente will be able to explain 
itself -- thus putting itself into my potential AGI camp, not the dead-end 
statistical-only camp.



- Original Message - 
From: Ben Goertzel [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Monday, December 04, 2006 10:45 AM
Subject: Re: Re: [agi] A question on the symbol-system hypothesis



On 12/4/06, Mark Waser [EMAIL PROTECTED] wrote:
 Philip Goetz gave an example of an intrusion detection system that 
 learned

 information that was not comprehensible to humans.  You argued that he
 could
 have understood it if he tried harder.

No, I gave five separate alternatives most of which put the blame on 
the
system for not being able to compress it's data pattern into knowledge 
and

explain it to Philip.


But Mark, as a former university professor I can testify as to the
difficulty of compressing one's knowledge into comprehensible form for
communication to others!!

Consider the case of mathematical proof.  Given a tricky theorem to
prove, I can show students the correct approach.  But my knowledge of
**why** I take the strategy I do, is a lot tougher to communicate.
Most of advanced math education is about learning by example -- you
show the student a bunch of proofs and hope they pick up the spirit of
how to prove stuff in various domains.  Explicitly articulating and
explaining knowledge about how to prove is hard...

The point is, humans are sometimes like these simplistic machine
learning algorithms, in terms of being able to do stuff and **not**
articulate how we do it

OTOH we do have a process of turning our implicit know-how into
declarative knowledge for communication to others.  It's just that
this process is sometimes very ineffective ... its effectiveness
varies a lot by domain, as well as according to many other factors...

So I agree that this sort of machine learning algorithm that can only
do, but not explain, is not an AGI  but I don't agree that it
can't serve as part of an AGI.

However, one thing we have tried to do in Novamente is to specifically
couple a declarative reasoning component with a machine learning
style procedural learning component, in such a way that the opaque
procedures learned by the latter can -- if the system chooses to
expend resources on such -- be tractably converted into the form
utilized by the former...

-- Ben





-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Re: Re: Re: [agi] A question on the symbol-system hypothesis

2006-12-04 Thread Mark Waser
 Well, of course they can be explained by me -- but the acronym for
 that sort of explanation is BS

I take your point with important caveats (that you allude to).  Yes, nearly all 
decisions are made as reflexes or pattern-matchings on what is effectively 
compiled knowledge; however, it is the structuring of future actions that makes 
us the learning, intelligent entities that we are.

 The process of explaining why we have done what we have done is an
 important aspect of human intelligence -- but not because it is
 accurate, it almost never is  More because this sort of
 storytelling helps us to structure our future actions (though
 generally in ways we cannot accurately understand or explain ;-)

Explaining our actions is the reflective part of our minds evaluating the 
reflexive part of our mind.  The reflexive part of our minds, though, operates 
analogously to a machine running on compiled code with the compilation of code 
being largely *not* under the control of our conscious mind (though some degree 
of this *can* be changed by our conscious minds).  The more we can correctly 
interpret and affect/program the reflexive part of our mind with the reflective 
part, the more intelligent we are.  And, translating this back to the machine 
realm circles back to my initial point, the better the machine can explain its 
reasoning and use its explanation to improve its future actions, the more 
intelligent the machine is (or, in reverse, no explanation = no intelligence).

- Original Message - 
From: Ben Goertzel [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Monday, December 04, 2006 12:17 PM
Subject: Re: Re: Re: Re: [agi] A question on the symbol-system hypothesis


 We're reaching the point of agreeing to disagree except . . . .

 Are you really saying that nearly all of your decisions can't be explained
 (by you)?
 
 Well, of course they can be explained by me -- but the acronym for
 that sort of explanation is BS
 
 One of Nietzsche's many nice quotes is (paraphrased): Consciousness
 is like the army commander who takes responsibility for the
 largely-autonomous actions of his troops.
 
 Recall also Gazzaniga's work on split-brain patients, for insight into
 the illusionary nature of many human explanations of reasons for
 actions.
 
 The process of explaining why we have done what we have done is an
 important aspect of human intelligence -- but not because it is
 accurate, it almost never is  More because this sort of
 storytelling helps us to structure our future actions (though
 generally in ways we cannot accurately understand or explain ;-)
 
 Some of the discussion here is relevant
 
 http://www.goertzel.org/dynapsyc/2004/FreeWill.htm
 
 -- Ben
 
 -
 This list is sponsored by AGIRI: http://www.agiri.org/email
 To unsubscribe or change your options, please go to:
 http://v2.listbox.com/member/?list_id=303


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Re: Re: Re: Re: [agi] A question on the symbol-system hypothesis

2006-12-04 Thread Ben Goertzel

 Well, of course they can be explained by me -- but the acronym for
 that sort of explanation is BS

I take your point with important caveats (that you allude to).  Yes, nearly
all decisions are made as reflexes or pattern-matchings on what is
effectively compiled knowledge; however, it is the structuring of future
actions that make us the learning, intelligent entities that we are.

...

Explaining our actions is the reflective part of our minds evaluating the
reflexive part of our mind.  The reflexive part of our minds, though,
operates analogously to a machine running on compiled code with the
compilation of code being largely *not* under the control of our conscious
mind (though some degree of this *can* be changed by our conscious minds).
The more we can correctly interpret and affect/program the reflexive part of
our mind with the reflective part, the more intelligent we are.


Mark, let me try to summarize in a nutshell the source of our disagreement.

You partition intelligence into

* explanatory, declarative reasoning

* reflexive pattern-matching (simplistic and statistical)

Whereas I think that most of what happens in cognition fits into
neither of these categories.

I think that most unconscious thinking is far more complex than
reflexive pattern-matching --- and in fact has more in common with
explanatory, deductive reasoning than with simple pattern-matching;
the difference being that it deals with large masses of (often highly
uncertain) knowledge rather than smaller amounts of guessed to be
highly important knowledge...

-- Ben G

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Re: Re: Re: Re: [agi] A question on the symbol-system hypothesis

2006-12-04 Thread Mark Waser

You partition intelligence into
* explanatory, declarative reasoning
* reflexive pattern-matching (simplistic and statistical)

Whereas I think that most of what happens in cognition fits into
neither of these categories.

I think that most unconscious thinking is far more complex than
reflexive pattern-matching --- and in fact has more in common with
explanatory, deductive reasoning than with simple pattern-matching;
the difference being that it deals with large masses of (often highly
uncertain) knowledge rather than smaller amounts of guessed to be
highly important knowledge...


Hmmm.  I will certainly agree that most long-term unconscious thinking is 
actually closer to conscious thinking than most people believe (with the 
only real difference being that there isn't a self-reflective overseer --  
or, at least, not one whose memories we can access).


But -- I don't partition intelligence that way.  I see those as two 
endpoints with a continuum between them (or, a lot of low-level transparent 
switching between the two).


We certainly do have a disagreement in terms of the quantity of knowledge 
that is *in real time* actually behind a decision (as opposed to compiled 
knowledge) -- Me being in favor of mostly compiled knowledge and you being 
in favor of constantly using all of the data.


But I'm not at all sure how important that difference is . . . .  With the 
brain being a massively parallel system, there isn't necessarily a huge 
advantage in compiling knowledge (I can come up with both advantages and 
disadvantages) and I suspect that there are more than enough surprises that 
we have absolutely no way of guessing where on the spectrum of compilation 
vs. not the brain actually is.


On the other hand, I think that lack of compilation is going to turn out to 
be a *very* severe problem for non-massively parallel systems



- Original Message - 
From: Ben Goertzel [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Monday, December 04, 2006 1:00 PM
Subject: Re: Re: Re: Re: Re: [agi] A question on the symbol-system 
hypothesis




 Well, of course they can be explained by me -- but the acronym for
 that sort of explanation is BS

I take your point with important caveats (that you allude to).  Yes, 
nearly

all decisions are made as reflexes or pattern-matchings on what is
effectively compiled knowledge; however, it is the structuring of future
actions that make us the learning, intelligent entities that we are.

...

Explaining our actions is the reflective part of our minds evaluating the
reflexive part of our mind.  The reflexive part of our minds, though,
operates analogously to a machine running on compiled code with the
compilation of code being largely *not* under the control of our 
conscious
mind (though some degree of this *can* be changed by our conscious 
minds).
The more we can correctly interpret and affect/program the reflexive part 
of

our mind with the reflective part, the more intelligent we are.


Mark, let me try to summarize in a nutshell the source of our 
disagreement.


You partition intelligence into

* explanatory, declarative reasoning

* reflexive pattern-matching (simplistic and statistical)

Whereas I think that most of what happens in cognition fits into
neither of these categories.

I think that most unconscious thinking is far more complex than
reflexive pattern-matching --- and in fact has more in common with
explanatory, deductive reasoning than with simple pattern-matching;
the difference being that it deals with large masses of (often highly
uncertain) knowledge rather than smaller amounts of guessed to be
highly important knowledge...

-- Ben G





-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Re: Re: Re: Re: Re: [agi] A question on the symbol-system hypothesis

2006-12-04 Thread Ben Goertzel

But I'm not at all sure how important that difference is . . . .  With the
brain being a massively parallel system, there isn't necessarily a huge
advantage in compiling knowledge (I can come up with both advantages and
disadvantages) and I suspect that there are more than enough surprises that
we have absolutely no way of guessing where on the spectrum of compilation
vs. not the brain actually is.


Neuroscience makes clear that most of human long-term memory is
actually constructive and inventive rather than strictly recollective,
see e.g. Israel Rosenfield's nice book The Invention of Memory

www.amazon.com/Invention-Memory-New-View-Brain/dp/0465035922

as well as a lot of more recent research ...  So the knowledge that is
compiled in the human brain, is compiled in a way that assumes
self-organizing and creative cognitive processes will be used to
extract and apply it...

IMO in an AGI system **much** knowledge must also be stored/retrieved
in this sort of way (where retrieval is construction/invention).  But
AGIs will also have more opportunity than the normal human brain to
use idiot-savant-like precise computer-like memory when
appropriate...

Ben G

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] A question on the symbol-system hypothesis

2006-12-04 Thread Philip Goetz

On 12/3/06, Mark Waser [EMAIL PROTECTED] wrote:

 This sounds very Searlian.  The only test you seem to be referring to
 is the Chinese Room test.

You misunderstand.  The test is being able to form cognitive structures that
can serve as the basis for later more complicated cognitive structures.
Your pattern matcher does not do this.


It doesn't?  How do you know?  Unless you are a Searlian.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] A question on the symbol-system hypothesis

2006-12-04 Thread Mark Waser
You misunderstand.  The test is being able to form cognitive structures 
that

can serve as the basis for later more complicated cognitive structures.
Your pattern matcher does not do this.


It doesn't?  How do you know?  Unless you are a Searlian.


   Show me an example of where/how your pattern matcher uses the cognitive 
structures it derives as a basis for future, more complicated cognitive 
structures.  (My assumption is that) There is no provision for that in your 
code and that the system is too simple for it to evolve spontaneously.  Are 
you actually claiming that your system does form cognitive structures that 
can serve as the basis for later more complicated cognitive structures?


   Why do you keep throwing around the Searlian buzzword/pejorative? 
Previous discussions on this mailing list have made it quite clear that the 
people on this list don't even agree on what it means much less what it's 
implications are . . . .


- Original Message - 
From: Philip Goetz [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Monday, December 04, 2006 2:03 PM
Subject: Re: [agi] A question on the symbol-system hypothesis



On 12/3/06, Mark Waser [EMAIL PROTECTED] wrote:

 This sounds very Searlian.  The only test you seem to be referring to
 is the Chinese Room test.

You misunderstand.  The test is being able to form cognitive structures 
that

can serve as the basis for later more complicated cognitive structures.
Your pattern matcher does not do this.


It doesn't?  How do you know?  Unless you are a Searlian.





-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] A question on the symbol-system hypothesis

2006-12-03 Thread Philip Goetz

On 12/2/06, Mark Waser [EMAIL PROTECTED] wrote:

A nice story but it proves absolutely nothing . . . . .


It proves to me that there is no point in continuing this debate.


Further, and more importantly, the pattern matcher *doesn't* understand its
results either and certainly couldn't build upon them -- thus, it *fails* the
test as far as being the central component of an RSIAI or being able to
provide evidence as to the required behavior of such.


This sounds very Searlian.  The only test you seem to be referring to
is the Chinese Room test.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] A question on the symbol-system hypothesis

2006-12-03 Thread Charles D Hixson

Mark Waser wrote:

Hi Bill,

...
   If storage and access are the concern, your own argument says that 
a sufficiently enhanced human can understand anything and I am at a 
loss as to why an above-average human with a computer and computer 
skills can't be considered nearly indefinitely enhanced.
The use of external aids doesn't allow one to increase the size of 
active ram.  Usually this is no absolute barrier, though it can result 
in exponential slowdown.  Sometimes, however, I suspect that there are 
problems that can't be addressed because the working memory is too 
small.  This isn't a thing that I could prove (and probably von Neumann 
proved otherwise).  So take exponential slowdown to be what's involved, 
though it might be combinatorial slowdown for some classes of problems.  
This may not be an absolute barrier, but it is sufficient to effectively 
be called one, especially given the expected lifetime of the person 
involved.  (After one has lived a few thousand years, one might perceive 
this class of problems to be more tractable...but I'd bet they will be 
addressed sooner by other means.)


Consider that we apparently have special purpose hardware for rotating 
visual images.  Given that, there MUST be a limit to the resolution that 
this hardware possesses.  (Well, I suspect that it rotates vectorized 
images, and retranslates after rotation...but SOME pixelated image is 
being rotated (they've watched it on PET[?] scans).  This implies that 
anything that requires more than that much detail to handle is fudged, 
or just isn't handled.  So the necessary enhancement would:

1) off-load the original image
2) rotate it, and
3) import the rotated image
Plausibly importation could be done via a 3-D monitor, though it might 
take a lot of study.  Exporting the original uncorrupted image, however, 
is beyond the current state of the art.
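
As a toy version of that off-load / rotate / re-import loop, treating the
"image" as a handful of 2-D points (the vectorized reading suggested above);
a real biological or 3-D pipeline would of course be far harder, which is the
point being made:

# Toy off-load -> rotate -> re-import loop for a vectorized image.
import math

def rotate_points(points, degrees):
    """Rotate 2-D points about the origin with a standard rotation matrix."""
    t = math.radians(degrees)
    c, s = math.cos(t), math.sin(t)
    return [(c * x - s * y, s * x + c * y) for x, y in points]

exported = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]   # step 1: off-load the image
rotated = rotate_points(exported, 90)              # step 2: rotate it externally
print(rotated)                                     # step 3: import the result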


I would argue that this is but one of a large class of problems that 
cannot be addressed by the current modes of enhancement.


   Regarding chess or Go masters -- while you couldn't point to a move 
and say you shouldn't have done that, I'm sure that the master could 
(probably in several instances) point to a move and say I wouldn't 
have done that and provided a better move (most often along with a 
variable-quality explanation of why it was a better move).

...   Mark

- Original Message - From: BillK [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Saturday, December 02, 2006 2:31 PM
Subject: Re: [agi] A question on the symbol-system hypothesis


...

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] A question on the symbol-system hypothesis

2006-12-03 Thread Mark Waser

This sounds very Searlian.  The only test you seem to be referring to
is the Chinese Room test.


You misunderstand.  The test is being able to form cognitive structures that 
can serve as the basis for later more complicated cognitive structures. 
Your pattern matcher does not do this.


- Original Message - 
From: Philip Goetz [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Sunday, December 03, 2006 9:17 AM
Subject: Re: [agi] A question on the symbol-system hypothesis



On 12/2/06, Mark Waser [EMAIL PROTECTED] wrote:

A nice story but it proves absolutely nothing . . . . .


It proves to me that there is no point in continuing this debate.

Further, and more importantly, the pattern matcher *doesn't* understand 
it's
results either and certainly could build upon them -- thus, it *fails* 
the

test as far as being the central component of an RSIAI or being able to
provide evidence as to the required behavior of such.


This sounds very Searlian.  The only test you seem to be referring to
is the Chinese Room test.




-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] A question on the symbol-system hypothesis

2006-12-03 Thread Matt Mahoney
Mark,

Philip Goetz gave an example of an intrusion detection system that learned 
information that was not comprehensible to humans.  You argued that he could
have understood it if he tried harder.  I disagreed and argued that an
explanation would be useless even if it could be understood.

If you use a computer to add up a billion numbers, do you check the math, or
do you trust it to give you the right answer?

My point is that when AGI is built, you will have to trust its answers based
on the correctness of the learning algorithms, and not by examining the
internal data or tracing the reasoning.  I believe this is the fundamental
flaw of all AI systems based on structured knowledge representations, such as
first order logic, frames, connectionist systems, term logic, rule based
systems, and so on.  The evidence supporting my assertion is:

1. The relative success of statistical models vs. structured knowledge.
2. Arguments based on algorithmic complexity.  (The brain cannot model a more
complex machine).
3. The two examples above.

I'm afraid that's all the arguments I have.  Until we build AGI, we really
won't know.  I realize I am repeating (summarizing) what I have said before. 
If you want to tear down my argument line by line, please do it privately
because I don't think the rest of the list will be interested.

--- Mark Waser [EMAIL PROTECTED] wrote:

 Matt,
 
 Why don't you try addressing my points instead of simply repeating 
 things that I acknowledged and answered and then trotting out tired old red 
 herrings.
 
 As I said, your network intrusion anomaly detector is a pattern matcher.
 
 It is a stupid pattern matcher that can't explain its reasoning and can't 
 build upon what it has learned.
 
 You, on the other hand, gave a very good explanation of how it works. 
 Thus, you have successfully proved that you are an explaining intelligence 
 and it is not.
 
 If anything, you've further proved my point that an AGI is going to have
 
 to be able to explain/be explained.
 
 
 - Original Message - 
 From: Matt Mahoney [EMAIL PROTECTED]
 To: agi@v2.listbox.com
 Sent: Saturday, December 02, 2006 5:17 PM
 Subject: Re: [agi] A question on the symbol-system hypothesis
 
 
 
  --- Mark Waser [EMAIL PROTECTED] wrote:
 
  A nice story but it proves absolutely nothing . . . . .
 
  I know a little about network intrusion anomaly detection (it was my
  dissertation topic), and yes it is an important lessson.
 
  Network traffic containing attacks has a higher algorithmic complexity 
  than
  traffic without attacks.  It is less compressible.  The reason has nothing
 
  to
  do with the attacks, but with arbitrary variations in protocol usage made 
  by
  the attacker.  For example, the Code Red worm fragments the TCP stream 
  after
  the HTTP GET command, making it detectable even before the buffer 
  overflow
  code is sent in the next packet.  A statistical model will learn that this
 
  is
  unusual (even though legal) in normal HTTP traffic, but offer no 
  explanation
  why such an event should be hostile.  The reason such anomalies occur is
  because when attackers craft exploits, they follow enough of the protocol 
  to
  make it work but often don't care about the undocumented conventions 
  followed
  by normal servers and clients.  For example, they may use lower case 
  commands
  where most software uses upper case, or they may put unusual but legal 
  values
  in the TCP or IP-ID fields or a hundred other things that make the attack
  stand out.  Even if they are careful, many exploits require unusual 
  commands
  or combinations of options that rarely appear in normal traffic and are
  therefore less carefully tested.
 
  So my point is that it is pointless to try to make an anomaly detection 
  system
  explain its reasoning, because the only explanation is that the traffic is
  unusual.  The best you can do is have it estimate the probability of a 
  false
  alarm based on the information content.
 
  So the lesson is that AGI is not the only intelligent system where you 
  should
  not waste your time trying to understand what it has learned.  Even if you
  understood it, it would not tell you anything.  Would you understand why a
  person made some decision if you knew the complete state of every neuron 
  and
  synapse in his brain?
 
 
  You developed a pattern-matcher.  The pattern matcher worked (and I would
  dispute that it worked better than it had a right to).  Clearly, you do
  not understand how it worked.  So what does that prove?
 
  Your contention (or, at least, the only one that continues the previous
  thread) seems to be that you are too stupid to ever understand the 
  pattern
  that it found.
 
  Let me offer you several alternatives:
  1)  You missed something obvious
  2)  You would have understood it if the system could have explained it to
  you
  3)  You would have understood it if the system had managed to losslessly
  convert

Re: Re: [agi] A question on the symbol-system hypothesis

2006-12-03 Thread Ben Goertzel

Matt Mahoney wrote:

My point is that when AGI is built, you will have to trust its answers based
on the correctness of the learning algorithms, and not by examining the
internal data or tracing the reasoning.


Agreed...


I believe this is the fundamental
flaw of all AI systems based on structured knowledge representations, such as
first order logic, frames, connectionist systems, term logic, rule based
systems, and so on.


I have a few points in response to this:

1) Just because a system is based on logic (in whatever sense you
want to interpret that phrase) doesn't mean its reasoning can in
practice be traced by humans.  As I noted in recent posts,
probabilistic logic systems will regularly draw conclusions based on
synthesizing (say) tens of thousands or more weak conclusions into one
moderately strong one.  Tracing this kind of inference trail in detail
is pretty tough for any human, pragmatically speaking...

2) IMO the dichotomy between logic based and statistical AI
systems is fairly bogus.  The dichotomy serves to separate extremes on
either side, but my point is that when a statistical AI system becomes
really serious it becomes effectively logic-based, and when a
logic-based AI system becomes really serious it becomes effectively
statistical ;-)

For example, show me how a statistical procedure learning system is
going to learn how to carry out complex procedures involving
recursion.  Sure, it can be done -- but it's going to involve
introducing structures/dynamics that are accurately describable as
versions/manifestations of logic.

Or, show me how a logic based system is going to handle large masses
of uncertain data, as comes in from perception.  It can be done in
many ways -- but all of them involve introducing structures/dynamics
that are accurately describable as statistical.

Probabilistic inference in Novamente includes

-- higher-order inference that works somewhat like standard term and
predicate logic
-- first-order probabilistic inference that combines various heuristic
probabilistic formulas with distribution-fitting and so forth .. i.e.
statistical inference wrappedin a term logic framework...

It violates the dichotomy you (taking your cue from the standard
literature) propose/perpetuate ...  But it is certainly not the only
possible system to do so.

3) Anyway, trashing logic incorporating AI systems based on the
failings of GOFAI is sorta like trashing neural net systems based on
the failings of backprop, or trashing statistical learning systems
based on the failings of linear discriminant analysis or linear
regression.

Ruling out vast classes of AI approaches based on what (vaguely
defined) terms they have associated with them (logic, statistics,
neural net) doesn't seem like a good idea to me.   Because I feel
that all these standard paradigms have some element of correctness and
some element of irrelevance/incorrectness to them, and any one of them
could be grown into a working AGI approach -- but, in the course of
this growth process, the apparent differences btw these various
approaches will inevitably be overcome and the deeper parallels made
more apparent...

-- Ben G

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] A question on the symbol-system hypothesis

2006-12-02 Thread Mark Waser

A nice story but it proves absolutely nothing . . . . .

You developed a pattern-matcher.  The pattern matcher worked (and I would 
dispute that it worked better than it had a right to).  Clearly, you do 
not understand how it worked.  So what does that prove?


Your contention (or, at least, the only one that continues the previous 
thread) seems to be that you are too stupid to ever understand the pattern 
that it found.


Let me offer you several alternatives:
1)  You missed something obvious
2)  You would have understood it if the system could have explained it to 
you
3)  You would have understood it if the system had managed to losslessly 
convert it into a more compact (and comprehensible) format
4)  You would have understood it if the system had managed to losslessly 
convert it into a more compact (and comprehensible) format and explained it 
to you
5)  You would have understood it if the system had managed to lossily 
convert it into a more compact (and comprehensible -- and probably even, 
more correct) format
6)  You would have understood it if the system had managed to lossily 
convert it into a more compact (and comprehensible -- and probably even, 
more correct) format and explained it to you


My contention is that the pattern that it found was simply not translated 
into terms you could understand and/or explained.


Further, and more importantly, the pattern matcher *doesn't* understand its 
results either and certainly couldn't build upon them -- thus, it *fails* the 
test as far as being the central component of an RSIAI or being able to 
provide evidence as to the required behavior of such.


- Original Message - 
From: Philip Goetz [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Friday, December 01, 2006 7:02 PM
Subject: Re: [agi] A question on the symbol-system hypothesis



On 11/30/06, Mark Waser [EMAIL PROTECTED] wrote:
With many SVD systems, however, the representation is more 
vector-like
and *not* conducive to easy translation to human terms.  I have two 
answers

to these cases.  Answer 1 is that it is still easy for a human to look at
the closest matches to a particular word pair and figure out what they 
have

in common.
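
For concreteness, a small sketch of that kind of inspection: reduce a toy
term-context matrix with SVD and list a term's nearest neighbours by cosine
similarity; the vocabulary and counts below are invented for illustration:

# Sketch: inspect an SVD-reduced vector space by listing a term's nearest
# neighbours -- the human-readable check described above.  Toy data only.
import numpy as np

terms = ["cat", "dog", "car", "truck"]
M = np.array([[4, 3, 0, 1],      # rows = terms, columns = contexts
              [3, 4, 1, 0],
              [0, 1, 4, 3],
              [1, 0, 3, 4]], dtype=float)

U, S, Vt = np.linalg.svd(M, full_matrices=False)
vecs = U[:, :2] * S[:2]          # 2-dimensional term vectors

def neighbours(i):
    sims = vecs @ vecs[i] / (np.linalg.norm(vecs, axis=1) * np.linalg.norm(vecs[i]))
    return sorted(zip(sims, terms), reverse=True)

print(neighbours(terms.index("cat")))   # "dog" should rank just below "cat"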


I developed an intrusion-detection system for detecting brand new
attacks on computer systems.  It takes TCP connections, and produces
100-500 statistics on each connection.  It takes thousands of
connections, and runs these statistics thru PCA to come up with 5
dimensions.  Then it clusters each connection, and comes up with 1-3
clusters per port that have a lot of connections and are declared to
be normal traffic.  Those connections that lie far from any of those
clusters are identified as possible intrusions.
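
A compressed sketch of that pipeline, with synthetic data standing in for the
per-connection statistics; the libraries, sizes and thresholds are my own
assumptions, not a reproduction of the original system:

# Sketch: per-connection statistics -> PCA to a few dimensions -> clusters of
# "normal" traffic -> distance to the nearest cluster centre as suspicion.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, size=(1000, 100))   # ordinary connections
odd = rng.normal(4.0, 1.0, size=(5, 100))         # a few unusual ones
stats = np.vstack([normal, odd])                   # 100 header statistics each

low = PCA(n_components=5).fit_transform(stats)     # reduce to 5 dimensions
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(low)

dists = np.min(np.linalg.norm(low[:, None, :] - km.cluster_centers_[None], axis=2), axis=1)
suspects = np.argsort(dists)[-10:]                 # most suspicious connections
print("flagged indices:", sorted(suspects.tolist()))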

The system worked much better than I expected it to, or than it had a
right to.  I went back and, by hand, tried to figure out how it was
classifying attacks.  In most cases, my conclusion was that there was
*no information available* to tell whether a connection was an attack,
because the only information to tell that a connection was an attack
was in the TCP packet contents, while my system looked only at packet
headers.  And yet, the system succeeded in placing about 50% of all
attacks in the top 1% of suspicious connections.  To this day, I don't
know how it did it.




-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] A question on the symbol-system hypothesis

2006-12-02 Thread BillK

On 12/2/06, Mark Waser wrote:


My contention is that the pattern that it found was simply not translated
into terms you could understand and/or explained.

Further, and more importantly, the pattern matcher *doesn't* understand its
results either and certainly couldn't build upon them -- thus, it *fails* the
test as far as being the central component of an RSIAI or being able to
provide evidence as to the required behavior of such.



Mark, I think you are making two very basic wrong assumptions.

1) That humans are able to understand everything if it is explained to
them simply enough and they are given unlimited time.

2) That it is even possible to explain some very complex ideas in a
simple enough fashion.

Consider teaching the sub-normal. After much repetition they can be
trained to do simple tasks. Not understanding 'why', but they can
remember instructions eventually. Even high IQ humans have the same
equipment, just a bit better. They still have limits to how much they
can remember, how much information they can hold in their heads and
access. If you can't remember all the factors at once, then you can't
understand the result. You can write down the steps, all the different
data that affect the result, but you can't assemble it in your brain
to get a result.

And I think the chess or Go examples are a good example. People who
think that they can look through the game records and understand why
they lost are just not trained chess or go players. They have a good
reason to call some people 'Go masters' or 'chess masters'. I used to
play competitive chess and I can assure you that when our top board
player consistently beat us lesser mortals we could rarely point at
move 23 and say 'we shouldn't have done that'. It is *far* more subtle
than that. If you think you can do that, then you just don't
understand the problem.

BillK

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] A question on the symbol-system hypothesis

2006-12-02 Thread Mark Waser

Hi Bill,

   An excellent reply to my post since it gives me good points to directly 
respond to . . . .


   I am not making the two assumptions that you list in the absolute sense 
although I am making them in the practical sense (which turns out to be a 
very important difference).  Let me explain . . . .


   We are debating what is necessary for AGI.  I am certainly contending 
that any idea that is necessary for AGI is not too complicated for an 
ordinary human to understand.


   I am also contending that, while there may be ideas that are too 
difficult for humans to comprehend, that the world is messy enough and 
variable-interlinked enough that we currently don't have the data that would 
allow a system to find such a concept (nor a system that would truly 
*understand* such a concept -- using understand in the sense of being able 
to build upon it).  If you wanted to debate this latter point with me by 
saying that Google has sufficient data, I wouldn't want to argue the point 
except to say that Google really can't use the data to build upon.


   There's also the argument that humans are not limited to what's 
currently in their working memory.  When I am doing system design and am 
working at the top level, I can only keep the major salient features of the 
subsystems in mind.  Then, I go through each of the subsystems individually 
and see if they indicate that I should re-evaluate any decisions made at the 
top level.  And you continue down through the levels . . . .  With proper 
encapsulation, etc., this always works.  It is not necessarily optimal but 
it is certainly functional.  If I use paper and other outside assistance, I 
can do even more.


   If storage and access are the concern, your own argument says that a 
sufficiently enhanced human can understand anything and I am at a loss as to 
why an above-average human with a computer and computer skills can't be 
considered nearly indefinitely enhanced.


   Regarding chess or Go masters -- while you couldn't point to a move and 
say "you shouldn't have done that," I'm sure that the master could (probably 
in several instances) point to a move and say "I wouldn't have done that" 
and provide a better move (most often along with a variable-quality 
explanation of why it was a better move).


   I consider all of this as an engineering problem rather than a science 
problem.  Yes, my bridge isn't going to hold up near a black hole, but it is 
certainly sufficient for near-human conditions.


   Mark

- Original Message - 
From: BillK [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Saturday, December 02, 2006 2:31 PM
Subject: Re: [agi] A question on the symbol-system hypothesis



On 12/2/06, Mark Waser wrote:


My contention is that the pattern that it found was simply not translated
into terms you could understand and/or explained.

Further, and more importantly, the pattern matcher *doesn't* understand its
results either and certainly couldn't build upon them -- thus, it *fails* the
test as far as being the central component of an RSIAI or being able to
provide evidence as to the required behavior of such.



Mark, I think you are making two very basic wrong assumptions.

1) That humans are able to understand everything if it is explained to
them simply enough and they are given unlimited time.

2) That it is even possible to explain some very complex ideas in a
simple enough fashion.

Consider teaching the sub-normal. After much repetition they can be
trained to do simple tasks. Not understanding 'why', but they can
remember instructions eventually. Even high IQ humans have the same
equipment, just a bit better. They still have limits to how much they
can remember, how much information they can hold in their heads and
access. If you can't remember all the factors at once, then you can't
understand the result. You can write down the steps, all the different
data that affect the result, but you can't assemble it in your brain
to get a result.

And I think the chess and Go examples are good ones. People who
think that they can look through the game records and understand why
they lost are just not trained chess or Go players. There is a good
reason to call some people 'Go masters' or 'chess masters'. I used to
play competitive chess and I can assure you that when our top board
player consistently beat us lesser mortals we could rarely point at
move 23 and say 'we shouldn't have done that'. It is *far* more subtle
than that. If you think you can do that, then you just don't
understand the problem.

BillK

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303






Re: [agi] A question on the symbol-system hypothesis

2006-12-02 Thread Matt Mahoney

--- Mark Waser [EMAIL PROTECTED] wrote:

 A nice story but it proves absolutely nothing . . . . .

I know a little about network intrusion anomaly detection (it was my
dissertation topic), and yes it is an important lesson.

Network traffic containing attacks has a higher algorithmic complexity than
traffic without attacks.  It is less compressible.  The reason has nothing to
do with the attacks, but with arbitrary variations in protocol usage made by
the attacker.  For example, the Code Red worm fragments the TCP stream after
the HTTP GET command, making it detectable even before the buffer overflow
code is sent in the next packet.  A statistical model will learn that this is
unusual (even though legal) in normal HTTP traffic, but offer no explanation
why such an event should be hostile.  The reason such anomalies occur is
because when attackers craft exploits, they follow enough of the protocol to
make it work but often don't care about the undocumented conventions followed
by normal servers and clients.  For example, they may use lower case commands
where most software uses upper case, or they may put unusual but legal values
in the TCP or IP-ID fields or a hundred other things that make the attack
stand out.  Even if they are careful, many exploits require unusual commands
or combinations of options that rarely appear in normal traffic and are
therefore less carefully tested.

So my point is that it is pointless to try to make an anomaly detection system
explain its reasoning, because the only explanation is that the traffic is
unusual.  The best you can do is have it estimate the probability of a false
alarm based on the information content.
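
As a toy sketch of scoring by information content (nothing from the
dissertation system; plain Python, and the header field names are made up):
model each field of a connection with frequency counts over normal traffic,
then report the total surprisal, so that unusual-but-legal values contribute
more bits.

    import math
    from collections import Counter, defaultdict

    def train(normal_connections):
        """Count how often each (field, value) pair occurs in normal traffic."""
        counts = defaultdict(Counter)
        for conn in normal_connections:      # each conn is a dict of header fields
            for field, value in conn.items():
                counts[field][value] += 1
        return counts

    def surprisal(conn, counts, alpha=1.0):
        """Sum of -log2 P(value | field) with add-alpha smoothing.  A higher
        score means the connection is less compressible, i.e. more anomalous."""
        bits = 0.0
        for field, value in conn.items():
            c = counts.get(field, Counter())
            total = sum(c.values()) + alpha * (len(c) + 1)
            bits += -math.log2((c[value] + alpha) / total)
        return bits

    normal = [{"tcp_flags": "SYN", "ttl": 64}, {"tcp_flags": "SYN", "ttl": 64},
              {"tcp_flags": "SYN", "ttl": 128}]
    model = train(normal)
    print(surprisal({"tcp_flags": "SYN", "ttl": 64}, model))   # low: typical
    print(surprisal({"tcp_flags": "FIN", "ttl": 1}, model))    # higher: unusual

Turning such a raw score into an estimated false-alarm probability would also
need a model of how the scores are distributed on normal traffic; the sketch
only produces the bits.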

So the lesson is that AGI is not the only intelligent system where you should
not waste your time trying to understand what it has learned.  Even if you
understood it, it would not tell you anything.  Would you understand why a
person made some decision if you knew the complete state of every neuron and
synapse in his brain?


 You developed a pattern-matcher.  The pattern matcher worked (and I would 
 dispute that it worked better than it had a right to).  Clearly, you do 
 not understand how it worked.  So what does that prove?
 
 Your contention (or, at least, the only one that continues the previous 
 thread) seems to be that you are too stupid to ever understand the pattern 
 that it found.
 
 Let me offer you several alternatives:
 1)  You missed something obvious
 2)  You would have understood it if the system could have explained it to 
 you
 3)  You would have understood it if the system had managed to losslessly 
 convert it into a more compact (and comprehensible) format
 4)  You would have understood it if the system had managed to losslessly 
 convert it into a more compact (and comprehensible) format and explained it 
 to you
 5)  You would have understood it if the system had managed to lossily 
 convert it into a more compact (and comprehensible -- and probably even, 
 more correct) format
 6)  You would have understood it if the system had managed to lossily 
 convert it into a more compact (and comprehensible -- and probably even, 
 more correct) format and explained it to you
 
 My contention is that the pattern that it found was simply not translated 
 into terms you could understand and/or explained.
 
 Further, and more importantly, the pattern matcher *doesn't* understand its 
 results either and certainly couldn't build upon them -- thus, it *fails* the 
 test as far as being the central component of an RSIAI or being able to 
 provide evidence as to the required behavior of such.
 
 - Original Message - 
 From: Philip Goetz [EMAIL PROTECTED]
 To: agi@v2.listbox.com
 Sent: Friday, December 01, 2006 7:02 PM
 Subject: Re: [agi] A question on the symbol-system hypothesis
 
 
  On 11/30/06, Mark Waser [EMAIL PROTECTED] wrote:
  With many SVD systems, however, the representation is more 
  vector-like
  and *not* conducive to easy translation to human terms.  I have two 
  answers
  to these cases.  Answer 1 is that it is still easy for a human to look at
  the closest matches to a particular word pair and figure out what they 
  have
  in common.
 
  I developed an intrusion-detection system for detecting brand new
  attacks on computer systems.  It takes TCP connections, and produces
  100-500 statistics on each connection.  It takes thousands of
  connections, and runs these statistics thru PCA to come up with 5
  dimensions.  Then it clusters each connection, and comes up with 1-3
  clusters per port that have a lot of connections and are declared to
  be normal traffic.  Those connections that lie far from any of those
  clusters are identified as possible intrusions.
 
  The system worked much better than I expected it to, or than it had a
  right to.  I went back and, by hand, tried to figure out how it was
  classifying attacks.  In most cases, my conclusion was that there was
  *no information available* to tell whether

Re: [agi] A question on the symbol-system hypothesis

2006-12-01 Thread Philip Goetz

On 11/30/06, Mark Waser [EMAIL PROTECTED] wrote:

With many SVD systems, however, the representation is more vector-like
and *not* conducive to easy translation to human terms.  I have two answers
to these cases.  Answer 1 is that it is still easy for a human to look at
the closest matches to a particular word pair and figure out what they have
in common.


I developed an intrusion-detection system for detecting brand new
attacks on computer systems.  It takes TCP connections, and produces
100-500 statistics on each connection.  It takes thousands of
connections, and runs these statistics thru PCA to come up with 5
dimensions.  Then it clusters each connection, and comes up with 1-3
clusters per port that have a lot of connections and are declared to
be normal traffic.  Those connections that lie far from any of those
clusters are identified as possible intrusions.

The system worked much better than I expected it to, or than it had a
right to.  I went back and, by hand, tried to figure out how it was
classifying attacks.  In most cases, my conclusion was that there was
*no information available* to tell whether a connection was an attack,
because the only information to tell that a connection was an attack
was in the TCP packet contents, while my system looked only at packet
headers.  And yet, the system succeeded in placing about 50% of all
attacks in the top 1% of suspicious connections.  To this day, I don't
know how it did it.
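
For the shape of that pipeline in code, a rough sketch (random stand-in data
in place of the real per-connection header statistics, scikit-learn assumed,
and clustering done globally here rather than per port):

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    X = rng.normal(size=(5000, 200))   # 5000 connections x 200 header statistics

    # Reduce the statistics to 5 dimensions.
    Z = PCA(n_components=5).fit_transform(X)

    # Cluster the reduced points and use distance to the nearest cluster
    # centre as the suspicion score.
    km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(Z)
    dist_to_nearest = km.transform(Z).min(axis=1)

    # Flag the top 1% most distant connections as possible intrusions.
    threshold = np.quantile(dist_to_nearest, 0.99)
    suspects = np.where(dist_to_nearest > threshold)[0]
    print(len(suspects), "connections flagged for inspection")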

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] A question on the symbol-system hypothesis

2006-12-01 Thread Matt Mahoney

--- Philip Goetz [EMAIL PROTECTED] wrote:

 On 11/30/06, James Ratcliff [EMAIL PROTECTED] wrote:
  One good one:
  Consciousness is a quality of the mind generally regarded to comprise
  qualities such as subjectivity, self-awareness, sentience, sapience, and
 the
  ability to perceive the relationship between oneself and one's
 environment.
  (Block 2004).
 
  Compressed: Consciousness = intelligence + autonomy
 
 I don't think that definition says anything about intelligence or
 autonomy.  All it is is a lot of words that are synonyms for
 consciousness, none of which really mean anything.

I think if you insist on an operational definition of consciousness you will
be confronted with a disturbing lack of evidence that it even exists.


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] A question on the symbol-system hypothesis

2006-12-01 Thread Kashif Shah

A little late on the draw here - I am a new member to the list and was
checking out the archives.  I had an insight into this debate over
understanding.

James Ratcliff wrote:

Understanding is a dum-dum word, it must be specifically defined as a
concept
or not used.  Understanding art is a Subjective question.  Everyone has
their
own 'interpretations' of what that means, either brush strokes, or style, or
color, or period, or content, or inner meaning.
But you CAN'T measure understanding of an object internally like that. There
MUST be an external measure of understanding.

My insight was this:  to ask 'do you understand x?' is too simple for the
subjective realm.  One must qualify with a phrase such as (in the context of
art) 'do you understand x in relation to y' or 'do you understand x as
representing y' or 'do you understand x as a possible meaning for y', etc.
By externally specifying the y, one can gain an objective 'picture' of the
internal subjective state of a person or an AI.  Of course this makes things
pretty complicated when one must analyze all possible y's, however, this
could even become a job for an AI, couldn't it?  If one knows the (or a) set
of possible interpretations (y's), one can begin to inquire as to the
understanding of x within an intelligence.



I would appreciate your feedback.

Thanks for your time,

Kashif Shah

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] A question on the symbol-system hypothesis

2006-11-30 Thread Mark Waser
   Bah!  I hate it when I rush and get stupid.  This is why I'm not a 
politician (and why I think the Republican tirade/crusade against 
flip-flopping is so damaging/dangerous).  But, it does serve to illustrate 
a number of useful points so I'll just go with it . . . .


   I'm writing this e-mail without any additional external information than 
I had last night (though I expect to shortly be reading *several* e-mails 
from the list telling me what an idiot I am  :-).  However, my subconscious 
knowledge-retrieval processes have finally seen fit to provide me with a 
number of "You know . . ."s.  I think that observations of this type are 
very important to make when considering building an AI.  Not all 
observations will be compiled into knowledge and not all knowledge will be 
immediately accessible to a system even if the system has what it needs to 
retrieve/derive the knowledge.  Designs that assume total knowledge 
integrity and retrieval are exactly as bad as designs that assume infinite 
processing power and memory.


   Clearly, Philip is referring to the analogies questions of the SAT (not 
the synonym questions that I got stuck on last night).  Clearly, vectors 
have direction in addition to distance.  And, clearly, Philip is referring 
to the fact that the directions/vectors that the system generates are not in 
human-readable form . . . . (though I would argue that they are easily 
human-comprehensible if you write a translator).


   I'm tempted to make a digression into how much common 
knowledge/world-modeling we assume/rely upon -- knowledge that my brain was 
not coming up with last night and replacing with a poor substitute instead


   So let me extend and refine my stupid answer (because the core *is* 
still fundamentally correct) . . . .


   Training SVDs on a given corpus produces a database that is always 
fundamentally isomorphic to pairs of word-pairs and their similarity 
distances (normally expressed as the number and frequency of 
dimensions/common-usages they have in common) through a very simple 
algorithm that compares how they are used in sentences.  There are, of 
course, also various representations that appear more vector-like but the 
fundamental isomorphism remains.


   With the simplest SVD algorithms and most obvious cases, these 
directions can often be easily translated into human terms.  For example, 
hat/head and hands/gloves both have dimensionalities of "wore" and "wear". 
(Note, however, that if you wrote the SAT test specifically to confuse this 
type of system without messing with humans, you could have examples like 
yarmulke/temple (dimensions "wear-in" and "wear-to") that include possibly 
system-acceptable answers like hole/sock to distract from tuxedo/dance).


   With many SVD systems, however, the representation is more vector-like 
and *not* conducive to easy translation to human terms.  I have two answers 
to these cases.  Answer 1 is that it is still easy for a human to look at 
the closest matches to a particular word pair and figure out what they have 
in common.  Answer 2 is that I still contend that this is a major design 
flaw (which can also be rectified by taking the time to write a translator). 
You really, really, *really* don't want to create an intelligence that may 
be both smarter/faster than you and seriously flawed -- and statistical 
knowledge is very, very shallow; very prone to certain types of error; and 
*not* particularly conducive to being built upon (unless, of course, you use 
it merely as a subsystem and you're packing up its results and sending them 
to an entirely different type of system).  You clearly do *not* want a 
system of this type at the core of your AI's reasoning processes --  
particularly since, I contend, this type of system is frequently (and in 
some classes of systems which are well behaved, always) isomorphic to a 
system that *is* easily human-comprehensible.  (Note that neural networks, 
in particular, are a class of system that are *not* well behaved because the 
internal data structures formed by the neural network algorithms that we 
know most frequently do not correspond to the real-world simplest 
explanation unless you get really, really lucky in choosing your number of 
nodes and your connections.  Nature has clearly found a way around this 
problem but we do not know this solution yet.)
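
As a toy version of such a translator (tiny made-up corpus, plain NumPy SVD,
no claim that this matches any particular LSA setup): embed the words,
represent a word pair by the difference of its vectors, and "explain" an
otherwise opaque relation vector by ranking known pairs against it.

    import numpy as np

    corpus = ["he wore a hat on his head", "she wore gloves on her hands",
              "he wore a yarmulke to the temple", "they wore tuxedos to the dance"]
    vocab = sorted({w for s in corpus for w in s.split()})
    index = {w: i for i, w in enumerate(vocab)}

    # Word-by-word co-occurrence counts within each sentence.
    C = np.zeros((len(vocab), len(vocab)))
    for s in corpus:
        words = s.split()
        for w in words:
            for c in words:
                if w != c:
                    C[index[w], index[c]] += 1

    # Truncated SVD: keep a few latent dimensions as the word vectors.
    U, S, Vt = np.linalg.svd(C, full_matrices=False)
    k = 4
    W = U[:, :k] * S[:k]

    def relation(a, b):
        return W[index[a]] - W[index[b]]

    def cosine(u, v):
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

    # "Translate" the hat:head relation by ranking other pairs against it.
    target = relation("hat", "head")
    pairs = [("gloves", "hands"), ("yarmulke", "temple"), ("tuxedos", "dance")]
    for a, b in sorted(pairs, key=lambda p: -cosine(relation(*p), target)):
        print(a, ":", b, round(cosine(relation(a, b), target), 3))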


   Mark (going off to be plastered by replies to last night's message)

- Original Message - 
From: Mark Waser [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Wednesday, November 29, 2006 6:21 PM
Subject: Re: [agi] A question on the symbol-system hypothesis



Yes, it was insulting.  I am sorry.  However, I don't think this
conversation is going anywhere.  There are many, many examples just of
the use of SVD and PCI that I think meet your criteria.  The one I
mentioned earlier, to you, that uses SVD on word-pair similarities,
and scores at human-level on the SAT, is an example.  There are
thousands

Re: [agi] A question on the symbol-system hypothesis

2006-11-30 Thread Mark Waser

Do you disagree with any of this?


:-)  Fundamentally, no, but I'm suddenly desirous of better defining "black 
box" or dividing black-box systems into a couple (or maybe more) broad 
categories.


   Most of your counter-examples really can be combined simply as Genetic 
Algorithms whose behavior generally *doesn't* turn out to be black-box in 
terms of human-explainability/comprehensibility (which was the feature of 
black-boxness that we were debating).  Further, good Genetic Algorithms, 
unlike most classic neural networks and statistical approaches, *do* tend to 
converge on answers that correspond to real-world simplest explanations and 
therefore provide a good foundation to build intelligence upon.


   My arguments would probably be better restated as being against 
human-incomprehensible (primarily statistical and normally not simplest 
explanation matching) systems rather than using the probably misunderstood 
term "black-box".


   Would you argue that any of your examples produce good results that are 
not comprehensible by humans?  I know that you sometimes will argue that the 
systems can find patterns that are both the real-world simplest explanation 
and still too complex for a human to understand -- but I don't believe that 
such patterns exist in the real world (I'd ask you to provide me with an 
example of such a pattern to disprove this belief -- but I wouldn't 
understand it  :-).


- Original Message - 
From: Ben Goertzel [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Wednesday, November 29, 2006 9:36 PM
Subject: Re: Re: Re: Re: [agi] A question on the symbol-system hypothesis



On 11/29/06, Philip Goetz [EMAIL PROTECTED] wrote:

On 11/29/06, Mark Waser [EMAIL PROTECTED] wrote:

 I defy you to show me *any* black-box method that has predictive power
 outside the bounds of its training set.  All that the black-box 
 methods are
 doing is curve-fitting.  If you give them enough variables they can 
 brute

 force solutions through what is effectively case-based/nearest-neighbor
 reasoning but that is *not* intelligence.  You and they can't build 
 upon

 that.

If you look into the literature of the past 20 years, you will easily
find several thousand examples.


Mark, I believe your point is overstated, although probably based on a
correct intuition

Plenty of black box methods can extrapolate successfully beyond
their training sets, and using approaches not fairly describable as
case based or nearest neighbor.

CBR and nearest neighbor are very far from optimally-performing as far
as prediction/categorization algorithms go.

As examples of learning algorithms that
-- successfully extrapolate beyond their training sets using
pattern-recognition much more complex than CBR/nearest-neighbor
-- do so by learning predictive rules that are opaque to humans, in 
practice

I could cite a bunch, e.g.
-- SVM's
-- genetic programming
-- MOSES (www.metacog.org), a probabilistic evolutionary method used
in Novamente
-- Eric Baum's Hayek system
-- recurrent neural nets trained using recurrent backprop or marker-based 
GA's

-- etc. etc. etc.

These methods are definitely not human-level AGI.  But, they
definitely do extrapolate beyond their training set, via recognizing
complex patterns in their training sets far beyond
CBR/nearest-neighbor.

What these methods do not do, yet at least, is to extrapolate to data
of a **radically different type** from their training set.

For instance, suppose you train an SVM algorithm to recognize gene
expression patterns indicative of lung cancer, by exposing it to data
from 50 lung cancer patients and 50 controls.  Then, the SVM can
generalize to predict whether a new person has lung cancer or not --
whether or not this person particularly resembles **any** of the 100
people on whose data the SVM was trained.  It can do so by paying
attention to a complex nonlinear combination of features, whose
meaning may well not be comprehensible to any human within a
reasonable amount of effort.  This is not CBR or nearest-neighbor.  It
is a more fundamental form of learning, displaying much greater
compression and pattern-recognition and hence greater generalization.

On the other hand, if you want to apply the SVM to breast cancer, you
have to run it all over again, on different data.  And if you want to
apply it to cancer in general you need to specifically feed it
training data regarding a variety of cancers.  You can't feed it
training data regarding breast, lung and liver cancer separately, have
it learn predictive rules for each of these and then have it
generalize these predictive rules into a rule for cancer in
general

In a sense, SVM is just doing curve-fitting, sure

But in a similar sense, Marcus Hutter's AIXItl theorems show that
given vast computational resources, an arbitrarily powerful level of
intelligence can be achieved via curve-fitting.

Human-level AGI represents curve-fitting at a level of generality
somewhere between

Re: Re: [agi] A question on the symbol-system hypothesis

2006-11-30 Thread Ben Goertzel

Would you argue that any of your examples produce good results that are
not comprehensible by humans?  I know that you sometimes will argue that the
systems can find patterns that are both the real-world simplest explanation
and still too complex for a human to understand -- but I don't believe that
such patterns exist in the real world (I'd ask you to provide me with an
example of such a pattern to disprove this belief -- but I wouldn't
understand it  :-).


Well, it really depends on what you mean by "too complex for a human
to understand".

Do you mean

-- too complex for a single human expert to understand within 1 week of effort
-- too complex for a team of human experts to understand within 1 year of effort
etc.
-- fundamentally too complex for humans to understand, ever

??

My main point in this regard is that a machine learning algorithm can
find a complex predictive pattern, in a few seconds or minutes of
learning, that is apparently inscrutable to humans -- and that remains
inscrutable to an educated human after hours or days of scrutiny.

This doesn't mean the pattern is **fundamentally impossible** for
humans to understand, of course... though in some cases it might
conceivably be (more on that later)

As an example consider ensemble-based prediction algorithms.  In this
approach, you make a prediction by learning say 1000 or 10,000
predictive rules (by one or another machine learning algorithm), each
of which may make a prediction that is just barely statistically
significant.  Then, you use some sort of voting or estimate-merging
mechanism (and there are some subtle ones  as well as simple ones,
e.g. ranging from simple voting to an approach that tries to find a
minimum-entropy prob. distribution for the underlying reality
explaining the variety of individual predictions)

So, what if we make a prediction about the price of Dell stock tomorrow by

-- learning (based on analysis of historical price data) 10K weak
predictive rules, each of which is barely meaningful, and each of
which combines a few dozen relevant factors
-- merging the predictions of these models using an
entropy-minimization estimate-merging algorithm

Then we are certainly not just using nearest-neighbor or CBR or
anything remotely like that.
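
To give the flavour of that in code -- a toy sketch with synthetic data,
simple threshold rules standing in for the learned models, and plain
weighted voting rather than any entropy-minimizing merge:

    import numpy as np

    rng = np.random.default_rng(1)
    n, d = 2000, 40
    X = rng.normal(size=(n, d))
    # "Up/down" label driven by a few of the features plus noise.
    y = (X[:, :5].sum(axis=1) + 0.5 * rng.normal(size=n) > 0).astype(int)
    X_tr, y_tr, X_te, y_te = X[:1500], y[:1500], X[1500:], y[1500:]

    rules = []    # each rule: (feature index, threshold, sign, weight)
    for _ in range(10000):
        j = int(rng.integers(d))
        t = float(rng.normal())
        pred = (X_tr[:, j] > t).astype(int)
        acc = (pred == y_tr).mean()
        sign = 1 if acc >= 0.5 else -1     # flip rules that are anti-correlated
        edge = abs(acc - 0.5)
        if edge > 0.01:                    # keep only barely-useful rules
            rules.append((j, t, sign, edge))

    def vote(x):
        s = sum(w * sign * (1 if x[j] > t else -1) for j, t, sign, w in rules)
        return int(s > 0)

    preds = np.array([vote(x) for x in X_te])
    print("rules kept:", len(rules), " test accuracy:", (preds == y_te).mean())

No single rule is worth much on its own, and nobody is going to read thousands
of them, but the merged vote can still be a usable predictor.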

Yet, can a human understand why the system made the prediction it did?

Not readily

Maybe, after months of study -- statistically analyzing the 10K models
in various ways, etc. -- a human could puzzle out this system's one
prediction.   But the predictive system may make similar predictions
for a whole bunch of stocks, every day

There is plenty of evidence in the literature that ensemble methods
like this outperform individual-predictive-model methods.  And there
is plenty of evidence suggesting that the brain uses ensemble methods
(i.e. it combines together multiple unreliable estimates to get a
single reliable one) in simple contexts, so maybe it does in complex
contexts too...

I would also note that, on a big enough empirical dataset, an
algorithmic approach like SVM or the ensemble method described above
definitely COULD produce predictive rules that were fundamentally
incomprehensible to humans --- in the sense of having an algorithmic
information content greater than that of the human brain.  This is
quite a feasible possibility.  But I don't claim that this is the case
with these algorithms as applied in the present day, in fact I doubt
it.

-- Ben G

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] A question on the symbol-system hypothesis

2006-11-30 Thread James Ratcliff
Richard Loosemore [EMAIL PROTECTED] wrote: Philip Goetz wrote:
 On 11/17/06, Richard Loosemore  wrote:
 I was saying that *because* (for independent reasons) these people's
 usage of terms like intelligence is so disconnected from commonsense
 usage (they idealize so extremely that the sense of the word no longer
 bears a reasonable connection to the original) *therefore* the situation
 is akin to the one that obtains for Model Theory or Rings.

 I am saying that these folks are trying to have their cake and eat it
 too:  they idealize intelligence into something so disconnected from
 the real world usage that, really, they ought not to use the term, but
 should instead invent another one like ooblifience to describe the
 thing they are proving theorems about.

 But then, having so distorted the meaning of the term, they go back and
 start talking about the conclusions they derived from their math as if
 those conclusions applied to the real world thing that in commonsense
 parlance we call intelligence.  At that point they are doing what I
 claimed a Model Theorist would be doing if she started talking about a
 kid's model airplane as if Model Theory applied to it.
 
 This is exactly what John Searle does with the term consciousness.

Exactly!!
Buzz-Word alert :}
Define these words to what you mean, and then go on, either the AI matches or 
can match the definition or it can't, but just saying these words doesn't get 
anywhere other than arguing their meaning... 

One good one: 
Consciousness is a quality of the mind generally regarded to comprise qualities 
such as subjectivity, self-awareness, sentience, sapience, and the ability to 
perceive the relationship between oneself and one's environment.  (Block 2004).

Compressed: Consciousness = intelligence + autonomy

Intelligence and Consciousness are both directly tied together. 
Though it may be possible to develop machine intelligence (re Google or question 
answering / expert systems) that do not have autonomy, i.e., only respond when 
asked a question.

James



___
James Ratcliff - http://falazar.com
New Torrent Site, Has TV and Movie Downloads! 
http://www.falazar.com/projects/Torrents/tvtorrents_show.php
 

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Re: [agi] A question on the symbol-system hypothesis

2006-11-30 Thread Mark Waser

Well, it really depends on what you mean by "too complex for a human
to understand". Do you mean
-- too complex for a single human expert to understand within 1 week of 
effort
-- too complex for a team of human experts to understand within 1 year of 
effort

-- fundamentally too complex for humans to understand, ever


Actually, I'm willing to stake my claim to "too complex for a single human 
expert to understand within 1 week of effort".



My main point in this regard is that a machine learning algorithm can
find a complex predictive pattern, in a few seconds or minutes of
learning, that is apparently inscrutable to humans -- and that remains
inscrutable to an educated human after hours or days of scrutiny.


Take that complex predictive pattern.  Assume that it is in, or can be 
translated to, its simplest correct yet complete form.  Assume that it is 
translated to the most human-friendly representation possible . . . .


I would contend that *all* complex predictive patterns that human-level and 
even near-super-human AGIs are likely to be able to extract/generate are 
reducible to maps which partition an n-space into areas where the 
predictions are constants or reasonably simple formulas -- and that humans 
can easily handle any prediction likely to be made by a human or even 
near-superhuman AI.  In day-to-day life, our world is not controlled enough 
and regular enough that we (or any near-human system) can collect enough 
data to *correctly* extract formulas with a large enough number of 
inextricably interlinked variables that we can't understand them.
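
As a trivial example of what such a human-readable partition looks like (toy
thresholds, nothing derived from data):

    def predict(income, debt_ratio):
        """Toy risk map: each branch is one region of the 2-D input space,
        with a constant prediction inside the region."""
        if income < 30_000:
            if debt_ratio > 0.5:
                return "high risk"
            return "medium risk"
        if debt_ratio > 0.8:
            return "medium risk"
        return "low risk"

    for point in [(25_000, 0.6), (25_000, 0.2), (90_000, 0.9), (90_000, 0.1)]:
        print(point, "->", predict(*point))

A learned model with more regions is bigger, but it is still the same kind of
object: regions plus a constant (or simple formula) per region.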


It is possible that, eventually, a true super-human-level AI will take a ton 
of data with a large number of irreducibly interacting variables that behave 
differently across a tremendous number of partitions -- but I'm not at all 
convinced that our world is regular enough, or would provide enough 
controlled data, in a case where the interactions are also so interlinked 
that the problem can't be decomposed -- and I certainly don't expect to see 
that anytime in the near future, or to see it as a *requirement* for 
AGI.


(And, yes, I will acknowledge that I cheated tremendously with my "Take that 
complex predictive pattern" paragraph since doing those things requires 
human-level intelligence).


So, what if we make a prediction about the price of Dell stock tomorrow by 
snip

Then we are certainly not just using nearest-neighbor or CBR or
anything remotely like that.


But the behavior across a phase change is going to be just as incorrect.


Yet, can a human understand why the system made the prediction it did?
Not readily


Again, I've got two answers.  First, in part, this is because the system is 
not expressing (or even deriving) the rules in the simplest correct yet 
complete form.  Second, the human understands the prediction as well as 
the system does (i.e. as a collection of unrelated rules derived from 
previous data with weights added together).  I would contend that knowledge 
and understanding are measured by predictive power -- particularly under 
novel circumstances (to separate them from simple pattern-matching).  The 
system is doing what it does faster than a human can but it really isn't 
doing anything that a human can't (and certainly not anything that the human 
can't understand).



I would also note that, on a big enough empirical dataset, an
algorithmic approach like SVM or the ensemble method described above
definitely COULD produce predictive rules that were fundamentally
incomprehensible to humans --- in the sense of having an algorithmic
information content greater than that of the human brain.  This is
quite a feasible possibility.  But I don't claim that this is the case
with these algorithms as applied in the present day, in fact I doubt
it.


:-)  I missed this paragraph the first time through.  It sounds like my 
argument except I have more skepticism about the world being regular enough 
and the myriad of other variables being controlled enough that the *data* 
for SVM to do this is going to be collected any time soon.  (Not to mention 
that, by the time it happens, I fully expect that the algorithmic 
information capacity of the human brain will be severely augmented :-) (and 
even so, it's really just more of the same except that it's run across the 
phase change where we poor limited humans have run out of capacity -- not 
the complete change in understanding that you see between us and the lower 
animals).



- Original Message - 
From: Ben Goertzel [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Thursday, November 30, 2006 9:30 AM
Subject: Re: Re: [agi] A question on the symbol-system hypothesis


Would you argue that any of your examples produce good results that 
are
not comprehensible by humans?  I know that you sometimes will argue that 
the
systems can find patterns that are both the real-world simplest 
explanation
and still too complex for a human to understand -- but I don't

Re: Re: [agi] A question on the symbol-system hypothesis

2006-11-29 Thread Philip Goetz

On 11/14/06, Mark Waser [EMAIL PROTECTED] wrote:


 Matt Mahoney wrote:
 Models that are simple enough to debug are too simple to scale.
 The contents of a knowledge base for AGI will be beyond our ability to
comprehend.

Given sufficient time, anything should be able to be understood and
debugged.  Size alone does not make something incomprehensible and I defy
you to point at *anything* that is truly incomprehensible to a smart human
(for any reason other than we lack knowledge on it).


He did, in the post that you were replying to.  The paper linked to
used SVD on a large corpus to produce vectors measuring the similarities
between pairs of words.  You cannot look at those
vectors and understand them.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Re: Re: [agi] A question on the symbol-system hypothesis

2006-11-29 Thread Mark Waser

Matt was not arguing over whether what an AI does should be called
"understanding" or "statistics".  Matt was discussing what the right
way to design an AI is.


And Matt made a number of statements that I took issue with -- the current 
one being that an AI's reasoning wouldn't be human-understandable.  Why 
don't we stick with that point?



It is the human who (at first) designs the AI.


And your point is?  I'm arguing that the AI's reasoning should/will be 
human-understandable.  You're arguing that it will not be.  And then, you're 
arguing that since it is the human who (at first) designs the AI that it 
proves *your* point?



Designs that require the designer to have super-human abilities
are poor designs.


Designs with infeasible computational requirements are poor designs. 
Designs that can't be debugged are poor designs.  I'm not requiring 
super-human abilities at all -- *you* are.  It is *your* contention that 
understanding the AI's reasoning will require superhuman abilities.  I don't 
see that at all.  It's all just data and algorithms.


Your previous example of vectors not being understandable because it is 
millions of data points conflates several interpretations of understanding 
to confuse the issue and doesn't prove your point at all.  Mathematically, 
vector fields are fundamentally isomorphic with neural networks and/or 
matrix algebra.  In all three cases, you are deriving (via various methods) 
the best n equations to describe a given test dataset.  Given a given 
*ordered* data set and the training algorithm, a human can certainly 
calculate the final vectors/weights/equations.  A human who knows the 
current vectors/weights/equations can certainly calculate the output when a 
system is presented with a given new point.  What a human can't do is to 
describe why, in the real world, that particular vector may be optimum and 
the reason why the human can't is because *IT IS NOT OPTIMUM* for the real 
world except in toy cases!  All three of the methods are *very* subject to 
overfitting and numerous other maladies unless a) the number of 
vectors/nodes/equations is exactly correct for the problem (and we currently 
don't know any good algorithms to ensure this) and b) the number of test 
examples is *much* larger than the variables involved in the solution and 
the vectors/network are/is *very* thoroughly trained (either computationally 
infeasible for large, complicated problems with many variables if you try to 
go for the minimal correct number of vectors/nodes/equations OR having only 
nearest match capability and *zero* predictive power if you allow too many 
vectors/nodes/equations).
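
A small sketch of the overfitting point (synthetic data; a high-degree
polynomial stands in for "too many vectors/nodes/equations", and the test
points extend a little past the training range):

    import numpy as np

    rng = np.random.default_rng(2)
    x_tr = np.linspace(0.0, 1.0, 12)
    y_tr = 2 * x_tr + 0.2 * rng.normal(size=12)   # the real world: a noisy line
    x_te = np.linspace(0.0, 1.25, 30)             # includes a bit of extrapolation
    y_te = 2 * x_te

    for degree in (1, 8):
        coeffs = np.polyfit(x_tr, y_tr, degree)
        err_tr = np.mean((np.polyval(coeffs, x_tr) - y_tr) ** 2)
        err_te = np.mean((np.polyval(coeffs, x_te) - y_te) ** 2)
        print(f"degree {degree}: train MSE {err_tr:.4f}, test MSE {err_te:.4f}")

The high-degree fit hugs the training points and typically goes badly wrong
just outside them, which is the sense in which a curve-fit with too many free
parameters has little predictive power beyond its training set.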


I defy you to show me *any* black-box method that has predictive power 
outside the bounds of its training set.  All that the black-box methods are 
doing is curve-fitting.  If you give them enough variables they can brute 
force solutions through what is effectively case-based/nearest-neighbor 
reasoning but that is *not* intelligence.  You and they can't build upon 
that.



Thus, the machine-learning black-box approach is a better design.


Why? Although this is a nice use of buzzwords, I strongly disagree for 
numerous reasons and, despite your "thus", your previous arguments certainly 
don't lead to this conclusion.  Obviously, any design that I consider is 
using machine-learning -- but machine-learning does not imply black-box . . 
. .  And since all black-box means is that you can't see inside it, it only 
seems like an invitation to disaster to me.  So why is it a better design? 
All that I see here is something akin to "I don't understand it so it must 
be good".



- Original Message - 
From: Philip Goetz [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Wednesday, November 29, 2006 1:53 PM
Subject: Re: Re: Re: [agi] A question on the symbol-system hypothesis



On 11/29/06, Mark Waser [EMAIL PROTECTED] wrote:

 A human doesn't have enough time to look through millions of pieces of
 data, and doesn't have enough memory to retain them all in memory, and
 certainly doesn't have the time or the memory to examine all of the
 10^(insert large number here) different relationships between these
 pieces of data.

True, however, I would argue that the same is true of an AI.  If you 
assume

that an AI can do this, then *you* are not being pragmatic.

Understanding is compiling data into knowledge.  If you're just brute
forcing millions of pieces of data, then you don't understand the 
problem --

though you may be able to solve it -- and validating your answers and
placing intelligent/rational boundaries/caveats on them is not possible.


Matt was not arguing over whether what an AI does should be called
"understanding" or "statistics".  Matt was discussing what the right
way to design an AI is.  It is the human who (at first) designs the
AI.  Designs that require the designer to have super-human abilities
are poor designs.  Thus, the machine-learning black-box approach is a
better

Re: Re: Re: [agi] A question on the symbol-system hypothesis

2006-11-29 Thread Mark Waser
AI is about solving problems that you can't solve yourself.  You can 
program a computer to beat you at chess.  You understand the search 
algorithm, but can't execute it in your head.  If you could, then you 
could beat the computer, and your program will have failed.


   I disagree.  AI is about creating a reasoning system (that may well be 
faster than I am).  Even if it is slower than I am, I still will have 
succeeded (if only because Moore's Law will ensure that it will eventually 
become faster).  The computer can beat me at chess because it can 
brute-force search faster (and thus, more during a given time period) than 
I can. However, with hindsight, I can certainly understand how it beat me.
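
A toy minimax over a hand-made tree of leaf scores (not chess) shows how small
the search algorithm itself is; the part nobody can do in their head is running
it over millions of positions:

    def minimax(node, maximizing):
        """node is either a leaf score (an int, from the maximizer's point of
        view) or a list of child nodes."""
        if isinstance(node, int):
            return node
        values = [minimax(child, not maximizing) for child in node]
        return max(values) if maximizing else min(values)

    # A two-ply tree: the maximizer picks a branch, the opponent then picks
    # the worst (for the maximizer) leaf inside it.
    tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
    print(minimax(tree, maximizing=True))   # -> 3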


Likewise, you should be able to program a computer to solve problems that 
are beyond your capacity to understand.  You understand the learning 
algorithm, but not what it has learned.  If you could understand how it 
arrived at a particular solution, then you have failed to create an AI 
smarter than yourself.


   I disagree.  I don't believe that there is anything that is beyond my 
capacity to understand (given sufficient time).  I may not be able to 
calculate something but if some reasoning system can explain its reasoning, 
I can certainly verify it.  I keep challenging you to show me something that 
is beyond my understanding.  Phil Goetz has argued that vector systems are 
not understandable but it is my contention that vector systems are merely 
curve-fitting approximation systems that don't have anything to understand 
(since in virtually all cases they either conflate real-world variables --  
if n is too small -- or split and overfit real-world variables -- if n is too 
large).


- Original Message - 
From: Matt Mahoney [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Wednesday, November 29, 2006 2:13 PM
Subject: Re: Re: Re: [agi] A question on the symbol-system hypothesis


AI is about solving problems that you can't solve yourself.  You can program 
a computer to beat you at chess.  You understand the search algorithm, but 
can't execute it in your head.  If you could, then you could beat the 
computer, and your program will have failed.


Likewise, you should be able to program a computer to solve problems that 
are beyond your capacity to understand.  You understand the learning 
algorithm, but not what it has learned.  If you could understand how it 
arrived at a particular solution, then you have failed to create an AI 
smarter than yourself.


-- Matt Mahoney, [EMAIL PROTECTED]

- Original Message 
From: Mark Waser [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Wednesday, November 29, 2006 1:25:33 PM
Subject: Re: Re: Re: [agi] A question on the symbol-system hypothesis


A human doesn't have enough time to look through millions of pieces of
data, and doesn't have enough memory to retain them all in memory, and
certainly doesn't have the time or the memory to examine all of the
10^(insert large number here) different relationships between these
pieces of data.


True, however, I would argue that the same is true of an AI.  If you assume
that an AI can do this, then *you* are not being pragmatic.

Understanding is compiling data into knowledge.  If you're just brute
forcing millions of pieces of data, then you don't understand the problem --
though you may be able to solve it -- and validating your answers and
placing intelligent/rational boundaries/caveats on them is not possible.

- Original Message - 
From: Philip Goetz [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Wednesday, November 29, 2006 1:14 PM
Subject: Re: Re: Re: [agi] A question on the symbol-system hypothesis



On 11/14/06, Mark Waser [EMAIL PROTECTED] wrote:

 Even now, with a relatively primitive system like the current
 Novamente, it is not pragmatically possible to understand why the
 system does each thing it does.

"Pragmatically possible" obscures the point I was trying to make with
Matt.  If you were to freeze-frame Novamente right after it took an
action,
it would be trivially easy to understand why it took that action.

 because
 sometimes judgments are made via the combination of a large number of
 weak pieces of evidence, and evaluating all of them would take too
 much time

Looks like a time problem to me . . . . NOT an incomprehensibility
problem.


This argument started because Matt said that the wrong way to design
an AI is to try to make it human-readable, and constantly look inside
and figure out what it is doing; and the right way is to use math and
statistics and learning.

A human doesn't have enough time to look through millions of pieces of
data, and doesn't have enough memory to retain them all in memory, and
certainly doesn't have the time or the memory to examine all of the
10^(insert large number here) different relationships between these
pieces of data.  Hence, a human shouldn't design AI systems in a way
that would require a human to have these abilities

Re: Re: Re: [agi] A question on the symbol-system hypothesis

2006-11-29 Thread Philip Goetz

On 11/29/06, Mark Waser [EMAIL PROTECTED] wrote:


I defy you to show me *any* black-box method that has predictive power
outside the bounds of its training set.  All that the black-box methods are
doing is curve-fitting.  If you give them enough variables they can brute
force solutions through what is effectively case-based/nearest-neighbor
reasoning but that is *not* intelligence.  You and they can't build upon
that.


If you look into the literature of the past 20 years, you will easily
find several thousand examples.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] A question on the symbol-system hypothesis

2006-11-29 Thread Mark Waser

If you look into the literature of the past 20 years, you will easily
find several thousand examples.


   I'm sorry but either you didn't understand my point or you don't know 
what you are talking about (and the constant terseness of your replies gives 
me absolutely no traction on assisting you).  If you would provide just one 
example and state why you believe it refutes my point, then you'll give me 
something to answer -- as it is, you're making a meaningless assertion of no 
value that I can't even begin to respond to (not to mention the point that 
contending/assuming that I've overlooked several thousand examples is pretty 
insulting).


- Original Message - 
From: Philip Goetz [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Wednesday, November 29, 2006 4:17 PM
Subject: Re: Re: Re: [agi] A question on the symbol-system hypothesis



On 11/29/06, Mark Waser [EMAIL PROTECTED] wrote:


I defy you to show me *any* black-box method that has predictive power
outside the bounds of its training set.  All that the black-box methods 
are

doing is curve-fitting.  If you give them enough variables they can brute
force solutions through what is effectively case-based/nearest-neighbor
reasoning but that is *not* intelligence.  You and they can't build upon
that.


If you look into the literature of the past 20 years, you will easily
find several thousand examples.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303






Re: [agi] A question on the symbol-system hypothesis

2006-11-29 Thread Philip Goetz

On 11/29/06, Mark Waser [EMAIL PROTECTED] wrote:

 If you look into the literature of the past 20 years, you will easily
 find several thousand examples.

I'm sorry but either you didn't understand my point or you don't know
what you are talking about (and the constant terseness of your replies gives
me absolutely no traction on assisting you).  If you would provide just one
example and state why you believe it refutes my point, then you'll give me
something to answer -- as it is, you're making a meaningless assertion of no
value that I can't even begin to respond to (not to mention the point that
contending/assuming that I've overlooked several thousand examples is pretty
insulting).


Yes, it was insulting.  I am sorry.  However, I don't think this
conversation is going anywhere.  There are many, many examples just of
the use of SVD and PCI that I think meet your criteria.  The one I
mentioned earlier, to you, that uses SVD on word-pair similarities,
and scores at human-level on the SAT, is an example.  There are
thousands of examples.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] A question on the symbol-system hypothesis

2006-11-29 Thread Philip Goetz

On 11/17/06, Richard Loosemore [EMAIL PROTECTED] wrote:

I was saying that *because* (for independent reasons) these people's
usage of terms like intelligence is so disconnected from commonsense
usage (they idealize so extremely that the sense of the word no longer
bears a reasonable connection to the original) *therefore* the situation
is akin to the one that obtains for Model Theory or Rings.

I am saying that these folks are trying to have their cake and eat it
too:  they idealize intelligence into something so disconnected from
the real world usage that, really, they ought not to use the term, but
should instead invent another one like ooblifience to describe the
thing they are proving theorems about.

But then, having so distorted the meaning of the term, they go back and
start talking about the conclusions they derived from their math as if
those conclusions applied to the real world thing that in commonsense
parlance we call intelligence.  At that point they are doing what I
claimed a Model Theorist would be doing if she started talking about a
kid's model airplane as if Model Theory applied to it.


This is exactly what John Searle does with the term consciousness.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] A question on the symbol-system hypothesis

2006-11-29 Thread Matt Mahoney
So what is your definition of understanding?
 
-- Matt Mahoney, [EMAIL PROTECTED]

- Original Message 
From: Philip Goetz [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Wednesday, November 29, 2006 5:36:39 PM
Subject: Re: [agi] A question on the symbol-system hypothesis

On 11/19/06, Matt Mahoney [EMAIL PROTECTED] wrote:
 I don't think it is possible to extend the definition of understanding to 
 machines in a way that would be generally acceptable, in the sense that 
 humans understand understanding.  Humans understand language.  We don't 
 generally say that animals in the wild understand their environment, although 
 we do say that animals can be trained to understand commands.

I generally say that animals in the wild understand their environment.
 If you don't, you are using a definition of understand that I don't
understand.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303





Re: Re: Re: Re: [agi] A question on the symbol-system hypothesis

2006-11-29 Thread Ben Goertzel

On 11/29/06, Philip Goetz [EMAIL PROTECTED] wrote:

On 11/29/06, Mark Waser [EMAIL PROTECTED] wrote:

 I defy you to show me *any* black-box method that has predictive power
 outside the bounds of its training set.  All that the black-box methods are
 doing is curve-fitting.  If you give them enough variables they can brute
 force solutions through what is effectively case-based/nearest-neighbor
 reasoning but that is *not* intelligence.  You and they can't build upon
 that.

If you look into the literature of the past 20 years, you will easily
find several thousand examples.


Mark, I believe your point is overstated, although probably based on a
correct intuition

Plenty of black box methods can extrapolate successfully beyond
their training sets, and using approaches not fairly describable as
case based or nearest neighbor.

CBR and nearest neighbor are very far from optimally-performing as far
as prediction/categorization algorithms go.

As examples of learning algorithms that
-- successfully extrapolate beyond their training sets using
pattern-recognition much more complex than CBR/nearest-neighbor
-- do so by learning predictive rules that are opaque to humans, in practice
I could cite a bunch, e.g.
-- SVM's
-- genetic programming
-- MOSES (www.metacog.org), a probabilistic evolutionary method used
in Novamente
-- Eric Baum's Hayek system
-- recurrent neural nets trained using recurrent backprop or marker-based GA's
-- etc. etc. etc.

These methods are definitely not human-level AGI.  But, they
definitely do extrapolate beyond their training set, via recognizing
complex patterns in their training sets far beyond
CBR/nearest-neighbor.

What these methods do not do, yet at least, is to extrapolate to data
of a **radically different type** from their training set.

For instance, suppose you train an SVM algorithm to recognize gene
expression patterns indicative of lung cancer, by exposing it to data
from 50 lung cancer patients and 50 controls.  Then, the SVM can
generalize to predict whether a new person has lung cancer or not --
whether or not this person particularly resembles **any** of the 100
people on whose data the SVM was trained.  It can do so by paying
attention to a complex nonlinear combination of features, whose
meaning may well not be comprehensible to any human within a
reasonable amount of effort.  This is not CBR or nearest-neighbor.  It
is a more fundamental form of learning, displaying much greater
compression and pattern-recognition and hence greater generalization.
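
A minimal sketch of that kind of experiment, with synthetic data standing in
for real gene-expression profiles and scikit-learn's SVC assumed:

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n_genes = 500

    # 50 "controls" and 50 "patients"; a handful of genes carry a weak signal.
    controls = rng.normal(0.0, 1.0, size=(50, n_genes))
    patients = rng.normal(0.0, 1.0, size=(50, n_genes))
    patients[:, :20] += 0.8                 # shifted expression in 20 genes
    X = np.vstack([controls, patients])
    y = np.array([0] * 50 + [1] * 50)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                              random_state=0, stratify=y)

    # An RBF kernel lets the classifier use a nonlinear combination of
    # features; the resulting decision function is not something you can
    # read off as a short human rule.
    clf = SVC(kernel="rbf", gamma="scale", C=1.0).fit(X_tr, y_tr)
    print("held-out accuracy:", clf.score(X_te, y_te))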

On the other hand, if you want to apply the SVM to breast cancer, you
have to run it all over again, on different data.  And if you want to
apply it to cancer in general you need to specifically feed it
training data regarding a variety of cancers.  You can't feed it
training data regarding breast, lung and liver cancer separately, have
it learn predictive rules for each of these and then have it
generalize these predictive rules into a rule for cancer in
general

In a sense, SVM is just doing curve-fitting, sure

But in a similar sense, Marcus Hutter's AIXItl theorems show that
given vast computational resources, an arbitrarily powerful level of
intelligence can be achieved via curve-fitting.

Human-level AGI represents curve-fitting at a level of generality
somewhere between that of the SVM and that of AIXItl.  But
curve-fitting at the human level of generality, given the humanly
feasible amount of computational resources, does seem to involve many
properties not characteristic either of
-- curve-fitting algorithms as narrow as SVM, GP, etc.
-- curve-fitting algorithms as broad (but computationally infeasible) as AIXItl

Do you disagree with any of this?

-- Ben G

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] A question on the symbol-system hypothesis

2006-11-29 Thread Richard Loosemore

Philip Goetz wrote:

On 11/17/06, Richard Loosemore [EMAIL PROTECTED] wrote:

I was saying that *because* (for independent reasons) these people's
usage of terms like intelligence is so disconnected from commonsense
usage (they idealize so extremely that the sense of the word no longer
bears a reasonable connection to the original) *therefore* the situation
is akin to the one that obtains for Model Theory or Rings.

I am saying that these folks are trying to have their cake and eat it
too:  they idealize intelligence into something so disconnected from
the real world usage that, really, they ought not to use the term, but
should instead invent another one like ooblifience to describe the
thing they are proving theorems about.

But then, having so distorted the meaning of the term, they go back and
start talking about the conclusions they derived from their math as if
those conclusions applied to the real world thing that in commonsense
parlance we call intelligence.  At that point they are doing what I
claimed a Model Theorist would be doing if she started talking about a
kid's model airplane as if Model Theory applied to it.


This is exactly what John Searle does with the term consciousness.


Exactly!!




Richard Loosemore

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] A question on the symbol-system hypothesis

2006-11-22 Thread James Ratcliff
Agreed,
 but I think as a first-level project I can accept the limitation of modeling 
the AI 'as' a human, as we are a long way off from turning it loose as its own 
robot, and this will allow it to act and reason more as we do.  Currently I 
have PersonAI as a subset of Person, where it will inherit most things from a 
Person, but could have subtle differences later.
  Your bell pepper example is a reasonable one, and we handle that by having a 
full belief system for every individual, and can model others' belief systems 
internally as well.

On the other topic, to check its goals as a measure of understanding, we simply 
need to have it tell or explain these goals to us.

James

Charles D Hixson [EMAIL PROTECTED] wrote: James Ratcliff wrote:
 ...
 
  So if one AI saw an apple and said, I can throw / cut / eat it, and
  weighted those ideas. and the second had the same list, but weighted
  eat as more likely, and/or knew people sometimes cut it before eating
  it. Then the AI would understand to a higher level.
  Likewise if instead, one knew you could bake an apple pie, or apples
  came from apple trees, he would understand more.
 No. That's what I'm challenging. You are relating the apple to the
 human world rather than to the goals of the AI.

 What world do you propose the AGI act in? Yes, I posit that it should 
 act and reason according to any and all real-world assumptions, and 
 that being centric to the human world.  I.e., if an AGI is worried about 
 creating a daily schedule, or designing an optimal building, it 
 MUST take into account humans' need for restroom facilities, even 
 though that is not part of its requirements or concerns.
   Likewise doors and physically interacting objects are important.
   If you don't model this, you may hope for an AI that is solely 
 computer resident that you can ask hard questions of and receive 
 answers.  Which is good and fine until those questions have to model 
 anything in the real world, and then you have the same problem.  You 
 really must wind up modeling the world and existence to have a fully 
 useful AGI.
It must act in its own world, as we act, individually, in *our* own worlds.  
There is interaction between these, but they definitely aren't 
identical.  I discover this anew whenever I try to explain to my wife 
why I did something, or she tries to explain the same to me.  Since our 
purposes aren't the same, and our perceptions aren't the same, our bases 
for reasoning are divergent.  Fortunately our conclusions are often 
equivalent, and so the exteriorizations are the same.  But I look at a 
bell pepper with distaste.  I only consider it as an attractive food 
when I'm modeling her model of the universe.

Similarly, to an AI an apple would not be a food.  An AI would only 
model an apple as a food object when it was trying to figure out how a 
person (or some other animal) would view it.  Note the extra level of 
indirection.

Is it safe to cross the street?  Possibly the rules for an AI would be 
drastically different from those of a person.  They might, or might not, 
be similar to the rules for a person in a wheelchair, but expecting them 
to be the same as yours would drastically limit its ability to persist 
in the physical world.


 
  So it starts looking like a knowledge test then.
 What you are proposing looks like a knowledge test. That's not what I 
 mean.

 Yes, I currently haven't seen any decent definition or explanation of 
 understanding that does not encompass intelligence or knowledge. 

 A couple are here:
 # Understanding is a psychological state in relation to an object or 
 person whereby one is able to think about it and use concepts to be 
 able to deal adequately with that object.
 en.wikipedia.org/wiki/Understanding 
 
 # means the ability to apply broad knowledge to situations likely to be 
 encountered, to recognize significant deviations, and to be able to 
 carry out the research necessary to arrive at reasonable solutions. 
 (250.01.3)
 www.indiana.edu/~iuaudit/glossary.html 
 

 So on first pass, understand is a verb, which implies an actor and 
 an action. One of the above definitions specifically uses knowledge, 
 the other implies it by think about it and use concepts  this 
 thinking and concepts would seem to be stored in some knowledge base 
 either AGI or human based.
   This is very similar to the intelligence definitions that have 
 been floating around as well, which is why I pose that both of these 
 topics should be discussed together, and the only possible real way to 
 see if something understands something else, is to either witness 
 the interactions between them, or to ask them.
   This could be posed in two ways.  The easiest is simply what we do 
 in schools: direct testing.  Unfortunately, in the real world, you can't 
 merely spit out the answers; you have to act and perform, and there 
 are small bits of interaction knowledge which are required to 
 accomplish many tasks, e.g. driving, or 

Re: [agi] A question on the symbol-system hypothesis

2006-11-22 Thread Bob Mottram

Goals don't necessarily need to be complex or even explicitly defined.  One
goal might just be to minimise the difference between experiences (whether
real or simulated) and expectations.  In this way the system learns what a
normal state of being is, and can detect deviations.
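
A minimal sketch of that goal, added here for illustration only (the class name, learning rate, and threshold are invented for the example): the agent maintains an expectation, updates it to better match experience, and flags observations whose prediction error is far above the typical level:

    class SurpriseMinimizer:
        def __init__(self, learning_rate=0.1, threshold=3.0):
            self.expectation = None   # current model of a "normal" observation
            self.avg_error = 1.0      # running scale of typical prediction error
            self.lr = learning_rate
            self.threshold = threshold

        def observe(self, x):
            if self.expectation is None:
                self.expectation = x
                return 0.0, False
            error = abs(x - self.expectation)
            surprising = error > self.threshold * self.avg_error
            # adjust the model so future expectations better match experience
            self.expectation += self.lr * (x - self.expectation)
            self.avg_error += self.lr * (error - self.avg_error)
            return error, surprising

    agent = SurpriseMinimizer()
    for x in [5.0, 5.1, 4.9, 5.0, 9.5, 5.0]:
        print(agent.observe(x))   # the 9.5 reading stands out as a deviation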



On 21/11/06, Charles D Hixson [EMAIL PROTECTED] wrote:


Bob Mottram wrote:


 On 17/11/06, *Charles D Hixson* [EMAIL PROTECTED]
 mailto:[EMAIL PROTECTED] wrote:

 A system understands a situation that it encounters if it
predictably
 acts in such a way as to maximize the probability of achieving it's
 goals in that situation.




 I'd say a system understands a situation when its internal modeling
 of that situation closely approximates its main salient features, such
 that the difference between expectation and reality is minimised.
 What counts as salient depends upon goals.  So for example I could say
 that I understand how to drive, even if I don't have any detailed
 knowledge of the workings of a car.

 When young animals play they're generating and tuning their models,
 trying to bring them in line with observations and goals.
That sounds reasonable, but how are you determining the match of the
internal modeling to the main salient features?  I propose that you do
this based on its actions, and thus my definition.  I'll admit,
however, that this still leaves the problem of how to observe what its
goals are, but I hypothesize that it will be much simpler to examine the
goals in the code than to examine the internal model.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] A question on the symbol-system hypothesis

2006-11-22 Thread Charles D Hixson
I don't know that I'd consider that an example of an uncomplicated 
goal.  That seems to me much more complicated than simple responses to 
sensory inputs.   Valuable, yes, and even vital for any significant 
intelligence, but definitely not at the minimal level of complexity. 

An example of a minimal goal might be to cause an extended period of 
inter-entity communication, or to find a recharging socket.  Note 
that the second one would probably need to have a hard-coded solution 
available before the entity was able to start any independent 
explorations.  This doesn't mean that as new answers were constructed 
the original might not decrease in significance and eventually be 
garbage collected.  It means that it would need to be there as a 
pre-written answer on the tabula rasa.  (I.e., the tablet can't really 
be blank.  You need to start somewhere, even if you leave and never 
return.)  For the first example, I was thinking of peek-a-boo.


Bob Mottram wrote:


Goals don't necessarily need to be complex or even explicitly 
defined.  One goal might just be to minimise the difference between 
experiences (whether real or simulated) and expectations.  In this way 
the system learns what a normal state of being is, and detect deviations.




On 21/11/06, *Charles D Hixson* [EMAIL PROTECTED] 
mailto:[EMAIL PROTECTED] wrote:


Bob Mottram wrote:


 On 17/11/06, *Charles D Hixson* [EMAIL PROTECTED]
mailto:[EMAIL PROTECTED]
 mailto:[EMAIL PROTECTED]
mailto:[EMAIL PROTECTED] wrote:

 A system understands a situation that it encounters if it
predictably
 acts in such a way as to maximize the probability of
achieving it's
 goals in that situation.




 I'd say a system understands a situation when its internal
modeling
 of that situation closely approximates its main salient
features, such
 that the difference between expectation and reality is minimised.
 What counts as salient depends upon goals.  So for example I
could say
 that I understand how to drive, even if I don't have any detailed
 knowledge of the workings of a car.

 When young animals play they're generating and tuning their models,
 trying to bring them in line with observations and goals.
That sounds reasonable, but how are you determining the match of the
internal modeling to the main salient features.  I propose that
you do
this based on it's actions, and thus my definition.  I'll admit,
however, that this still leaves the problem of how to observe what
it's
goals are, but I hypothesize that it will be much simpler to
examine the
goals in the code than to examine the internal model.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303



This list is sponsored by AGIRI: http://www.agiri.org/email To 
unsubscribe or change your options, please go to: 
http://v2.listbox.com/member/?list_id=303 


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] A question on the symbol-system hypothesis

2006-11-22 Thread Bob Mottram

Things like finding recharging sockets are really more complex goals built
on top of more primitive systems.  For example, if a robot heading for a
recharging socket loses a wheel its goals should change from feeding to
calling for help.  If it cannot recognise a deviation from the normal
state then it will fail to handle the situation intelligently.  Of course
these things can be hard coded, but hard coding isn't usually a good
strategy, other than as initial scaffolding.  Systems which are not
adaptable are usually narrow and brittle.



On 22/11/06, Charles D Hixson [EMAIL PROTECTED] wrote:


I don't know that I'd consider that an example of an uncomplicated
goal.  That seems to me much more complicated than simple responses to
sensory inputs.   Valuable, yes, and even vital for any significant
intelligence, but definitely not at the minimal level of complexity.

An example of a minimal goal might be to cause an extended period of
inter-entity communication, or to find a recharging socket.  Note
that the second one would probably need to have a hard-coded solution
available before the entity was able to start any independent
explorations.  This doesn't mean that as new answers were constructed
the original might not decrease in significance and eventually be
garbage collected.  It means that it would need to be there as a
pre-written answer on the tabula rasa.  (I.e., the tablet can't really
be blank.  You need to start somewhere, even if you leave and never
return.)  For the first example, I was thinking of peek-a-boo.

Bob Mottram wrote:

 Goals don't necessarily need to be complex or even explicitly
 defined.  One goal might just be to minimise the difference between
 experiences (whether real or simulated) and expectations.  In this way
 the system learns what a normal state of being is, and detect
deviations.



 On 21/11/06, *Charles D Hixson* [EMAIL PROTECTED]
 mailto:[EMAIL PROTECTED] wrote:

 Bob Mottram wrote:
 
 
  On 17/11/06, *Charles D Hixson* [EMAIL PROTECTED]
 mailto:[EMAIL PROTECTED]
  mailto:[EMAIL PROTECTED]
 mailto:[EMAIL PROTECTED] wrote:
 
  A system understands a situation that it encounters if it
 predictably
  acts in such a way as to maximize the probability of
 achieving it's
  goals in that situation.
 
 
 
 
  I'd say a system understands a situation when its internal
 modeling
  of that situation closely approximates its main salient
 features, such
  that the difference between expectation and reality is minimised.
  What counts as salient depends upon goals.  So for example I
 could say
  that I understand how to drive, even if I don't have any
detailed
  knowledge of the workings of a car.
 
  When young animals play they're generating and tuning their
models,
  trying to bring them in line with observations and goals.
 That sounds reasonable, but how are you determining the match of the
 internal modeling to the main salient features.  I propose that
 you do
 this based on it's actions, and thus my definition.  I'll admit,
 however, that this still leaves the problem of how to observe what
 it's
 goals are, but I hypothesize that it will be much simpler to
 examine the
 goals in the code than to examine the internal model.

 -
 This list is sponsored by AGIRI: http://www.agiri.org/email
 To unsubscribe or change your options, please go to:
 http://v2.listbox.com/member/?list_id=303


 
 This list is sponsored by AGIRI: http://www.agiri.org/email To
 unsubscribe or change your options, please go to:
 http://v2.listbox.com/member/?list_id=303

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Re: [agi] A question on the symbol-system hypothesis

2006-11-22 Thread Ben Goertzel

Well, in the language I normally use to discuss AI planning, this
would mean that

1)keeping charged is a supergoal

2)
The system knows (via hard-coding or learning) that

finding the recharging socket == keeping charged

(i.e. that the former may be considered a subgoal of the latter)

3)
The system then may construct various plans for finding the recharging
socket, and these plans may involve creating various subgoals of this
subgoal.

-- Ben
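
A toy sketch of that supergoal/subgoal structure, added here for illustration rather than taken from Novamente (the goal names and the world-state dictionary are invented for the example):

    from dataclasses import dataclass, field
    from typing import Callable, List

    @dataclass
    class Goal:
        name: str
        satisfied: Callable[[dict], bool]          # test against the world state
        subgoals: List["Goal"] = field(default_factory=list)

        def next_unsatisfied(self, world: dict):
            """Depth-first: return the deepest unsatisfied goal to work on."""
            if self.satisfied(world):
                return None
            for g in self.subgoals:
                deeper = g.next_unsatisfied(world)
                if deeper is not None:
                    return deeper
            return self

    keep_charged = Goal("keep charged", lambda w: w["battery"] > 0.2, [
        Goal("find recharging socket", lambda w: w["at_socket"], [
            Goal("locate socket on map", lambda w: w["socket_known"]),
            Goal("navigate to socket", lambda w: w["at_socket"]),
        ]),
    ])

    world = {"battery": 0.1, "at_socket": False, "socket_known": False}
    print(keep_charged.next_unsatisfied(world).name)   # -> locate socket on map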

On 11/22/06, Bob Mottram [EMAIL PROTECTED] wrote:


Things like finding recharging sockets are really more complex goals built
on top of more primitive systems.  For example, if a robot heading for a
recharging socket loses a wheel its goals should change from feeding to
calling for help.  If it cannot recognise a deviation from the normal
state then it will fail to handle the situation intelligently.  Of course
these things can be hard coded, but hard coding isn't usually a good
strategy, other than as initial scaffolding.  Systems which are not
adaptable are usually narrow and brittle.



On 22/11/06, Charles D Hixson [EMAIL PROTECTED] wrote:
 I don't know that I'd consider that an example of an uncomplicated
 goal.  That seems to me much more complicated than simple responses to
 sensory inputs.   Valuable, yes, and even vital for any significant
 intelligence, but definitely not at the minimal level of complexity.

 An example of a minimal goal might be to cause an extended period of
 inter-entity communication, or to find a recharging socket.  Note
 that the second one would probably need to have a hard-coded solution
 available before the entity was able to start any independent
 explorations.  This doesn't mean that as new answers were constructed
 the original might not decrease in significance and eventually be
 garbage collected.  It means that it would need to be there as a
 pre-written answer on the tabula rasa.  (I.e., the tablet can't really
 be blank.  You need to start somewhere, even if you leave and never
 return.)  For the first example, I was thinking of peek-a-boo.

 Bob Mottram wrote:
 
  Goals don't necessarily need to be complex or even explicitly
  defined.  One goal might just be to minimise the difference between
  experiences (whether real or simulated) and expectations.  In this way
  the system learns what a normal state of being is, and detect
deviations.
 
 
 
  On 21/11/06, *Charles D Hixson* [EMAIL PROTECTED]
  mailto:[EMAIL PROTECTED] wrote:
 
  Bob Mottram wrote:
  
  
   On 17/11/06, *Charles D Hixson*  [EMAIL PROTECTED]
  mailto:[EMAIL PROTECTED]
   mailto: [EMAIL PROTECTED]

  mailto:[EMAIL PROTECTED] wrote:
  
   A system understands a situation that it encounters if it
  predictably
   acts in such a way as to maximize the probability of
  achieving it's
   goals in that situation.
  
  
  
  
   I'd say a system understands a situation when its internal
  modeling
   of that situation closely approximates its main salient
  features, such
   that the difference between expectation and reality is minimised.
   What counts as salient depends upon goals.  So for example I
  could say
   that I understand how to drive, even if I don't have any
detailed
   knowledge of the workings of a car.
  
   When young animals play they're generating and tuning their
models,
   trying to bring them in line with observations and goals.
  That sounds reasonable, but how are you determining the match of the
  internal modeling to the main salient features.  I propose that
  you do
  this based on it's actions, and thus my definition.  I'll admit,
  however, that this still leaves the problem of how to observe what
  it's
  goals are, but I hypothesize that it will be much simpler to
  examine the
  goals in the code than to examine the internal model.
 
  -
  This list is sponsored by AGIRI: http://www.agiri.org/email
  To unsubscribe or change your options, please go to:
  http://v2.listbox.com/member/?list_id=303
 
 
 

  This list is sponsored by AGIRI: http://www.agiri.org/email To
  unsubscribe or change your options, please go to:
  http://v2.listbox.com/member/?list_id=303

 -
 This list is sponsored by AGIRI: http://www.agiri.org/email
 To unsubscribe or change your options, please go to:
 http://v2.listbox.com/member/?list_id=303


 
 This list is sponsored by AGIRI: http://www.agiri.org/email To unsubscribe
or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Re: [agi] A question on the symbol-system hypothesis

2006-11-22 Thread Mike Dougherty

On 11/22/06, Ben Goertzel [EMAIL PROTECTED] wrote:


Well, in the language I normally use to discuss AI planning, this
would mean that

1)keeping charged is a supergoal
2)The system knows (via hard-coding or learning) that

finding the recharging socket == keeping charged



If charged becomes momentarily plastic enough to include the analog to the
kind of feeling I have after a good discussion, then the supergoal of being
charged might include the subgoal of attempting conversation with others,
no?

Would you see that as an interesting development, or a potential for a
future mess of inappropriate associations?  Would you try to correct this
attachment?  Directly, or through reconditioning?  I'll stop here because I
see this easily sliding into a question of AI-parenting styles...

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] A question on the symbol-system hypothesis

2006-11-18 Thread James Ratcliff

Have to amend that to acts or replies.
And it could react unpredictably depending on the human's level of 
understanding: if it sees a nice neat answer (like jumping through the 
window because the door was blocked) that the human wasn't aware of, or was 
surprised about, it would be equally good.

And this doesn't cover the opposite: what other actions can be done, and what 
their consequences are; that is also important.

And lastly, this is for a situation only; we also have the more general case 
of understanding a thing, where when it sees, has, or is told about a 
thing, it understands it if it knows the thing's general properties and the 
actions that can be done with, or using, the thing.

The main thing being that we can't and aren't really defining understanding, 
but the effect of the understanding, either in action or in a language reply.

And it should be a level of understanding, not just a y/n.

So if one AI saw an apple and said I can throw / cut / eat it and weighted 
those ideas, and the second had the same list but weighted eat as more likely, 
and/or knew people sometimes cut it before eating it, then the second AI would 
understand to a higher level.
Likewise, if one also knew you could bake an apple pie, or that apples come 
from apple trees, it would understand more.

So it starts looking like a knowledge test then.

Maybe we could extract simple facts from wiki, and start creating a test there, 
then add in more complicated things.

James

Charles D Hixson [EMAIL PROTECTED] wrote: Ben Goertzel wrote:
 ...
 On the other hand, the notions of intelligence and understanding
 and so forth being bandied about on this list obviously ARE intended
 to capture essential aspects of the commonsense notions that share the
 same word with them.
 ...
 Ben
Given that purpose, I propose the following definition:
A system understands a situation that it encounters if it predictably 
acts in such a way as to maximize the probability of achieving it's 
goals in that situation.

I'll grant that it's a bit fuzzy, but I believe that it captures the 
essence of the visible evidence of understanding.  This doesn't say what 
understanding is, merely how you can recognize it.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303



___
James Ratcliff - http://falazar.com
New Torrent Site, Has TV and Movie Downloads! 
http://www.falazar.com/projects/Torrents/tvtorrents_show.php
 

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] A question on the symbol-system hypothesis

2006-11-18 Thread Matt Mahoney
When I refer to a quantity of information, I mean its algorithmic complexity, 
the size of the smallest program that generates it.  So yes, the Mandelbrot set 
contains very little information.  I realize that algorithmic complexity is not 
computable in general.  When I express AI or language modeling in terms of 
compression, I mean that the goal is to get as close to this unobtainable limit 
as possible.
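
For instance (a few-line sketch added here for illustration, not from the thread): the entire Mandelbrot set, at any resolution you ask for, comes out of a program this small, which is why its algorithmic complexity is tiny compared with the pixel data it generates:

    def mandelbrot(width=60, height=24, max_iter=50):
        rows = []
        for j in range(height):
            row = ""
            for i in range(width):
                c = complex(-2.0 + 3.0 * i / width, -1.2 + 2.4 * j / height)
                z = 0j
                for _ in range(max_iter):
                    z = z * z + c
                    if abs(z) > 2.0:
                        row += " "   # escaped: outside the set
                        break
                else:
                    row += "*"       # stayed bounded: inside (to this approximation)
            rows.append(row)
        return "\n".join(rows)

    print(mandelbrot())   # any requested resolution comes from the same few lines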

Algorithmic complexity can apply to either finite or infinite series.  For 
example, the algorithmic complexity of a string of n zero bits is log n + C for 
some constant C that depends on your choice of universal Turing machine.  The 
complexity of an infinite string of zero bits is a (small) constant C.

When I talk about Kauffman's assertion that complex systems evolve toward the 
boundary between stability and chaos, I mean a discrete approximation of these 
concepts.  These are defined for dynamic systems in real vector spaces 
controlled by differential equations.  (Chaos requires at least 3 dimensions).  
A system is chaotic if its largest Lyapunov exponent is greater than zero, and stable if 
less than zero.  Extensions to discrete systems have been described.  For 
example, the logistic map x := rx(1 - x), 0 < x < 1, goes from stable to 
chaotic as r grows from 0 to 4.  For discrete spaces, pseudo random number 
generators are simple examples of chaotic systems.  Kauffman studied chaos in 
large discrete systems (state machines with randomly connected logic gates) and 
found that the systems transition from stable to chaotic as the number of 
inputs per gate is increased from 2 to 3.  At the boundary, the number of 
discrete attractors (repeating cycles) is about the square root of the
 number of variables.  Kauffman noted that gene regulation can be modeled this 
way (gene combinations turn other genes on or off) and that the number of human 
cell types (254) is about the square root of the number of genes (he estimated 
100K, but actually 30K).  I noted (coincidentally?) that vocabulary size is 
about the square root of the size of a language model.
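
A small sketch, added here for illustration, of the stable-to-chaotic transition of the logistic map mentioned above, estimated through the Lyapunov exponent, i.e. the average of log|r(1 - 2x)| along the orbit; a positive value indicates chaos:

    import math

    def lyapunov(r, x=0.4, transient=1000, iters=20000):
        for _ in range(transient):            # let the orbit settle first
            x = r * x * (1 - x)
        total = 0.0
        for _ in range(iters):
            x = r * x * (1 - x)
            total += math.log(abs(r * (1 - 2 * x)))   # log of the map's derivative
        return total / iters

    print(lyapunov(2.5))   # negative: stable fixed point
    print(lyapunov(3.5))   # negative: stable periodic cycle
    print(lyapunov(4.0))   # about +0.69 (ln 2): chaotic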

The significance of this to AI is that I believe it bounds the degree of 
interconnectedness of knowledge.  It cannot be so great that small updates to 
the AI result in large changes in behavior.  This places limits on what we can 
build.  For example, in a neural network with feedback loops, the weights would 
have to be kept small.

We should not confuse symbols with meaning.  A language model associates 
patterns of symbols with other patterns of symbols.  It is not grounded.  A 
model does not need vision to know that the sky is blue.  They are just words.  
I believe that an ungrounded model (plus a discourse model, which has a sense 
of time and who is speaking) can pass the Turing test.
 
I don't believe all of the conditions are in place for a hard takeoff yet.  You 
need:
1. Self replicating computers.
2. AI smart enough to write programs from natural language specifications.
3. Enough hardware on the Internet to support AGI.
4. Execute access.



1. Computer manufacturing depends heavily on computer automation but you still 
need humans to make it all work.

2. AI language models are now at the level of a toddler, able to recognize 
simple sentences of a few words, but they can already learn in hours or days 
what takes a human years.

3. I estimate an adult level language model will fit on a PC but it would take 
3 years to train it.  A massively parallel architecture like Google's MapReduce 
could do it in an hour, but it would require a high speed network.  A 
distributed implementation like GIMPS or SETI would not have enough 
interconnection speed to support a language model.  I think you need about a 
1Gb/s connection with low latency to distribute it over a few hundred PCs.

4. Execute access is one buffer overflow away.


-- Matt Mahoney, [EMAIL PROTECTED]

- Original Message 
From: Mike Dougherty [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Saturday, November 18, 2006 1:32:05 AM
Subject: Re: [agi] A question on the symbol-system hypothesis

I'm not sure I follow every twist in this thread.  No... I'm sure I don't 
follow every twist in this thread.

I have a question about this compression concept.  Compute the number of pixels 
required to graph the Mandelbrot set at whatever detail you feel to be a 
sufficient for the sake of example.  Now describe how this 'pattern' is 
compressed.  Of course the ideal compression is something like 6 bytes.  Show 
me a 6 byte jpg of a mandelbrot  :)


Is there a concept of compression of an infinite series?  Or was the term 
bounding being used to describe the attractor around which the values tends 
to fall?  chaotic attractor, statistical median, etc.  they seem to be 
describing the same tendency of human pattern recognition of different types of 
data.


Is a 'symbol' an idea, or a handle on an idea?  Does this support the mechanics 
of how concepts can be built from agreed-upon ideas

Re: [agi] A question on the symbol-system hypothesis

2006-11-18 Thread Matt Mahoney
I think your definition of understanding is in agreement with what Hutter 
calls intelligence, although he stated it more formally in AIXI.  An agent and 
an environment are modeled as a pair of interactive Turing machines that pass 
symbols back and forth.  In addition, the environment passes a reward signal to 
the agent, and the agent has the goal of maximizing the accumulated reward.  
The agent does not, in general, have a model of the environment, but must learn 
it.  Intelligence is presumed to be correlated with a greater accumulated 
reward (perhaps averaged over a Solomonoff distribution of all environments).
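
A bare-bones sketch of that agent/environment loop, added here as a toy illustration rather than anything from Hutter's formalism (both functions and the reward rule are invented): two processes exchange symbols, the environment also returns a reward, and the agent's quality is measured by the reward it accumulates:

    import random

    def environment(action, state):
        """Toy environment: reward 1 when the agent echoes the last observation."""
        reward = 1 if action == state else 0
        next_observation = random.choice("ab")   # the next symbol sent to the agent
        return next_observation, reward

    def agent(observation, memory):
        """Toy agent: its policy is simply to echo what it just saw."""
        memory["last"] = observation
        return memory["last"]                    # its action is also a symbol

    state, memory, total = "a", {}, 0
    for _ in range(100):
        action = agent(state, memory)
        state, reward = environment(action, state)
        total += reward
    print("accumulated reward:", total)          # 100 for this trivial pair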
 
-- Matt Mahoney, [EMAIL PROTECTED]

- Original Message 
From: James Ratcliff [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Saturday, November 18, 2006 7:42:19 AM
Subject: Re: [agi] A question on the symbol-system hypothesis


Have to amend that to acts or replies
and it could react unpredictably depending on the humans level of 
understanding if it sees a nice neat answer, (like the jumping thru the 
window cause the door was blocked) that the human wasnt aware of, or was 
suprised about it would be equally good.

And this doesnt cover the opposite of what other actions can be done, and what 
are the consequences, that is also important.

And lastly this is for a situation only, we also have the more general case 
about understading a thing  Where when it sees. or has, or is told about a 
thing, it understands it if, it know about general properties, and actions that 
can be done with, or using the thing.

The main thing being we cant and arnt really defining understanding but the 
effect of  the understanding, either in action or in a language reply.

And it should be a level of understanding, not just a y/n.

So if one AI saw an apple and
 said, I can throw /  cut / eat it, and weighted those ideas. and the second 
had the same list, but weighted eat as more likely, and/or knew people 
sometimes cut it before eating it.  Then the AI would understand to a higher 
level.
Likewise if instead, one knew you could bake an apple pie, or apples came from 
apple trees, he would understand more.

So it starts looking like a knowledge test then.

Maybe we could extract simple facts from wiki, and start creating a test there, 
then add in more complicated things.

James

Charles D Hixson [EMAIL PROTECTED] wrote: Ben Goertzel wrote:
 ...
 On the other hand, the notions of intelligence and understanding
 and so forth being bandied about on this list obviously ARE intended
 to capture essential aspects of the
 commonsense notions that share the
 same word with them.
 ...
 Ben
Given that purpose, I propose the following definition:
A system understands a situation that it encounters if it predictably 
acts in such a way as to maximize the probability of achieving it's 
goals in that situation.

I'll grant that it's a bit fuzzy, but I believe that it captures the 
essence of the visible evidence of understanding.  This doesn't say what 
understanding is, merely how you can recognize it.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303



___
James Ratcliff - http://falazar.com
New Torrent Site, Has TV and Movie Downloads! 
http://www.falazar.com/projects/Torrents/tvtorrents_show.php 


This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] A question on the symbol-system hypothesis

2006-11-18 Thread Charles D Hixson

OK.
James Ratcliff wrote:


Have to amend that to acts or replies
I consider a reply an action.  I'm presuming that one can monitor the 
internal state of the program.
and it could react unpredictably depending on the humans level of 
understanding if it sees a nice neat answer, (like the jumping 
thru the window cause the door was blocked) that the human wasnt aware 
of, or was suprised about it would be equally good.
I'm a long way from an AGI, so I'm not seriously considering superhuman 
understanding.  That said, I'm proposing that you are running the system 
through trials.  Once it has learned a trial, we say it understands 
the trial if it responds correctly.  Correctly is defined in terms of 
the goals of the system rather than in terms of my goals.


And this doesnt cover the opposite of what other actions can be done, 
and what are the consequences, that is also important.

True.  This doesn't cover intelligence or planning, merely understanding.


And lastly this is for a situation only, we also have the more 
general case about understading a thing  Where when it sees. or has, 
or is told about a thing, it understands it if, it know about general 
properties, and actions that can be done with, or using the thing.
You are correct.  I'm presuming that understanding is defined in a 
situation, and that it doesn't automatically transfer from one situation 
to another.  (E.g., I understand English.  Unless the accent is too 
strong.  But I don't understand Hindi, though many English speakers do.)


The main thing being we cant and arnt really defining understanding 
but the effect of  the understanding, either in action or in a 
language reply.
Does understanding HAVE any context free meaning?  It might, but I don't 
feel that I could reasonably assert this.  Possibly it depends on the 
precise definition chosen.  (Consider, e.g., that one might choose to 
use the word meaning to refer to the context-free component of 
understanding.  Would or would not this be a reasonable use of the 
language?  To me this seems justifiable, but definitely not self-evident.)


And it should be a level of understanding, not just a y/n.
Probably, but this might depend on the complexity of the system that one 
was modeling.  I definitely have a partial understanding of How to 
program an AGI.  It's clearly less than 100%, and is probably greater 
than 1%.  It may also depend on the precision with which one is 
speaking.  To be truly precise one would doubtless need to decompose the 
measure along several dimensions...and it's not at all clear that the 
same dimensions would be appropriate in every context.  But this is 
clearly not the appropriate place to start.


So if one AI saw an apple and said, I can throw /  cut / eat it, and 
weighted those ideas. and the second had the same list, but weighted 
eat as more likely, and/or knew people sometimes cut it before eating 
it.  Then the AI would understand to a higher level.
Likewise if instead, one knew you could bake an apple pie, or apples 
came from apple trees, he would understand more.
No.  That's what I'm challenging.  You are relating the apple to the 
human world rather than to the goals of the AI.


So it starts looking like a knowledge test then.

What you are proposing looks like a knowledge test.  That's not what I mean.


Maybe we could extract simple facts from wiki, and start creating a 
test there, then add in more complicated things.


James

*/Charles D Hixson [EMAIL PROTECTED]/* wrote:

Ben Goertzel wrote:
 ...
 On the other hand, the notions of intelligence and understanding
 and so forth being bandied about on this list obviously ARE intended
 to capture essential aspects of the commonsense notions that
share the
 same word with them.
 ...
 Ben
Given that purpose, I propose the following definition:
A system understands a situation that it encounters if it predictably
acts in such a way as to maximize the probability of achieving it's
goals in that situation.

I'll grant that it's a bit fuzzy, but I believe that it captures the
essence of the visible evidence of understanding. This doesn't say
what
understanding is, merely how you can recognize it.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303




___
James Ratcliff - http://falazar.com
New Torrent Site, Has TV and Movie Downloads! 
http://www.falazar.com/projects/Torrents/tvtorrents_show.php






This list is 

Re: [agi] A question on the symbol-system hypothesis

2006-11-17 Thread James Ratcliff
Sure, but that is not mentioned in the article really.
You discuss lossless compression, and say lossy compression would only give a 
small gain in benefit.

What you argued in your reply is NOT 
Compression is Equivalent to General Intelligence; instead it is simply 
Compression is good at text prediction.

Further you argue that ideal text compression, if it were possible, would be 
equivalent to passing the Turing test for artificial intelligence (AI).

Using this theory your AI Turing bot would just spit out the Most common 
answer/text for anything.

So if I tell it simply that I am a boy,
then next tell it I am a girl, 
it has no possible way of responding in any realistic manner,
because it has no internal representation, or thoughts on the matter of the 
dialogue.
Likewise it could not ever be an AGI because it has no 
motivator/planner/decider/reasoner.

There is nothing to it but text.

It could possibly be a good tool or knowledge base for an AGI to reference, but 
it is not intelligent in any way other than as an encyclopedia is intelligent, in 
that it is useful to an intelligent agent.

One last point: it is a basic premise of computer science that compression is NOT 
always good, as seen in many ways.
  1. Speed - we have the ability to compress video and data files very small, 
but we find that when we need to display or show them we have to unpack them and 
make them useful again.  And with the insane rate of growth of storage space, 
it's just cheaper to make more and more; we can't yet fill any storage space up 
with useful knowledge anyway.
  2. Access - Google and many others have massive amounts of redundancy.  If I 
have something stored in one spot and in another, I can act in a much more 
intelligent fashion.  An index to an encyclopedia adds NO extra world knowledge 
to it, but it gives me a leg up on finding the information in a different 
fashion.
  Similarly, if I put in a wiki article that poison ivy causes a rash under the 
poison ivy article, and under the rashes article, a user could access it in 
two different ways.  This is necessary.

James Ratcliff

Matt Mahoney [EMAIL PROTECTED] wrote: Again, do not confuse the two 
compressions.

In paq8f (on which paq8hp5 is based) I use lossy pattern recognition (like you 
describe, but at a lower level) to extract features to use as context for text 
prediction.  The lossless compression is used to evaluate the quality of the 
prediction.
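
A sketch of the principle that lossless compression scores a predictor, added here for illustration rather than as paq8f's actual machinery (the order-n context model and the sample text are invented): a model that assigns probability p to the symbol that actually occurs needs -log2(p) bits to code it, so total code length directly measures prediction quality:

    import math
    from collections import Counter

    def code_length(text, order=1):
        """Bits needed by a simple adaptive order-n model with add-one smoothing."""
        counts = {}                     # context -> Counter of next characters
        alphabet = set(text)
        bits = 0.0
        for i in range(order, len(text)):
            ctx, ch = text[i - order:i], text[i]
            c = counts.setdefault(ctx, Counter())
            p = (c[ch] + 1) / (sum(c.values()) + len(alphabet))   # predict first
            bits += -math.log2(p)
            c[ch] += 1                                            # then update
        return bits

    sample = "the cat sat on the mat. " * 4
    print(code_length(sample, order=1))   # fewer bits than the order-0 model below,
    print(code_length(sample, order=0))   # because context improves the predictions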
 
-- Matt Mahoney, [EMAIL PROTECTED]

- Original Message 
From: James Ratcliff [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Thursday, November 16, 2006 1:41:41 PM
Subject: Re: [agi] A question on the symbol-system hypothesis

The main first subtitle: 
Compression is Equivalent to General Intelligence.  Unless your definition of 
Compression is not simply a large amount of text turning into a small 
amount of text.
And likewise with General Intelligence.
I don't think, under any of the many, many definitions I have seen or created, 
that text or a compressed thing can possibly be considered general 
intelligence.
Another way: data != knowledge != intelligence

Intelligence requires something else.  I would say an actor.

Now I would agree that a highly compressed, lossless data could represent a 
good knowledge base.  Yeah that goes good.

But quite simply, a lossy one provides a Better knowledge base, with two 
examples:
1. Poison ivy causes an itching rash for most people
poison oak: The common effect is an irritating, itchy rash.
Can be generalized or combined to:
poison  oak and poison ivy cause an itchy rash.
Which is shorter, and lossy yet better for this fact.
2. If I see something in the road with four legs, and I'm about to run it over, 
if I only have rules that say if a deer or dog runs in the road, don't hit it,
then I can't correctly act, because I only know there is something with 4 legs 
in the road.  
However, if I have a generalized rule in my mind that says 
If something with four legs is in the road, avoid it, then I have a better 
rule.
This better rule cannot be gathered without generalization, and we have to have 
lots of generalization.

The generalizations can be invalidated with exceptions, and we do it all the 
time; that's how we can tell not to pet a skunk instead of a cat.
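
A toy sketch of that kind of lossy generalization, added here for illustration (the condition tokens and rule format are invented): two specific rules are merged by keeping only the conditions they share, trading detail for coverage:

    def generalize(rule_a, rule_b):
        """Each rule is (frozenset of condition tokens, action)."""
        cond_a, act_a = rule_a
        cond_b, act_b = rule_b
        if act_a != act_b:
            return None                      # only merge rules with the same action
        return (cond_a & cond_b, act_a)      # keep shared conditions only (lossy)

    deer_rule = (frozenset({"in_road", "four_legs", "deer"}), "avoid")
    dog_rule = (frozenset({"in_road", "four_legs", "dog"}), "avoid")
    general = generalize(deer_rule, dog_rule)

    def applies(rule, observation):
        return rule[0] <= observation        # all of the rule's conditions observed

    # the merged rule now covers an unidentified four-legged thing in the road
    print(applies(general, {"in_road", "four_legs"}))   # True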

James Ratcliff


Matt Mahoney [EMAIL PROTECTED] wrote: Richard  Loosemore  wrote:
 5) I have looked at your paper and my feelings are exactly the same as 
 Mark's  theorems developed on erroneous assumptions are worthless.

Which assumptions are erroneous?
 
-- Matt Mahoney, [EMAIL PROTECTED]

- Original Message 
From: Richard Loosemore 
To: agi@v2.listbox.com
Sent: Wednesday, November 15, 2006 4:09:23 PM
Subject: Re: [agi] A question on the symbol-system hypothesis

Matt Mahoney wrote:
 Richard, what is your definition of understanding?  How would you test 
 whether a person understands art?
 
 Turing offered a behavioral test for intelligence.  My understanding of 
 understanding

Re: [agi] A question on the symbol-system hypothesis

2006-11-17 Thread Mark Waser
 I think that generaliziation via lossless compression could more readily be 
 a Requirement for an AGI.

Human beings don't do lossless compression so lossless compression clearly 
*isn't* a requirement.  Lossless compression also clearly requires more 
resources than generalization where you are allowed to lose some odd examples.

 Also I must agree with Matt that you cant have knowledge seperate from other 
 knowledge, everything is intertwined, and that is the problem.

You're missing the point.  Yes, knowledge is intertwined; however, look at how 
it works when humans argue/debate.  Knowledge is divided into a small number of 
concepts that both the humans understand (although they may debate the truth 
value of the concepts).  Arguments are normally easily resolved (even if the 
resolution is agree to disagree) when the humans quickly reach the concepts 
at the root of the pyramid supporting the concept under question -- and that 
pyramid is *never* very large because humans simply don't work that way.  Take 
any debate (even the truly fiery ones) and you'll find that the number of 
concepts involved is *well* less than 100 (if it even reaches twenty).

 It is very difficult to teach a computer something without it knowing ALL 
 other things related to that, because then Some inference it tries to make 
 will be wrong, regardless.

But this is *precisely* how children are taught.  You have to start somewhere 
and you start by saying that certain concepts are just true (even though they 
may not *always* be true) and that it's not worthwhile to examine the concepts 
underneath them unless there's a *really* good reason.  The way in which you 
and Matt are arguing, I need to always know *and* use General Relativity even 
for things that are adequately handled by Newtonian Physics.  Yes, there *will* 
be errors when you reach edge cases (very high speeds in the Physics case) but 
there is *absolutely* no way to avoid this because you virtually never know 
when you're going to wander over a phase change when you're in the realm of new 
experiences.

 There is Nothing, that I know, that humans know that is not in terms of 
 something else, that is one thing that adds to the complexity of the issue.  

Yes, but I believe that there *is* a reasonably effective cognitive closure 
that contains a reasonably small number of concepts which can then apply 
external lookups and learning for everything else that it needs.

 But that means that an architecture for AI will have to have a method for 
 finding these inconsistencies and correcting them with good effeciency.

Yes!  Exactly and absolutely!  In fact, I would almost argue that this is *all* 
that intelligence does . . . .




  - Original Message - 
  From: James Ratcliff 
  To: agi@v2.listbox.com 
  Sent: Friday, November 17, 2006 9:13 AM
  Subject: Re: [agi] A question on the symbol-system hypothesis


  I think that generaliziation via lossless compression could more readily be a 
Requirement for an AGI.

  Also I must agree with Matt that you cant have knowledge seperate from other 
knowledge, everything is intertwined, and that is the problem.
  There is Nothing, that I know, that humans know that is not in terms of 
something else, that is one thing that adds to the complexity of the issue.  
  It is very difficult to teach a computer something without it knowing ALL 
other things related to that, because then Some inference it tries to make will 
be wrong, regardless.
But that means that an architecture for AI will have to have a method for 
finding these inconsistencies and correcting them with good effeciency.

  James Ratcliff

  Mark Waser [EMAIL PROTECTED] wrote:
 I don't believe it is true that better compression implies higher 
intelligence (by these definitions) for every possible agent, environment, 
universal Turing machine and pair of guessed programs. 

Which I take to agree with my point.

 I also don't believe Hutter's paper proved it to be a general trend (by 
some reasonable measure). 

Again, which I take to be agreement.

 But I wouldn't doubt it.

Depending upon what you mean by compression, I would strongly doubt it.  I 
believe that lossless compression is emphatically *not* part of higher 
intelligence in most real-world conditions and, in fact, that the gains 
provided by losing a lot of data makes a much higher intelligence possible 
with the same limited resources than an intelligence that is constrained by the 
requirement to not lose data.

  - Original Message - 
  From: Matt Mahoney 
  To: agi@v2.listbox.com 
  Sent: Thursday, November 16, 2006 2:17 PM
  Subject: Re: [agi] A question on the symbol-system hypothesis


  In the context of AIXI, intelligence is measured by an accumulated reward 
signal, and compression is defined by the size of a program (with respect to 
some fixed universal Turing machine) guessed by the agent that is consistent

Re: [agi] A question on the symbol-system hypothesis

2006-11-17 Thread Richard Loosemore

Ben Goertzel wrote:

Rings and Models are appropriated terms, but the mathematicians
involved would never be so stupid as to confuse them with the real
things.  Marcus Hutter and yourself are doing precisely that.

I rest my case.


Richard Loosemore


IMO these analogies are not fair.

The mathematical notion of a ring is not intended to capture
essential aspects of the commonsense notion of a ring.  It is merely
chosen because a certain ring-like-ness characterizes the mathematical
structure in question...

On the other hand, the notions of intelligence and understanding
and so forth being bandied about on this list obviously ARE intended
to capture essential aspects of the commonsense notions that share the
same word with them.

As Eric Baum noted, in his book What Is Thought? he did not in fact
define intelligence or understanding as compression, but rather made a
careful argument as to why he believes compression is an essential
aspect of intelligence and understanding.  You really have not
addressed his argument in your posts, IMO.


I think you are missing the nature of the point I was making.

I was saying that *because* (for independent reasons) these people's 
usage of terms like intelligence is so disconnected from commonsense 
usage (they idealize so extremely that the sense of the word no longer 
bears a reasonable connection to the original) *therefore* the situation 
is akin to the one that obtains for Model Theory or Rings.


I am saying that these folks are trying to have their cake and eat it 
too:  they idealize intelligence into something so disconnected from 
the real world usage that, really, they ought not to use the term, but 
should instead invent another one like ooblifience to describe the 
thing they are proving theorems about.


But then, having so distorted the meaning of the term, they go back and 
start talking about the conclusions they derived from their math as if 
those conclusions applied to the real world thing that in commonsense 
parlance we call intelligence.  At that point they are doing what I 
claimed a Model Theorist would be doing if she started talking about a 
kid's model airplane as if Model Theory applied to it.




Richard Loosemore.



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] A question on the symbol-system hypothesis

2006-11-17 Thread James Ratcliff
Sorry, meant lossy of course.

James

Mark Waser [EMAIL PROTECTED] wrote: [quoting his message above in full; snipped]

Re: [agi] A question on the symbol-system hypothesis

2006-11-17 Thread Charles D Hixson

Ben Goertzel wrote:

...
On the other hand, the notions of intelligence and understanding
and so forth being bandied about on this list obviously ARE intended
to capture essential aspects of the commonsense notions that share the
same word with them.
...
Ben

Given that purpose, I propose the following definition:
A system understands a situation that it encounters if it predictably 
acts in such a way as to maximize the probability of achieving its 
goals in that situation.


I'll grant that it's a bit fuzzy, but I believe that it captures the 
essence of the visible evidence of understanding.  This doesn't say what 
understanding is, merely how you can recognize it.
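
Taking the definition behaviorally, here is a minimal sketch of how one might recognize it, added purely as a toy illustration (the situation, goal, and function names are invented): run the system through many instances of a situation and measure how often its actions achieve its goal:

    import random

    def understanding_score(act, sample_situation, achieves_goal, trials=1000):
        """Empirical probability that the policy's action achieves the goal."""
        wins = 0
        for _ in range(trials):
            s = sample_situation()
            if achieves_goal(s, act(s)):
                wins += 1
        return wins / trials

    # Toy situation: a number appears; the goal is to respond with a larger one.
    sample = lambda: random.randint(0, 9)
    goal = lambda s, a: a > s

    print(understanding_score(lambda s: s + 1, sample, goal))                 # ~1.0
    print(understanding_score(lambda s: random.randint(0, 9), sample, goal))  # ~0.45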


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] A question on the symbol-system hypothesis

2006-11-17 Thread Mike Dougherty

I'm not sure I follow every twist in this thread.  No... I'm sure I don't
follow every twist in this thread.

I have a question about this compression concept.  Compute the number of
pixels required to graph the Mandelbrot set at whatever detail you feel to
be sufficient for the sake of example.  Now describe how this 'pattern' is
compressed.  Of course the ideal compression is something like 6 bytes.
Show me a 6-byte jpg of a Mandelbrot  :)
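
To make my question concrete, here is the kind of thing I mean (the function,
region, and character resolution below are arbitrary illustrative choices):
the 'compressed' form is a short program that regenerates as many pixels as
you like, not a small image file.

    # Illustrative: the Mandelbrot set "compresses" to the program that
    # generates it, not to a stored picture of it.
    def mandelbrot(width=80, height=40, max_iter=50):
        rows = []
        for j in range(height):
            row = ""
            for i in range(width):
                # map the character cell to a point in the complex plane
                c = complex(-2.5 + 3.5 * i / width, -1.25 + 2.5 * j / height)
                z, n = 0j, 0
                while abs(z) <= 2 and n < max_iter:
                    z = z * z + c
                    n += 1
                row += "#" if n == max_iter else " "
            rows.append(row)
        return "\n".join(rows)

    print(mandelbrot())  # a few hundred bytes of source regenerate arbitrarily many pixels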

Is there a concept of compression of an infinite series?  Or was the term
bounding being used to describe the attractor around which the values
tend to fall?  Chaotic attractor, statistical median, etc.  They seem to be
describing the same tendency of human pattern recognition across different
types of data.

Is a 'symbol' an idea, or a handle on an idea?  Does this support the
mechanics of how concepts can be built from agreed-upon ideas to make a new
token we can exchange in communication that represents the sum of the
constituent ideas?   If this symbol-building process is used to communicate
ideas across a highly volatile link (from me to you) then how would these
symbols be used by a single computation machine?  (Is that a hard takeoff
situation, where the near zero latency turns into an exponential increase in
symbol complexity per unit time?)

If you could provide some feedback as a reality check on these thoughts, I'd
appreciate the clarification... thanks.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] A question on the symbol-system hypothesis

2006-11-16 Thread James Ratcliff
Furthermore, we learned in class recently about a case where a person was 
literally born with only half a brain; I don't have that story but here is one:
http://abcnews.go.com/2020/Health/story?id=1951748page=1

I think all the talk about hard numbers is really off base, unfortunately, and AI 
shouldn't be held to those kinds of rigid standards, as we really don't know the 
minimum information required; and considering we will most likely have tons and 
tons of redundant information, we will have much, much more than whatever amount 
is quoted, and will not know the magic number until it happens.  The point of 
it is the actual structure and usage of the knowledge.
  Talking about knowledge in the raw has no real use.

My AI already has access to over 600 recent novels, but unfortunately it is not AGI.

Now, I personally understand/comprehend ALL the novels.  It's pretty simple.  
Do I 'know' or have 'memorized' all the novels?  No.
But there is no reason a human can't comprehend much greater than 10^9 bits.
Your terminology really must be tightened up if you are to make a distinct, 
strong point.  I have read nearly 1000 novels, and understood most of what I 
read.

Now if it was comprehend much greater than 10^9 bits of structured, 
non-repeated knowledge, it may be true that humans cannot understand that.
But, then again, wait: if it is structured, then it has form and patterns that 
can be manipulated, and anything that has a pattern is easier to 'know' or 
understand.

James Ratcliff


Richard Loosemore [EMAIL PROTECTED] wrote: Matt Mahoney wrote:
 I will try to answer several posts here. I said that the knowledge
 base of an AGI must be opaque because it has 10^9 bits of information,
 which is more than a person can comprehend. By opaque, I mean that you
 can't do any better by examining or modifying the internal
 representation than you could by examining or modifying the training
 data. For a text based AI with natural language ability, the 10^9 bits
 of training data would be about a gigabyte of text, about 1000 books. Of
 course you can sample it, add to it, edit it, search it, run various
 tests on it, and so on. What you can't do is read, write, or know all of
 it. There is no internal representation that you could convert it to
 that would allow you to do these things, because you still have 10^9
 bits of information. It is a limitation of the human brain that it can't
 store more information than this.

Understanding 10^9 bits of information is not the same as storing 10^9 
bits of information.

A typical painting in the Louvre might be 1 meter on a side.  At roughly 
16 pixels per millimeter, and a perceivable color depth of about 20 bits 
that would be about 10^8 bits.  If an art specialist knew all about, 
say, 1000 paintings in the Louvre, that specialist would understand a 
total of about 10^11 bits.

You might be inclined to say that not all of those bits count, that many 
are redundant to understanding.

Exactly.

People can easily comprehend 10^9 bits.  It makes no sense to argue 
about degree of comprehension by quoting numbers of bits.


Richard Loosemore

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303



___
James Ratcliff - http://falazar.com
New Torrent Site, Has TV and Movie Downloads! 
http://www.falazar.com/projects/Torrents/tvtorrents_show.php
 

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] A question on the symbol-system hypothesis

2006-11-16 Thread Matt Mahoney
Mark Waser [EMAIL PROTECTED]
wrote:
 
So *prove* to me why information  theory forbids transparency of a knowledge 
base.


Isn't this pointless?  I mean, if I offer any proof you will just attack the 
assumptions.  Without assumptions, you can't even prove the universe exists.

I have already stated reasons why I believe this is true.  An AGI will have 
greater algorithmic complexity than the human brain (assumption).  Transparency 
implies that you can examine the knowledge base and deterministically predict 
its output given some input (assumption about the definition of transparency).  
Legg proved [1] that a Turing machine cannot predict another machine of greater 
algorithmic complexity.

Aside from that, I can only give examples as supporting evidence.
1. The relative success of statistical language learning (opaque) compared to 
structured knowledge, parsing, etc.
2. It would be (presumably) easier to explain human behavior by asking 
questions than by examining neurons (assuming we had the technology to do this).

In your argument for transparency, you assume that individual pieces of 
knowledge can be isolated.  Prove it.  In the brain, knowledge is distributed.  
We make decisions by integrating many sources of evidence from all parts of the 
brain.

[1] Legg, Shane, (2006), Is There an Elegant Universal Theory of
Prediction?,  Technical Report
IDSIA-12-06, IDSIA / USI-SUPSI, Dalle Molle
Institute for Artificial Intelligence, Galleria 2, 6928 Manno, Switzerland.

http://www.vetta.org/documents/IDSIA-12-06-1.pdf



  

-- Matt Mahoney, [EMAIL PROTECTED]



- Original Message 

From: Mark Waser [EMAIL PROTECTED]

To: agi@v2.listbox.com

Sent: Thursday, November 16, 2006 9:57:40 AM

Subject: Re: [agi] A question on the symbol-system hypothesis



 The knowledge base has high complexity.   You can't debug it.  You 
can examine it and edit it but you can't verify  its correctness.

  

 While the knowledge base is complex, I disagree  with the way in which you're 
attempting to use the first sentence.   The knowledge base *isn't* so complex 
that it causes a truly insoluble  problem.  The true problem is that the 
knowledge base will have a large  enough size and will grow and change quickly 
enough that you can't maintain  100% control over the contents or even the 
integrity of it.

  

 I disagree with the second but believe that it may  just be your semantics 
because of the third sentence.  The question is what  we mean by debug.  If 
you mean remove all incorrect knowledge, then the  answer is obviously yes, we 
can't remove all incorrect knowledge because odd  sequences of observed events 
and incomplete knowledge means that globally  incorrect knowledge *is* the 
correct deduction from experience.  On the  other hand, we certainly should be 
able to debug how the knowledge base  operates, make sure that it maintains an 
acceptable degree of internal  integrity, and responds correctly when it 
detects a major integrity  problem.  The *process* and global behavior of the 
knowledge base is what  is important and it *can* be debugged.  Minor mistakes 
and errors are just  the cost of being limited in an erratic world.

  

  An AGI with a correct learning algorithm might  still behave badly.

  

 No!  An AGI with a correct learning algorithm  may, through an odd sequence of 
events and incomplete knowledge, come to an  incorrect conclusion and take an 
action that it would not have taken if it had  perfect knowledge -- BUT -- this 
is entirely correct behavior, not bad  behavior.  Calling it bad behavior 
dramatically obscures what you are  trying to do.

  

  You can't examine the knowledge base to find  out why. 

  

 No, no, no, no, NO!  If you (or the AI) can't go back through the causal 
chain and explain exactly why an action was taken, then you have created an 
unsafe AI.  A given action depends upon a small part of the knowledge base 
(which may then depend upon ever larger sections in an ongoing pyramid) and 
you can debug an action and see what led to an action (that you believe is 
incorrect but the AI believes is correct).

  

  You can't manipulate the knowledge base data  to fix it. 

  

 Bull.  You should be able to correctly come across a piece of incorrect 
knowledge that led to an incorrect decision.  You should be able to find the 
supporting knowledge structures.  If the knowledge is truly incorrect, you 
should be able to provide evidence/experiences to the AI that lead it to 
correct the incorrect knowledge (or you could even just tack the correct 
knowledge into the knowledge base, fix it so that it temporarily can't be 
altered, and run your integrity repair routines -- which, I contend, any AI 
that is going to go anywhere must have).

  

  At least you can't do these things any better  than manipulating the inputs 
  and observing the outputs. 

  

 No.  I can find structures in the knowledge  base and alter them.  I would

Re: [agi] A question on the symbol-system hypothesis

2006-11-16 Thread James Ratcliff
I concur, there are just too many things wrong with these statements.

If your AI can't tell you on any level why it's doing something, and you can't 
tell it not to do it, or to do it in a different way, then you have a Programmed 
Machine, not an AI.

ALL programs are modified via changing the input to change the output; it's just 
that we usually change the program or the data file, but in this case we would 
tell the AGI to change its behavior, the links or weights or however that is 
represented.

Google's algorithm can be understood in parts, and you can tweak a page to land 
you near the top.
And if Google is the model of the AI, and it returns a wrong URL, a spam one, 
you can directly go in there, modify it, or tell it to modify itself, and 
correct the issue.

In humans, it is harder sometimes, but if a person turns left, you can ask and 
receive an answer.  The only times you can't are the vague things where we don't 
understand our decisions, like, why did you 'randomly' take this path instead 
of that one.

Understanding is a dum-dum word; it must be specifically defined as a concept 
or not used.  Understanding art is a Subjective question.  Everyone has their 
own 'interpretations' of what that means: either brush strokes, or style, or 
color, or period, or content, or inner meaning.
But you CAN'T measure understanding of an object internally like that.  There 
MUST be an external measure of understanding.
  If you ask me if I understand the painting, and I say yes, do I really 
understand it?  No, of course not; I just know how to answer a yes or no 
question.
You must ask details, or I must do something.  I MUST explain or act.
If it is a door or another object I see, the test of understanding is how I 
interact.  I 'know' that I can open the door, shut the door, throw the ball, so 
it is said that I 'understand' the door object, to a degree.  It is also a 
sliding scale of understanding.  If I understand how the door works, how it is 
built, what I can do with it, all different styles of doors, then I am said to 
'understand' more.  But there is no decent question you can put that is just 
plainly:  
  Do you understand X?

James

James

Mark Waser [EMAIL PROTECTED] wrote:  It keeps a copy of the searchable part 
of the Internet in RAM

Sometimes I wonder why I argue with you when you throw around statements 
like this that are this massively incorrect.  Would you care to retract 
this?

 You could, in principle, model the Google server in a more powerful 
 machine and use it to predict the result of a search

What is this model the Google server BS?  Google search results are a 
*rat-simple* database query.  Building the database involves a much more 
sophisticated algorithm but its results are *entirely* predictable if you 
know the order of the sites that are going to be imported.  There is *NO* 
mystery or magic here.  It is all eminently debuggable if you know the 
initial conditions.

 My point about AGI is that constructing an internal representation that 
 allows debugging the learned knowledge is pointless.

Huh? This is absolutely ridiculous.  If the learned knowledge can't be 
debugged (either by you or by the AGI) then it's going to be *a lot* more 
difficult to unlearn/correct incorrect knowledge.  How can that possibly be 
pointless?  Not to mention the fact that teaching knowledge to others is 
much easier . . . .

 A more powerful AGI could do it, but you can't.

Why can't I -- particularly if I were given infinite time (or even a 
moderately decent set of tools)?

 You can't do any better than to manipulate the input and observe the 
 output.

This is absolute and total BS, and the last two sentences in your e-mail (If you 
tell your robot to do something and it sits in a corner instead, you can't 
do any better than to ask it why, hope for a sensible answer, and retrain 
it.  Trying to debug the reasoning for its behavior would be like trying to 
understand why a driver made a left turn by examining the neural firing 
patterns in the driver's brain.) are even worse.  The human brain *is* 
relatively opaque in its operation, but there is no good reason that I know 
of why this is advantageous and *many* reasons why it is disadvantageous -- 
and I know of no reasons why opacity is required for intelligence.


- Original Message - 
From: Matt Mahoney 
To: 
Sent: Wednesday, November 15, 2006 2:24 PM
Subject: Re: [agi] A question on the symbol-system hypothesis


Sorry if I did not make clear the distinction between knowing the learning 
algorithm for AGI (which we can do) and knowing what was learned (which we 
can't).

My point about Google is to illustrate that distinction.  The Google 
database is about 10^14 bits.  (It keeps a copy of the searchable part of 
the Internet in RAM).  The algorithm is deterministic.  You could, in 
principle, model the Google server in a more powerful machine and use it to 
predict the result of a search.  But where does this get you?  You can't

Re: [agi] A question on the symbol-system hypothesis

2006-11-16 Thread Mark Waser
1. The fact that AIXI is intractable is not relevant to the proof that 
compression = intelligence, any more than the fact that AIXI is not computable. 
 In fact it is supporting because it says that both are hard problems, in 
agreement with observation.

Wrong.  Compression may (and, I might even be willing to admit, does) equal 
intelligence under the conditions of perfect and total knowledge.  It is my 
contention, however, that without those conditions that compression does not 
equal intelligence and AIXI does absolutely nothing to disprove my contention 
since it assumes (and requires) those conditions -- which emphatically do not 
exist.

2. Do not confuse the two compressions.  AIXI proves that the optimal behavior 
of a goal seeking agent is to guess the shortest program consistent with its 
interaction with the environment so far.  This is lossless compression.  A 
typical implementation is to perform some pattern recognition on the inputs to 
identify features that are useful for prediction.  We sometimes call this 
lossy compression because we are discarding irrelevant data.  If we 
anthropomorphise the agent, then we say that we are replacing the input with 
perceptually indistinguishable data, which is what we typically do when we 
compress video or sound.

I haven't confused anything.  Under perfect conditions, and only under perfect 
conditions, does AIXI prove anything.  You don't have perfect conditions so 
AIXI proves absolutely nothing.

- Original Message - 
From: Matt Mahoney [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Wednesday, November 15, 2006 7:20 PM
Subject: Re: [agi] A question on the symbol-system hypothesis


1. The fact that AIXI^tl is intractable is not relevant to the proof that 
compression = intelligence, any more than the fact that AIXI is not computable. 
 In fact it is supporting because it says that both are hard problems, in 
agreement with observation.

2. Do not confuse the two compressions.  AIXI proves that the optimal behavior 
of a goal seeking agent is to guess the shortest program consistent with its 
interaction with the environment so far.  This is lossless compression.  A 
typical implementation is to perform some pattern recognition on the inputs to 
identify features that are useful for prediction.  We sometimes call this 
lossy compression because we are discarding irrelevant data.  If we 
anthropomorphise the agent, then we say that we are replacing the input with 
perceptually indistinguishable data, which is what we typically do when we 
compress video or sound.
 
-- Matt Mahoney, [EMAIL PROTECTED]

- Original Message 
From: Mark Waser [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Wednesday, November 15, 2006 3:48:37 PM
Subject: Re: [agi] A question on the symbol-system hypothesis

 The connection between intelligence and compression is not obvious.

The connection between intelligence and compression *is* obvious -- but 
compression, particularly lossless compression, is clearly *NOT* 
intelligence.

Intelligence compresses knowledge to ever simpler rules because that is an 
effective way of dealing with the world.  Discarding ineffective/unnecessary 
knowledge to make way for more effective/necessary knowledge is an effective 
way of dealing with the world.  Blindly maintaining *all* knowledge at 
tremendous costs is *not* an effective way of dealing with the world (i.e. 
it is *not* intelligent).

1. What Hutter proved is that the optimal behavior of an agent is to guess 
that the environment is controlled by the shortest program that is 
consistent with all of the interaction observed so far.  The problem of 
finding this program is known as AIXI.
 2. The general problem is not computable [11], although Hutter proved 
 that if we assume time bounds t and space bounds l on the environment, 
 then this restricted problem, known as AIXItl, can be solved in O(t*2^l) 
 time

Very nice -- except that O(t*2^l) time is basically equivalent to incomputable 
for any real scenario.  Hutter's proof is useless because it relies upon the 
assumption that you have adequate resources (i.e. time) to calculate AIXI --  
which you *clearly* do not.  And like any other proof, once you invalidate 
the assumptions, the proof becomes equally invalid.  Except as an 
interesting but unobtainable edge case, why do you believe that Hutter has 
any relevance at all?


- Original Message - 
From: Matt Mahoney [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Wednesday, November 15, 2006 2:54 PM
Subject: Re: [agi] A question on the symbol-system hypothesis


Richard, what is your definition of understanding?  How would you test 
whether a person understands art?

Turing offered a behavioral test for intelligence.  My understanding of 
understanding is that it is something that requires intelligence.  The 
connection between intelligence and compression is not obvious.  I have 
summarized the arguments here.
http://cs.fit.edu/~mmahoney/compression

Re: [agi] A question on the symbol-system hypothesis

2006-11-16 Thread Mark Waser
Isn't this pointless?  I mean, if I offer any proof you will just attack 
the assumptions.  Without assumptions, you can't even prove the universe 
exists.


Just come up with decent assumptions that I'm willing to believe are likely. 
I'm not attacking your assumptions just to be argumentative, I'm questioning 
them because I believe that they are the root cause of your erroneous 
knowledge.


An AGI will have greater algorithmic complexity than the human brain 
(assumption).


For example, it's very worthwhile to have you spell out something like this. 
I don't believe that the AGI will have greater algorithmic complexity than 
the human brain.  It is my belief that after a certain point *any* 
intelligence can import and use any algorithm at need (given sufficient 
time).  Thus Legg's proof is irrelevant, since any human given sufficient 
time and knowledge will have sufficient algorithmic complexity to unravel 
any AGI, and any AGI given sufficient time and knowledge will have sufficient 
algorithmic complexity to unravel any human.  In the case of an AI 
unraveling a human, however, I believe that the algorithmic complexity of a 
human being is so ridiculously high (because each neuron is unique and 
physically operates differently) that, short of modeling down to the lowest 
physical level, there *isn't* a level with low enough algorithmic complexity 
for the AI to match.  On the other hand, I believe that the algorithmic 
complexity of an AGI can and will be much lower.


Of course, I can't *prove* that last sentence but I can try to persuade you 
that it is true by asking questions like:
1.  Do you really believe that an average human requires more than a million 
algorithms/recipes to do things (as opposed to a million applications of 
algorithms to different data which is clearly a ridiculously low number)?
2.  So -- How fast do humans learn algorithms and how many do they start 
with hard-coded into the genome?


In your argument for transparency, you assume that individual pieces of 
knowledge can be isolated.  Prove it.


Yes, I do make that assumption but you've tacitly granted me that assumption 
several times.  Give me a counter-example of knowledge that can't be 
isolated.  My proof is that humans who truly possess a piece of knowledge 
can always explain it (even if the explanation is only I've always seen it 
happen that way).  This is not the native representation of the knowledge 
(neural networks) but is, nonetheless, a valid and transparent (and 
isolated) representation.


In the brain, knowledge is distributed.  We make decisions by integrating 
many sources of evidence from all parts of the brain.


Yes.  Neural networks do not isolate knowledge -- but that is a feature of 
the networks, not the knowledge.  I believe that *all* knowledge (or, at 
least, the knowledge required to reach AGI-level intelligence) can be 
isolated/have its reasoning explained.


- - - -

To claim that knowledge can't be isolated is to claim that there is 
knowledge that cannot be explained.


Do you want to
a) disagree with the above statement,
b) show me knowledge which can't be explained,
c) show why the statement is irrelevant, or
d) concede the point?



- Original Message - 
From: Matt Mahoney [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Thursday, November 16, 2006 11:52 AM
Subject: Re: [agi] A question on the symbol-system hypothesis


Mark Waser [EMAIL PROTECTED]
wrote:


So *prove* to me why information  theory forbids transparency of a knowledge 
base.



Isn't this pointless?  I mean, if I offer any proof you will just attack the 
assumptions.  Without assumptions, you can't even prove the universe exists.


I have already stated reasons why I believe this is true.  An AGI will have 
greater algorithmic complexity than the human brain (assumption). 
Transparency implies that you can examine the knowledge base and 
deterministically predict its output given some input (assumption about the 
definition of transparency).  Legg proved [1] that a Turing machine cannot 
predict another machine of greater algorithmic complexity.


Aside from that, I can only give examples as supporting evidence.
1. The relative success of statistical language learning (opaque) compared 
to structured knowledge, parsing, etc.
2. It would be (presumably) easier to explain human behavior by asking 
questions than by examining neurons (assuming we had the technology to do 
this).


In your argument for transparency, you assume that individual pieces of 
knowledge can be isolated.  Prove it.  In the brain, knowledge is 
distributed.  We make decisions by integrating many sources of evidence from 
all parts of the brain.


[1] Legg, Shane, (2006), Is There an Elegant Universal Theory of
Prediction?,  Technical Report
IDSIA-12-06, IDSIA / USI-SUPSI, Dalle Molle
Institute for Artificial Intelligence, Galleria 2, 6928 Manno, Switzerland.

http://www.vetta.org/documents

Re: [agi] A question on the symbol-system hypothesis

2006-11-16 Thread James Ratcliff
The main first subtitle: 
Compression is Equivalent to General Intelligence
Unless your definition of Compression is not the simple large amount of text 
turning into the small amount of text.
And likewise with General Intelligence.
I don't think, under any of the many many definitions I have seen or created, 
that text or a compressed thing can possibly be considered general 
intelligence.
Another way: data != knowledge != intelligence

Intelligence requires something else.  I would say an actor.

Now I would agree that highly compressed, lossless data could represent a 
good knowledge base.  Yeah that goes good.

But quite simply, a lossy one provides a Better knowledge base, with two 
examples:
1. Poison ivy causes an itching rash for most people
poison oak: The common effect is an irritating, itchy rash.
Can be generalized or combined to:
poison oak and poison ivy cause an itchy rash.
Which is shorter, and lossy, yet better for this fact.
2. If I see something in the road with four legs, and I'm about to run it over, 
if I only have rules that say if a deer or dog runs in the road, don't hit it,
then I can't correctly act, because I only know there is something with 4 legs 
in the road.  
However, if I have a generalized rule in my mind that says 
If something with four legs is in the road, avoid it, then I have a better 
rule.
This better rule cannot be gathered without generalization, and we have to have 
lots of generalization.

The generalizations can be invalidated with exceptions, and we do it all the 
time; that's how we can tell not to pet a skunk instead of a cat.
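
A minimal sketch of that rule-plus-exceptions idea (the rule names and the
Python structure below are illustrative, not anything from an actual system):

    # Lossy general rules with explicit exceptions; the exception is checked
    # first, so the generalization is kept even though it is not universally true.
    GENERAL_RULES = {
        "four legs in the road": "avoid it",
        "four-legged animal nearby": "ok to pet",
    }
    EXCEPTIONS = {
        ("four-legged animal nearby", "skunk"): "do not pet",
    }

    def decide(situation, thing):
        if (situation, thing) in EXCEPTIONS:            # specific knowledge wins
            return EXCEPTIONS[(situation, thing)]
        return GENERAL_RULES.get(situation, "no rule")  # fall back to the lossy rule

    print(decide("four legs in the road", "unknown animal"))  # avoid it
    print(decide("four-legged animal nearby", "cat"))         # ok to pet
    print(decide("four-legged animal nearby", "skunk"))       # do not pet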

James Ratcliff


Matt Mahoney [EMAIL PROTECTED] wrote: Richard Loosemore  wrote:
 5) I have looked at your paper and my feelings are exactly the same as 
 Mark's -- theorems developed on erroneous assumptions are worthless.

Which assumptions are erroneous?
 
-- Matt Mahoney, [EMAIL PROTECTED]

- Original Message 
From: Richard Loosemore 
To: agi@v2.listbox.com
Sent: Wednesday, November 15, 2006 4:09:23 PM
Subject: Re: [agi] A question on the symbol-system hypothesis

Matt Mahoney wrote:
 Richard, what is your definition of understanding?  How would you test 
 whether a person understands art?
 
 Turing offered a behavioral test for intelligence.  My understanding of 
 understanding is that it is something that requires intelligence.  The 
 connection between intelligence and compression is not obvious.  I have 
 summarized the arguments here.
 http://cs.fit.edu/~mmahoney/compression/rationale.html

1) There will probably never be a compact definition of understanding. 
  Nevertheless, it is possible for us (being understanding systems) to 
know some of its features.  I could produce a shopping list of typical 
features of understanding, but that would not be the same as a 
definition, so I will not.  See my paper in the forthcoming proceedings 
of the 2006 AGIRI workshop, for arguments.  (I will make a version of 
this available this week, after final revisions).

3) One tiny, almost-too-obvious-to-be-worth-stating fact about 
understanding is that it compresses information in order to do its job.

4) To mistake this tiny little facet of understanding for the whole is 
to say that a hurricane IS rotation, rather than that rotation is a 
facet of what a hurricane is.

5) I have looked at your paper and my feelings are exactly the same as 
Mark's -- theorems developed on erroneous assumptions are worthless.



Richard Loosemore


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303



___
James Ratcliff - http://falazar.com
New Torrent Site, Has TV and Movie Downloads! 
http://www.falazar.com/projects/Torrents/tvtorrents_show.php
 

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] A question on the symbol-system hypothesis

2006-11-16 Thread Richard Loosemore

Matt Mahoney wrote:

Richard Loosemore [EMAIL PROTECTED] wrote:
5) I have looked at your paper and my feelings are exactly the same as 
Mark's -- theorems developed on erroneous assumptions are worthless.


Which assumptions are erroneous?


Marcus Hutter's work is about abstract idealizations of the process of 
intelligence that are strictly beyond the bounds of computability in the 
universe we actually live in.


The erroneous assumptions I spoke of are centered on his (and your) 
misappropriation of words that already have other meanings (like 
intelligence, behavior, optimal behavior, goal, agent, observation, and 
so on):  basically, if he were to claim to be proving mathematical facts 
about entities that had nothing to do with the world, I would not fault 
him, but he and you attach words to some of the mathematical constructs 
that already have other meanings.  That identification of terms is a 
false assumption.


What happens after that is that you start to deploy the conclusions 
derived from the math AS IF THEY APPLIED TO THE ORIGINAL MEANINGS OF THE 
APPROPRIATED TERMS.  So in that sense, you are basing your conclusions 
on erroneous assumptions.


You know about the mathematical field called Model Theory?  You know 
about the mathematical concept of a Ring?  Try walking around a toy 
store talking about the airplane models as if they were the same as the 
models in model theory.  Try walking around a jeweler's and coming to 
conclusions about the engagement rings as if they were instances of 
mathematical rings.


That would be stupid.

Rings and Models are appropriated terms, but the mathematicians 
involved would never be so stupid as to confuse them with the real 
things.  Marcus Hutter and yourself are doing precisely that.


I rest my case.


Richard Loosemore


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Re: [agi] A question on the symbol-system hypothesis

2006-11-16 Thread Ben Goertzel

Rings and Models are appropriated terms, but the mathematicians
involved would never be so stupid as to confuse them with the real
things.  Marcus Hutter and yourself are doing precisely that.

I rest my case.


Richard Loosemore


IMO these analogies are not fair.

The mathematical notion of a ring is not intended to capture
essential aspects of the commonsense notion of a ring.  It is merely
chosen because a certain ring-like-ness characterizes the mathematical
structure in question...

On the other hand, the notions of intelligence and understanding
and so forth being bandied about on this list obviously ARE intended
to capture essential aspects of the commonsense notions that share the
same word with them.

As Eric Baum noted, in his book What Is Thought? he did not in fact
define intelligence or understanding as compression, but rather made a
careful argument as to why he believes compression is an essential
aspect of intelligence and understanding.  You really have not
addressed his argument in your posts, IMO.

Ben

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] A question on the symbol-system hypothesis

2006-11-16 Thread Mark Waser
I consider the last question in each of your examples to be unreasonable 
(though for very different reasons).


In the first case, What do you see? is a nonsensical and unnecessary 
extension on a rational chain of logic.  The visual subsystem, which is not 
part of the AGI, has reported something and, unless there is a good reason 
not to, the AGI should believe it as a valid fact and the root of a 
knowledge chain.  Extending past this point to ask a spurious, open question 
is silly.  Doing so is entirely unnecessary.  This knowledge chain is 
isolated.


In the second case, I don't know why you're doing any sort of search 
(particularly since there wasn't any sort of question preceding it).  The AI 
needed gas, it found a gas station, and it headed for it.  You asked why it 
waited til a given time and it told you.  How is this not isolated?


- Original Message - 
From: Matt Mahoney [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Thursday, November 16, 2006 3:01 PM
Subject: Re: [agi] A question on the symbol-system hypothesis


Mark Waser [EMAIL PROTECTED] wrote:

Give me a counter-example of knowledge that can't be isolated.


Q. Why did you turn left here?
A. Because I need gas.
Q. Why do you need gas?
A. Because the tank is almost empty.
Q. How do you know?
A. Because the needle is on E.
Q. How do you know?
A. Because I can see it.
Q. What do you see?
(depth first search)

Q. Why did you turn left here?
A. Because I need gas.
Q. Why did you turn left *here*?
A. Because there is a gas station.
Q. Why did you turn left now?
A. Because there is an opening in the traffic.
(breadth first search)

It's not that we can't do it in theory.  It's that we can't do it in 
practice.  The human brain is not a Turing machine.  It has finite time and 
memory limits.
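
A small sketch of those two question orders, treating the answers as a tree of
reasons (the tree contents below are made up for illustration):

    # The "why" chains above are just traversals of a tree of reasons.
    from collections import deque

    REASONS = {
        "turned left": ["need gas", "gas station here", "opening in traffic"],
        "need gas": ["tank almost empty"],
        "tank almost empty": ["needle is on E"],
        "needle is on E": ["I can see it"],
    }

    def depth_first(node):
        chain = [node]
        for child in REASONS.get(node, []):
            chain += depth_first(child)   # follow one branch all the way down
            break
        return chain

    def breadth_first(node):
        order, queue = [], deque([node])
        while queue:
            n = queue.popleft()
            order.append(n)
            queue.extend(REASONS.get(n, []))
        return order

    print(depth_first("turned left"))    # turned left -> need gas -> tank almost empty -> ...
    print(breadth_first("turned left"))  # turned left, then all three sibling reasons first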


-- Matt Mahoney, [EMAIL PROTECTED]



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] A question on the symbol-system hypothesis

2006-11-16 Thread Matt Mahoney
My point is that humans make decisions based on millions of facts, and we do 
this every second.  Every fact depends on other facts.  The chain of reasoning 
covers the entire knowledge base.

I said millions, but we really don't know.  This is an important number.  
Historically we have tended to underestimate it.  If the number is small, then 
we *can* follow the reasoning, make changes to the knowledge base and predict 
the outcome (provided the representation is transparent and accessible through 
a formal language).  But this leads us down a false path.

We are not so smart that we can build a machine smarter than us, and still be 
smarter than it.  Either the AGI has more algorithmic complexity than you do, 
or it has less.  If it has less, then you have failed.  If it has more, and you 
try to explore the chain of reasoning, you will exhaust the memory in your 
brain before you finish.

 
-- Matt Mahoney, [EMAIL PROTECTED]

- Original Message 
From: Mark Waser [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Thursday, November 16, 2006 3:16:54 PM
Subject: Re: [agi] A question on the symbol-system hypothesis

I consider the last question in each of your examples to be unreasonable 
(though for very different reasons).

In the first case, What do you see? is a nonsensical and unnecessary 
extension on a rational chain of logic.  The visual subsystem, which is not 
part of the AGI, has reported something and, unless there is a good reason 
not to, the AGI should believe it as a valid fact and the root of a 
knowledge chain.  Extending past this point to ask a spurious, open question 
is silly.  Doing so is entirely unnecessary.  This knowledge chain is 
isolated.

In the second case, I don't know why you're doing any sort of search 
(particularly since there wasn't any sort of question preceding it).  The AI 
needed gas, it found a gas station, and it headed for it.  You asked why it 
waited til a given time and it told you.  How is this not isolated?

- Original Message - 
From: Matt Mahoney [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Thursday, November 16, 2006 3:01 PM
Subject: Re: [agi] A question on the symbol-system hypothesis


Mark Waser [EMAIL PROTECTED] wrote:
Give me a counter-example of knowledge that can't be isolated.

Q. Why did you turn left here?
A. Because I need gas.
Q. Why do you need gas?
A. Because the tank is almost empty.
Q. How do you know?
A. Because the needle is on E.
Q. How do you know?
A. Because I can see it.
Q. What do you see?
(depth first search)

Q. Why did you turn left here?
A. Because I need gas.
Q. Why did you turn left *here*?
A. Because there is a gas station.
Q. Why did you turn left now?
A. Because there is an opening in the traffic.
(breadth first search)

It's not that we can't do it in theory.  It's that we can't do it in 
practice.  The human brain is not a Turing machine.  It has finite time and 
memory limits.

-- Matt Mahoney, [EMAIL PROTECTED]



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] A question on the symbol-system hypothesis

2006-11-16 Thread Matt Mahoney
Again, do not confuse the two compressions.

In paq8f (on which paq8hp5 is based) I use lossy pattern recognition (like you 
describe, but at a lower level) to extract features to use as context for text 
prediction.  The lossless compression is used to evaluate the quality of the 
prediction.
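
A toy illustration of that evaluation step (my own sketch, not the paq8f code):
a predictor assigns a probability to each next symbol given its context, and
the ideal lossless code length is the sum of -log2 of those probabilities, so
better prediction shows up directly as a smaller compressed size.

    # Score a predictor by its ideal lossless code length in bits.
    import math
    from collections import defaultdict

    def code_length_bits(text, order=1):
        counts = defaultdict(lambda: defaultdict(int))     # context -> symbol counts
        total_bits = 0.0
        for i, ch in enumerate(text):
            ctx = text[max(0, i - order):i]
            seen = counts[ctx]
            p = (seen[ch] + 1) / (sum(seen.values()) + 256)  # Laplace smoothing over bytes
            total_bits += -math.log2(p)                      # ideal arithmetic-code length
            seen[ch] += 1
        return total_bits

    print(code_length_bits("the cat sat on the mat; the cat sat on the mat"))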
 
-- Matt Mahoney, [EMAIL PROTECTED]

- Original Message 
From: James Ratcliff [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Thursday, November 16, 2006 1:41:41 PM
Subject: Re: [agi] A question on the symbol-system hypothesis

The main first subtitle: 
Compression is Equivalent to General Intelligence
Unless your definition of Compression is not the simple large amount of text 
turning into the small amount of text.
And likewise with General Intelligence.
I don't think, under any of the many many definitions I have seen or created, 
that text or a compressed thing can possibly be considered general 
intelligence.
Another way: data != knowledge != intelligence

Intelligence requires something else.  I would say an actor.

Now I would agree that highly compressed, lossless data could represent a 
good knowledge base.  Yeah that goes good.

But quite simply, a lossy one provides a Better knowledge base, with two 
examples:
1. Poison ivy causes an itching rash for most people
poison oak: The common effect is an irritating, itchy rash.
Can be generalized or combined to:
poison oak and poison ivy cause an itchy rash.
Which is shorter, and lossy, yet better for this fact.
2. If I see something in the road with four legs, and I'm about to run it over, 
if I only have rules that say if a deer or dog runs in the road, don't hit it,
then I can't correctly act, because I only know there is something with 4 legs 
in the road.  
However, if I have a generalized rule in my mind that says 
If something with four legs is in the road, avoid it, then I have a better 
rule.
This better rule cannot be gathered without generalization, and we have to have 
lots of generalization.

The generalizations can be invalidated with exceptions, and we do it all the 
time; that's how we can tell not to pet a skunk instead of a cat.

James Ratcliff


Matt Mahoney [EMAIL PROTECTED] wrote: Richard Loosemore wrote:
 5) I have looked at your paper and my feelings are exactly the same as 
 Mark's -- theorems developed on erroneous assumptions are worthless.

Which assumptions are erroneous?
 
-- Matt Mahoney, [EMAIL PROTECTED]

- Original Message 
From: Richard Loosemore 
To: agi@v2.listbox.com
Sent: Wednesday, November 15, 2006 4:09:23 PM
Subject: Re: [agi] A question on the symbol-system hypothesis

Matt Mahoney wrote:
 Richard, what is your definition of understanding?  How would you test 
 whether a person understands art?
 
 Turing offered a behavioral test for intelligence.  My understanding of 
 understanding is that it is something that requires intelligence.  The 
 connection between intelligence and compression is not obvious.  I have 
 summarized the arguments here.
 http://cs.fit.edu/~mmahoney/compression/rationale.html

1) There will probably never be a compact definition of understanding. 
  Nevertheless, it is possible for us (being understanding systems) to 
know some of its features.  I could produce a shopping list of typical 
features of understanding, but that would not be the same as a 
definition, so I will not.  See my paper in the forthcoming proceedings 
of the 2006 AGIRI workshop, for arguments.  (I will make a version of 
this available this week, after final revisions).

3) One tiny, almost-too-obvious-to-be-worth-stating fact about 
understanding is that it compresses information in order to do its job.

4) To mistake this tiny little facet of understanding for the whole is 
to say that a hurricane IS rotation, rather than that rotation is a 
facet of what a hurricane is.

5) I have looked at your paper and my feelings are exactly the same as 
Mark's -- theorems developed on erroneous assumptions are worthless.



Richard Loosemore


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303



___
James Ratcliff - http://falazar.com
New Torrent Site, Has TV and Movie Downloads! 
http://www.falazar.com/projects/Torrents/tvtorrents_show.php 


This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] A question on the symbol-system hypothesis

2006-11-16 Thread Mark Waser
My point is that humans make decisions based on millions of facts, and we 
do this every second.


Not!  Humans make decisions based upon a very small number of pieces of 
knowledge (possibly compiled from large numbers of *very* redundant data). 
Further, these facts are generally arranged somewhat pyramidally.  Humans do 
not consider *anything* other than this small number of facts.


Every fact depends on other facts.  The chain of reasoning covers the 
entire knowledge base.


True but entirely irrelevant.  Living in the world dictates the vast, vast 
majority of that knowledge base.  If that were not true, we could not 
communicate with one another the way in which we do.  For the purposes of 
human decision-making, I would argue that *at most* humans use the facts 
that they subsequently use to justify their decision.


I said millions, but we really don't know.  This is an important 
number.  Historically we have tended to underestimate it.


Assuming now that you mean facts (and not algorithms) that we need in our 
knowledge base, I don't have any problem with this number.


If the number is small, then we *can* follow the reasoning, make changes 
to the knowledge base and predict the outcome (provided the 
representation is transparent and accessible through a formal language).


And even if the number is very large, then we *can* follow the reasoning, 
make changes to the knowledge base and predict the outcome (provided the 
representation is transparent and accessible).



But this leads us down a false path.


How so?  The problem with previous systems is that they were small and then 
expected to be able to generalize correctly to cases that it was unreasonable 
to expect them to cover.  And, in particular, I believe that no one has yet 
approached the number and breadth of algorithms/methods that you need to 
have for a general intelligence -- particularly since I hesitate to believe 
that there is a system with more than 100 truly different algorithms 
(meaning separately coded and not automatically generated from underlying 
algorithms and data).


We are not so smart that we can build a machine smarter than us, and 
still be smarter than it.


Smart is not equivalent to algorithmic complexity, and this is a 
nonsensically nasty and incorrect rephrasing into a paradox solely designed to 
win an argument.  Try to keep it civil, will you?


Either the AGI has more algorithmic complexity than you do, or it has 
less.


Wrong.  It has exactly the same algorithmic complexity (i.e. it can build to 
any necessary arbitrary value as can any human).  Now what does that do to 
your arguments?



you will exhaust the memory in your brain before you finish


Huh?  Aren't I allowed writing? Computers?  I have effectively infinite 
memory (when you consider how much I can actually use at one time).  Don't 
you?






- Original Message - 
From: Matt Mahoney [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Thursday, November 16, 2006 3:51 PM
Subject: Re: [agi] A question on the symbol-system hypothesis


My point is that humans make decisions based on millions of facts, and we do 
this every second.  Every fact depends on other facts.  The chain of 
reasoning covers the entire knowledge base.


I said millions, but we really don't know.  This is an important number. 
Historically we have tended to underestimate it.  If the number is small, 
then we *can* follow the reasoning, make changes to the knowledge base and 
predict the outcome (provided the representation is transparent and 
accessible through a formal language).  But this leads us down a false path.


We are not so smart that we can build a machine smarter than us, and still 
be smarter than it.  Either the AGI has more algorithmic complexity than you 
do, or it has less.  If it has less, then you have failed.  If it has more, 
and you try to explore the chain of reasoning, you will exhaust the memory 
in your brain before you finish.



-- Matt Mahoney, [EMAIL PROTECTED]

- Original Message 
From: Mark Waser [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Thursday, November 16, 2006 3:16:54 PM
Subject: Re: [agi] A question on the symbol-system hypothesis

I consider the last question in each of your examples to be unreasonable
(though for very different reasons).

In the first case, What do you see? is a nonsensical and unnecessary
extension on a rational chain of logic.  The visual subsystem, which is not
part of the AGI, has reported something and, unless there is a good reason
not to, the AGI should believe it as a valid fact and the root of a
knowledge chain.  Extending past this point to ask a spurious, open question
is silly.  Doing so is entirely unnecessary.  This knowledge chain is
isolated.

In the second case, I don't know why you're doing any sort of search
(particularly since there wasn't any sort of question preceding it).  The AI
needed gas, it found a gas station, and it headed for it.  You asked why it
waited til a given

Re: [agi] A question on the symbol-system hypothesis

2006-11-15 Thread Richard Loosemore

Matt Mahoney wrote:

I will try to answer several posts here. I said that the knowledge
base of an AGI must be opaque because it has 10^9 bits of information,
which is more than a person can comprehend. By opaque, I mean that you
can't do any better by examining or modifying the internal
representation than you could by examining or modifying the training
data. For a text based AI with natural language ability, the 10^9 bits
of training data would be about a gigabyte of text, about 1000 books. Of
course you can sample it, add to it, edit it, search it, run various
tests on it, and so on. What you can't do is read, write, or know all of
it. There is no internal representation that you could convert it to
that would allow you to do these things, because you still have 10^9
bits of information. It is a limitation of the human brain that it can't
store more information than this.


Understanding 10^9 bits of information is not the same as storing 10^9 
bits of information.


A typical painting in the Louvre might be 1 meter on a side.  At roughly 
16 pixels per millimeter, and a perceivable color depth of about 20 bits 
that would be about 10^8 bits.  If an art specialist knew all about, 
say, 1000 paintings in the Louvre, that specialist would understand a 
total of about 10^11 bits.


You might be inclined to say that not all of those bits count, that many 
are redundant to understanding.


Exactly.

People can easily comprehend 10^9 bits.  It makes no sense to argue 
about degree of comprehension by quoting numbers of bits.



Richard Loosemore

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] A question on the symbol-system hypothesis

2006-11-15 Thread Mark Waser

Mark Waser wrote:
  Given sufficient time, anything  should be able to be understood and 
debugged.

Give me *one* counter-example to  the above . . . .

Matt Mahoney replied:
Google.  You cannot predict the results of a search.  It does not help 
that you have full access to the Internet.  It would not help even if 
Google gave you full access to their server.


This is simply not correct.  Google uses a single non-random algorithm 
against a database to determine what results it returns.  As long as you 
don't update the database, the same query will return the exact same results 
and, with knowledge of the algorithm, looking at the database manually will 
also return the exact same results.


Full access to the Internet is a red herring.  Access to Google's database 
at the time of the query will give the exact precise answer.  This is also 
exactly analogous to an AGI, since access to the AGI's internal state will 
explain the AGI's decision (with appropriate caveats for systems that 
deliberately introduce randomness -- i.e. when the probability is 60/40, the 
AGI flips a weighted coin -- but even in those cases, the answer will still 
be of the form that the AGI ended up with a 60% probability of X and 40% 
probability of Y and the weighted coin landed on the 40% side).
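
A small sketch of that last point (the probabilities and seed below are
arbitrary): if the probabilities and the random draw are recorded, the
weighted-coin decision can be reconstructed and explained after the fact.

    # Record everything needed to explain a stochastic choice afterwards.
    import random

    def weighted_choice(options, probs, seed):
        rng = random.Random(seed)        # fixed seed -> reproducible draw
        r = rng.random()
        cumulative = 0.0
        for opt, p in zip(options, probs):
            cumulative += p
            if r <= cumulative:
                return {"choice": opt, "probs": dict(zip(options, probs)),
                        "draw": r, "seed": seed}
        return {"choice": options[-1], "probs": dict(zip(options, probs)),
                "draw": r, "seed": seed}

    trace = weighted_choice(["X", "Y"], [0.6, 0.4], seed=42)
    print(trace)   # the stored trace *is* the explanation of the decision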


When we build AGI, we will understand it the way we understand Google. 
We know how a search engine works.  We will understand how learning 
works.  But we will not be able to predict or control what we build, even 
if we poke inside.


I agree with your first three statements but again, the fourth is simply not 
correct (as well as a blatant invitation to UFAI).  Google currently 
exercises numerous forms of control over their search engine.  It is known 
that they do successfully exclude sites (for visibly trying to game 
PageRank, etc.).  They constantly tweak their algorithms to change/improve 
the behavior and results.  Note also that there is a huge difference between 
saying that something is/can be exactly controlled (or able to be exactly 
predicted without knowing its exact internal state) and that something's 
behavior is bounded (i.e. that you can be sure that something *won't* 
happen -- like all of the air in a room suddenly deciding to occupy only 
half the room).  No complex and immense system is precisely controlled but 
many complex and immense systems are easily bounded.


- Original Message - 
From: Matt Mahoney [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Tuesday, November 14, 2006 10:34 PM
Subject: Re: [agi] A question on the symbol-system hypothesis


I will try to answer several posts here.  I said that the knowledge base of 
an AGI must be opaque because it has 10^9 bits of information, which is more 
than a person can comprehend.  By opaque, I mean that you can't do any 
better by examining or modifying the internal representation than you could 
by examining or modifying the training data.  For a text based AI with 
natural language ability, the 10^9 bits of training data would be about a 
gigabyte of text, about 1000 books.  Of course you can sample it, add to it, 
edit it, search it, run various tests on it, and so on.  What you can't do 
is read, write, or know all of it.  There is no internal representation that 
you could convert it to that would allow you to do these things, because you 
still have 10^9 bits of information.  It is a limitation of the human brain 
that it can't store more information than this.


It doesn't matter if you agree with the number 10^9 or not.  Whatever the 
number, either the AGI stores less information than the brain, in which case 
it is not AGI, or it stores more, in which case you can't know everything it 
does.



Mark Waser wrote:

I certainly don't buy the mystical approach that says that  sufficiently 
large neural nets will come up with sufficiently complex  discoveries that 
we can't understand them.




James Ratcliff wrote:

Having looked at the neural network type AI algorithms, I don't see any 
fathomable way that that type of architecture could

create a full AGI by itself.




Nobody has created an AGI yet.  Currently the only working model of 
intelligence we have is based on neural networks.  Just because we can't 
understand it doesn't mean it is wrong.


James Ratcliff wrote:


Also it is a critical task for expert systems to explain why they are
doing what they are doing, and for business applications,
I for one am
not going to blindly trust what the AI says, without a little background.

I expect this ability to be part of a natural language model.  However, any 
explanation will be based on the language model, not the internal workings 
of the knowledge representation.  That remains opaque.  For example:


Q: Why did you turn left here?
A: Because I need gas.

There is no need to explain that there is an opening in the traffic, that 
you can see a place where you can turn left without going off the road, that 
the gas gauge reads E

Re: [agi] A question on the symbol-system hypothesis

2006-11-15 Thread Mark Waser

Matt,

I would also note that you continue not to understand the difference between 
knowledge and data and contend that your 10^9 number is both entirely 
spurious and incorrect besides.  I've read far more than 1,000 books.  I retain 
the vast majority of the *knowledge* in those books.  I can't reproduce 
those books word for word by memory but that's not what intelligence is 
about AT ALL.


 It doesn't matter if you agree with the number 10^9 or not.  Whatever the 
number, either the AGI stores less information than the brain, in which 
case it is not AGI, or it stores more, in which case you can't know 
everything it does.


Information storage also has absolutely nothing to do with AGI (other than 
the fact that there probably is a minimum below which AGI can't fit).  I 
know that my brain has far more information than is necessary for AGI (so 
the first part of your last statement is wrong).  Further, I don't need to 
store everything that you know -- particularly if I have access to outside 
resources.  My brain doesn't store all of the information in a phone book 
yet, effectively, I have total use of all of that information.  Similarly, 
an AGI doesn't need to store 100% of the information that it uses.  It 
simply needs to know where to find it upon need and how to use it.


- Original Message - 
From: Matt Mahoney [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Tuesday, November 14, 2006 10:34 PM
Subject: Re: [agi] A question on the symbol-system hypothesis


I will try to answer several posts here.  I said that the knowledge base of 
an AGI must be opaque because it has 10^9 bits of information, which is more 
than a person can comprehend.  By opaque, I mean that you can't do any 
better by examining or modifying the internal representation than you could 
by examining or modifying the training data.  For a text based AI with 
natural language ability, the 10^9 bits of training data would be about a 
gigabyte of text, about 1000 books.  Of course you can sample it, add to it, 
edit it, search it, run various tests on it, and so on.  What you can't do 
is read, write, or know all of it.  There is no internal representation that 
you could convert it to that would allow you to do these things, because you 
still have 10^9 bits of information.  It is a limitation of the human brain 
that it can't store more information than this.


It doesn't matter if you agree with the number 10^9 or not.  Whatever the 
number, either the AGI stores less information than the brain, in which case 
it is not AGI, or it stores more, in which case you can't know everything it 
does.



Mark Waser wrote:

I certainly don't buy the mystical approach that says that  sufficiently 
large neural nets will come up with sufficiently complex  discoveries that 
we can't understand them.




James Ratcliff wrote:

Having looked at the neural network type AI algorithms, I don't see any 
fathomable way that that type of architecture could

create a full AGI by itself.




Nobody has created an AGI yet.  Currently the only working model of 
intelligence we have is based on neural networks.  Just because we can't 
understand it doesn't mean it is wrong.


James Ratcliff wrote:


Also it is a critical task for expert systems to explain why they are

doing what they are doing, and for business application,

I for one am

not going to blindly trust what the AI says, without a little background.

I expect this ability to be part of a natural language model.  However, any 
explanation will be based on the language model, not the internal workings 
of the knowledge representation.  That remains opaque.  For example:


Q: Why did you turn left here?
A: Because I need gas.

There is no need to explain that there is an opening in the traffic, that 
you can see a place where you can turn left without going off the road, that 
the gas gauge reads E, and that you learned that turning the steering 
wheel counterclockwise makes the car turn left, even though all of this is 
part of the thought process.  The language model is responsible for knowing 
that you already know this.  There is no need either (or even the ability) 
to explain the sequence of neuron firings from your eyes to your arm 
muscles.


and this is one of the requirements for the Project Halo contest (took and 
passed the AP chemistry exam)

http://www.projecthalo.com/halotempl.asp?cid=30


This is a perfect example of why a transparent KR does not scale.  The 
expert system described was coded from 70 pages of a chemistry textbook in 
28 person-months.  Assuming 1K bits per page, this is a rate of 4 minutes 
per bit, or 2500 times slower than transmitting the same knowledge as 
natural language.


Mark Waser wrote:
  Given sufficient time, anything  should be able to be understood and 
debugged.

...

Give me *one* counter-example to  the above . . . .



Google.  You cannot predict the results of a search.  It does not help that 
you have full access

Re: [agi] A question on the symbol-system hypothesis

2006-11-15 Thread Matt Mahoney
Richard Loosemore [EMAIL PROTECTED] wrote:
Understanding 10^9 bits of information is not the same as storing 10^9 
bits of information.

That is true.  Understanding n bits is the same as compressing some larger 
training set that has an algorithmic complexity of n bits.  Once you have done 
this, you can use your probability model to make predictions about unseen data 
generated by the same (unknown) Turing machine as the training data.  The 
closer to n you can compress, the better your predictions will be.
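
A minimal Python sketch of the compression/prediction link described above (the 
toy bigram model and strings are illustrative assumptions, not anyone's actual 
language model): a model assigns each symbol a code length of -log2 p(symbol | 
context), so a model that compresses well also predicts well.

import math
from collections import Counter, defaultdict

train = "the cat sat on the mat. the cat ate the rat."
test = "the rat sat on the mat."

# Order-1 (bigram) character model with add-one smoothing.
alphabet = sorted(set(train + test))
counts = defaultdict(Counter)
for a, b in zip(train, train[1:]):
    counts[a][b] += 1

def prob(ctx, ch):
    c = counts[ctx]
    return (c[ch] + 1) / (sum(c.values()) + len(alphabet))

# Code length under the model = -log2 p; averaging it gives bits per character.
bits = sum(-math.log2(prob(a, b)) for a, b in zip(test, test[1:]))
print(round(bits / (len(test) - 1), 2), "bits/char on unseen text")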

I am not sure what it means to understand a painting, but let's say that you 
understand art if you can identify the artists of paintings you haven't seen 
before with better accuracy than random guessing.  The relevant quantity of 
information is not the number of pixels and resolution, which depend on the 
limits of the eye, but the (much smaller) number of features that the high 
level perceptual centers of the brain are capable of distinguishing and storing 
in memory.  (Experiments by Standing and Landauer suggest it is a few bits per 
second for long term memory, the same rate as language).  Then you guess the 
shortest program that generates a list of feature-artist pairs consistent with 
your knowledge of art and use it to predict artists given new features.

My estimate of 10^9 bits for a language model is based on 4 lines of evidence, 
one of which is the amount of language you process in a lifetime.  This is a 
rough estimate of course.  I estimate 1 GB (8 x 10^9 bits) compressed to 1 bpc 
(Shannon) and assume you remember a significant fraction of that.
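
A back-of-the-envelope version of that estimate in Python (the lifetime, waking 
hours, and bit rates are the rough figures cited in this thread, not 
measurements):

gigabyte_of_text_chars = 1e9              # ~1000 books
bits_per_char = 1                         # Shannon's ~1 bit per character for English
language_bits = gigabyte_of_text_chars * bits_per_char   # ~1e9 bits

waking_seconds = 16 * 3600 * 365 * 50     # ~1e9 seconds over ~50 years
memory_bits = 2 * waking_seconds          # Landauer's "few bits per second" into long-term memory
print(f"{language_bits:.0e} vs {memory_bits:.0e} bits")  # both on the order of 10^9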

Landauer, Tom (1986), “How much do people
remember?  Some estimates of the quantity
of learned information in long term memory”, Cognitive Science (10) pp. 477-493

Shannon, Claude E. (1950), “Prediction and
Entropy of Printed English”, Bell Sys. Tech. J. (30) pp. 50-64.  

Standing, L. (1973), “Learning 10,000 Pictures”,
Quarterly Journal of Experimental Psychology (25) pp. 207-222.



-- Matt Mahoney, [EMAIL PROTECTED]

- Original Message 
From: Richard Loosemore [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Wednesday, November 15, 2006 9:33:04 AM
Subject: Re: [agi] A question on the symbol-system hypothesis

Matt Mahoney wrote:
 I will try to answer several posts here. I said that the knowledge
 base of an AGI must be opaque because it has 10^9 bits of information,
 which is more than a person can comprehend. By opaque, I mean that you
 can't do any better by examining or modifying the internal
 representation than you could by examining or modifying the training
 data. For a text based AI with natural language ability, the 10^9 bits
 of training data would be about a gigabyte of text, about 1000 books. Of
 course you can sample it, add to it, edit it, search it, run various
 tests on it, and so on. What you can't do is read, write, or know all of
 it. There is no internal representation that you could convert it to
 that would allow you to do these things, because you still have 10^9
 bits of information. It is a limitation of the human brain that it can't
 store more information than this.

Understanding 10^9 bits of information is not the same as storing 10^9 
bits of information.

A typical painting in the Louvre might be 1 meter on a side.  At roughly 
16 pixels per millimeter, and a perceivable color depth of about 20 bits 
that would be about 10^8 bits.  If an art specialist knew all about, 
say, 1000 paintings in the Louvre, that specialist would understand a 
total of about 10^11 bits.

You might be inclined to say that not all of those bits count, that many 
are redundant to understanding.

Exactly.

People can easily comprehend 10^9 bits.  It makes no sense to argue 
about degree of comprehension by quoting numbers of bits.


Richard Loosemore

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] A question on the symbol-system hypothesis

2006-11-15 Thread Matt Mahoney
Sorry if I did not make clear the distinction between knowing the learning 
algorithm for AGI (which we can do) and knowing what was learned (which we 
can't).

My point about Google is to illustrate that distinction.  The Google database 
is about 10^14 bits.  (It keeps a copy of the searchable part of the Internet 
in RAM).  The algorithm is deterministic.  You could, in principle, model the 
Google server in a more powerful machine and use it to predict the result of a 
search.  But where does this get you?  You can't predict the result of the 
simulation any more than you could predict the result of the query you are 
simulating.  In practice the human brain has finite limits just like any other 
computer.

My point about AGI is that constructing an internal representation that allows 
debugging the learned knowledge is pointless.  A more powerful AGI could do it, 
but you can't.  You can't do any better than to manipulate the input and 
observe the output.  If you tell your robot to do something and it sits in a 
corner instead, you can't do any better than to ask it why, hope for a sensible 
answer, and retrain it.  Trying to debug the reasoning for its behavior would 
be like trying to understand why a driver made a left turn by examining the 
neural firing patterns in the driver's brain.
 
-- Matt Mahoney, [EMAIL PROTECTED]

- Original Message 
From: Mark Waser [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Wednesday, November 15, 2006 9:39:14 AM
Subject: Re: [agi] A question on the symbol-system hypothesis

Mark Waser wrote:
   Given sufficient time, anything  should be able to be understood and 
 debugged.
 Give me *one* counter-example to  the above . . . .
Matt Mahoney replied:
 Google.  You cannot predict the results of a search.  It does not help 
 that you have full access to the Internet.  It would not help even if 
 Google gave you full access to their server.

This is simply not correct.  Google uses a single non-random algorithm 
against a database to determine what results it returns.  As long as you 
don't update the database, the same query will return the exact same results 
and, with knowledge of the algorithm, looking at the database manually will 
also return the exact same results.
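
A toy Python illustration of that determinism point (a sketch only; this is 
obviously not Google's ranking algorithm, just some deterministic one over a 
fixed index):

index = {
    "doc1": "the symbol system hypothesis",
    "doc2": "neural networks and compression",
    "doc3": "symbol grounding and neural systems",
}

def search(query):
    terms = query.lower().split()
    scored = [(sum(t in text for t in terms), doc) for doc, text in index.items()]
    # Deterministic ranking with a deterministic tie-break on document id.
    return [doc for score, doc in sorted(scored, key=lambda s: (-s[0], s[1])) if score > 0]

# Same index + same query = same results, every time; and by stepping through
# search() by hand against the index you get exactly what the code returns.
assert search("neural symbol") == search("neural symbol")
print(search("neural symbol"))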

Full access to the Internet is a red herring.  Access to Google's database 
at the time of the query will give the exact answer.  This is also 
exactly analogous to an AGI since access to the AGI's internal state will 
explain the AGI's decision (with appropriate caveats for systems that 
deliberately introduce randomness -- i.e. when the probability is 60/40, the 
AGI flips a weighted coin -- but even in those cases, the answer will still 
be of the form that the AGI ended up with a 60% probability of X and 40% 
probability of Y and the weighted coin landed on the 40% side).

 When we build AGI, we will understand it the way we understand Google. 
 We know how a search engine works.  We will understand how learning 
 works.  But we will not be able to predict or control what we build, even 
 if we poke inside.

I agree with your first three statements but again, the fourth is simply not 
correct (as well as a blatant invitation to UFAI).  Google currently 
exercises numerous forms of control over their search engine.  It is known 
that they do successfully exclude sites (for visibly trying to game 
PageRank, etc.).  They constantly tweak their algorithms to change/improve 
the behavior and results.  Note also that there is a huge difference between 
saying that something is/can be exactly controlled (or able to be exactly 
predicted without knowing its exact internal state) and that something's 
behavior is bounded (i.e. that you can be sure that something *won't* 
happen -- like all of the air in a room suddenly deciding to occupy only 
half the room).  No complex and immense system is precisely controlled but 
many complex and immense systems are easily bounded.

- Original Message - 
From: Matt Mahoney [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Tuesday, November 14, 2006 10:34 PM
Subject: Re: [agi] A question on the symbol-system hypothesis


I will try to answer several posts here.  I said that the knowledge base of 
an AGI must be opaque because it has 10^9 bits of information, which is more 
than a person can comprehend.  By opaque, I mean that you can't do any 
better by examining or modifying the internal representation than you could 
by examining or modifying the training data.  For a text based AI with 
natural language ability, the 10^9 bits of training data would be about a 
gigabyte of text, about 1000 books.  Of course you can sample it, add to it, 
edit it, search it, run various tests on it, and so on.  What you can't do 
is read, write, or know all of it.  There is no internal representation that 
you could convert it to that would allow you to do these things, because you 
still have 10^9 bits of information.  It is a limitation of the human brain

Re: [agi] A question on the symbol-system hypothesis

2006-11-15 Thread Richard Loosemore

Matt Mahoney wrote:

Richard Loosemore [EMAIL PROTECTED] wrote:
Understanding 10^9 bits of information is not the same as storing 10^9 
bits of information.


That is true.  Understanding n bits is the same as compressing some larger 
training set that has an algorithmic complexity of n bits.  Once you have done this, you 
can use your probability model to make predictions about unseen data generated by the 
same (unknown) Turing machine as the training data.  The closer to n you can compress, 
the better your predictions will be.

I am not sure what it means to understand a painting, but let's say that you 
understand art if you can identify the artists of paintings you haven't seen before with 
better accuracy than random guessing.  The relevant quantity of information is not the 
number of pixels and resolution, which depend on the limits of the eye, but the (much 
smaller) number of features that the high level perceptual centers of the brain are 
capable of distinguishing and storing in memory.  (Experiments by Standing and Landauer 
suggest it is a few bits per second for long term memory, the same rate as language).  
Then you guess the shortest program that generates a list of feature-artist pairs 
consistent with your knowledge of art and use it to predict artists given new features.

My estimate of 10^9 bits for a language model is based on 4 lines of evidence, 
one of which is the amount of language you process in a lifetime.  This is a 
rough estimate of course.  I estimate 1 GB (8 x 10^9 bits) compressed to 1 bpc 
(Shannon) and assume you remember a significant fraction of that.


Matt,

So long as you keep redefining understand to mean something 
trivial (or at least, something different in different circumstances), 
all you do is reinforce the point I was trying to make.


In your definition of understanding in the context of art, above, you 
specifically choose an interpretation that enables you to pick a 
particular bit rate.  But if I chose a different interpretation (and I 
certainly would - an art historian would never say they understood a 
painting just because they could tell the artist's style better than a 
random guess!), I might come up with a different bit rate.  And if I 
chose a sufficiently subtle concept of understand, I would be unable 
to come up with *any* bit rate, because that concept of understand 
would not lend itself to any easy bit rate analysis.


The lesson?  Talking about bits and bit rates is completely pointless 
 which was my point.


You mainly identify the meaning of understand as a variant of the 
meaning of compress.  I completely reject this - this is the most 
idiotic development in AI research since the early attempts to do 
natural language translation using word-by-word lookup tables  -  and I 
challenge you to say why anyone could justify reducing the term in such 
an extreme way.  Why have you thrown out the real meaning of 
understand and substituted another meaning?  What have we gained by 
dumbing the concept down?


As I said previously, this is as crazy as redefining the complex 
concept of happiness to be a warm puppy.



Richard Loosemore




Landauer, Tom (1986), “How much do people
remember?  Some estimates of the quantity
of learned information in long term memory”, Cognitive Science (10) pp. 477-493

Shannon, Claude E. (1950), “Prediction and
Entropy of Printed English”, Bell Sys. Tech. J. (30) pp. 50-64.  


Standing, L. (1973), “Learning 10,000 Pictures”,
Quarterly Journal of Experimental Psychology (25) pp. 207-222.



-- Matt Mahoney, [EMAIL PROTECTED]

- Original Message 
From: Richard Loosemore [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Wednesday, November 15, 2006 9:33:04 AM
Subject: Re: [agi] A question on the symbol-system hypothesis

Matt Mahoney wrote:

I will try to answer several posts here. I said that the knowledge
base of an AGI must be opaque because it has 10^9 bits of information,
which is more than a person can comprehend. By opaque, I mean that you
can't do any better by examining or modifying the internal
representation than you could by examining or modifying the training
data. For a text based AI with natural language ability, the 10^9 bits
of training data would be about a gigabyte of text, about 1000 books. Of
course you can sample it, add to it, edit it, search it, run various
tests on it, and so on. What you can't do is read, write, or know all of
it. There is no internal representation that you could convert it to
that would allow you to do these things, because you still have 10^9
bits of information. It is a limitation of the human brain that it can't
store more information than this.


Understanding 10^9 bits of information is not the same as storing 10^9 
bits of information.


A typical painting in the Louvre might be 1 meter on a side.  At roughly 
16 pixels per millimeter, and a perceivable color depth of about 20 bits 
that would be about 10^8 bits.  If an art specialist

Re: [agi] A question on the symbol-system hypothesis

2006-11-15 Thread Mark Waser
You're drifting off topic . . . .  Let me remind you of the flow of the 
conversation.

You said:
 Models that are simple enough to debug are too simple to scale. 
 The contents of a knowledge base for AGI will be beyond our ability to 
 comprehend.

I said:
 Given sufficient time, anything should be able to be understood and 
 debugged.
 Give me *one* counter-example to the above . . . . 

You said:
 Google.  You cannot predict the results of a search. 
and
 It would not help even if Google gave you full access to their server.

I said:
This is simply not correct.  Google uses a single non-random algorithm 
against a database to determine what results it returns.  As long as you 
don't update the database, the same query will return the exact same results 
and, with knowledge of the algorithm, looking at the database manually will 
also return the exact same results.

You are now changing the argument from your quote "You cannot predict the 
results of a search ... even if Google gave you full access to their server" to 
now saying that you can't know what was learned (which I also believe is incorrect 
but will debate in the next e-mail).

Are you conceding that you can predict the results of a Google search?
Are you now conceding that it is not true that "Models that are simple enough 
to debug are too simple to scale"?
And, if the former but not the latter, would you care to attempt to offer 
another counter-example or would you prefer to retract your initial statements?


- Original Message - 
From: Matt Mahoney [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Wednesday, November 15, 2006 2:24 PM
Subject: Re: [agi] A question on the symbol-system hypothesis


Sorry if I did not make clear the distinction between knowing the learning 
algorithm for AGI (which we can do) and knowing what was learned (which we 
can't).

My point about Google is to illustrate that distinction.  The Google database 
is about 10^14 bits.  (It keeps a copy of the searchable part of the Internet 
in RAM).  The algorithm is deterministic.  You could, in principle, model the 
Google server in a more powerful machine and use it to predict the result of a 
search.  But where does this get you?  You can't predict the result of the 
simulation any more than you could predict the result of the query you are 
simulating.  In practice the human brain has finite limits just like any other 
computer.

My point about AGI is that constructing an internal representation that allows 
debugging the learned knowledge is pointless.  A more powerful AGI could do it, 
but you can't.  You can't do any better than to manipulate the input and 
observe the output.  If you tell your robot to do something and it sits in a 
corner instead, you can't do any better than to ask it why, hope for a sensible 
answer, and retrain it.  Trying to debug the reasoning for its behavior would 
be like trying to understand why a driver made a left turn by examining the 
neural firing patterns in the driver's brain.
 
-- Matt Mahoney, [EMAIL PROTECTED]

- Original Message 
From: Mark Waser [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Wednesday, November 15, 2006 9:39:14 AM
Subject: Re: [agi] A question on the symbol-system hypothesis

Mark Waser wrote:
   Given sufficient time, anything  should be able to be understood and 
 debugged.
 Give me *one* counter-example to  the above . . . .
Matt Mahoney replied:
 Google.  You cannot predict the results of a search.  It does not help 
 that you have full access to the Internet.  It would not help even if 
 Google gave you full access to their server.

This is simply not correct.  Google uses a single non-random algorithm 
against a database to determine what results it returns.  As long as you 
don't update the database, the same query will return the exact same results 
and, with knowledge of the algorithm, looking at the database manually will 
also return the exact same results.

Full access to the Internet is a red herring.  Access to Google's database 
at the time of the query will give the exact precise answer.  This is also, 
exactly analogous to an AGI since access to the AGI's internal state will 
explain the AGI's decision (with appropriate caveats for systems that 
deliberately introduce randomness -- i.e. when the probability is 60/40, the 
AGI flips a weighted coin -- but in even those cases, the answer will still 
be of the form that the AGI ended up with a 60% probability of X and 40% 
probability of Y and the weighted coin landed on the 40% side).

 When we build AGI, we will understand it the way we understand Google. 
 We know how a search engine works.  We will understand how learning 
 works.  But we will not be able to predict or control what we build, even 
 if we poke inside.

I agree with your first three statements but again, the fourth is simply not 
correct (as well as a blatant invitation to UFAI).  Google currently 
exercises numerous forms of control over their search engine

Re: [agi] A question on the symbol-system hypothesis

2006-11-15 Thread Matt Mahoney
Richard, what is your definition of understanding?  How would you test 
whether a person understands art?

Turing offered a behavioral test for intelligence.  My understanding of 
understanding is that it is something that requires intelligence.  The 
connection between intelligence and compression is not obvious.  I have 
summarized the arguments here.
http://cs.fit.edu/~mmahoney/compression/rationale.html
 
-- Matt Mahoney, [EMAIL PROTECTED]

- Original Message 
From: Richard Loosemore [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Wednesday, November 15, 2006 2:38:49 PM
Subject: Re: [agi] A question on the symbol-system hypothesis

Matt Mahoney wrote:
 Richard Loosemore [EMAIL PROTECTED] wrote:
 Understanding 10^9 bits of information is not the same as storing 10^9 
 bits of information.
 
 That is true.  Understanding n bits is the same as compressing some larger 
 training set that has an algorithmic complexity of n bits.  Once you have 
 done this, you can use your probability model to make predictions about 
 unseen data generated by the same (unknown) Turing machine as the training 
 data.  The closer to n you can compress, the better your predictions will be.
 
 I am not sure what it means to understand a painting, but let's say that 
 you understand art if you can identify the artists of paintings you haven't 
 seen before with better accuracy than random guessing.  The relevant quantity 
 of information is not the number of pixels and resolution, which depend on 
 the limits of the eye, but the (much smaller) number of features that the 
 high level perceptual centers of the brain are capable of distinguishing and 
 storing in memory.  (Experiments by Standing and Landauer suggest it is a few 
 bits per second for long term memory, the same rate as language).  Then you 
 guess the shortest program that generates a list of feature-artist pairs 
 consistent with your knowledge of art and use it to predict artists given new 
 features.
 
 My estimate of 10^9 bits for a language model is based on 4 lines of 
 evidence, one of which is the amount of language you process in a lifetime.  
 This is a rough estimate of course.  I estimate 1 GB (8 x 10^9 bits) 
 compressed to 1 bpc (Shannon) and assume you remember a significant fraction 
 of that.

Matt,

So long as you keep redefining understand to mean something 
trivial (or at least, something different in different circumstances), 
all you do is reinforce the point I was trying to make.

In your definition of understanding in the context of art, above, you 
specifically choose an interpretation that enables you to pick a 
particular bit rate.  But if I chose a different interpretation (and I 
certainly would - an art historian would never say they understood a 
painting just because they could tell the artist's style better than a 
random guess!), I might come up with a different bit rate.  And if I 
chose a sufficiently subtle concept of understand, I would be unable 
to come up with *any* bit rate, because that concept of understand 
would not lend itself to any easy bit rate analysis.

The lesson?  Talking about bits and bit rates is completely pointless 
 which was my point.

You mainly identify the meaning of understand as a variant of the 
meaning of compress.  I completely reject this - this is the most 
idiotic development in AI research since the early attempts to do 
natural language translation using word-by-word lookup tables  -  and I 
challenge you to say why anyone could justify reducing the term in such 
an extreme way.  Why have you thrown out the real meaning of 
understand and substituted another meaning?  What have we gained by 
dumbing the concept down?

As I said previously, this is as crazy as redefining the complex 
concept of happiness to be a warm puppy.


Richard Loosemore



 Landauer, Tom (1986), “How much do people
 remember?  Some estimates of the quantity
 of learned information in long term memory”, Cognitive Science (10) pp. 
 477-493
 
 Shannon, Claude E. (1950), “Prediction and
 Entropy of Printed English”, Bell Sys. Tech. J. (30) pp. 50-64.  
 
 Standing, L. (1973), “Learning 10,000 Pictures”,
 Quarterly Journal of Experimental Psychology (25) pp. 207-222.
 
 
 
 -- Matt Mahoney, [EMAIL PROTECTED]
 
 - Original Message 
 From: Richard Loosemore [EMAIL PROTECTED]
 To: agi@v2.listbox.com
 Sent: Wednesday, November 15, 2006 9:33:04 AM
 Subject: Re: [agi] A question on the symbol-system hypothesis
 
 Matt Mahoney wrote:
 I will try to answer several posts here. I said that the knowledge
 base of an AGI must be opaque because it has 10^9 bits of information,
 which is more than a person can comprehend. By opaque, I mean that you
 can't do any better by examining or modifying the internal
 representation than you could by examining or modifying the training
 data. For a text based AI with natural language ability, the 10^9 bits
 of training data would be about a gigabyte of text, about 1000

Re: [agi] A question on the symbol-system hypothesis

2006-11-15 Thread Mark Waser

It keeps a copy of the searchable part of the Internet in RAM


Sometimes I wonder why I argue with you when you throw around statements 
like this that are this massively incorrect.  Would you care to retract 
this?


You could, in principle, model the Google server in a more powerful 
machine and use it to predict the result of a search


What is this model the Google server BS?  Google search results are a 
*rat-simple* database query.  Building the database involves a much more 
sophisticated algorithm but its results are *entirely* predictable if you 
know the order of the sites that are going to be imported.  There is *NO* 
mystery or magic here.  It is all eminently debuggable if you know the 
initial conditions.


My point about AGI is that constructing an internal representation that 
allows debugging the learned knowledge is pointless.


Huh? This is absolutely ridiculous.  If the learned knowledge can't be 
debugged (either by you or by the AGI) then it's going to be *a lot* more 
difficult to unlearn/correct incorrect knowledge.  How can that possibly be 
pointless?  Not to mention the fact that teaching knowledge to others is 
much easier . . . .



A more powerful AGI could do it, but you can't.


Why can't I -- particularly if I were given infinite time (or even a 
moderately decent set of tools)?


You can't do any better than to manipulate the input and observe the 
output.


This is absolute and total BS and the last two sentences in your e-mail (If you 
tell your robot to do something and it sits in a corner instead, you can't 
do any better than to ask it why, hope for a sensible answer, and retrain 
it.  Trying to debug the reasoning for its behavior would be like trying to 
understand why a driver made a left turn by examining the neural firing 
patterns in the driver's brain.) are even worse.  The human brain *is* 
relatively opaque in its operation but there is no good reason that I know 
of why this is advantageous and *many* reasons why it is disadvantageous --  
and I know of no reasons why opacity is required for intelligence.



- Original Message - 
From: Matt Mahoney [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Wednesday, November 15, 2006 2:24 PM
Subject: Re: [agi] A question on the symbol-system hypothesis


Sorry if I did not make clear the distinction between knowing the learning 
algorithm for AGI (which we can do) and knowing what was learned (which we 
can't).


My point about Google is to illustrate that distinction.  The Google 
database is about 10^14 bits.  (It keeps a copy of the searchable part of 
the Internet in RAM).  The algorithm is deterministic.  You could, in 
principle, model the Google server in a more powerful machine and use it to 
predict the result of a search.  But where does this get you?  You can't 
predict the result of the simulation any more than you could predict the 
result of the query you are simulating.  In practice the human brain has 
finite limits just like any other computer.


My point about AGI is that constructing an internal representation that 
allows debugging the learned knowledge is pointless.  A more powerful AGI 
could do it, but you can't.  You can't do any better than to manipulate the 
input and observe the output.  If you tell your robot to do something and it 
sits in a corner instead, you can't do any better than to ask it why, hope 
for a sensible answer, and retrain it.  Trying to debug the reasoning for 
its behavior would be like trying to understand why a driver made a left 
turn by examining the neural firing patterns in the driver's brain.


-- Matt Mahoney, [EMAIL PROTECTED]

- Original Message 
From: Mark Waser [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Wednesday, November 15, 2006 9:39:14 AM
Subject: Re: [agi] A question on the symbol-system hypothesis

Mark Waser wrote:

  Given sufficient time, anything  should be able to be understood and
debugged.
Give me *one* counter-example to  the above . . . .

Matt Mahoney replied:

Google.  You cannot predict the results of a search.  It does not help
that you have full access to the Internet.  It would not help even if
Google gave you full access to their server.


This is simply not correct.  Google uses a single non-random algorithm
against a database to determine what results it returns.  As long as you
don't update the database, the same query will return the exact same results
and, with knowledge of the algorithm, looking at the database manually will
also return the exact same results.

Full access to the Internet is a red herring.  Access to Google's database
at the time of the query will give the exact precise answer.  This is also,
exactly analogous to an AGI since access to the AGI's internal state will
explain the AGI's decision (with appropriate caveats for systems that
deliberately introduce randomness -- i.e. when the probability is 60/40, the
AGI flips a weighted coin -- but in even those cases, the answer will still

Re: [agi] A question on the symbol-system hypothesis

2006-11-15 Thread Mark Waser

The connection between intelligence and compression is not obvious.


The connection between intelligence and compression *is* obvious -- but 
compression, particularly lossless compression, is clearly *NOT* 
intelligence.


Intelligence compresses knowledge to ever simpler rules because that is an 
effective way of dealing with the world.  Discarding ineffective/unnecessary 
knowledge to make way for more effective/necessary knowledge is an effective 
way of dealing with the world.  Blindly maintaining *all* knowledge at 
tremendous costs is *not* an effective way of dealing with the world (i.e. 
it is *not* intelligent).


1. What Hutter proved is that the optimal behavior of an agent is to guess 
that the environment is controlled by the shortest program that is 
consistent with all of the interaction observed so far.  The problem of 
finding this program is known as AIXI.
2. The general problem is not computable [11], although Hutter proved 
that if we assume time bounds t and space bounds l on the environment, 
then this restricted problem, known as AIXItl, can be solved in O(t·2^l) 
time


Very nice -- except that O(t·2^l) time is basically equivalent to incomputable 
for any real scenario.  Hutter's proof is useless because it relies upon the 
assumption that you have adequate resources (i.e. time) to calculate AIXI --  
which you *clearly* do not.  And like any other proof, once you invalidate 
the assumptions, the proof becomes equally invalid.  Except as an 
interesting but unobtainable edge case, why do you believe that Hutter has 
any relevance at all?
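
To put rough numbers on "basically equivalent to incomputable" (illustrative 
figures only):

t = 10**6                   # a modest time bound, in steps
for l in (20, 100, 300):    # environment description length in bits
    print(f"l={l}: ~{t * 2**l:.1e} operations")
# l=20 is already ~1e12; by l=300 the count (~2e96) exceeds the estimated
# number of atoms in the observable universe (~1e80) -- for a tiny environment.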



- Original Message - 
From: Matt Mahoney [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Wednesday, November 15, 2006 2:54 PM
Subject: Re: [agi] A question on the symbol-system hypothesis


Richard, what is your definition of understanding?  How would you test 
whether a person understands art?


Turing offered a behavioral test for intelligence.  My understanding of 
understanding is that it is something that requires intelligence.  The 
connection between intelligence and compression is not obvious.  I have 
summarized the arguments here.

http://cs.fit.edu/~mmahoney/compression/rationale.html

-- Matt Mahoney, [EMAIL PROTECTED]

- Original Message 
From: Richard Loosemore [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Wednesday, November 15, 2006 2:38:49 PM
Subject: Re: [agi] A question on the symbol-system hypothesis

Matt Mahoney wrote:

Richard Loosemore [EMAIL PROTECTED] wrote:

Understanding 10^9 bits of information is not the same as storing 10^9
bits of information.


That is true.  Understanding n bits is the same as compressing some 
larger training set that has an algorithmic complexity of n bits.  Once 
you have done this, you can use your probability model to make predictions 
about unseen data generated by the same (unknown) Turing machine as the 
training data.  The closer to n you can compress, the better your 
predictions will be.


I am not sure what it means to understand a painting, but let's say that 
you understand art if you can identify the artists of paintings you 
haven't seen before with better accuracy than random guessing.  The 
relevant quantity of information is not the number of pixels and 
resolution, which depend on the limits of the eye, but the (much smaller) 
number of features that the high level perceptual centers of the brain are 
capable of distinguishing and storing in memory.  (Experiments by Standing 
and Landauer suggest it is a few bits per second for long term memory, the 
same rate as language).  Then you guess the shortest program that 
generates a list of feature-artist pairs consistent with your knowledge of 
art and use it to predict artists given new features.


My estimate of 10^9 bits for a language model is based on 4 lines of 
evidence, one of which is the amount of language you process in a 
lifetime.  This is a rough estimate of course.  I estimate 1 GB (8 x 10^9 
bits) compressed to 1 bpc (Shannon) and assume you remember a significant 
fraction of that.


Matt,

So long as you keep redefining understand to mean something
trivial (or at least, something different in different circumstances),
all you do is reinforce the point I was trying to make.

In your definition of understanding in the context of art, above, you
specifically choose an interpretation that enables you to pick a
particular bit rate.  But if I chose a different interpretation (and I
certainly would - an art historian would never say they understood a
painting just because they could tell the artist's style better than a
random guess!), I might come up with a different bit rate.  And if I
chose a sufficiently subtle concept of understand, I would be unable
to come up with *any* bit rate, because that concept of understand
would not lend itself to any easy bit rate analysis.

The lesson?  Talking about bits and bit rates is completely pointless
 which was my point.

You mainly identify

Re: [agi] A question on the symbol-system hypothesis

2006-11-15 Thread Richard Loosemore

Matt Mahoney wrote:

Richard, what is your definition of understanding?  How would you test 
whether a person understands art?

Turing offered a behavioral test for intelligence.  My understanding of 
understanding is that it is something that requires intelligence.  The 
connection between intelligence and compression is not obvious.  I have summarized the 
arguments here.
http://cs.fit.edu/~mmahoney/compression/rationale.html


1) There will probably never be a compact definition of understanding. 
 Nevertheless, it is possible for us (being understanding systems) to 
know some of its features.  I could produce a shopping list of typical 
features of understanding, but that would not be the same as a 
definition, so I will not.  See my paper in the forthcoming proceedings 
of the 2006 AGIRI workshop, for arguments.  (I will make a version of 
this available this week, after final revisions).


3) One tiny, almost-too-obvious-to-be-worth-stating fact about 
understanding is that it compresses information in order to do its job.


4) To mistake this tiny little facet of understanding for the whole is 
to say that a hurricane IS rotation, rather than that rotation is a 
facet of what a hurricane is.


5) I have looked at your paper and my feelings are exactly the same as 
Mark's  theorems developed on erroneous assumptions are worthless.




Richard Loosemore


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] A question on the symbol-system hypothesis

2006-11-15 Thread Matt Mahoney

Mark Waser wrote:
Are you conceding that you can predict the results of a Google 
search?


OK, you are right.  You can type the same query twice.  Or if you live long 
enough you can do it the hard way.  But you won't.

Are you now conceding that it is not true that "Models that are simple enough 
to debug are too simple to scale"?


OK, you are right again.  Plain text is a simple way to represent knowledge.  I 
can search and edit terabytes of it.

But this is not the point I wanted to make.  I am sure I expressed it badly.  
The point is there are two parts to AGI, a learning algorithm and a knowledge 
base.  The learning algorithm has low complexity.  You can debug it, meaning 
you can examine the internals to test it and verify it is working the way you 
want.  The knowledge base has high complexity.  You can't debug it.  You can 
examine it and edit it but you can't verify its correctness.

An AGI with a correct learning algorithm might still behave badly.  You can't 
examine the knowledge base to find out why.  You can't manipulate the knowledge 
base data to fix it.  At least you can't do these things any better than 
manipulating the inputs and observing the outputs.  The reason is that the 
knowledge base is too complex.  In theory you could do these things if you 
lived long enough, but you won't.  For practical purposes, the AGI knowledge 
base is a black box.  You need to design your goals, learning algorithm, data 
set and test program with this in mind.  Trying to build transparency into the 
data structure would be pointless.  Information theory forbids it.  Opacity is 
not advantageous or desirable.  It is just unavoidable.

I am sure I won't convince you, so maybe you have a different explanation why 
50 years of building structured knowledge bases has not worked, and what you 
think can be done about it?

And Google DOES keep the searchable part of the Internet in memory
http://blog.topix.net/archives/11.html

because they have enough hardware to do it.
http://en.wikipedia.org/wiki/Supercomputer#Quasi-supercomputing

-- Matt Mahoney, [EMAIL PROTECTED]





-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] A question on the symbol-system hypothesis

2006-11-15 Thread Matt Mahoney
1. The fact that AIXI^tl is intractable is not relevant to the proof that 
compression = intelligence, any more than the fact that AIXI is not computable. 
 In fact it is supporting evidence, because it says that both are hard problems, in 
agreement with observation.

2. Do not confuse the two compressions.  AIXI proves that the optimal behavior 
of a goal seeking agent is to guess the shortest program consistent with its 
interaction with the environment so far.  This is lossless compression.  A 
typical implementation is to perform some pattern recognition on the inputs to 
identify features that are useful for prediction.  We sometimes call this 
lossy compression because we are discarding irrelevant data.  If we 
anthropomorphise the agent, then we say that we are replacing the input with 
perceptually indistinguishable data, which is what we typically do when we 
compress video or sound.
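
A small Python sketch of the two senses of compression being distinguished here 
(zlib stands in for any lossless coder, and the rounding step stands in for 
discarding perceptually irrelevant detail; the data and threshold are made up):

import zlib

signal = [0.12, 0.13, 0.11, 0.98, 0.97, 0.99] * 20    # toy "perceptual" input

# Lossless: the original is recovered exactly, bit for bit.
raw = repr(signal).encode()
packed = zlib.compress(raw)
assert zlib.decompress(packed) == raw

# Lossy: discard detail judged irrelevant first; the original is not recoverable,
# but the retained features typically compress further and may be all prediction needs.
quantized = [round(x, 1) for x in signal]
packed_lossy = zlib.compress(repr(quantized).encode())
print(len(raw), len(packed), len(packed_lossy))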
 
-- Matt Mahoney, [EMAIL PROTECTED]

- Original Message 
From: Mark Waser [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Wednesday, November 15, 2006 3:48:37 PM
Subject: Re: [agi] A question on the symbol-system hypothesis

 The connection between intelligence and compression is not obvious.

The connection between intelligence and compression *is* obvious -- but 
compression, particularly lossless compression, is clearly *NOT* 
intelligence.

Intelligence compresses knowledge to ever simpler rules because that is an 
effective way of dealing with the world.  Discarding ineffective/unnecessary 
knowledge to make way for more effective/necessary knowledge is an effective 
way of dealing with the world.  Blindly maintaining *all* knowledge at 
tremendous costs is *not* an effective way of dealing with the world (i.e. 
it is *not* intelligent).

1. What Hutter proved is that the optimal behavior of an agent is to guess 
that the environment is controlled by the shortest program that is 
consistent with all of the interaction observed so far.  The problem of 
finding this program is known as AIXI.
 2. The general problem is not computable [11], although Hutter proved 
 that if we assume time bounds t and space bounds l on the environment, 
 then this restricted problem, known as AIXItl, can be solved in O(t·2^l) 
 time

Very nice -- except that O(t·2^l) time is basically equivalent to incomputable 
for any real scenario.  Hutter's proof is useless because it relies upon the 
assumption that you have adequate resources (i.e. time) to calculate AIXI --  
which you *clearly* do not.  And like any other proof, once you invalidate 
the assumptions, the proof becomes equally invalid.  Except as an 
interesting but unobtainable edge case, why do you believe that Hutter has 
any relevance at all?


- Original Message - 
From: Matt Mahoney [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Wednesday, November 15, 2006 2:54 PM
Subject: Re: [agi] A question on the symbol-system hypothesis


Richard, what is your definition of understanding?  How would you test 
whether a person understands art?

Turing offered a behavioral test for intelligence.  My understanding of 
understanding is that it is something that requires intelligence.  The 
connection between intelligence and compression is not obvious.  I have 
summarized the arguments here.
http://cs.fit.edu/~mmahoney/compression/rationale.html

-- Matt Mahoney, [EMAIL PROTECTED]

- Original Message 
From: Richard Loosemore [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Wednesday, November 15, 2006 2:38:49 PM
Subject: Re: [agi] A question on the symbol-system hypothesis

Matt Mahoney wrote:
 Richard Loosemore [EMAIL PROTECTED] wrote:
 Understanding 10^9 bits of information is not the same as storing 10^9
 bits of information.

 That is true.  Understanding n bits is the same as compressing some 
 larger training set that has an algorithmic complexity of n bits.  Once 
 you have done this, you can use your probability model to make predictions 
 about unseen data generated by the same (unknown) Turing machine as the 
 training data.  The closer to n you can compress, the better your 
 predictions will be.

 I am not sure what it means to understand a painting, but let's say that 
 you understand art if you can identify the artists of paintings you 
 haven't seen before with better accuracy than random guessing.  The 
 relevant quantity of information is not the number of pixels and 
 resolution, which depend on the limits of the eye, but the (much smaller) 
 number of features that the high level perceptual centers of the brain are 
 capable of distinguishing and storing in memory.  (Experiments by Standing 
 and Landauer suggest it is a few bits per second for long term memory, the 
 same rate as language).  Then you guess the shortest program that 
 generates a list of feature-artist pairs consistent with your knowledge of 
 art and use it to predict artists given new features.

 My estimate of 10^9 bits for a language model is based on 4 lines of 
 evidence, one of which

Re: [agi] A question on the symbol-system hypothesis

2006-11-15 Thread Matt Mahoney
Richard Loosemore [EMAIL PROTECTED] wrote:
 5) I have looked at your paper and my feelings are exactly the same as 
 Mark's  theorems developed on erroneous assumptions are worthless.

Which assumptions are erroneous?
 
-- Matt Mahoney, [EMAIL PROTECTED]

- Original Message 
From: Richard Loosemore [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Wednesday, November 15, 2006 4:09:23 PM
Subject: Re: [agi] A question on the symbol-system hypothesis

Matt Mahoney wrote:
 Richard, what is your definition of understanding?  How would you test 
 whether a person understands art?
 
 Turing offered a behavioral test for intelligence.  My understanding of 
 understanding is that it is something that requires intelligence.  The 
 connection between intelligence and compression is not obvious.  I have 
 summarized the arguments here.
 http://cs.fit.edu/~mmahoney/compression/rationale.html

1) There will probably never be a compact definition of understanding. 
  Nevertheless, it is possible for us (being understanding systems) to 
know some of its features.  I could produce a shopping list of typical 
features of understanding, but that would not be the same as a 
definition, so I will not.  See my paper in the forthcoming proceedings 
of the 2006 AGIRI workshop, for arguments.  (I will make a version of 
this available this week, after final revisions).

3) One tiny, almost-too-obvious-to-be-worth-stating fact about 
understanding is that it compresses information in order to do its job.

4) To mistake this tiny little facet of understanding for the whole is 
to say that a hurricane IS rotation, rather than that rotation is a 
facet of what a hurricane is.

5) I have looked at your paper and my feelings are exactly the same as 
Mark's  theorems developed on erroneous assumptions are worthless.



Richard Loosemore


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Re: [agi] A question on the symbol-system hypothesis

2006-11-14 Thread Mark Waser

 Models that are simple enough to debug are too simple to scale.
 The contents of a knowledge base for AGI will be beyond our ability to comprehend.

Given sufficient time, anything should be able to be understood and debugged.
Size alone does not make something incomprehensible and I defy you to point at
*anything* that is truly incomprehensible to a smart human (for any reason
other than we lack knowledge on it).  I've seen all the analogies with pets not
understanding and the beliefs that AIs are going to have minds "immeasurably
greater than our own" and I submit that it's all just speculation on your part.
My contention is that there is a threshold and that we are above it and that
beyond that, it's just a matter of speed and how much you can hold in working
memory at a time.  I certainly don't buy the "mystical" approach that says that
sufficiently large neural nets will come up with sufficiently complex
discoveries that we can't understand them.  I contend that if you can't explain
it to a very smart human (given sufficient time), then you don't understand it.

Give me *one* counter-example to the above . . . .

  - Original Message - 
  From: Matt Mahoney
  To: agi@v2.listbox.com
  Sent: Monday, November 13, 2006 10:22 PM
  Subject: Re: Re: [agi] A question on the symbol-system hypothesis

  James Ratcliff [EMAIL PROTECTED] wrote:
  Well, words and language based ideas/terms adequately describe much of the
  upper levels of human interaction and seem appropriate in that case.  It
  fails of course when it devolves down to the physical level, ie vision or
  motor cortex skills, but other than that, using language internally would
  seem natural, and be much easier to look inside the box, and see what is
  going on and correct the system's behaviour.

  No, no, no, that is why AI failed.  You can't look inside the box because
  it's 10^9 bits.  Models that are simple enough to debug are too simple to
  scale.  How many times will we repeat this mistake?  The contents of a
  knowledge base for AGI will be beyond our ability to comprehend.  Get over
  it.  It will require a different approach.

  1. Develop a quantifiable criteria for success, a test score.
  2. Develop a theory of learning.
  3. Develop a training and test set (about 10^9 bits compressed).
  4. Tune the learning model to improve the score.

  Example:
  1. Criteria: SAT analogy test score.
  2. Theory: word association matrix reduced by singular value decomposition (SVD).
  3. Data: 50M word corpus of news articles.
  4. Results: http://iit-iti.nrc-cnrc.gc.ca/iit-publications-iti/docs/NRC-48255.pdf

  An SVD factored word association matrix seems pretty opaque to me.  You
  can't point to which matrix elements represent associations like cat-dog,
  moon-star, etc, nor will you be inserting such knowledge for testing.  If
  you want to understand it, you have to look at the learning algorithm.  It
  turns out that there is an efficient neural model for SVD.
  http://gen.gorrellville.com/gorrell06.pdf

  It should not take decades to develop a knowledge base like Cyc.
  Statistical approaches can do this in a matter of minutes or hours.

  -- Matt Mahoney, [EMAIL PROTECTED]

  -
  This list is sponsored by AGIRI: http://www.agiri.org/email
  To unsubscribe or change your options, please go to:
  http://v2.listbox.com/member/?list_id=303

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303
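
The SVD-factored word association matrix mentioned in the quoted message can be 
sketched at toy scale with numpy (the corpus, co-occurrence window, and rank 
below are illustrative assumptions, not the method or data of the NRC paper 
linked above):

import numpy as np

docs = ["the cat chased the dog", "the dog chased the cat",
        "the moon and the star", "a star and a moon"]
vocab = sorted({w for d in docs for w in d.split()})
idx = {w: i for i, w in enumerate(vocab)}

# Word-word co-occurrence counts within a document.
M = np.zeros((len(vocab), len(vocab)))
for d in docs:
    ws = d.split()
    for a in ws:
        for b in ws:
            if a != b:
                M[idx[a], idx[b]] += 1

U, S, Vt = np.linalg.svd(M)       # factor the association matrix
W = U[:, :2] * S[:2]              # keep only 2 latent dimensions

def sim(a, b):
    x, y = W[idx[a]], W[idx[b]]
    return float(x @ y / (np.linalg.norm(x) * np.linalg.norm(y) + 1e-9))

# Associated pairs (cat-dog, moon-star) should score higher than a cross pair,
# yet no single matrix element "is" the cat-dog association -- which is the
# opacity being argued about above.
print(sim("cat", "dog"), sim("moon", "star"), sim("cat", "moon"))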


