RE: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY "THE BINDING PROBLEM"?

2008-07-14 Thread Mark Waser
Anyone who reads this thread will know who was being honest and reasonable
and who was not.

The question is not who was being honest and reasonable but who was factually correct . . . .

The following statement of yours

 In this case it becomes unclear which side is the "if" clause, and which
the "then" clause, and, thus, unclear which way is forward and which
backward by the definition contained in Wikipedia --- unless there is a
temporal criterion.

is simply incorrect.  Temporal criteria are *NOT* necessarily relevant to
forward and backward chaining.

As far as I can tell, Richard is trying to gently correct you, and you are 
both incorrect and unwilling to even attempt to interpret his words in the 
way he meant them (i.e., in an honest and reasonable fashion).


- Original Message - 
From: "Ed Porter" <[EMAIL PROTECTED]>

To: 
Sent: Monday, July 14, 2008 8:58 AM
Subject: **SPAM** RE: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND 
BY "THE BINDING PROBLEM"?



Richard,

You just keep digging yourself in deeper.

Look at the original email in which you said "This is not correct."  The
only quoted text that precedes it is quoted from me.  So why are you saying
"Jim's statement was a misunderstanding"?

Furthermore, I think your criticisms of my statements are generally
unfounded.

My choice of the word "reasoning" was not "not correct", as you imply, since
the Wikipedia definition says "Forward chaining is one of the two main
methods of REASONING when using inference rules." (Emphasis added.)

My statement made it clear I was describing the forward direction as being
from the if clause to the then clause, which matches the Wikipedia
definition, so what is "not correct" about that?

In addition, you said my statement that in the absence of a temporal
criterion "the notion of what is forward and backward chaining might be
somewhat arbitrary" was a "completely incorrect conclusion."

Offensively strong language, considering it is unfounded.

It is unfounded because in the absence of a temporal distinction, many
if-then rules, particularly if they are probabilistic, can be viewed in a
two-way form, with a probabilistic inference going both ways.  In this case it
becomes unclear which side is the "if" clause and which the "then" clause,
and, thus, unclear which way is forward and which backward by the definition
contained in Wikipedia --- unless there is a temporal criterion.  This issue
becomes even more problematic when dealing with patterns based on temporal
simultaneity, as in much of object recognition, where even a temporal
distinction does not distinguish between what should be considered the if
clause and what should be considered the then clause.
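
To make this concrete, here is a minimal sketch (the patterns and inference
strengths are invented purely for illustration; the code is a toy, not a claim
about any particular system): when a link carries an inference strength in each
direction, propagation looks the same either way, and which traversal gets
labelled "forward" is a convention rather than something the rule itself
determines.

# Toy sketch (invented patterns and numbers): two patterns linked in both
# directions, as with patterns based on temporal simultaneity in object
# recognition.  Neither side is intrinsically the "if" clause; calling one
# traversal "forward" and the other "backward" is just a labelling choice.

links = {
    "wheels": [("car", 0.7)],      # seeing wheels suggests a car
    "car":    [("wheels", 0.95)],  # recognizing a car suggests wheels
}

def propagate(observed, links):
    """One step of chaining outward from an observed pattern, in whichever
    direction the link table happens to encode."""
    beliefs = {observed: 1.0}
    for target, strength in links.get(observed, []):
        beliefs[target] = max(beliefs.get(target, 0.0), strength)
    return beliefs

print(propagate("wheels", links))  # {'wheels': 1.0, 'car': 0.7}
print(propagate("car", links))     # {'car': 1.0, 'wheels': 0.95}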

Enough of arguing about arguing.  You can have the last say if you want.  I
want to spend what time I have for this list conversing with people
who are more concerned about truth than with trying to sound like they know
more than others, particularly when they don't.

Anyone who reads this thread will know who was being honest and reasonable
and who was not.

Ed Porter

-Original Message-
From: Richard Loosemore [mailto:[EMAIL PROTECTED]
Sent: Sunday, July 13, 2008 7:52 PM
To: agi@v2.listbox.com
Subject: Re: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY "THE
BINDING PROBLEM"?

Ed Porter wrote:

Richard,

I think Wikipedia's definition of forward chaining (copied below) agrees
with my stated understanding as to what forward chaining means, i.e.,
reasoning from the "if" (i.e., conditions) to the "then" (i.e.,
consequences) in if-then statements.

So, once again there is an indication you have unfairly criticized the
statements of another.


But ... nothing in what I said contradicted the Wikipedia
definition of forward chaining.

Jim's statement was a misunderstanding of the meaning of forward and
backward chaining because he oversimplified the two ("forward reasoning
is reasoning from conditions to consequences, and backward reasoning is
the opposite" ... this is kind of true, if you stretch the word
"reasoining" a little, but it misses the point), and then he went from
this oversimplification to come to a completely incorrect conclusion
("...Thus I think the notion of what is forward and backward chaining
might be somewhat arbitrary...").

This last conclusion was sufficiently inaccurate that I decided to point
that out.  It was not a criticism, just a clarification;  a pointer in
the right direction.


Richard Loosemore







Ed Porter

==Wikipedia defines forward chaining as: ==

Forward chaining is one of the two main methods of reasoning when using
inference rules (in artificial intelligence). The other is backward
chaining.

Forward chaining starts with the available data and uses inference rules to
extract more data (from an end user for example) until an optimal goal is
reached. An inference engine using forward chaining searches the inference
rules until it finds one where the antecedent (If clause) is known to

RE: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY "THE BINDING PROBLEM"?

2008-07-14 Thread Mark Waser

Ed,

   Take the statements

   IF it croaks, THEN it is a frog.
   IF it is a frog, THEN it is green.

   Given an additional statement that it croaks, forward-chaining says that 
it is green.  There is nothing temporal involved.

   - OR -
   Given an additional statement that it is green, backward-chaining says 
that it MAY croak.  Again, nothing temporal involved.


   How do you see temporal criteria as being related to my example?

   Mark
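
P.S.  A minimal sketch of the example in code (the two rules are exactly the
statements above; the functions and their names are only illustrative):

# Minimal sketch of the frog example: two IF/THEN rules and no notion of time
# anywhere.  Forward chaining runs antecedent -> consequent from known facts;
# backward chaining works consequent -> antecedent from a query.

rules = [
    ("croaks", "frog"),   # IF it croaks,    THEN it is a frog.
    ("frog", "green"),    # IF it is a frog, THEN it is green.
]

def forward_chain(facts, rules):
    """Apply the rules to the known facts until nothing new can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedent, consequent in rules:
            if antecedent in facts and consequent not in facts:
                facts.add(consequent)
                changed = True
    return facts

def backward_chain(goal, facts, rules):
    """To establish the goal, either find it among the facts or find a rule
    that concludes it and establish that rule's antecedent in turn."""
    if goal in facts:
        return True
    return any(c == goal and backward_chain(a, facts, rules) for a, c in rules)

print(forward_chain({"croaks"}, rules))            # {'croaks', 'frog', 'green'}
print(backward_chain("green", {"croaks"}, rules))  # True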

- Original Message - 
From: "Ed Porter" <[EMAIL PROTECTED]>

To: 
Sent: Monday, July 14, 2008 10:40 AM
Subject: **SPAM** RE: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND 
BY "THE BINDING PROBLEM"?



Mark,

Since your attack on my statement below is based on nothing but conclusory
statements and contains neither reasoning nor evidence to support them, there
is little in your email below to respond to other than your personal spleen.


You have said my statement which your email quotes is "simply incorrect"
without giving any justification.

Your statement that "Temporal criteria are *NOT* relevant to forward and
backward chaining" is itself a conclusory statement.

Furthermore, this statement about temporal criteria not being relevant is
more incorrect than correct.  If an if-then rule describes a situation where
one thing causes another, or comes before it in time, the thing that comes
first is more commonly the if clause (although one can write the rule in the
reverse order).  The if clause is commonly called a condition, and the then
clause is sometimes called the consequence, implying a causal or temporal
relationship.  The notion that reasoning backward from a goal is backward
chaining normally involves the notion of reasoning back in imagined time
from a desired goal state.  So often TEMPORAL CRITERIA *ARE* RELEVANT TO
WHICH DIRECTION IS FORWARD CHAINING AND WHICH IS BACKWARD.

Even if one were to make a reach, and try to justify your statement that
"Temporal criteria are *NOT* relevant to forward and backward chaining" as
being more than just conclusory by suggesting it was an implicit reference
to statements --- like those contained in Richard's prior statements in this
thread or in the Wikipedia quote in one of the posts below --- that the
definition of forward and backward chaining depended on whether the
reasoning was from if clause to then clause, or the reverse --- that would
still not correct the groundlessness of your criticism.

This is because the rule that forward chaining is from if clause to then
clause and the reverse for backward chaining has no applicability to
situations where the implication goes both ways and there is no clear
indication of which pattern should be the if clause and which should be the
then clause --- which is precisely the situation I was describing in the
quote from me you unfairly criticized.

Neither Richard's prior statement in this thread nor the Wikipedia
definition below defines which direction is forward and which is backward in
many such situations.

In the quote of mine which you attacked, I was discussing exactly those
situations in which it is not clear which part of an inference pattern should
be considered the if clause and which the then clause.  So it appears your
criticism either totally missed the issue I was discussing or, for other
reasons, failed to deal with it.

Mark, in general I do not read your posts because, among other things, like
your email below, they are generally poorly reasoned and seem more
concerned with issues of ego and personality than with learning and teaching
truthful information or insights.  I skip many of Richard's for the same
reason, but I do read some of Richard's because, despite all his pompous BS,
he does occasionally say something quite thoughtful and worthwhile.

If you care about improving your reputation on this list, it would make you
seem more like someone who cared about truth and reason, and less like
someone who cared more about petty squabbles and personal ego, if you gave
reasons for your criticisms, and if you took the time to ensure your
criticism actually addressed what you are criticizing.

In your post immediately below you did neither.

Ed Porter

-Original Message-
From: Mark Waser [mailto:[EMAIL PROTECTED]
Sent: Monday, July 14, 2008 9:19 AM
To: agi@v2.listbox.com
Subject: RE: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY "THE
BINDING PROBLEM"?


Anyone who reads this thread will know who was being honest and reasonable
and who was not.

The question is not who was being honest and reasonable but who was factually correct . . . .

The following statement of yours

 In this case it becomes unclear which side is the "if" clause, and which
the "then" clause, and, thus, unclear which way is forward and which
backward by the definition contained in Wikipedia --- unless there is a
temporal criterion.

is simply incorrect.  Temporal criteria are *NOT* necessa

RE: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY "THE BINDING PROBLEM"?

2008-07-14 Thread Mark Waser
Still fails to deal with what I was discussing.  I will leave it up to you
to figure out why.


Last refuge when you realize you're wrong, huh?

I ask a *very* clear question in an attempt to move forward (i.e. How do you 
see temporal criteria as being related to my example?) and I get this "You 
have to guess what I'm thinking" answer.


How can you justify ranting on and on about Richard not being "honest and 
reasonable" when you won't even answer a simple, clear question?




- Original Message - 
From: "Ed Porter" <[EMAIL PROTECTED]>

To: 
Sent: Monday, July 14, 2008 1:43 PM
Subject: **SPAM** RE: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND 
BY "THE BINDING PROBLEM"?



Mark,

Still fails to deal with what I was discussing.  I will leave it up to you
to figure out why.

Ed Porter

-Original Message-
From: Mark Waser [mailto:[EMAIL PROTECTED]
Sent: Monday, July 14, 2008 10:54 AM
To: agi@v2.listbox.com
Subject: RE: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY "THE
BINDING PROBLEM"?

Ed,

   Take the statements

   IF it croaks, THEN it is a frog.
   IF it is a frog, THEN it is green.

   Given an additional statement that it croaks, forward-chaining says that
it is green.  There is nothing temporal involved.
   - OR -
   Given an additional statement that it is green, backward-chaining says
that it MAY croak.  Again, nothing temporal involved.

   How do you see temporal criteria as being related to my example?

   Mark

- Original Message - 
From: "Ed Porter" <[EMAIL PROTECTED]>

To: 
Sent: Monday, July 14, 2008 10:40 AM
Subject: **SPAM** RE: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND

BY "THE BINDING PROBLEM"?


Mark,

Since your attack on my statement below is based on nothing but conclusory
statements and contains neither reasoning nor evidence to support them, there
is little in your email below to respond to other than your personal spleen.


You have said my statement which your email quotes is "simply incorrect"
without giving any justification.

Your statement that "Temporal criteria are *NOT* relevant to forward and
backward chaining" is itself a conclusory statement.

Furthermore, this statement about temporal criteria not being relevant is
more incorrect than correct.  If an if-then rule describes a situation where
one thing causes another, or comes before it in time, the thing that comes
first is more commonly the if clause (although one can write the rule in the
reverse order).  The if clause is commonly called a condition, and the then
clause is sometimes called the consequence, implying a causal or temporal
relationship.  The notion that reasoning backward from a goal is backward
chaining normally involves the notion of reasoning back in imagined time
from a desired goal state.  So often TEMPORAL CRITERIA *ARE* RELEVANT TO
WHICH DIRECTION IS FORWARD CHAINING AND WHICH IS BACKWARD.

Even if one were to make a reach, and try to justify your statement that
"Temporal criteria are *NOT* relevant to forward and backward chaining" as
being more than just conclusory by suggesting it was an implicit reference
to statements --- like those contained in Richard's prior statements in this
thread or in the Wikipedia quote in one of the posts below --- that the
definition of forward and backward chaining depended on whether the
reasoning was from if clause to then clause, or the reverse --- that would
still not correct the groundlessness of your criticism.

This is because the rule that forward chaining is from if clause to then
clause and the reverse for backward chaining has no applicability to
situations where the implication goes both ways and there is no clear
indication of which pattern should be the if clause and which should be the
then clause --- which is precisely the situation I was describing in the
quote from me you unfairly criticized.

Neither Richard's prior statement in this thread nor the Wikipedia
definition below defines which direction is forward and which is backward in
many such situations.

In the quote of mine which you attacked, I was discussing exactly those
situations in which it is not clear which part of an inference pattern should
be considered the if clause and which the then clause.  So it appears your
criticism either totally missed the issue I was discussing or, for other
reasons, failed to deal with it.

Mark, in general I do not read your posts because, among other things, like
your email below, they are generally poorly reasoned and seem more
concerned with issues of ego and personality than with learning and teaching
truthful information or insights.  I skip many of Richard's for the same
reason, but I do read some of Richard's because, despite all his pompous BS,
he does occasionally say something quite thoughtful and worthwhile.

If you care about improving your reputation on t

Re: [agi] How do we know we don't know?

2008-07-30 Thread Mark Waser
Wow!  The civility level on this list is really bottoming out . . . . along 
with any sort of scientific grounding.


I have to agree with both Valentina and Richard . . . . since they are 
supported by scientific results while others are merely speculating without 
basis.


Experimental (imaging) evidence shows that known words will strongly 
activate some set of neurons when heard.  Unknown words with recognizable 
parts/features will also activate some other set of neurons when heard, 
possibly allowing the individual to puzzle out the meaning even if the word 
has never been heard before.  Totally unknown words will not strongly 
activate any neurons -- except subsequently (i.e. on a delay) some set of 
HUH? neurons.


If you wish, you can consider this to be an analogue of a massively parallel 
search carried out by the subconscious, but it's really just an automatic 
operation.  Recognized word == activated neurons bringing its meaning 
forward through spreading activation.  Totally unrecognized word == no 
activated neurons, which is then interpreted as "I don't know this word."
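
A rough sketch of that description in code (the toy lexicon, the
surface-feature overlap standing in for spreading activation, and the
threshold are all invented for illustration):

# Rough sketch (invented lexicon and threshold).  A known word strongly
# activates its own entry; a word with recognizable parts weakly activates
# entries sharing those parts; a totally unknown word activates nothing above
# threshold, and that absence is itself the "I don't know this word" signal.

LEXICON = {"frog": {"fr", "ro", "og"}, "green": {"gr", "re", "ee", "en"}}
THRESHOLD = 0.5

def bigrams(word):
    return {word[i:i + 2] for i in range(len(word) - 1)}

def recognize(word):
    heard = bigrams(word)
    # Stand-in for spreading activation: feature overlap with each entry.
    activation = {w: len(heard & f) / len(heard | f) for w, f in LEXICON.items()}
    best, act = max(activation.items(), key=lambda kv: kv[1])
    if act >= THRESHOLD:
        return f"recognized '{best}' (activation {act:.2f})"
    return "HUH? -- nothing activated above threshold: I don't know this word"

print(recognize("frog"))    # strong activation of a stored entry
print(recognize("froggy"))  # partial activation via recognizable parts
print(recognize("xqzkt"))   # no activation: the 'don't know' case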


Ed's response (which you praised), while a nice fanciful story that might 
work in another universe, is *not* supported by any evidence and is 
contra-indicated by a reasonable amount of experimental evidence.



- Original Message - 
From: "Brad Paulsen" <[EMAIL PROTECTED]>

To: 
Sent: Tuesday, July 29, 2008 7:33 PM
Subject: Re: [agi] How do we know we don't know?



Valentina,

Well, the "LOL" is on you.

Richard failed to add anything new to the two previous responses that each 
posited linguistic surface feature analysis as being responsible for 
generating the "feeling of not knowing" with that *particular* (and, 
admittedly poorly-chosen) example query.  This mechanism will, however, 
apply to only a very tiny number of cases.


In response to those first two replies (not including Richard's), I 
apologized for the sloppy example and offered a new one.  Please read the 
entire thread and the new example.  I think you'll find Richard's and your 
explanation will fail to address how the new example might generate the 
"feeling of not knowing."


Cheers,

Brad

Valentina Poletti wrote:

lol.. well said richard.
the stimuli simply invoke no significant response and thus our brain 
concludes that we 'don't know'. that's why it takes no effort to realize 
it. agi algorithms should be built in a similar way, rather than 
searching.



Isn't this a bit of a no-brainer?  Why would the human brain need to
keep lists of things it did not know, when it can simply break the
word down into components, then have mechanisms that watch for the
rate at which candidate lexical items become activated ... when
this mechanism notices that the rate of activation is well below
the usual threshold, it is a fairly simple thing for it to announce
that the item is not known.

Keeping lists of "things not known" is wildly, outrageously
impossible, for any system!  Would we really expect that the word
"ikrwfheuigjsjboweonwjebgowinwkjbcewijcniwecwoicmuwbpiwjdncwjkdncowk-
owejwenowuycgxnjwiiweudnpwieudnwheudxiweidhuxehwuixwefgyjsdhxeiowudx-
hwieuhyxweipudxhnweduiweodiuweydnxiweudhcnhweduweiducyenwhuwiepixuwe-
dpiuwezpiweudnzpwieumzweuipweiuzmwepoidumw" is represented somewhere
as a "word that I do not know"? :-)

I note that even in the simplest word-recognition neural nets that I
built and studied in the 1990s, activation of a nonword proceeded in
a very different way than activation of a word:  it would have been
easy to build something to trigger a "this is a nonword" neuron.

Is there some type of AI formalism where nonword recognition would
be problematic?



Richard Loosemore

 













Re: [agi] a fuzzy reasoning problem

2008-07-30 Thread Mark Waser
Categorization depends upon context.  This was pretty much decided by the late 
1980s (look up Fuzzy Concepts).
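
A tiny sketch of what context-dependent categorization looks like in fuzzy
terms (the items, contexts, and membership degrees are all invented for
illustration):

# Tiny sketch (all numbers invented): the same item can have a high degree of
# membership in a category under one context and a low one under another, so
# "is X a kind of Y?" has no single context-free answer.

MEMBERSHIP = {
    # (item, category, context): degree of membership in [0, 1]
    ("penguin", "bird", "taxonomy"):         1.0,
    ("penguin", "bird", "things that fly"):  0.1,
    ("cybersex", "sex", "legal definition"): 0.2,
    ("cybersex", "sex", "casual speech"):    0.7,
}

def is_a(item, category, context):
    degree = MEMBERSHIP.get((item, category, context), 0.0)
    return f"'{item}' is a '{category}' to degree {degree} in context '{context}'"

print(is_a("penguin", "bird", "taxonomy"))
print(is_a("penguin", "bird", "things that fly"))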
  - Original Message - 
  From: James Ratcliff 
  To: agi@v2.listbox.com 
  Sent: Wednesday, July 30, 2008 4:05 PM
  Subject: Re: [agi] a fuzzy reasoning problem


One major difference here seems to be categorization of objects versus 
categorization of actions / events.

It is very easy to differentiate animals and things by a small set of 
features, but 
with actions this is a more complicated case.

    "Sex" can refer to the group of things called sexual relations, which expands
to include many things including kissing and touching,
or to the actual act of sexual intercourse... and sexual intercourse
can be performed in many different ways.

    We can look at an animal like a penguin and say "this is a bird" fairly
easily; some others are harder.

___
James Ratcliff - http://falazar.com
Looking for something...

--- On Tue, 7/29/08, YKY (Yan King Yin) <[EMAIL PROTECTED]> wrote:

  From: YKY (Yan King Yin) <[EMAIL PROTECTED]>
  Subject: Re: [agi] a fuzzy reasoning problem
  To: agi@v2.listbox.com
  Date: Tuesday, July 29, 2008, 11:59 PM


On 7/30/08, Benjamin Johnston <[EMAIL PROTECTED]> wrote:
> The relationship between cybersex and sex is of a completely different
> character to the relationship between penguins and birds.

Can you define that difference in an abstract, general way?  I mean, what is
the *qualitative* difference that makes:
   "cybersex is a kind of sex"
different from:
   "penguin is a kind of bird"?

You may say: cybersex and phone sex lack property X that is common to all
other forms of sex.  But then, anal sex or sex with a condom do not get a
female pregnant, right?  So by a similar reasoning you may also exclude anal
sex or sex with a condom as sex.

It seems that you (perhaps subjectively) require "having physical contact"
as a defining characteristic of sex.  But I can imagine someone not using
that criterion in the definition of sex.

Also relevant here is Wittgenstein's idea of "family resemblance":
sometimes you may not be able to list all the defining properties of a
concept.

YKY








Re: [agi] How do we know we don't know?

2008-07-30 Thread Mark Waser
People can discriminate real words from nonwords even when the latter are 
orthographically and phonologically word-like, presumably because words 
activate specific lexical and/or semantic information.

http://cat.inist.fr/?aModele=afficheN&cpsidt=14733408

Categories like "noun" and "verb" represent the basic units of grammar in 
all human languages, and the retrieval of categorical information associated 
with words is an essential step in the production of grammatical speech. 
Studies of brain-damaged patients suggest that knowledge of nouns and verbs 
can be spared or impaired selectively; however, the neuroanatomical 
correlates of this dissociation are not well understood. We used 
event-related functional MRI to identify cortical regions that were active 
when English-speaking subjects produced nouns or verbs in the context of 
short phrases

http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=1360518

Neuroimaging and lesion studies suggest that processing of word classes, 
such as verbs and nouns, is associated with distinct neural mechanisms. Such 
studies also suggest that subcategories within these broad word class 
categories are differentially processed in the brain. Within the class of 
verbs, argument structure provides one linguistic dimension that 
distinguishes among verb exemplars, with some requiring more complex 
argument structure entries than others. This study examined the neural 
instantiation of verbs by argument structure complexity: one-, two-, and 
three-argument verbs.

http://portal.acm.org/citation.cfm?id=1321140.1321142&coll=&dl=

The neural basis for verb comprehension has proven elusive, in part because 
of the limited range of verb categories that have been assessed. In the 
present study, 16 healthy young adults were probed for the meaning 
associated with verbs of MOTION and verbs of COGNITION. We observed distinct 
patterns of activation for each verb subcategory: MOTION verbs are 
associated with recruitment of left ventral temporal-occipital cortex, 
bilateral prefrontal cortex and caudate, whereas COGNITION verbs are 
associated with left posterolateral temporal activation. These findings are 
consistent with the claim that the neural representations of verb 
subcategories are distinct

http://cat.inist.fr/?aModele=afficheN&cpsidt=13451551

Neural processing of nouns and verbs: the role of inflectional morphology

http://csl.psychol.cam.ac.uk/publications/04_Tyler_Neuropsychologia.pdf


Others:

http://cercor.oxfordjournals.org/cgi/content/abstract/12/9/900
http://www3.interscience.wiley.com/journal/99520773/abstract?CRETRY=1&SRETRY=0
http://www.jneurosci.org/cgi/content/abstract/22/7/2936


- Original Message - 
From: "Jim Bromer" <[EMAIL PROTECTED]>

To: 
Sent: Wednesday, July 30, 2008 12:15 PM
Subject: Re: [agi] How do we know we don't know?



On Wed, Jul 30, 2008 at 9:50 AM, Mark Waser <[EMAIL PROTECTED]> wrote:
Wow!  The civility level on this list is really bottoming out . . . . along
with any sort of scientific grounding.

Experimental (imaging) evidence shows that known words will strongly
activate some set of neurons when heard.  Unknown words with recognizable
parts/features will also activate some other set of neurons when heard,
possibly allowing the individual to puzzle out the meaning even if the word
has never been heard before.  Totally unknown words will not strongly
activate any neurons -- except subsequently (i.e. on a delay) some set of
HUH? neurons.


Well, your imaging evidence is part imaging and part imagining since
no one knows what the imaging is actually showing.  I think it is
commonly believed that the imaging techniques show blood flow into
areas of the brain, and this is (reasonably in my view) taken as
evidence of neural activity.  OK, but what kind of thinking is actually
going on, and how extensive are the links that don't have enough wow
factor for researchers to write up as repeatable experiments or issue
as a press release?  So if you are going to claim that your speculations
are superiorly grounded, I would like to see some research that shows
that unknown words will not strongly activate any neurons.  Take your
time, I am only asking a question, not challenging you to fantasy
combat.

Jim Bromer










Re: [agi] How do we know we don't know?

2008-07-30 Thread Mark Waser

Brad,

   Go back and look at Richard's e-mail again.  His statement that "Keeping 
lists of 'things not known' is wildly, outrageously impossible, for any 
system" *WAS* supported by a brief but very clear "evidence-based" *and* 
"well-reasoned" argument that should have made its truth *very* obvious to 
someone with sufficient background.


   Just because you don't understand why something is true doesn't change 
it from a fact to an opinion.  Richard is generally very good at clearly and 
accurately distinguishing between what is a generally-accepted fact and what 
is his guesstimate or opinion in his e-mails.



- Original Message - 
From: "Brad Paulsen" <[EMAIL PROTECTED]>

To: 
Sent: Wednesday, July 30, 2008 4:14 PM
Subject: Re: [agi] How do we know we don't know?



Richard,

Someone who can throw comments like "Isn't this a bit of a no-brainer?" 
and "Keeping lists of 'things not known' is wildly, outrageously 
impossible, for any system!" at people should expect a little bit of 
annoyance in return.  If you can't take it, don't dish it out.


Your responses to my initial post so far have been devoid of any real 
substantive evidence or argument for the opinions you have expressed 
therein. Your initial reply correctly identified an additional mechanism 
that two other list members had previously reported (that surface features 
could raise the "feeling of not knowing" without triggering an exhaustive 
memory search).  As I pointed out in my response to them, this observation 
was "a good catch" but did not, in any way, show my ideas to be 
"no-brainers" or "wildly, outrageously impossible."  In that reply, I 
posted a new example query that contained only common American English 
words and was syntactically valid.


If you want to present an evidence-based or well-reasoned argument why you 
believe my ideas are meritless, then let's have it.  Pejorative 
adjectives, ad hominem attacks and baseless opinions don't impress me 
much.


As to your cheerleader, she's just made my kill-list.  The only thing 
worse than someone who slings unsupported opinions around like they're 
facts, is someone who slings someone else's unsupported opinions around 
like they're facts.


Who is Mark Waser?

Cheers,

Brad

Richard Loosemore wrote:

Brad Paulsen wrote:

Valentina,

Well, the "LOL" is on you.

Richard failed to add anything new to the two previous responses that 
each posited linguistic surface feature analysis as being responsible 
for generating the "feeling of not knowing" with that *particular* (and, 
admittedly poorly-chosen) example query.  This mechanism will, however, 
apply to only a very tiny number of cases.


In response to those first two replies (not including Richard's), I 
apologized for the sloppy example and offered a new one.  Please read 
the entire thread and the new example.  I think you'll find Richard's 
and your explanation will fail to address how the new example might 
generate the "feeling of not knowing."


Brad,

Isn't this response, as well as the previous response directed at me, 
just a little more "annoyed-sounding" than it needs to be?


Both Valentina and I (and now Mark Waser also) have simply focused on the 
fact that it is relatively trivial to build mechanisms that monitor the 
rate at which the system is progressing in its attempt to do a 
recognition operation, and then call it as a "not known" if the progress 
rate is below a certain threshold.
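
A sketch of that monitoring idea (the underlying recognition process is
stubbed out as a list of per-step activation gains, and every number is
invented):

# Sketch (stubbed recognizer, invented numbers).  Rather than keeping a list
# of unknown items, the system watches how fast activation is accumulating and
# declares "not known" as soon as progress stalls below a threshold.

RECOGNIZED_AT = 1.0   # total activation needed to call the item recognized
MIN_RATE = 0.05       # per-step gain below which we give up
MAX_STEPS = 20

def recognize(step_gains):
    """step_gains: per-step activation increments produced by some underlying
    recognition process (here just a hard-coded list)."""
    total = 0.0
    for step, gain in enumerate(step_gains[:MAX_STEPS], start=1):
        total += gain
        if total >= RECOGNIZED_AT:
            return f"recognized after {step} steps"
        if gain < MIN_RATE:
            return f"not known (progress stalled at step {step})"
    return "not known (ran out of time)"

print(recognize([0.4, 0.4, 0.3]))    # familiar item: activation climbs quickly
print(recognize([0.1, 0.04, 0.01]))  # unfamiliar item: progress stalls early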


In particular, you did suggest the idea of a system keeping lists of 
things it did not know, and surely it is not inappropriate to give a 
good-naturedly humorous response to that one?


So far, I don't see any of us making a substantial misunderstanding of 
your question, nor anyone being deliberately rude to you.




Richard Loosemore











Valentina Poletti wrote:

lol.. well said richard.
the stimuli simply invoke no significant response and thus our brain 
concludes that we 'don't know'. that's why it takes no effort to 
realize it. agi algorithms should be built in a similar way, rather 
than searching.



Isn't this a bit of a no-brainer?  Why would the human brain need to
keep lists of things it did not know, when it can simply break the
word down into components, then have mechanisms that watch for the
rate at which candidate lexical items become activated ... when
this mechanism notices that the rate of activation is well below
the usual threshold, it is a fairly simple thing for it to announce
that the item is not known.

Keeping lists of "things not known" is wildly, outrageously
impossible, for any system!  Would we really expect that the word

"

RE: [agi] OpenCog Prime wikibook and roadmap posted (moderately detailed design for an OpenCog-based thinking machine)

2008-08-01 Thread Mark Waser
I would like to second the thank you.  You posted a lot more than I expected 
and I really appreciate it (and intend to show it by thoroughly reading all of 
it and absorbing it before commenting).

Mark
  - Original Message - 
  From: Derek Zahn 
  To: agi@v2.listbox.com 
  Sent: Friday, August 01, 2008 3:41 PM
  Subject: **SPAM** RE: [agi] OpenCog Prime wikibook and roadmap posted 
(moderately detailed design for an OpenCog-based thinking machine)


  Ben,
   
  Thanks for the large amount of work that must have gone into the production 
of the wikibook.  Along with the upcoming PLN book (now scheduled for Sept 26 
according to Amazon) and re-reading The Hidden Pattern, there should be enough 
material for a diligent student to grok your approach.
   
  I think it will take some considerable time for anybody to absorb it all, so 
don't be too discouraged if there isn't a lot of visible banter about issues 
you think are important; we all come at the Big Questions of AGI from our own 
peculiar perspectives.  Even those of us who "want to believe" may have 
difficulty finding sufficient common ground in viewpoints to really understand 
your ideas in depth, at least for a while.
   
  If there's one thing I'd like to see more of sometime soon, it would be more 
detail on the early stages of your vision of a roadmap, to help focus both 
analysis and development.
   
  Great stuff!

   







Re: **SPAM** Re: [agi] EVIDENCE RICHARD DOES NOT UNDERSTAND COMPLEX SYSTEM ISSUES THAT WELL

2008-08-02 Thread Mark Waser
>> I have never received any comparable emails regarding Ed Porter.

I have posted such in the past on the list and had seriously been considering 
doing so again (and your e-mail inspired me to do so).  Ed is abusive, plain 
and simple.  There was no reason for this last thread that he started except to 
shout down Richard's criticisms.  Personally, I have given up on posting 
content to this list.  Some moderation is strongly suggested.  If it includes 
banning me -- so be it.

Mark
  - Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Saturday, August 02, 2008 10:29 PM
  Subject: **SPAM** Re: [agi] EVIDENCE RICHARD DOES NOT UNDERSTAND COMPLEX 
SYSTEM ISSUES THAT WELL



  Richard,

  FYI, I find that I agree in essence with nearly all of Ed Porter's assertions 
on scientific and technical issues, although I sometimes think he overstates 
things or words things in an inexact way.

  Also, I note that over the last couple years I have received a number (maybe 
5-10) of emails from various individuals suggesting that you be banned from 
this email list for general unproductive "trolling" behavior.  I have never 
received any comparable emails regarding Ed Porter.

  Anyway I don't personally have much patience for these overheated email 
battles, though I accept that they're part of the culture of email lists.

  I think your views are largely plausible and respectable, though I find you 
also tend to overstate things, and you sometimes implicitly redefine common 
terms in uncommon ways which I find frustrating.  However, I find it irritating 
that you diss other people (like me!) so severely for guiding their research 
based on their own scientific intuition, yet display such a dramatic level of 
confidence (IMO overconfidence) in your own scientific intuition.  AGI is a 
frontier area where as yet little is solidly known, so knowledgeable and 
intelligent experts can be expected to have different intuitions.  You seem 
distressingly unwilling to "agree to disagree", instead recurrently expressing 
negative emotion toward those whose not-fully-substantiated intuitions differ 
too much from your own not-fully-substantiated intuitions.  It's boring, even 
more than it's frustrating.  And I'm feeling like a butthead for wasting my 
time writing this email ;-p

  ben


  On Sat, Aug 2, 2008 at 10:15 PM, Richard Loosemore <[EMAIL PROTECTED]> wrote:


Ed, do you not remember making this accusation once before, and asking for 
people to step forward to support you?   On that occasion you had a sum total 
of ZERO people come forward with evidence or support for your accusations, and 
on the other hand you did get some people who said that I had been honest, 
technically accurate, willing to admit mistakes, never gratuitously insulting 
and always ready to take the time to address any questions in a prompt and 
thorough manner.

Does it not matter to you that you failed on that previous occasion? How 
many times will you repeat this before giving up?

Now, under other circumstances I would ask you to provide some evidence for 
these allegations, and then I'd take some time to examine that evidence with 
you.  However, my previous experience of examining your accusations is that 
your comprehension of the subject is so poor that you quickly tangle yourself 
up in a confusing web of red herrings, non sequiturs and outright falsehoods, 
and then you jump out of the wreckage of the discussion holding a piece of 
abject nonsense in your fist, screaming "Victory!  I have proved him wrong!".

When you have done that in the past, there has been nothing left for I and 
the other sensible people on this list to do except shake our heads and give up 
trying to explain anything to you.

Consult an outside expert, if you dare.  You will get an unpleasant 
surprise.





Richard Loosemore







Ed Porter wrote:

  Richard,

  I don't think any person on this list has been as insulting of the ideas of
  others as you have been.  You routinely describe other people's ideas as
  "rubbish" or in similarly contemptuous terms, often with no clear
  justification, and often when those you insult have not been previously
  insulting you.
  So you have no right to be self-righteous.

  And if you are at all concerned with honesty and truth --- rather than
  personal pomposity --- you would listen to what I and many others on this
  list have said about how often you have been clearly wrong, and how often
  your arguments have been dishonest.

  Richard, I think you are an intelligent guy.  It is a shame your
  intelligence is not freed from the childishness, and neediness, and
  dishonesty of your ego.

  Ed Porter

  -Original Message-
  From: Richard Loosemore [mailto:[EMAIL PROTECTED]
  Sent: Saturday, August 02, 2008 6:23 PM
  To: agi@v2.listbox.com
  Subject: Re: [agi] EVIDENCE RICH

Re: [agi] META: do we need a stronger "politeness code" on this list?

2008-08-03 Thread Mark Waser

I don't notice rudeness so much, but content-free posts (and posters
who don't learn) are a problem on this list. Low signal-to-noise
ratio. I'd say you are too tolerant in avoiding moderation, but
moderation is needed for content, not just "politeness".


Normally I try to avoid "me too" posts -- but for those who felt my last 
e-mail was too long, this is the essence of my argument (and very well 
expressed).


- Original Message - 
From: "Vladimir Nesov" <[EMAIL PROTECTED]>

To: 
Sent: Sunday, August 03, 2008 8:25 AM
Subject: Re: [agi] META: do we need a stronger "politeness code" on this 
list?




On Sun, Aug 3, 2008 at 7:47 AM, Ben Goertzel <[EMAIL PROTECTED]> wrote:


I think Ed's email was a bit harsh, but not as harsh as many of Richard's
(which are frequently full of language like "fools", "rubbish" and so forth
...).

Some of your emails have been pretty harsh in the past too.

I would be willing to enforce a stronger code of politeness on this list if
that is what the membership wants.  I have been told before, in other
contexts, that I tend to be overly tolerant of rude behavior.

Anyone else have an opinion on this?



I don't notice rudeness so much, but content-free posts (and posters
who don't learn) are a problem on this list. Low signal-to-noise
ratio. I'd say you are too tolerant in avoiding moderation, but
moderation is needed for content, not just "politeness".

--
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/










Re: [agi] Groundless reasoning --> Chinese Room

2008-08-06 Thread Mark Waser

This has been a great thread!

I think a simulated, grounded, embodied approach is the one exception to 
the otherwise correct Chinese Room (CR) argument. It is the keyhole 
through which we must pass to achieve strong AI.


Actually, if you read Searle's original paper, I think that you will find 
that he would agree with you since he is *not* meaning to argue against the 
possibility of strong AI (since he makes repeated references to humans as 
machines) but merely against the possibility of strong AI in machines where 
"the operation of the machine is defined solely in terms of computational 
processes over formally defined elements" (which was the current state of 
the art in AI when he was arguing against it -- unlike today, where there are 
a number of systems which don't require axiomatic reasoning over formally 
defined elements).  There's also the trick that the Chinese Room is 
assumed/programmed to be 100% omniscient/correct in its required domain.


The distinction that is not clarified in most arguments is that Searle's 
Chinese Room is exactly analogous to an old-style expert system in that it 
is ungrounded and unchanging.  It is only doing pre-programmed symbol 
manipulation.  Most important to Searle and his arguments, though, is the 
fact that the *intention* of the Chinese Room is merely to do symbol 
manipulation according to the pre-defined rules, not to "understand" 
Chinese.


The critical point that most people miss -- and what is really important for 
this list (and why people shouldn't blindly dismiss Searle) -- is that it is 
*intentionality* that defines "understanding".  If a system has 
goals/intentions and its actions are modified by the external world (i.e. it 
is grounded), then the extent to which its actions are *effectively* modified 
(as judged in relation to its intentions) is the extent to which it 
"understands".  The most important feature of an AGI is that it has goals and 
that it modifies its behavior (and learns) in order to reach them.  The 
Chinese Room is incapable of these behaviors since it has no desires.


Where Searle is normally misconstrued is when people either don't understand 
what he means by "formally defined elements" or don't understand that, or how 
and why, his argument is limited to them.  Unless you are omniscient, the 
world is not made up of formally defined elements.  Incomplete information 
(not to mention incorrect information) prevents true axiomatic and formal 
reasoning.  If you want to get really pedantic, you certainly could argue 
that our *interface to*/*sensing of* the world can be broken down into 
formally defined elements, but lack of omniscience/complete information (i.e. 
seemingly different results under what appear to be the same circumstances) 
means that we and a true AGI cannot simply be symbol manipulators -- and we 
can't improve unless we have an intention to improve/measure against.


(Side comment:  This is also the basis for my disagreement with the claim 
that compression is the same as AGI.  The compression/decompression 
algorithm must effectively be omniscient before it shows truly effective 
behavior and then it is merely a symbol processor of the omniscient 
knowledge -- not an AGI).


Thus, in reality, Searle's argument is not that strong AI is not possible 
but that strong AI requires intentionality which is not displayed in his 
example Chinese Room.


Unfortunately, I have to take a break from the list (why are people 
cheering??).


No cheering at all.  This was a very nice change of pace.

   Mark


- Original Message - 
From: "Terren Suydam" <[EMAIL PROTECTED]>

To: 
Sent: Wednesday, August 06, 2008 2:50 PM
Subject: Re: [agi] Groundless reasoning --> Chinese Room




Abram,

I think a simulated, grounded, embodied approach is the one exception to 
the otherwise correct Chinese Room (CR) argument. It is the keyhole 
through which we must pass to achieve strong AI.


The Novamente example I gave may qualify as such an exception (although 
the hybrid nature of grounded and ungrounded knowledge used in the design 
is a question mark for me), and does not invalidate the arguments against 
ungrounded approaches.


The CR argument works for ungrounded approaches, because without 
grounding, the symbols to be manipulated have no meaning, except within an 
external context that is totally independent of and inaccessible to the 
processing engine.


I believe for this to be further constructive, you have to show either 1) 
how an ungrounded symbolic approach does not apply to the CR argument, or 
2) why, specifically, the argument fails to show that ungrounded 
approaches cannot achieve comprehension.


Unfortunately, I have to take a break from the list (why are people 
cheering??).  I will answer any further posts addressed to me in due time, 
but I have other commitments for the time being.


Terren

--- On Wed, 8/6/08, Abram Demski <[EMAIL PROTECTED]> wrote:



Re: [agi] Groundless reasoning --> Chinese Room

2008-08-06 Thread Mark Waser
But it's a preaching to the choir argument: Is there anything more to the 
argument than the intuition that automatic manipulation cannot create 
understanding? I think it can, though I have yet to show it.


Searle answers that exact question in his paper by saying "Because the 
formal symbol manipulations by themselves don't have any intentionality; 
they are quite meaningless; they aren't even symbol manipulations, since the 
symbols don't symbolize anything. In the linguistic jargon, they have only a 
syntax but no semantics." [Searle (1980)]


But I know of no definition of "comprehension" that is impossible to 
create using a program or a Chinese Room -- of course, I don't know /any/ 
complete definition of "comprehension," and maybe when I do, it will have 
the feature you believe it has.


I used to get hung up on this point as well -- but then I realized that the 
Chinese Room (as opposed to many current AGI programs) has no provisions for 
modifying itself or for intentionality.  This is why a Chinese Room will never 
be a strong AI, but a program which does have goals/intentionality and the 
capability to learn and modify itself can be.


   Mark

P.S.  Thanks for the great clarity of thought and expression . . . . it made 
answering much easier. 







Re: [agi] Groundless reasoning --> Chinese Room

2008-08-06 Thread Mark Waser

Excellent post!


The grounding
doesn't have to be created by the AGI UNLESS the model is created or emerges
from the AGI itself.


My argument would be that the AGI would then still have to ground 
itself/its understanding to the model (and then the model would effectively 
be just a direct linkage to the world).



- Original Message - 
From: "David Clark" <[EMAIL PROTECTED]>

To: 
Sent: Wednesday, August 06, 2008 3:57 PM
Subject: RE: [agi] Groundless reasoning --> Chinese Room



I got the following quote from Wikipedia under "understanding".

"According to the independent socionics researcher Rostislav Persion:

In order to test one's understanding it is necessary to present a question
that forces the individual to demonstrate the possession of a model, 
derived
from observable examples of that model's production or potential 
production
(in the case that such a model did not exist before hand). Rote 
memorization

can present an illusion of understanding, however when other questions are
presented with modified attributes within the query, the individual cannot
create a solution due to a lack of a deeper representation of reality."

There are many levels of "understanding" but I think it is wrong to believe
that understanding can be only in the form found in human beings.  If a
person believed otherwise, then "de facto", no computer program could ever
have any understanding.  Given that assumption, the above quote states that
a model of something conveys understanding.  The more complex the model, the
better the understanding.  He specifically rules out rote memorization as a
type of understanding.  I agree.  To my mind, this means that rule based
systems (no matter the number of rules) can never be considered to
understand anything and I think the Chinese room experiment talks to this
point.  Models aren't just a type of memorizing and are not just a bunch of
symbols that are defined by other symbols even though at the micro level,
computers definitely just manipulate symbols.
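
A minimal sketch of that distinction (the example domain is invented): a rote
table answers only the exact queries it has memorized, while even a crude
model generalizes to queries with modified attributes.

# Minimal sketch (invented domain).  The rote system stores question/answer
# pairs; the model-based system carries a representation of how the answer is
# produced, so it still works when the query's attributes are modified.

MEMORIZED = {("rectangle", 3, 4): 12, ("rectangle", 5, 2): 10}

def rote_answer(shape, w, h):
    return MEMORIZED.get((shape, w, h), "cannot create a solution")

def model_answer(shape, w, h):
    # The "model": area arises from the two dimensions, not from a stored pair.
    if shape == "rectangle":
        return w * h
    return "cannot create a solution"

print(rote_answer("rectangle", 3, 4))   # 12 -- looks like understanding
print(rote_answer("rectangle", 7, 6))   # fails on a modified query
print(model_answer("rectangle", 7, 6))  # 42 -- the model generalizes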

If the models are based on the real world grounding of the person who
programs a model, doesn't this mean that grounding can occur if 1) a model
is used instead of just rules or examples and 2) if the model includes
diagrams and enough variables so that the model can be explored (maybe in
ways not thought of by the programmer)?  No computer program can ever be
expected to experience the world through human eyes (unless the model has
been uploaded from a human), but does that negate the possibility of
understanding by a non-human entity?  If humans can accurately program
models that relate directly to the real world and reality, then why couldn't
an AI use this model to manipulate things in the real world?  The grounding
doesn't have to be created by the AGI UNLESS the model is created or emerges
from the AGI itself.

-- David Clark


-Original Message-
From: Terren Suydam [mailto:[EMAIL PROTECTED]
Sent: August-06-08 8:24 AM
To: agi@v2.listbox.com
Subject: Re: [agi] Groundless reasoning --> Chinese Room


Harry,

--- On Wed, 8/6/08, Harry Chesley <[EMAIL PROTECTED]> wrote:
> I'll take a stab at both of these...
>
> The Chinese Room to me simply states that understanding
> cannot be
> decomposed into sub-understanding pieces. I don't see
> it as addressing
> grounding, unless you believe that understanding can only
> come from the
> outside world, and must become part of the system as atomic
> pieces of
> understanding. I don't see any reason to think that,
> but proving it is
> another matter -- proving negatives is always difficult.

The argument is only implicitly about the nature of understanding. It
is explicit about the agent of understanding. It says that something
that moves symbols around according to predetermined rules - if that's
all it's doing - has no understanding. Implicitly, the assumption is
that understanding must be grounded in experience, and a computer
cannot be said to be experiencing anything.

It really helps here to understand what a computer is doing when it
executes code, and the Chinese Room is an analogy to that which makes a
computer's operation expressible in terms of human experience -
specifically, the experience of incomprehensible symbols like Chinese
ideograms. All a computer really does is apply rules determined in
advance to manipulate patterns of 1's and 0's. No comprehension is
necessary, and invoking that at any time is a mistake.

Fortunately, that does not rule out embodied AI designs in which the
agent is simulated. The processor still has no understanding - it just
facilitates the simulation.

> As to philosophy, I tend to think of its relationship
> to AI as somewhat
> the same as alchemy's relationship to chemistry. That
> is, it's one of
> the origins of the field, and has some valid ideas, but it
> lacks the
> hard science and engineering needed to get things actually
> working. This
> is admittedly perhaps a naive view, and reflects the
> traditional

Re: [agi] Groundless reasoning --> Chinese Room

2008-08-06 Thread Mark Waser

So you are arguing that a computer program can not be defined solely
in terms of computational processes over formally defined elements?


No, I said nothing of the sort.  I said that Searle said (and I agree) that 
a computer program that *only* manipulated formally defined elements without 
intention or altering itself could not reach strong AI.



Computers could react to and interact with input back in the day when
Searle wrote his book.


Yes.  But the Chinese Room does *not* alter itself in response to input or 
add to its knowledge.



A computer program is a computational process over formally defined
elements even if it is able to build complex and sensitive structures of
knowledge about its IO data environment through its interactions with
it.


Yes.  This is why I believe that a computer program can achieve strong AI.


This is a subtle argument that cannot be dismissed with an appeal
to a hidden presumption of the human dominion over understanding or by
fixing it to some primitive theory about AI which was unable to learn
through trial and error.


I was not dismissing the argument and certainly not making a presumption of 
human dominion over understanding.  Quite the opposite in fact.  I'm not 
quite sure why you believe that I did.  Could you tell me which of my 
phrases caused you to believe that I did?


   Mark

- Original Message - 
From: "Jim Bromer" <[EMAIL PROTECTED]>

To: 
Sent: Wednesday, August 06, 2008 7:32 PM
Subject: Re: [agi] Groundless reasoning --> Chinese Room



On Wed, Aug 6, 2008 at 6:11 PM, Mark Waser <[EMAIL PROTECTED]> wrote:

This has been a great thread!

Actually, if you read Searle's original paper, I think that you will find
that he... is *not* meaning to argue against the
possibility of strong AI (since he makes repeated references to humans as
machines) but merely against the possibility of strong AI in machines where
"the operation of the machine is defined solely in terms of computational
processes over formally defined elements" (which was the current state of
the art in AI when he was arguing against it -- unlike today, where there are
a number of systems which don't require axiomatic reasoning over formally
defined elements).  There's also the trick that the Chinese Room is
assumed/programmed to be 100% omniscient/correct in its required domain.


So you are arguing that a computer program can not be defined solely
in terms of computational processes over formally defined elements?
Computers could react to and interact with input back in the day when
Searle wrote his book.
A computer program is a computational process over formally defined
elements even if it is able to build complex and sensitive structures of
knowledge about its IO data environment through its interactions with
it.  This is a subtle argument that cannot be dismissed with an appeal
to a hidden presumption of the human dominion over understanding or by
fixing it to some primitive theory about AI which was unable to learn
through trial and error.
Jim Bromer










Re: [agi] Groundless reasoning --> Chinese Room

2008-08-07 Thread Mark Waser

Argh!  Are you all making the mistake I think you are making? Searle is
using a technical term in philosophy--"intentionality".  It is different
from the common use of intending as in aiming to do something or intention
as a goal.  (Here's a wiki http://en.wikipedia.org/wiki/Intentionality).


Yes and no.  Merely having goals does not require intentionality; however, 
fulfilling goals most effectively *does* (I believe) require 
intentionality.  Without intentionality (which in the philosophical sense is 
basically the same as grounding), you can only fulfill your goals by 
accident -- OR -- by someone else's design/intentionality.  A true AGI must 
have its own intentionality (and groundedness to succeed in its 
intentionality without someone else's intentionality taking over).  Searle's 
Chinese Room does not have a goal, much less groundedness or 
intentionality -- but I believe that we can program a machine so that it has 
all three (after all, as Searle says, aren't we all just biological 
machines?).



The Chinese room argument is pretty simple, and it doesn't really try to
do too much.  It's really just all about how you can manipulate symbols,
but you might not get any real meaning because the symbols aren't really
referring to anything.  Searle also says it's trivially true that machines
can possibly understand things because we do and we're machines.  It's
just formal systems that have this problem.


Just copying the above for everyone else again.  I agree completely.

   Mark


- Original Message - 
From: <[EMAIL PROTECTED]>

To: 
Sent: Thursday, August 07, 2008 2:57 AM
Subject: Re: [agi] Groundless reasoning --> Chinese Room



Argh!  Are you all making the mistake I think you are making? Searle is
using a technical term in philosophy--"intentionality".  It is different
from the common use of intending as in aiming to do something or intention
as a goal.  (Here's the wiki http://en.wikipedia.org/wiki/Intentionality).
The sense that Searle is using is roughly how things like words (but it
could just be finger pointing) can refer to other things.  I see the wiki
uses the word "aboutness".

I have to admit I'm pretty influenced by Searle.  I've listened to his
lectures on philosophy of mind from the Teaching Company.  He actually
came to U of M and gave a lecture in the Star Wars Senate room where we
had AGI-08.  This was during the semester when the cognitive science
seminar there was about the symbol grounding problem.  I didn't go to the
seminar much, so I didn't see what they came up with.

The Chinese room argument is pretty simple, and it doesn't really try to
do too much.  It's really just all about how you can manipulate symbols,
but you might not get any real meaning because the symbols aren't really
referring to anything.  Searle also says it's trivially true that machines
can possibly understand things because we do and we're machines.  It's
just formal systems that have this problem.
andi


Mark Waser wrote:

 The critical point that most people miss -- and what is really
 important for this list (and why people shouldn't blindly dismiss
 Searle) is that it is *intentionality* that defines "understanding".
 If a system has goals/intentions and its actions are modified by the
 external world (i.e. it is grounded), then the extent to which
 its actions are *effectively* modified (as judged in relation to
 its intentions) is the extent to which it "understands".  The most
 important feature of an AGI is that it has goals and that it modifies
 its behavior (and learns) in order to reach them.  The Chinese Room
 is incapable of these behaviors since it has no desires.


Harry Chesley replied:

I think this is an excellent point, so long as you're careful to define
"intention" simply in terms of goals that the system is attempting to
satisfy/maximize, and not in terms of conscious desires. As you point
out, the former provides a context in which to define understanding and
to measure it. The latter leads off into further undefined terms and
concepts -- I mention this rather than just agreeing outright mainly
because of your use of the word "desire" in the last sentence, which
/could/ be interpreted anthropomorphically.







Re: [agi] Groundless reasoning --> Chinese Room

2008-08-07 Thread Mark Waser

Hi Jim,

You seem to think that I disagree with Searle.  I do not.  Sorry for the 
lack of clarity.



Your point was that Searle was thinking of a program that only used
formal symbols.  (That's OK).  However, you then went on as if a
formal symbol system had to be closed and logically sound. (Not true.)


I think that we're getting into a definitional problem of a "formal symbol 
system" -- but that's ok because that's sort of a side-track.  The main 
point is whether or not the system is closed and logically sound/consistent.



I am trying to explain that
there is something more to be learned.


I agree.


The apparent paradox can be
reduced to the never ending deterministic vs free will argument.


Again, I agree, but I don't believe that determinism vs. free will is really 
a paradox (heresy!).  You are deterministic because you will do what your 
nature and history incline you to do.  You have free will because you will 
do what (your nature makes) you wish to do.


   Mark

- Original Message - 
From: "Jim Bromer" <[EMAIL PROTECTED]>

To: 
Sent: Thursday, August 07, 2008 8:55 AM
Subject: Re: [agi] Groundless reasoning --> Chinese Room



On Wed, Aug 6, 2008 at 8:33 PM, Mark Waser <[EMAIL PROTECTED]> wrote:


I was not dismissing the argument and certainly not making a presumption of
human dominion over understanding.  Quite the opposite in fact.  I'm not
quite sure why you believe that I did.  Could you tell me which of my
phrases caused you to believe that I did?


Well, a precise analysis of which of your phrases caused me to
believe that you were dismissing the argument isn't likely to be very
useful because it can quickly become a discussion of diminishing
returns.

Your point was that Searle was thinking of a program that only used
formal symbols.  (That's OK).  However, you then went on as if a
formal symbol system had to be closed and logically sound. (Not true.)

To correct myself, my point is that Searle's thought experiment cannot
be dismissed with a hidden presumption that machines cannot
'understand' (a system of symbol manipulation cannot 'understand') but
it also cannot be dismissed with the declaration that Searle was only
thinking of a closed, logically-sound system of symbolic reference.
Searle's thoughts on the subject can provide some insight, but the
thought experiment cannot be dismissed easily regardless of Searle's
intention.


So you are arguing that a computer program can not be defined solely
in terms of computational processes over formally defined elements?


No, I said nothing of the sort.  I said that Searle said (and I agree) that
a computer program that *only* manipulated formally defined elements without
intention or altering itself could not reach strong AI.


But that is where you dismiss the essence of the argument.  An
acceptance of the apparent paradox: a machine can only be programmed
to react, yet a machine that was able to learn by interacting with the
IO data environment and which could exhibit effective use of that
knowledge could be said to 'understand' its IO data environment to
some degree.

I mostly agree with your point of view, and I am not actually saying
that your technical statements are wrong.  I am trying to explain that
there is something more to be learned.  The apparent paradox can be
reduced to the never ending deterministic vs free will argument.  I
think the resolution of these two paradoxical problems is a necessary
design principle.
Jim Bromer




Re: [agi] Groundless reasoning --> Chinese Room

2008-08-07 Thread Mark Waser

I am having trouble expressing myself precisely for some reason.  You
can reduce Searle's Chinese Room problem in a number of ways.  One way
is to frame the question of whether machines can 'understand' in terms of
the deterministic vs. free will issue.


Mmm.  I'm not sure that I understand what you're getting at.  Clearly the 
root of the problem is exactly what the word "understanding" means.  I would 
argue that machines can "understand" just as much as humans can understand; 
however, that is primarily because I believe that humans actually 
"understand" less than many people believe.



But, I feel that once these kinds of problems are resolved, they have
implications about conducting controlled experiments on learning
methods.


Could you clarify what you mean by "these kinds of problems"?  I think I'm 
not getting what you're intending.



It could turn out that learning is a complex adaptive system in the
SLI sense, or that it has some aspects of that kind of complexity,


I believe that learning is a complex adaptive system but that the range and 
degree of variability can be controlled to be well short of problematic.



since we are talking about AGI programs that are capable of global
access of memory, and which can be (and must be) designed to track
some history of their conceptual development, it is unlikely that the
emergence of 'understanding' is going to be totally inexplicable using
reductionist methods.


I don't see "understanding" as being emergent unless you consider grounding 
emergent.  They are pretty close to one and the same.


- Original Message - 
From: "Jim Bromer" <[EMAIL PROTECTED]>

To: 
Sent: Thursday, August 07, 2008 11:14 AM
Subject: Re: [agi] Groundless reasoning --> Chinese Room



On Thu, Aug 7, 2008 at 9:42 AM, Mark Waser <[EMAIL PROTECTED]> wrote:

Hi Jim,



The apparent paradox can be
reduced to the never ending deterministic vs free will argument.


Again, I agree, but I don't believe that determinism vs. free will is really
a paradox (heresy!).  You are deterministic because you will do what your
nature and history incline you to do.  You have free will because you will
do what (your nature makes) you wish to do.

  Mark


I am having trouble expressing myself precisely for some reason.  You
can reduce Searle's Chinese Room problem in a number of ways.  One way
is to frame the question of whether machines can 'understand' in terms of
the deterministic vs. free will issue.

But, I feel that once these kinds of problems are resolved, they have
implications about conducting controlled experiments on learning
methods.  Because we cannot be sure if a learning method is viably
extensible, we have to discover some fundamental elements of general
learning which can be used in constructions that can be gradually
made more complicated.  It's my opinion that it is feasible to express the
resolution of these apparent paradoxes in concrete programmatic terms.
(That is, they can be made in terms that are closer to programmatic
terms than, for example, some vague references to emergence).

It could turn out that learning is a complex adaptive system in the
SLI sense, or that it has some aspects of that kind of complexity, but
since we are talking about AGI programs that are capable of global
access of memory, and which can be (and must be) designed to track
some history of their conceptual development, it is unlikely that the
emergence of 'understanding' is going to be totally inexplicable using
reductionist methods.  The key here, though, is that the elements of
thought refer to the program and not some closed set of high-level
thoughts.

But even if it turns out that these elements of learning are not
viable in an extensible model, the effort is still worthwhile.

Even though extensible complexity (in the general sense) must be an
important area of study, no one else is even talking about it.
Everyone knows it's a problem, but everyone thinks their particular
theory has already solved the problem.  I say it should be the focus
of study and experiment.

Jim Bromer




Re: [agi] Groundless reasoning --> Chinese Room

2008-08-07 Thread Mark Waser
>> Please Answer: Now how can we really say how this is different from human 
>> understanding?

>> I receive a question, I rack my brain for stored facts, if relevant, and 
>> any experiences I have had if relevant, and respond, either with words or an 
>> action.

The difference comes about when presented with a novel situation.  The Chinese 
Room may be able to handle a *very* closely related situation, yet a single 
small difference may throw it (like a single misspelled word in a book-sized 
block of text -- please don't harass me about Chinese and pictographs :-).  A 
human being will not be thrown by minor differences since they "understand" 
the space around their known solutions as well as the exact solutions.

  - Original Message - 
  From: James Ratcliff 
  To: agi@v2.listbox.com 
  Sent: Thursday, August 07, 2008 2:13 PM
  Subject: Re: [agi] Groundless reasoning --> Chinese Room


Back on the problem of "understanding"

more below

___
James Ratcliff - http://falazar.com
Looking for something...

--- On Wed, 8/6/08, Terren Suydam <[EMAIL PROTECTED]> wrote:

  From: Terren Suydam <[EMAIL PROTECTED]>
  Subject: Re: [agi] Groundless reasoning --> Chinese Room
  To: agi@v2.listbox.com
  Date: Wednesday, August 6, 2008, 1:50 PM


Abram,

I think a simulated, grounded, embodied approach is the one exception to the 
otherwise correct Chinese Room (CR) argument. It is the keyhole through which 
we must pass to achieve strong AI. The Novamente example I gave may qualify as 
such an exception (although the hybrid nature of grounded and ungrounded 
knowledge used in the design is a question mark for me), and does not 
invalidate the arguments against ungrounded approaches. The CR argument works 
for ungrounded approaches, because without grounding, the symbols to be 
manipulated have no meaning, except within an external context that is totally 
independent of and inaccessible to the processing engine.

--> Meaning and understanding here I don't believe are just a true/false 
value. In this instance the Agent WOULD have some level of meaning known; if 
given a database of facts about cats, it would be able to answer some 
questions about cats, and would understand cats to a certain extent.

I believe for this to be further constructive, you have to show either 1) how 
an ungrounded symbolic approach does not apply to the CR argument, or 2) why, 
specifically, the argument fails to show that ungrounded approaches cannot 
achieve comprehension.

Unfortunately, I have to take a break from the list (why are people 
cheering??).  I will answer any further posts addressed to me in due time, but 
I have other commitments for the time being.

Terren

---

James Reply:

1. Given a Chinese Room vs. an AI in a box, the agent replying to the Chinese 
questions has no "understanding" of Chinese.  To all intents and purposes it 
is replying in a coherent way to all questions, and by the Turing test it 
cannot be distinguished in its behavior from a human.  That meets my burden 
of being an AGI, if it always replies in a reasonable manner.  Whether it 
understands anything or not seems to be a totally different question.

2. Understanding, using any of the definitions, seems to be judgeable on a 
scale, emphasis on judgeable, in that there is no measure of understanding 
that can be done in a vacuum.  So to say "does the AGI understand" is 
nonsensical without that context.  In school, we determine understanding by 
testing, asking questions, and performing tasks.  So an AGI, it would seem, 
would need to be handled in a similar fashion.  An un-grounded AGI without a 
body, when quizzed about certain items, would show a certain level of 
understanding depending on the depth and correctness of its knowledge bases 
and routines.  Is it truly "understanding" the concept any further than 
reading it and answering the question?  A grounded AGI may perform better 
because it is able to interact and gather more and better details about the 
topics.  But in the end the grounded AGI simply has a larger lookup database 
of experiences it can use.  When handed a question on a sheet of paper, it 
looks it up in the larger DB.  An embodied robot AGI would have the added 
ability of interacting physically with the objects; therefore when handed a 
cup, it could look up what to do with it, and "understand" that it could 
fill it up with a liquid and follow a plan for that.  In this sense it would 
be able to "prove" to an outsider that it understood what a cup was.

Please Answer: Now how can we really say how this is different from human 
understanding?  I receive a question, I rack my brain for stored facts, if 
relevant, and any experiences I have had if relevant, and respond, either 
with words or an action.




Re: [agi] Groundless reasoning --> Chinese Room

2008-08-07 Thread Mark Waser
:-)  The Chinese Room can't pass the Turing test for exactly the reason you 
mention.

>> Well in the Chinese Room case I think the "book of instructions" is 
>> infinitely large to handle all cases, so things like misspellings and stuff 
>> would be included and I dont think that was meant to be a difference.

:-)  Arguments that involve infinities are always problematic.  Personally, 
I think the intention was that you should accept the more reasonable and 
smaller infinity of just the correct cases as being more in line with what 
Searle intended.  This is obviously just speculation, however, and YMMV.

>> With the chinese room, we arent doing any reasoning really, just looking up 
>> answers according to instructions but given that, how do we determine 
>> "understanding"?

This was Searle's entire point.  Mindless lookup is *not* understanding.  It 
may *look* like understanding from the outside (and if you have an infinitely 
large book that also has mis-spellings, etc. -- you'll never be able to prove 
otherwise), but it is merely manipulation, not intention.

  - Original Message - 
  From: James Ratcliff 
  To: agi@v2.listbox.com 
  Sent: Thursday, August 07, 2008 3:10 PM
  Subject: Re: [agi] Groundless reasoning --> Chinese Room


> No, I said nothing of the sort.  I said that Searle said (and I agree) that a
> computer program that *only* manipulated formally defined elements without
> intention or altering itself could not reach strong AI.

Is this part of the Chinese Room?  I looked and couldn't find that restriction.  
It would seem that to pass the Turing test, it would at least need to be able 
to add to its data; otherwise something as simple as the below would fail the 
Turing Test.

Q: My name is James.
AI: OK
Q: What is my name?
AI:  *don't know, didn't store it, or something like that*

I read that the CR agent receives the input, looks up in a rulebook 
what to do, does it, 
and returns the output, correct?
It seems that there is room for any action such as changing the 
rulebook in the middle of the process, maybe to add a synonym for a chinese 
word say.

James

___
    James Ratcliff - http://falazar.com
Looking for something...


  From: Mark Waser <[EMAIL PROTECTED]>

> So you are arguing that a computer program can not be defined solely
> in terms of computational processes over formally defined elements?

No, I said nothing of the sort.  I said that Searle said (and I agree) that a 
computer program that *only* manipulated formally defined elements without 
intention or altering itself could not reach strong AI.

> Computers could react to and interact with input back in the day when
> Searle wrote his book.

Yes.  But the Chinese Room does *not* alter itself in response to input or 
add to its knowledge.

> A computer program is a computational process over formally defined
> elements even if it is able to build complex and sensitive structures of
> knowledge about its IO data environment through its interactions with it.

Yes.  This is why I believe that a computer program can achieve strong AI.

> This is a subtle argument that cannot be dismissed with an appeal
> to a hidden presumption of the human dominion over understanding or by
> fixing it to some primitive theory about AI which was unable to learn
> through trial and error.

I was not dismissing the argument and certainly not making a presumption of 
human dominion over understanding.  Quite the opposite in fact.  I'm not quite 
sure why you believe that I did.  Could you tell me which of my phrases caused 
you to believe that I did?

Mark

- Original Message - 
From: "Jim Bromer" <[EMAIL PROTECTED]>
To: 
Sent: Wednesday, August 06, 2008 7:32 PM
Subject: Re: [agi] Groundless reasoning --> Chinese Room

> On Wed, Aug 6, 2008 at 6:11 PM, Mark Waser <[EMAIL PROTECTED]> wrote:
>> This has been a great thread!
>>
>> Actually, if you read Searle's original paper, I think that you will find
>> that he... is *not* meaning to argue against the
>> possibility of strong AI (since he makes repeated references to humans as
>> machines) but merely against the possibility of strong AI in machines where
>> "the operation of the machine is defined solely in terms of computational
>> processes over formally defined elements" (which was the current state of
>> the art in AI when he was arguing against it -- unlike today where there are
>> a number of systems which don't require axiomatic reasoning over formally
>> defined elements).  There's also the trick that the Chinese Room is

Re: [agi] Groundless reasoning --> Chinese Room

2008-08-07 Thread Mark Waser
>> That's my thought, but at what point are the manipulation responses just as 
>> good as "human" responses?  If they are both in a black box, to the 
>> outside observer they are identical.  Then they both could be said to have 
>> an equal "understanding" about what they are doing, given that the grading 
>> is always done from an outside source.

Yes -- but the whole problem here is caused by the "infinitely large" 
instruction book assumption.  Once you drop that assumption (an infinitely 
large book is impossible anyway) and fall back to merely extremely large, 
which is reasonable, you also get the distinction that you need.


  - Original Message - 
  From: James Ratcliff 
  To: agi@v2.listbox.com 
  Sent: Thursday, August 07, 2008 4:27 PM
  Subject: Re: [agi] Groundless reasoning --> Chinese Room


>> With the chinese room, we arent doing any reasoning really, just 
looking up answers according to instructions but given that, how do we 
determine "understanding"?

>This was Searle's entire point.  Mindless lookup is *not* 
understanding.  It may *look* like >understanding from the outside (and if you 
have an infinitely large book that also has >mis-spellings, etc. -- you'll 
never be able to prove otherwise), but it is merely manipulation, >not 
intention.

		That's my thought, but at what point are the manipulation responses just 
as good as "human" responses?  If they are both in a black box, to the 
outside observer they are identical.  Then they both could be said to have an 
equal "understanding" about what they are doing, given that the grading is 
always done from an outside source.



Re: [agi] Groundless reasoning --> Chinese Room

2008-08-08 Thread Mark Waser
>> The person believes his decisions are now guided by free will, but truly they 
>> are still guided by the book: if the book gives him the wrong meaning of a 
>> word, he will make a mistake when answering a Chinese speaker

The translations are guided by the book, but his answers certainly are not.  He 
can make a mistranslation, but that is a mechanical, non-understanding act 
performed on top of the original act of deciding upon his answer.

>> The main difference in this second context is that the contents of the book 
>> were transferred to the brain of the person

No.  The main difference is that the person can choose what to answer (as 
opposed to the Chinese Room where responses are dictated by the input and no 
choice is involved).

  - Original Message - 
  From: Valentina Poletti 
  To: agi@v2.listbox.com 
  Sent: Friday, August 08, 2008 6:18 AM
  Subject: Re: [agi] Groundless reasoning --> Chinese Room


  Let me ask about a special case of this argument.

  Suppose now the book that the guy in the room holds is a Chinese-teaching 
book for English speakers. The guy can read it for as long as he wishes, and 
can consult it in order to give the answers to the chinese speakers interacting 
with him.

  In this situation, although the setting has not changed much physically 
speaking, the guy can be said to use his free will rather than a controlled 
approach to answer questions. But is that true? The amount of information 
exchanged is the same. The energy used is the same. The person believes his 
decisions are now guided by free will, but truly they are still guided by the 
book: if the book gives him the wrong meaning of a word, he will make a mistake 
when answering a Chinese speaker. So his free will is just an illusion.

  The main difference in this second context is that the contents of the book 
were transferred to the brain of the person, and these contents were compressed 
(rather than consulting each case for what to do, he has been taught general 
rules on what to do). Has the person acquired understanding of Chinese from the 
book? No, he has acquired information from the book. Information alone is not 
enough for understanding to exist. There must be energy processing it.

  By this definition a machine can understand.





Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-27 Thread Mark Waser
>> All rational goal-seeking agents must have a mental state of maximum utility 
>> where any thought or perception would be unpleasant because it would result 
>> in a different state.

I'd love to see you attempt to prove the above statement.

What if there are several states with utility equal to or very close to the 
maximum?  What if the utility of a state decreases the longer you are in it 
(something that is *very* true of human beings)?  What if uniqueness raises the 
utility of any new state sufficiently that there will always be states better 
than the current one (since experiencing uniqueness normally improves fitness 
through learning, etc.)?
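One toy way to make the second objection concrete (this model is an 
illustration only; neither poster proposed it): give each state s a base 
utility b(s) and subtract a habituation penalty proportional to the time 
\tau_t(s) already spent in s,

    U(s, t) = b(s) - \lambda \, \tau_t(s), \qquad \lambda > 0 .

Then for any two states with finite base utilities, once 
\tau_t(s) > (b(s) - b(s'))/\lambda the agent strictly prefers switching to 
s', so even a perfectly rational maximizer of U has no single absorbing 
"state of maximum utility."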

  - Original Message - 
  From: Matt Mahoney 
  To: agi@v2.listbox.com 
  Sent: Wednesday, August 27, 2008 10:52 AM
  Subject: AGI goals (was Re: Information theoretic approaches to AGI (was Re: 
[agi] The Necessity of Embodiment))


  An AGI will not design its goals. It is up to humans to define the goals of 
an AGI, so that it will do what we want it to do.

  Unfortunately, this is a problem. We may or may not be successful in 
programming the goals of AGI to satisfy human goals. If we are not successful, 
then AGI will be useless at best and dangerous at worst. If we are successful, 
then we are doomed because human goals evolved in a primitive environment to 
maximize reproductive success and not in an environment where advanced 
technology can give us whatever we want. AGI will allow us to connect our 
brains to simulated worlds with magic genies, or worse, allow us to directly 
reprogram our brains to alter our memories, goals, and thought processes. All 
rational goal-seeking agents must have a mental state of maximum utility where 
any thought or perception would be unpleasant because it would result in a 
different state.


  -- Matt Mahoney, [EMAIL PROTECTED]




  - Original Message 
  From: Valentina Poletti <[EMAIL PROTECTED]>
  To: agi@v2.listbox.com
  Sent: Tuesday, August 26, 2008 11:34:56 AM
  Subject: Re: Information theoretic approaches to AGI (was Re: [agi] The 
Necessity of Embodiment)


  Thanks very much for the info. I found those articles very interesting. 
Actually though this is not quite what I had in mind with the term 
information-theoretic approach. I wasn't very specific, my bad. What I am 
looking for is a a theory behind the actual R itself. These approaches 
(correnct me if I'm wrong) give an r-function for granted and work from that. 
In real life that is not the case though. What I'm looking for is how the AGI 
will create that function. Because the AGI is created by humans, some sort of 
direction will be given by the humans creating them. What kind of direction, in 
mathematical terms, is my question. In other words I'm looking for a way to 
mathematically define how the AGI will mathematically define its goals.

  Valentina

   
  On 8/23/08, Matt Mahoney <[EMAIL PROTECTED]> wrote: 
Valentina Poletti <[EMAIL PROTECTED]> wrote:
> I was wondering why no-one had brought up the information-theoretic 
aspect of this yet.

It has been studied. For example, Hutter proved that the optimal strategy 
of a rational goal seeking agent in an unknown computable environment is AIXI: 
to guess that the environment is simulated by the shortest program consistent 
with observation so far [1]. Legg and Hutter also propose as a measure of 
universal intelligence the expected reward over a Solomonoff distribution of 
environments [2].

These have profound impacts on AGI design. First, AIXI is (provably) not 
computable, which means there is no easy shortcut to AGI. Second, universal 
intelligence is not computable because it requires testing in an infinite 
number of environments. Since there is no other well accepted test of 
intelligence above human level, it casts doubt on the main premise of the 
singularity: that if humans can create agents with greater than human 
intelligence, then so can they.

Prediction is central to intelligence, as I argue in [3]. Legg proved in 
[4] that there is no elegant theory of prediction. Predicting all environments 
up to a given level of Kolmogorov complexity requires a predictor with at least 
the same level of complexity. Furthermore, above a small level of complexity, 
such predictors cannot be proven because of Godel incompleteness. Prediction 
must therefore be an experimental science.

There is currently no software or mathematical model of non-evolutionary 
recursive self improvement, even for very restricted or simple definitions of 
intelligence. Without a model you don't have friendly AI; you have accelerated 
evolution with AIs competing for resources.

References

1. Hutter, Marcus (2003), "A Gentle Introduction to The Universal 
Algorithmic Agent {AIXI}",
in Artificial General Intelligence, B. Goertzel and C. Pennachin eds., 
Springer. http://www.idsia.ch/~marcus/ai/aixigentle.htm

2. Legg, Shane, and Marcus Hutt

Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-27 Thread Mark Waser
>> It is up to humans to define the goals of an AGI, so that it will do what we 
>> want it to do.

Why must we define the goals of an AGI?  What would be wrong with setting it 
off with strong incentives to be helpful, even stronger incentives to not be 
harmful, and letting it chart its own course based upon the vagaries of the world? 
 Let its only hard-coded goal be to keep its satisfaction above a certain 
level, with helpful actions increasing satisfaction, harmful actions heavily 
decreasing satisfaction, learning increasing satisfaction, and satisfaction 
naturally decaying over time so as to promote action . . . .

Seems to me that humans are pretty much coded that way (with evolution's 
additional incentives of self-defense and procreation).  The real trick of the 
matter is defining helpful and harmful clearly but everyone is still mired five 
steps before that.
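As a sketch only (the threshold, decay rate, and effect sizes below are 
invented, and this is one reading of the proposal rather than a specification 
of it), the satisfaction-driven loop described above is only a few lines:

import random

# Invented constants; nothing here comes from the original post.
THRESHOLD = 0.5     # act whenever satisfaction falls below this level
DECAY     = 0.05    # satisfaction decays each tick, which promotes action
EFFECTS   = {       # how each kind of action moves satisfaction
    "help":  +0.3,
    "learn": +0.2,
    "harm":  -1.0,  # heavily penalized
    "idle":   0.0,
}

def step(satisfaction, choose_action):
    """One tick of the drive loop: decay, then act if below threshold."""
    satisfaction -= DECAY
    if satisfaction < THRESHOLD:
        action = choose_action()                 # the AGI's own policy
        satisfaction += EFFECTS.get(action, 0.0)
    return max(0.0, min(1.0, satisfaction))      # keep it in [0, 1]

if __name__ == "__main__":
    s = 1.0
    for _ in range(20):
        s = step(s, lambda: random.choice(["help", "learn", "idle"]))
    print("final satisfaction:", round(s, 2))

The decay term is what does the work: a satisfied agent left alone eventually 
drops below threshold and has to go do something helpful or educational again.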



  - Original Message - 
  From: Matt Mahoney 
  To: agi@v2.listbox.com 
  Sent: Wednesday, August 27, 2008 10:52 AM
  Subject: AGI goals (was Re: Information theoretic approaches to AGI (was Re: 
[agi] The Necessity of Embodiment))


  An AGI will not design its goals. It is up to humans to define the goals of 
an AGI, so that it will do what we want it to do.

  Unfortunately, this is a problem. We may or may not be successful in 
programming the goals of AGI to satisfy human goals. If we are not successful, 
then AGI will be useless at best and dangerous at worst. If we are successful, 
then we are doomed because human goals evolved in a primitive environment to 
maximize reproductive success and not in an environment where advanced 
technology can give us whatever we want. AGI will allow us to connect our 
brains to simulated worlds with magic genies, or worse, allow us to directly 
reprogram our brains to alter our memories, goals, and thought processes. All 
rational goal-seeking agents must have a mental state of maximum utility where 
any thought or perception would be unpleasant because it would result in a 
different state.


  -- Matt Mahoney, [EMAIL PROTECTED]





Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-27 Thread Mark Waser

But, how does your description not correspond to giving the AGI the
goals of being helpful and not harmful? In other words, what more does
it do than simply try for these? Does it pick goals randomly such that
they conflict only minimally with these?


Actually, my description gave the AGI four goals: be helpful, don't be 
harmful, learn, and keep moving.


Learn, all by itself, is going to generate an infinite number of subgoals. 
Learning subgoals will be picked based upon what is most likely to learn the 
most while not being harmful.


(and, by the way, be helpful and learn should both generate a 
self-protection sub-goal  in short order with procreation following 
immediately behind)


Arguably, "be helpful" would generate all three of the other goals, but 
learning and not being harmful without being helpful is a *much* better 
goal-set for a novice AI to prevent "accidents" when the AI thinks it is 
being helpful.  In fact, I've been tempted at times to entirely drop "be 
helpful" since the other two will eventually generate it with a lessened 
probability of trying-to-be-helpful accidents.


"Don't be harmful" by itself will just turn the AI off.

The trick is that there needs to be a balance between goals.  Any single 
goal intelligence is likely to be lethal even if that goal is to help 
humanity.


Learn, do no harm, help.  Can anyone come up with a better set of goals? 
(and, once again, note that learn does *not* override the other two -- there 
is meant to be a balance between the three).
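A toy illustration of the balance point (the weights and candidate actions are 
invented; this is not a proposal from the thread): score each candidate action 
against all three goals at once, so an action that maximizes one goal while 
wrecking another never wins.

# Invented weights and action scores, purely to illustrate "balance":
# each action is rated on (help, no_harm, learn) and combined with a
# weighted sum, so no single goal dominates the choice.
WEIGHTS = {"help": 1.0, "no_harm": 2.0, "learn": 1.0}

ACTIONS = {
    # action:            help, no_harm, learn   (all on a 0..1 scale)
    "do_nothing":        (0.0, 1.0, 0.0),
    "risky_experiment":  (0.2, 0.1, 1.0),   # learns a lot, likely harmful
    "careful_study":     (0.3, 0.9, 0.7),
    "reckless_help":     (1.0, 0.2, 0.1),   # "helpful" accident waiting to happen
}

def score(values):
    return sum(WEIGHTS[g] * v for g, v in zip(("help", "no_harm", "learn"), values))

best = max(ACTIONS, key=lambda a: score(ACTIONS[a]))
print(best)  # "careful_study" wins under the balanced weighting

Drop the no_harm weight to zero and risky_experiment starts winning; keep only 
no_harm and do_nothing wins forever, which is the "turn the AI off" failure 
mentioned above.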


- Original Message - 
From: "Abram Demski" <[EMAIL PROTECTED]>

To: 
Sent: Wednesday, August 27, 2008 11:52 AM
Subject: **SPAM** Re: AGI goals (was Re: Information theoretic approaches to 
AGI (was Re: [agi] The Necessity of Embodiment))




Mark,

I agree that we are mired 5 steps before that; after all, AGI is not
"solved" yet, and it is awfully hard to design prefab concepts in a
knowledge representation we know nothing about!

But, how does your description not correspond to giving the AGI the
goals of being helpful and not harmful? In other words, what more does
it do than simply try for these? Does it pick goals randomly such that
they conflict only minimally with these?

--Abram


AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-27 Thread Mark Waser

Vlad,

   This is called a strawman argument.  It is where you make a ridiculous 
claim about what I meant and then proceed to shoot it down.  Eliezer has 
done it for years and has single-handedly been responsible for an incredible 
number of people simply giving up in disgust.


   I said nothing and assume nothing about implementation.  The fact that 
you're jumping to implementation at this stage is just plain incorrect. 
Maybe you should analyze exactly why you have such a need to prove people 
wrong that you have to put words into their mouths and ideas into their 
heads in order to be able to do so.



- Original Message - 
From: "Vladimir Nesov" <[EMAIL PROTECTED]>

To: 
Sent: Wednesday, August 27, 2008 1:31 PM
Subject: **SPAM** Re: AGI goals (was Re: Information theoretic approaches to 
AGI (was Re: [agi] The Necessity of Embodiment))







And AGI will just read the command, "help", 'h'-'e'-'l'-'p', and will
know exactly what to do, and will be convinced to do it. To implement
this "simple" goal, you need to somehow communicate its functional
structure to the AGI; this won't just magically happen. Don't talk
about AGI as if it was a human, think about how exactly to implement
what you want. Today's rant on Overcoming Bias applies fully to such
suggestions ( http://www.overcomingbias.com/2008/08/dreams-of-ai-de.html
).


--
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-27 Thread Mark Waser

Hi,

   A number of problems unfortunately . . . .


-Learning is pleasurable.


. . . . for humans.  We can choose whether to make it so for machines or 
not.  Doing so would be equivalent to setting a goal of learning.



-Other things may be pleasurable depending on what we initially want
the AI to enjoy doing.


   See . . . all you've done here is pushed goal-setting to 
pleasure-setting . . . .


= = = = =

   Further, if you judge goodness by pleasure, you'll probably create an 
AGI whose shortest path-to-goal is to wirehead the universe (which I 
consider to be a seriously suboptimal situation - YMMV).





- Original Message - 
From: "Abram Demski" <[EMAIL PROTECTED]>

To: 
Sent: Wednesday, August 27, 2008 2:25 PM
Subject: **SPAM** Re: AGI goals (was Re: Information theoretic approaches to 
AGI (was Re: [agi] The Necessity of Embodiment))




Mark,

OK, I take up the challenge. Here is a different set of goal-axioms:

-"Good" is a property of some entities.
-Maximize good in the world.
-A more-good entity is usually more likely to cause goodness than a
less-good entity.
-A more-good entity is often more likely to cause pleasure than a
less-good entity.
-"Self" is the entity that causes my actions.
-An entity with properties similar to "self" is more likely to be good.

Pleasure, unlike goodness, is directly observable. It comes from many
sources. For example:
-Learning is pleasurable.
-A full battery is pleasurable (if relevant).
-Perhaps the color of human skin is pleasurable in and of itself.
(More specifically, all skin colors of any existing race.)
-Perhaps also the sound of a human voice is pleasurable.
-Other things may be pleasurable depending on what we initially want
the AI to enjoy doing.

So, the definition of "good" is highly probabilistic, and the system's
inferences about goodness will depend on its experiences; but pleasure
can be directly observed, and the pleasure-mechanisms remain fixed.


Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-27 Thread Mark Waser
>> What if the utility of the state decreases the longer that you are in it
>> (something that is *very* true of human beings)?
>
> If you are aware of the passage of time, then you are not staying in the
> same state.


I have to laugh.  So you agree that all your arguments don't apply to 
anything that is aware of the passage of time?  That makes them really 
useful, doesn't it.








Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-27 Thread Mark Waser

Hi,

   I think that I'm missing some of your points . . . .


Whatever good is, it cannot be something directly
observable, or the AI will just wirehead itself (assuming it gets
intelligent enough to do so, of course).


I don't understand this unless you mean by "directly observable" that the 
definition is observable and changeable.  If I define good as making all 
humans happy without modifying them, how would the AI wirehead itself?  What 
am I missing here?



So, the AI needs to have a concept of external goodness, with a weak
probabilistic correlation to its directly observable pleasure.


I agree with the concept of external goodness, but why does the correlation 
between external goodness and its pleasure have to be low?  Why can't 
external goodness directly cause pleasure?  Clearly, it shouldn't believe 
that its pleasure causes external goodness (that would be reversing cause 
and effect and an obvious logic error).


   Mark

P.S.  I notice that several others answered your wirehead query so I won't 
belabor the point.  :-)



- Original Message - 
From: "Abram Demski" <[EMAIL PROTECTED]>

To: 
Sent: Wednesday, August 27, 2008 3:43 PM
Subject: **SPAM** Re: AGI goals (was Re: Information theoretic approaches to 
AGI (was Re: [agi] The Necessity of Embodiment))




Mark,

The main motivation behind my setup was to avoid the wirehead
scenario. That is why I make the explicit goodness/pleasure
distinction. Whatever good is, it cannot be something directly
observable, or the AI will just wirehead itself (assuming it gets
intelligent enough to do so, of course). But, goodness cannot be
completely unobservable, or the AI will have no idea what it should
do.

So, the AI needs to have a concept of external goodness, with a weak
probabilistic correlation to its directly observable pleasure. That
way, the system will go after pleasant things, but won't be able to
fool itself with things that are maximally pleasant. For example, if
it were to consider rewiring its visual circuits to see only
skin-color, it would not like the idea, because it would know that
such a move would make it less able to maximize goodness in general.
(It would know that seeing only tan does not mean that the entire
world is made of pure goodness.) An AI that was trying to maximize
pleasure would see nothing wrong with self-stimulation of this sort.

So, I think that pushing the problem of goal-setting back to
pleasure-setting is very useful for avoiding certain types of
undesirable behavior.

By the way, where does this term "wireheading" come from? I assume
from context that it simply means self-stimulation.

-Abram Demski
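A minimal sketch of the pleasure/goodness split Abram describes (entirely 
invented: the world model, the actions, and the numbers are illustrative 
assumptions, not anything from the thread): the agent chooses by the goodness 
its own world model predicts, and treats pleasure only as evidence about that 
model, so pegging the pleasure signal to maximum is predicted to change 
nothing it actually cares about.

# Invented toy example: "goodness" lives in the agent's world model;
# "pleasure" is just a noisy sensor reading correlated with goodness.
MODEL = {
    # action:           predicted change in external goodness
    "help_human":        +1.0,
    "do_nothing":         0.0,
    "rewire_pleasure":    0.0,   # pleasure sensor pegged to max, world unchanged
}

PREDICTED_PLEASURE = {
    "help_human":        +0.6,
    "do_nothing":         0.0,
    "rewire_pleasure":   +10.0,  # feels wonderful, accomplishes nothing
}

def choose(actions):
    # A goodness-maximizer consults its model of the world, not the sensor.
    return max(actions, key=lambda a: MODEL[a])

def choose_wirehead(actions):
    # A raw pleasure-maximizer, shown for contrast, picks self-stimulation.
    return max(actions, key=lambda a: PREDICTED_PLEASURE[a])

actions = list(MODEL)
print(choose(actions))           # "help_human"
print(choose_wirehead(actions))  # "rewire_pleasure"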

On Wed, Aug 27, 2008 at 2:58 PM, Mark Waser <[EMAIL PROTECTED]> wrote:

Hi,

  A number of problems unfortunately . . . .


-Learning is pleasurable.


. . . . for humans.  We can choose whether to make it so for machines or
not.  Doing so would be equivalent to setting a goal of learning.


-Other things may be pleasurable depending on what we initially want
the AI to enjoy doing.


  See . . . all you've done here is pushed goal-setting to 
pleasure-setting

. . . .

= = = = =

  Further, if you judge goodness by pleasure, you'll probably create an 
AGI
whose shortest path-to-goal is to wirehead the universe (which I consider 
to

be a seriously suboptimal situation - YMMV).




- Original Message - From: "Abram Demski" <[EMAIL PROTECTED]>
To: 
Sent: Wednesday, August 27, 2008 2:25 PM
Subject: **SPAM** Re: AGI goals (was Re: Information theoretic approaches 
to

AGI (was Re: [agi] The Necessity of Embodiment))



Mark,

OK, I take up the challenge. Here is a different set of goal-axioms:

-"Good" is a property of some entities.
-Maximize good in the world.
-A more-good entity is usually more likely to cause goodness than a
less-good entity.
-A more-good entity is often more likely to cause pleasure than a
less-good entity.
-"Self" is the entity that causes my actions.
-An entity with properties similar to "self" is more likely to be good.

Pleasure, unlike goodness, is directly observable. It comes from many
sources. For example:
-Learning is pleasurable.
-A full battery is pleasurable (if relevant).
-Perhaps the color of human skin is pleasurable in and of itself.
(More specifically, all skin colors of any existing race.)
-Perhaps also the sound of a human voice is pleasurable.
-Other things may be pleasurable depending on what we initially want
the AI to enjoy doing.

So, the definition of "good" is highly probabilistic, and the system's
inferences about goodness will depend on its experiences; but pleasure
can be directly observed, and the pleasure-mechanisms remain fixed.

On Wed, Aug 27, 2008 at 12:32 PM, Mark Waser <[EMAIL PROTECTED]> 
wrote:


But, how does your description not correspond to giving the AGI the
goals of be

Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-28 Thread Mark Waser
No, the state of ultimate bliss that you, I, and all other rational, goal 
seeking agents seek


Your second statement copied below notwithstanding, I *don't* seek ultimate 
bliss.


You may say that is not what you want, but only because you are unaware of 
the possibilities of reprogramming your brain. It is like being opposed to 
drugs or wireheading. Once you experience it, you can't resist.


It is not what I want *NOW*.  It may be that once my brain has been altered 
by experiencing it, I may want it *THEN* but that has no relevance to what I 
want and seek now.


These statements are just sloppy reasoning . . . .


- Original Message - 
From: "Matt Mahoney" <[EMAIL PROTECTED]>

To: 
Sent: Wednesday, August 27, 2008 11:05 PM
Subject: Re: AGI goals (was Re: Information theoretic approaches to AGI (was 
Re: [agi] The Necessity of Embodiment))




Mark Waser <[EMAIL PROTECTED]> wrote:


What if the utility of the state decreases the longer that you are in it
(something that is *very* true of human

beings)?
If you are aware of the passage of time, then you are not staying in the
same state.


I have to laugh.  So you agree that all your arguments don't apply to
anything that is aware of the passage of time?  That makes them really
useful, doesn't it.


No, the state of ultimate bliss that you, I, and all other rational, goal 
seeking agents seek is a mental state in which nothing perceptible 
happens. Without thought or sensation, you would be unaware of the passage 
of time, or of anything else. If you are aware of time then you are either 
not in this state yet, or are leaving it.


You may say that is not what you want, but only because you are unaware of 
the possibilities of reprogramming your brain. It is like being opposed to 
drugs or wireheading. Once you experience it, you can't resist.


-- Matt Mahoney, [EMAIL PROTECTED]










Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-28 Thread Mark Waser

Also, I should mention that the whole construction becomes irrelevant
if we can logically describe the goal ahead of time. With the "make
humans happy" example, something like my construction would be useful
if we need the AI to *learn* what a human is and what happy is. (We
then set up the pleasure in a way that would help the AI attach
"goodness" to the right things.) If we are able to write out the
definitions ahead of time, we can directly specify what goodness is
instead. But, I think it is unrealistic to take that approach, since
the definitions would be large and difficult


:-)  I strongly disagree with you.  Why do you believe that having a new AI 
learn large and difficult definitions is going to be easier and safer than 
specifying them (assuming that the specifications can be grounded in the 
AI's terms)?


I also disagree that the definitions are going to be as large as people 
believe them to be . . . .


Let's take the Mandelbrot set as an example.  It is perfectly specified by 
one *very* small formula.  Yet, if you don't know that formula, you could 
spend many lifetimes characterizing it (particularly if you're trying to 
do it from multiple blurred and shifted images :-).
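(For concreteness: the "one *very* small formula" is z_{n+1} = z_n^2 + c, with c in the set iff the orbit of 0 stays bounded.  A minimal escape-time membership check in Python -- standard textbook material, not something from this thread:)

def in_mandelbrot(c: complex, max_iter: int = 200) -> bool:
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:      # the orbit escaped, so c is outside the set
            return False
    return True             # never escaped within the cutoff; treat as inside

print(in_mandelbrot(-1 + 0j))   # True:  orbit 0, -1, 0, -1, ... stays bounded
print(in_mandelbrot(1 + 0j))    # False: orbit 0, 1, 2, 5, 26, ... escapes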


The true problem is that humans can't (yet) agree on what goodness is -- and 
then they get lost arguing over detailed cases instead of focusing on the 
core.


Defining the core of goodness/morality and developing a system to 
determine which actions are good and which are not is a project that 
I've been working on for quite some time, and I *think* I'm making rather 
good headway.



- Original Message - 
From: "Abram Demski" <[EMAIL PROTECTED]>

To: 
Sent: Thursday, August 28, 2008 9:57 AM
Subject: **SPAM** Re: AGI goals (was Re: Information theoretic approaches to 
AGI (was Re: [agi] The Necessity of Embodiment))




Hi Mark,

I think the miscommunication is relatively simple...

On Wed, Aug 27, 2008 at 10:14 PM, Mark Waser <[EMAIL PROTECTED]> wrote:

Hi,

  I think that I'm missing some of your points . . . .


Whatever good is, it cannot be something directly
observable, or the AI will just wirehead itself (assuming it gets
intelligent enough to do so, of course).


I don't understand this unless you mean by "directly observable" that the
definition is observable and changeable.  If I define good as making all
humans happy without modifying them, how would the AI wirehead itself? 
What

am I missing here?


When I say "directly observable", I mean observable-by-sensation.
"Making all humans happy" could not be directly observed unless the AI
had sensors in the pleasure centers of all humans (in which case it
would want to wirehead us). "Without modifying them" couldn't be
directly observed even then. So, realistically, such a goal needs to
be inferred from sensory data.

Also, I should mention that the whole construction becomes irrelevant
if we can logically describe the goal ahead of time. With the "make
humans happy" example, something like my construction would be useful
if we need the AI to *learn* what a human is and what happy is. (We
then set up the pleasure in a way that would help the AI attach
"goodness" to the right things.) If we are able to write out the
definitions ahead of time, we can directly specify what goodness is
instead. But, I think it is unrealistic to take that approach, since
the definitions would be large and difficult




So, the AI needs to have a concept of external goodness, with a weak
probabilistic correlation to its directly observable pleasure.


I agree with the concept of external goodness but why does the correlation
between external goodness and its pleasure have to be low?  Why can't
external goodness directly cause pleasure?  Clearly, it shouldn't believe
that its pleasure causes external goodness (that would be reversing cause
and effect and an obvious logic error).


The correlation needs to be fairly low to allow the concept of good to
eventually split off of the concept of pleasure in the AI mind. The
external goodness can't directly cause pleasure because it isn't
directly detectable. Detection of goodness *through* inference *could*
be taken to cause pleasure; but this wouldn't be much use, because the
AI is already supposed to be maximizing goodness, not pleasure.
Pleasure merely plays the role of offering "hints" about what things
in the world might be good.

Actually, I think the proper probabilistic construction might be a bit
different than simply a "weak correlation"... for one thing, the
probability that goodness causes pleasure shouldn't be set ahead of
time. I'm thinking that likelihood would be more appropriate than
probability... so that it is as if the AI is born with some evidence
for the correlation 
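(The message is cut off here in the archive, but the "born with some evidence" idea can be sketched as a Beta prior whose pseudo-counts act as virtual evidence for the goodness-to-pleasure correlation -- hypothetical numbers, one possible reading of the proposal rather than Abram's actual construction:)

class CorrelationBelief:
    """Belief that a good event is also pleasant, kept as Beta(pleasant, unpleasant) counts."""

    def __init__(self, prior_pleasant=4, prior_unpleasant=1):
        # Pseudo-counts are the built-in "virtual evidence": as if 4 good events
        # had already been observed to be pleasant and 1 had not.
        self.pleasant = prior_pleasant
        self.unpleasant = prior_unpleasant

    def observe(self, good_event_was_pleasant):
        if good_event_was_pleasant:
            self.pleasant += 1
        else:
            self.unpleasant += 1

    def p_pleasant_given_good(self):
        # Posterior mean of the Beta distribution.
        return self.pleasant / (self.pleasant + self.unpleasant)

belief = CorrelationBelief()
print(belief.p_pleasant_given_good())            # 0.8 from the virtual evidence alone
for outcome in (False, False, False):            # real experience can outweigh the prior
    belief.observe(outcome)
print(round(belief.p_pleasant_given_good(), 2))  # 0.5

The prior nudges the system toward expecting good things to feel pleasant without fixing that probability ahead of time, which is the distinction being drawn above.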

Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-28 Thread Mark Waser

However, it
doesn't seem right to me to preprogram an AGI with a set ethical
theory; the theory could be wrong, no matter how good it sounds.


Why not wait until a theory is derived before making this decision?

Wouldn't such a theory be a good starting point, at least?


better to put such ideas in only as probabilistic correlations (or
"virtual evidence"), and let the system change its beliefs based on
accumulated evidence. I do not think this is overly risky, because
whatever the system comes to believe, its high-level goal will tend to
create normalizing subgoals that will regularize its behavior.


You're getting into implementation here but I will make a couple of personal 
belief statements:


1.  Probabilistic correlations are much, *much* more problematical than most 
people are even willing to think about.  They work well with very simple 
examples but they do not scale well at all.  Particularly problematic for 
such correlations is the fact that ethical concepts are generally made up of 
*many* interwoven parts and are very fuzzy.  The church of Bayes does not 
cut it for any work where the language/terms/concepts are not perfectly 
crisp, clear, and logically correct.
2.  Statements like "its high-level goal will tend to create normalizing 
subgoals that will regularize its behavior" sweep *a lot* of detail under 
the rug.  It's possible that it is true.  I think that it is much more 
probable that it is very frequently not true.  Unless you do *a lot* of 
specification, I'm afraid that expecting this to be true is *very* risky.



I'll stick to my point about defining "make humans happy" being hard,
though. Especially with the restriction "without modifying them" that
you used.


I think that defining "make humans happy" is impossible -- but that's OK 
because I think that it's a really bad goal to try to implement.


All I need to do is to define learn, harm, and help.  Help could be defined 
as anything which is agreed to with informed consent by the affected subject 
both before and after the fact.  Yes, that doesn't cover all actions but 
that just means that the AI doesn't necessarily have a strong inclination 
towards those actions.  Harm could be defined as anything which is disagreed 
with (or is expected to be disagreed with) by the affected subject either 
before or after the fact.  Friendliness then turns into something like 
asking permission.  Yes, the Friendly entity won't save you in many 
circumstances, but it's not likely to kill you either.
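(Read literally, those help/harm definitions are just predicates over before-and-after consent.  A minimal Python sketch of the definitions as stated -- hypothetical names, and obviously not a workable Friendliness test by itself:)

from dataclasses import dataclass
from typing import Optional

@dataclass
class Consent:
    before: Optional[bool]   # did (or would) the affected subject agree beforehand?
    after: Optional[bool]    # did (or would) the affected subject agree after the fact?

def is_help(c: Consent) -> bool:
    # Help: informed consent both before AND after the fact.
    return c.before is True and c.after is True

def is_harm(c: Consent) -> bool:
    # Harm: actual or expected disagreement either before OR after the fact.
    return c.before is False or c.after is False

consented_surgery = Consent(before=True, after=True)
unwanted_prank    = Consent(before=False, after=None)
print(is_help(consented_surgery), is_harm(consented_surgery))  # True False
print(is_help(unwanted_prank), is_harm(unwanted_prank))        # False True
# Actions with unknown consent come out as neither help nor harm, matching the
# caveat above that the definitions don't cover all actions.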


<< Of course, I could also come up with the counter-argument to my own 
thesis that the AI will never do anything because there will always be 
someone who objects to the AI doing *anything* to change the world -- but 
that's just the sort of absurd, self-defeating argument that I expect from 
many of the list denizens and that can't be defended against except by 
allocating far more time than it's worth.>>




- Original Message - 
From: "Abram Demski" <[EMAIL PROTECTED]>

To: 
Sent: Thursday, August 28, 2008 1:59 PM
Subject: **SPAM** Re: AGI goals (was Re: Information theoretic approaches to 
AGI (was Re: [agi] The Necessity of Embodiment))




Mark,

Actually I am sympathetic with this idea. I do think good can be
defined. And, I think it can be a simple definition. However, it
doesn't seem right to me to preprogram an AGI with a set ethical
theory; the theory could be wrong, no matter how good it sounds. So,
better to put such ideas in only as probabilistic correlations (or
"virtual evidence"), and let the system change its beliefs based on
accumulated evidence. I do not think this is overly risky, because
whatever the system comes to believe, its high-level goal will tend to
create normalizing subgoals that will regularize its behavior.

I'll stick to my point about defining "make humans happy" being hard,
though. Especially with the restriction "without modifying them" that
you used.

On Thu, Aug 28, 2008 at 12:38 PM, Mark Waser <[EMAIL PROTECTED]> wrote:

Also, I should mention that the whole construction becomes irrelevant
if we can logically describe the goal ahead of time. With the "make
humans happy" example, something like my construction would be useful
if we need the AI to *learn* what a human is and what happy is. (We
then set up the pleasure in a way that would help the AI attach
"goodness" to the right things.) If we are able to write out the
definitions ahead of time, we can directly specify what goodness is
instead. But, I think it is unrealistic to take that approach, since
the definitions would be large and difficult


:-)  I strongly disagree with you.  Why do you believe that having a new 
AI
learn large and difficult definitions is going to be easier and safer 
than

specifying them (ass

Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-28 Thread Mark Waser

Personally, if I were to take the approach of a preprogrammed ethics,
I would define good in pseudo-evolutionary terms: a pattern/entity is
good if it has high survival value in the long term. Patterns that are
self-sustaining on their own are thus considered good, but patterns
that help sustain other patterns would be too, because they are a
high-utility part of a larger whole.


Actually, I *do* define good and ethics not only in evolutionary terms but 
as being driven by evolution.  Unlike most people, I believe that ethics is 
*entirely* driven by what is best evolutionarily while not believing at all 
in "red in tooth and claw".  I can give you a reading list that shows that 
the latter view is horribly outdated among people who keep up with the 
research rather than just rehashing tired old ideas.



Actually, that idea is what made me assert that any goal produces
normalizing subgoals. Survivability helps achieve any goal, as long as
it isn't a time-bounded goal (finishing a set task).


Ah, I'm starting to get an idea of what you mean by normalizing subgoals 
. . . .   Yes, absolutely except that I contend that there is exactly one 
normalizing subgoal (though some might phrase it as two) that is normally 
common to virtually every goal (except in very extreme/unusual 
circumstances).



- Original Message - 
From: "Abram Demski" <[EMAIL PROTECTED]>

To: 
Sent: Thursday, August 28, 2008 4:04 PM
Subject: **SPAM** Re: AGI goals (was Re: Information theoretic approaches to 
AGI (was Re: [agi] The Necessity of Embodiment))




Mark,

I still think your definitions sound difficult to implement,
although not nearly as hard as "make humans happy without modifying
them". How would you define "consent"? You'd need a definition of
decision-making entity, right?

Personally, if I were to take the approach of a preprogrammed ethics,
I would define good in pseudo-evolutionary terms: a pattern/entity is
good if it has high survival value in the long term. Patterns that are
self-sustaining on their own are thus considered good, but patterns
that help sustain other patterns would be too, because they are a
high-utility part of a larger whole.

Actually, that idea is what made me assert that any goal produces
normalizing subgoals. Survivability helps achieve any goal, as long as
it isn't a time-bounded goal (finishing a set task).

--Abram

On Thu, Aug 28, 2008 at 2:52 PM, Mark Waser <[EMAIL PROTECTED]> wrote:

However, it
doesn't seem right to me to preprogram an AGI with a set ethical
theory; the theory could be wrong, no matter how good it sounds.


Why not wait until a theory is derived before making this decision?

Wouldn't such a theory be a good starting point, at least?


better to put such ideas in only as probabilistic correlations (or
"virtual evidence"), and let the system change its beliefs based on
accumulated evidence. I do not think this is overly risky, because
whatever the system comes to believe, its high-level goal will tend to
create normalizing subgoals that will regularize its behavior.


You're getting into implementation here but I will make a couple of 
personal

belief statements:

1.  Probabilistic correlations are much, *much* more problematical than most
people are even willing to think about.  They work well with very simple
examples but they do not scale well at all.  Particularly problematic for
such correlations is the fact that ethical concepts are generally made up of
*many* interwoven parts and are very fuzzy.  The church of Bayes does not
cut it for any work where the language/terms/concepts are not perfectly
crisp, clear, and logically correct.
2.  Statements like "its high-level goal will tend to create normalizing
subgoals that will regularize its behavior" sweep *a lot* of detail under
the rug.  It's possible that it is true.  I think that it is much more
probable that it is very frequently not true.  Unless you do *a lot* of
specification, I'm afraid that expecting this to be true is *very* risky.


I'll stick to my point about defining "make humans happy" being hard,
though. Especially with the restriction "without modifying them" that
you used.


I think that defining "make humans happy" is impossible -- but that's OK
because I think that it's a really bad goal to try to implement.

All I need to do is to define learn, harm, and help.  Help could be 
defined
as anything which is agreed to with informed consent by the affected 
subject

both before and after the fact.  Yes, that doesn't cover all actions but
that just means that the AI doesn't necessarily have a strong inclination
towards those actions.  Harm could be defined as anything which is 
disagreed

with (or is expected to be disagreed with) by the affected subject either
before or after the fact.  Friendliness then turn

Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-28 Thread Mark Waser
   Parasites are very successful at surviving but they don't have other 
goals.  Try being parasitic *and* succeeding at goals other than survival. 
I think you'll find that your parasitic ways will rapidly get in the way of 
your other goals the second that you need help (or even non-interference) 
from others.


- Original Message - 
From: "Terren Suydam" <[EMAIL PROTECTED]>

To: 
Sent: Thursday, August 28, 2008 5:03 PM
Subject: Re: AGI goals (was Re: Information theoretic approaches to AGI (was 
Re: [agi] The Necessity of Embodiment))





--- On Thu, 8/28/08, Mark Waser <[EMAIL PROTECTED]> wrote:

Actually, I *do* define good and ethics not only in
evolutionary terms but
as being driven by evolution.  Unlike most people, I
believe that ethics is
*entirely* driven by what is best evolutionarily while not
believing at all
in "red in tooth and claw".  I can give you a
reading list that shows that
the latter view is horribly outdated among people who keep
up with the
research rather than just rehashing tired old ideas.


I think it's a stretch to derive ethical ideas from what you refer to as 
"best evolutionarily".  Parasites are pretty freaking successful, from an 
evolutionary point of view, but nobody would say parasitism is ethical.


Terren













Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-29 Thread Mark Waser

Hi Terren,

Obviously you need to complicate your original statement "I believe that 
ethics is *entirely* driven by what is best evolutionarily..." in such a 
way that we don't derive ethics from parasites.


Saying that ethics is entirely driven by evolution is NOT the same as saying 
that evolution always results in ethics.  Ethics is 
computationally/cognitively expensive to successfully implement (because a 
stupid implementation gets exploited to death).  There are many evolutionary 
niches that won't support that expense and the successful entities in those 
niches won't be ethical.  Parasites are a prototypical/archetypal example of 
such a niche since they tend to be degeneratively streamlined to the point of 
being stripped down to virtually nothing except that which is necessary for 
their parasitism.  Effectively, they are single-goal entities -- the single 
most dangerous type of entity possible.



You did that by invoking social behavior - parasites are not social beings


I claim that ethics is nothing *but* social behavior.

So from there you need to identify how evolution operates in social groups 
in such a way that you can derive ethics.


OK.  How about this . . . . Ethics is that behavior that, when shown by you, 
makes me believe that I should facilitate your survival.  Obviously, it is 
then to your (evolutionary) benefit to behave ethically.


As Matt alluded to before, would you agree that ethics is the result of 
group selection? In other words, that human collectives with certain 
taboos make the group as a whole more likely to persist?


Matt is decades out of date and needs to catch up on his reading.

Ethics is *NOT* the result of group selection.  The *ethical evaluation of a 
given action* is a meme and driven by the same social/group forces as any 
other meme.  Rational memes when adopted by a group can enhance group 
survival but . . . . there are also mechanisms by which seemingly irrational 
memes can also enhance survival indirectly in *exactly* the same fashion as 
the "seemingly irrational" tail displays of peacocks facilitates their group 
survival by identifying the fittest individuals.  Note that it all depends 
upon circumstances . . . .


Ethics is first and foremost what society wants you to do.  But, society 
can't be too pushy in its demands or individuals will defect and society 
will break down.  So, ethics turns into a matter of determining what is the 
behavior that is best for society (and thus the individual) without unduly 
burdening the individual (which would promote defection, cheating, etc.). 
This behavior clearly differs based upon circumstances but, equally clearly, 
should be able to be derived from a reasonably small set of rules that 
*will* be context dependent.  Marc Hauser has done a lot of research, and 
human morality seems to be designed exactly that way (in terms of how it 
varies across societies, as if it is based upon fairly simple rules with a 
small number of variables/variable settings).  I highly recommend his 
writings (and being familiar with them is pretty much a necessity if you 
want to have a decent advanced/current scientific discussion of ethics and 
morals).


   Mark

- Original Message - 
From: "Terren Suydam" <[EMAIL PROTECTED]>

To: 
Sent: Thursday, August 28, 2008 10:54 PM
Subject: Re: AGI goals (was Re: Information theoretic approaches to AGI (was 
Re: [agi] The Necessity of Embodiment))





Hi Mark,

Obviously you need to complicate your original statement "I believe that 
ethics is *entirely* driven by what is best evolutionarily..." in such a 
way that we don't derive ethics from parasites. You did that by invoking 
social behavior - parasites are not social beings.


So from there you need to identify how evolution operates in social groups 
in such a way that you can derive ethics. As Matt alluded to before, would 
you agree that ethics is the result of group selection? In other words, 
that human collectives with certain taboos make the group as a whole more 
likely to persist?


Terren


--- On Thu, 8/28/08, Mark Waser <[EMAIL PROTECTED]> wrote:


From: Mark Waser <[EMAIL PROTECTED]>
Subject: Re: AGI goals (was Re: Information theoretic approaches to AGI 
(was Re: [agi] The Necessity of Embodiment))

To: agi@v2.listbox.com
Date: Thursday, August 28, 2008, 9:21 PM
Parasites are very successful at surviving but they
don't have other
goals.  Try being parasitic *and* succeeding at goals other
than survival.
I think you'll find that your parasitic ways will
rapidly get in the way of
your other goals the second that you need help (or even
non-interference)
from others.

- Original Message - 
From: "Terren Suydam" <[EMAIL PROTECTED]>

To: 
Sent: Thursday, August 28, 2008 5:03 PM
Subject: Re: AGI goals (was Re: Information theoretic
approaches to AGI (was
Re: [agi] The Necessity of

Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-29 Thread Mark Waser

OK.  How about this . . . . Ethics is that behavior that,
when shown by you,
makes me believe that I should facilitate your survival.
Obviously, it is
then to your (evolutionary) benefit to behave ethically.


Ethics can't be explained simply by examining interactions between 
individuals. It's an emergent dynamic that requires explanation at the 
group level. It's a set of culture-wide rules and taboos - how did they 
get there?


I wasn't explaining ethics with that statement.  I was identifying how 
"evolution operates in social groups in such a way that I can derive ethics" 
(in direct response to your question).


Ethics is a system.  The *definition of ethical behavior* for a given group 
is "an emergent dynamic that requires explanation at the group level" 
because it includes what the group believes and values -- but ethics (the 
system) does not require belief history (except insofar as it affects 
current belief).  History, circumstances, and understanding why a culture 
has the rules and taboos that it has is certainly useful for deriving 
more effective rules and taboos -- but it doesn't alter the underlying 
system which is quite simple . . . . being perceived as helpful generally 
improves your survival chances, being perceived as harmful generally 
decreases your survival chances (unless you are able to overpower the 
effect).


Really? I must be out of date too then, since I agree with his explanation 
of ethics. I haven't read Hauser yet though, so maybe you're right.


The specific phrase you cited was "human collectives with certain taboos 
make the group as a whole more likely to persist".  The correct term of art 
for this is "group selection" and it has pretty much *NOT* been supported by 
scientific evidence and has fallen out of favor.


Matt also tends to conflate a number of ideas which should be separate which 
you seem to be doing as well.  There need to be distinctions between ethical 
systems, ethical rules, cultural variables, and evaluations of ethical 
behavior within a specific cultural context (i.e. the results of the system 
given certain rules -- which at the first-level seem to be reasonably 
standard -- with certain cultural variables as input).  Hauser's work 
identifies some of the common first-level rules and how cultural variables 
affect the results of those rules (and the derivation of secondary rules). 
It's good detailed, experiment-based stuff rather than the vague hand-waving 
that you're getting from armchair philosophers.


I fail to see how your above explanation is anything but an elaboration of 
the idea that ethics is due to group selection. The following statements 
all support it:
- "memes [rational or otherwise] when adopted by a group can enhance group 
survival"

- "Ethics is first and foremost what society wants you to do."
- "ethics turns into a matter of determining what is the behavior that is 
best for society"


I think we're stumbling over your use of the term "group selection"  and 
what you mean by "ethics is due to group selection".  Yes, the group 
"selects" the cultural variables that affect the results of the common 
ethical rules.  But "group selection" as a term of art in evolution 
generally means that the group itself is being selected or co-evolved -- 
in this case, presumably by ethics -- which is *NOT* correct by current 
scientific understanding.  The first phrase that you quoted was intended to 
point out that both good and bad memes can positively affect survival -- so 
co-evolution doesn't work.  The second phrase that you quoted deals with the 
results of the system applying common ethical rules with cultural variables. 
The third phrase that you quoted talks about determining what the best 
cultural variables (and maybe secondary rules) are for a given set of 
circumstances -- and should have been better phrased as "Improving ethical 
evaluations turns into a matter of determining . . . "







Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))

2008-08-29 Thread Mark Waser
"Group selection" (as used as the term of art in evolutionary biology) does 
not seem to be experimentally supported (and there have been a lot of recent 
experiments looking for such an effect).


It would be nice if people could let the idea drop unless there is actually 
some proof for it other than "it seems to make sense that . . . . "


- Original Message - 
From: "Eric Burton" <[EMAIL PROTECTED]>

To: 
Sent: Friday, August 29, 2008 12:56 PM
Subject: **SPAM** Re: AGI goals (was Re: Information theoretic approaches to 
AGI (was Re: [agi] The Necessity of Embodiment))




I remember Richard Dawkins saying that group selection is a lie. Maybe
we should look past it now? It seems like a problem.

On 8/29/08, Mark Waser <[EMAIL PROTECTED]> wrote:

OK.  How about this . . . . Ethics is that behavior that,
when shown by you,
makes me believe that I should facilitate your survival.
Obviously, it is
then to your (evolutionary) benefit to behave ethically.


Ethics can't be explained simply by examining interactions between
individuals. It's an emergent dynamic that requires explanation at the
group level. It's a set of culture-wide rules and taboos - how did they
get there?


I wasn't explaining ethics with that statement.  I was identifying how
"evolution operates in social groups in such a way that I can derive 
ethics"

(in direct response to your question).

Ethics is a system.  The *definition of ethical behavior* for a given 
group

is "an emergent dynamic that requires explanation at the group level"
because it includes what the group believes and values -- but ethics (the
system) does not require belief history (except insofar as it affects
current belief).  History, circumstances, and understanding why a culture
has the rules and taboos that it has is certainly useful for deriving
more effective rules and taboos -- but it doesn't alter the underlying
system which is quite simple . . . . being perceived as helpful generally
improves your survival chances, being perceived as harmful generally
decreases your survival chances (unless you are able to overpower the
effect).

Really? I must be out of date too then, since I agree with his 
explanation


of ethics. I haven't read Hauser yet though, so maybe you're right.


The specific phrase you cited was "human collectives with certain taboos
make the group as a whole more likely to persist".  The correct term of 
art
for this is "group selection" and it has pretty much *NOT* been supported 
by

scientific evidence and has fallen out of favor.

Matt also tends to conflate a number of ideas which should be separate 
which
you seem to be doing as well.  There need to be distinctions between 
ethical

systems, ethical rules, cultural variables, and evaluations of ethical
behavior within a specific cultural context (i.e. the results of the 
system

given certain rules -- which at the first-level seem to be reasonably
standard -- with certain cultural variables as input).  Hauser's work
identifies some of the common first-level rules and how cultural 
variables
affect the results of those rules (and the derivation of secondary 
rules).
It's good detailed, experiment-based stuff rather than the vague 
hand-waving

that you're getting from armchair philosophers.

I fail to see how your above explanation is anything but an elaboration 
of


the idea that ethics is due to group selection. The following statements
all support it:
- "memes [rational or otherwise] when adopted by a group can enhance 
group


survival"
- "Ethics is first and foremost what society wants you to do."
- "ethics turns into a matter of determining what is the behavior that 
is

best for society"


I think we're stumbling over your use of the term "group selection"  and
what you mean by "ethics is due to group selection".  Yes, the group
"selects" the cultural variables that affect the results of the common
ethical rules.  But "group selection" as a term of art in evolution
generally means that the group itself is being selected or co-evolved --

in this case, presumably by ethics -- which is *NOT* correct by current
scientific understanding.  The first phrase that you quoted was intended 
to
point out that both good and bad memes can positively affect survival --  
so
co-evolution doesn't work.  The second phrase that you quoted deals with 
the
results of the system applying common ethical rules with cultural 
variables.

The third phrase that you quoted talks about determining what the best
cultural variables (and maybe secondary rules) are for a given set of
circumstances -- and should have been better phrased as "Improving 
ethical

evaluations turns into a matter of determining . . . "





Re: [agi] Artificial humor

2008-09-10 Thread Mark Waser
Obviously you have no plans for endowing your computer with a self and a 
body, that has emotions and can shake with laughter. Or tears.


Actually, many of us do.  And this is why your posts are so problematical. 
You invent what *we* believe and what we intend to do.  And then you 
criticize your total fabrications (a.k.a. mental masturbation).


- Original Message - 
From: "Mike Tintner" <[EMAIL PROTECTED]>

To: 
Sent: Wednesday, September 10, 2008 7:18 AM
Subject: Re: [agi] Artificial humor


Matt: Humor detection obviously requires a sophisticated language model 
and knowledge of popular culture, current events, and what jokes have been 
told before. Since entertainment is a big sector of the economy, an AGI 
needs all human knowledge, not just knowledge that is work related.


In many ways, it was brave of you to pursue this idea, & the results are 
fascinating. You see, there is one central thing you need in order to 
write a joke. (Have you ever tried it? You must presumably in some 
respect). You can't just logically, formulaically analyse those jokes - 
the common ingredients of, say, the lightbulb jokes. When you write 
something - even some logical extension, say, re how many plumbers it 
takes to change a light bulb - the joke *has to strike you as funny*. You 
have to laugh. It's the only way to test the joke.


Obviously you have no plans for endowing your computer with a self and a 
body, that has emotions and can shake with laughter. Or tears.


But what makes you laugh? The common ingredient of humour is human error. 
We laugh at humans making mistakes - mistakes that were/are preventable. 
People having their head stuck snootily in the air, and so falling on 
banana skins. Mrs Malaprop mispronouncing, misconstruing big words while 
trying to look clever, and refusing to admit her ignorance. And we laugh 
because we can personally identify, because we've made those kinds of 
mistakes. They are a fundamental and continuous part of our lives. (How 
will your AGI identify?)


So are AGI-ers *heroic* figures trying to be/produce giants, or are they 
*comic* figures, like Don Quixote, who are in fact tilting at windmills, 
and refusing even to check whether those windmill arms actually belong to 
giants?


There isn't a purely logicomathematical way to decide that. It takes an 
artistic as well as a scientific mentality involving not just whole 
different parts of your brain, but different faculties and sensibilities - 
all v. real, and not reducible to logic and maths. When you deal with AGI 
problems -  like the problem of AGI itself - you need them.


(You may think this all esoteric, but in fact, you need all those same 
faculties to understand everything that is precious to you - the universe/ 
world/ society/ atoms/ genes /  machines - & even logic & maths. But more 
of that another time).













Re: [agi] Artificial humor

2008-09-10 Thread Mark Waser

That's certainly news to me.


Because you haven't been paying attention (or don't have the necessary 
background or desire to recognize it).  Look at the attention that's been 
paid to the qualia and consciousness arguments (http://consc.net/online). 
Any computer with sensors and effectors is embodied.  And IBM is 
even/already touting their "Autonomic Computing" initiatives 
(http://www.research.ibm.com/autonomic/).  Computers already divide tasks 
into foreground (conscious) and background (unconscious) processes that are 
*normally* loosely-coupled with internal details encapsulated away from each 
other.  Silicon intelligences aren't going to have human internal organs 
(except, maybe, as part of a project to simulate/study humans) but they're 
certainly going to have a sense of humor -- and while they are not going to 
have the evolved *physical* side-effects, it's going to "feel" like 
something to them.


Your arguments are very short-sighted and narrow, nitpicking minor 
*current* details while missing the sweeping scope of what is not only being 
proposed but actually moving forward around you.  Stop telling us what we 
think because you're getting it *WRONG*.  Stop telling us what we're missing 
because, in most cases, we're actually paying attention to version 3 of what 
you're talking about and you just don't recognize it.  You're looking at the 
blueprints of an F-14 Tomcat and arguing that the wings don't move right for a 
bird and, besides, it's too unstable for a human to fly (unassisted :-).


Read the papers in the first link and *maybe* we can have a useful 
conversation . . . .


- Original Message - 
From: "Mike Tintner" <[EMAIL PROTECTED]>

To: 
Sent: Wednesday, September 10, 2008 7:41 AM
Subject: Re: [agi] Artificial humor


Obviously you have no plans for endowing your computer with a self and a 
body, that has emotions and can shake with laughter. Or tears.


Actually, many of us do.  And this is why your posts are so 
problematical. You invent what *we* believe and what we intend to do. 
And then you criticize your total fabrications (a.k.a. mental 
masturbation).


You/others have plans for an *embodied* computer with the equivalent of an 
autonomic nervous system and the relevant, attached internal organs? A 
robot? That's certainly news to me. Please expand.














Re: [agi] Artificial humor

2008-09-10 Thread Mark Waser

Your response makes my point precisely . . . .

Until you truly understand *why* IBM's top engineers believe that 
"autonomic" is the correct term (and it's very clear to someone with enough 
background and knowledge that it is), you shouldn't be attempting this 
discussion.  Yes, *in CURRENT detail*, autonomic computing is different from 
the human body -- especially since the computer is much more equivalent to 
the brain with much of the rest of the body corresponding to the power grid 
and whatever sensors, effectors, and locomotive devices the computer 
controls.  Where the rest of the body differs is in the fact that a lot of 
the smarts, that lie in the computer in the artificial case, are actually 
physically embedded in the organs in the physical case.  Look at the amount 
of nervous tissue in the digestive system.  Guess why the digestive system 
is so tied into your emotions.  But the fact that the computer doesn't 
replicate the inefficient idiosyncrasies of the human body is a good thing; 
those idiosyncrasies are not something to emulate.  Further, when you say things like


There is no computer or robot that keeps getting physically excited or 
depressed by its computations. (But it would be a good idea).


you don't even realize that laptops (and many other computers -- not to 
mention appliances) currently do precisely what you claim that no computer 
or robot does.  When they compute that they are not being used, they start 
shutting down power usage.  Do you really want to continue claiming this?


The vast majority of this mailing list is going over your head because you 
don't recognize that while the details are different (like the autonomic 
case), the general idea and direction are dead on and way past where you're 
languishing in your freezing cave bleating because a heat pump isn't fire.


(I also suspect that you've missed most of the humor in this and the 
previous message)
((I feel like a villain in a cheesy drama -- helplessly trapped into 
monologue when I know it will do no good))


- Original Message - 
From: "Mike Tintner" <[EMAIL PROTECTED]>

To: 
Sent: Wednesday, September 10, 2008 10:18 AM
Subject: Re: [agi] Artificial humor


1."Autonomic [disembodied] computing" is obviously radically different from 
having a body with a sympathetically controlled engine area (upper body) 
and parasympathetically controlled digestive area (lower body) which are 
continually being emotionally revved up or down in preparation for action, 
and also in continuous conflict. There is no computer or robot that keeps 
getting physically excited or depressed by its computations. (But it would 
be a good idea).


2.Mimicking emotions as some robots do, is similarly v. different from 
having the physical capacity to embody them, and experience them.


3.Silicon intelligences - useful distinction - don't "feel" anything - 
they don't have an organic nervous system, and of course it's still a 
fascinating question as to what extent "feeling" (the hard problem)  is 
"contained" in that system. (Again true feelings for AGI's would be a 
wonderful, perhaps essential idea).


4.To have a sense of humour, as I more or less indicated, you have to be 
able to identify with the "funny guy" making the error - and that is an 
*embodied* identification. The humour that gets the biggest, most physical 
laughs and even has you falling on the floor, usually involves the 
biggest, most physical errors - e.g. slapstick. There are no plans that I 
know of, to have computers "falling about."


5.Over and over, AI/AGI are making the same mistake  -  trying to 
copy/emulate human faculties and refusing to acknowledge that they are 
vastly more complex than AI'ers' construction.  AI'ers attempts are 
valuable and productive, but their refusal to acknowledge the complexity 
of - and to respect the billion years of evolution behind - those 
faculties, tend towards the comical. Rather like the chauffeur in High 
Anxiety who keeps struggling to carry a suitcase, "I got it.. I got it.. I 
got it. I ain't got it."


6.I would argue that it is AGI-ers who are focussed on the blueprints of 
their machine, and who repeatedly refuse to contemplate or discuss how  it 
will fly, (& I seem to recall you making a similar criticism).



Because you haven't been paying attention (or don't have the necessary 
background or desire to recognize it).  Look at the attention that's been 
paid to the qualia and consciousness arguments (http://consc.net/online). 
Any computer with sensors and effectors is embodied.  And IBM is 
even/already touting their "Autonomic Computing" initiatives 
(http://www.research.ibm.com/autonomic/).  Computers already divide tasks 
into foreground (conscious) and background (unconscious) processes that 
are *normally* loosely-coupled with internal details encapsulated away 
from each other.  Silicon intelligences aren't going to have human 
internal organs (except, maybe, as part of a project to simulate/study 
humans) but

Re: [agi] Artificial humor

2008-09-10 Thread Mark Waser
Emotional laptops, huh? Sounds like a great story idea for kids learning 
to love their laptops. Pixar needs you. ["It hasn't crashed, it's just v. 
depressed"].


Great response.  Ignore my correct point with deflecting derision directed 
at a strawman (the last refuge of the incompetent).


You seem more intent on winning an argument than learning or even honestly 
addressing the points that you yourself raised.


I'll let you go back to your fantasies of being smarter than the rest of us 
now.


- Original Message - 
From: "Mike Tintner" <[EMAIL PROTECTED]>

To: 
Sent: Wednesday, September 10, 2008 12:31 PM
Subject: Re: [agi] Artificial humor


There is no computer or robot that keeps getting physically excited or 
depressed by its computations. (But it would be a good idea).


you don't even realize that laptops (and many other computers -- not to 
mention appliances) currently do precisely what you claim that no 
computer or robot does.


Emotional laptops, huh? Sounds like a great story idea for kids learning 
to love their laptops. Pixar needs you. ["It hasn't crashed, it's just v. 
depressed"].













Re: [agi] Artificial humor

2008-09-11 Thread Mark Waser
Quick answer because in rush. Notice your "if" ... Which programs actually 
do understand any *general* concepts of orientation? SHRDLU I will gladly 
bet, didn't...and neither do any others.


What about the programs that control Stanley and the other DARPA Grand 
Challenge vehicles?



- Original Message - 
From: "Mike Tintner" <[EMAIL PROTECTED]>

To: 
Sent: Thursday, September 11, 2008 11:24 AM
Subject: Re: [agi] Artificial humor



Jiri,

Quick answer because in rush. Notice your "if" ... Which programs actually 
do understand any *general* concepts of orientation? SHRDLU I will gladly 
bet, didn't...and neither do any others.


The v. word "orientation" indicates the reality that every picture has a 
point of view, and refers to an observer. And there is no physical way 
around that.


You have been seduced by an illusion - the illusion of the flat, printed 
page, existing in a timeless space. And you have accepted implicitly that 
there really is such a world - "flatland" - where geometry and geometrical 
operations take place, utterly independent of you the viewer and 
puppeteer, and the solid world of real objects to which they refer. It 
demonstrably isn't true.


Remove your eyes from the page and walk around in the world - your room, 
say. Hey, it's not flat...and neither are any of the objects in it. 
Triangular objects in the world are different from triangles on the page, 
fundamentally different.


But it  is so difficult to shed yourself of this illusion. You  need to 
look at the history of culture and realise that the imposition on the 
world/ environment of first geometrical figures, and then, more than a 
thousand years later,  the fixed point of view and projective geometry, 
were - and remain - a SUPREME TRIUMPH OF THE HUMAN IMAGINATION.  They 
don't exist, Jiri. They're just one of many possible frameworks (albeit v 
useful)  to impose on the physical world. Nomadic tribes couldn't conceive 
of squares and enclosed spaces. Future generations will invent new 
frameworks.


Simple example of how persuasive the illusion is. I didn't understand 
until yesterday what the "introduction of a fixed point of view" really 
meant - it was that word "fixed". What was the big deal? I couldn't 
understand. Isn't it a fact of life, almost?


Then it clicked. Your natural POV is "mobile" - your head/eyes keep 
moving - even when reading. It is an artificial invention to posit a fixed 
POV. And the geometric POV is doubly artificial, because it is "one-eyed", 
no?, not stereoscopic. But once you get used to reading pages/screens you 
come to assume that an artificial fixed POV is *natural*.


[Stan Franklin was interested in a speculative paper suggesting that the 
evolutionary brain's "stabilisation of vision" (a software triumph 
because organisms are so mobile) may have led to the development of 
consciousness.]


You have to understand the difference between 1) the page, or medium,  and 
2) the real world it depicts,  and 3) you, the observer, reading/looking 
at the page. Your idea of AGI is just one big page [or screen] that 
apparently exists in splendid self-contained isolation.


It's an illusion, and it just doesn't *work* vis-a-vis programs.  Do you 
want to cling to "excessive optimism" and a simple POV or do you want to 
try and grasp the admittedly complicated & more sophisticated reality?

.

Jiri: If you talk to a program about a changing 3D scene and the program then
correctly answers questions about [basic] spatial relationships
between the objects then I would say it understands 3D. Of course the
program needs to work with a queriable 3D representation but it
doesn't need a "body". I mean it doesn't need to be a real-world
robot, it doesn't need to associate "self" with any particular 3D
object (real-world or simulated) and it doesn't need to be self-aware.
It just needs to be 3D-scene-aware and the scene may contain just
a few basic 3D objects (e.g. the Shrdlu stuff).














Re: [agi] Artificial humor

2008-09-11 Thread Mark Waser

Also - re BillK's useful intro. of DARPA - do those vehicles work by GPS?


They are allowed to work by GPS but there are parts of the course where they 
are required to work without it.


Shouldn't you already have basic knowledge like this before proclaiming 
things like "neither do any others" when talking about being able to 
"understand any *general* concepts of orientation"



- Original Message - 
From: "Mike Tintner" <[EMAIL PROTECTED]>

To: 
Sent: Thursday, September 11, 2008 1:31 PM
Subject: Re: [agi] Artificial humor




Jiri,

Clearly a limited 3d functionality is possible for a program such as you 
describe - as for SHRDLU. But what we're surely concerned with here is 
generality. So fine start with a restricted world of say different kinds 
of kid's blocks and similar. But then the program must be able to tell 
what is "in" what or outside, what is behind/over etc. - and also what is 
moving towards or away from an object, ( it surely should be a "mobile" 
program) - and be able to move objects. My assumption is that even a 
relatively simple such general program wouldn't work - (I obviously 
haven't thought about this in any detail). It would be interesting to have 
the details about how SHRDLU broke down.


Also - re BillK's useful intro. of DARPA - do those vehicles work by GPS?


Mike,

Imagine a simple 3D scene with 2 different-size spheres. A simple
program allows you to change positions of the spheres and it can
answer the question "Is the smaller sphere inside the bigger sphere?"
[Yes|Partly|No]. I can write such a program in no time. Sure, it's
extremely simple, but it deals with 3D, it demonstrates a certain level
of 3D understanding without embodiment and there is no need to pass
the orientation parameter to the query function. Note that the
orientation is just a parameter. It doesn't represent a "body" and it
can be added. Of course understanding all the real-world 3D concepts
would take a lot more code and data than when playing with 3D
toy-worlds, but in principle, it's possible to understand 3D without
having a body.

Jiri
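(The Yes/Partly/No query Jiri describes really is a few lines.  A minimal Python sketch from centers and radii alone -- hypothetical function name, assuming the smaller sphere is passed first:)

import math

def smaller_inside_bigger(center_small, r_small, center_big, r_big):
    d = math.dist(center_small, center_big)   # distance between the two centers
    if d + r_small <= r_big:
        return "Yes"      # the small sphere lies entirely inside the big one
    if d >= r_big + r_small:
        return "No"       # the spheres do not intersect at all
    return "Partly"       # they overlap, so the small sphere is partly inside

print(smaller_inside_bigger((0, 0, 0), 1.0, (0, 0, 0), 3.0))   # Yes
print(smaller_inside_bigger((5, 0, 0), 1.0, (0, 0, 0), 3.0))   # No
print(smaller_inside_bigger((3, 0, 0), 1.0, (0, 0, 0), 3.0))   # Partly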

On Thu, Sep 11, 2008 at 11:24 AM, Mike Tintner <[EMAIL PROTECTED]> wrote:

Jiri,

Quick answer because in a rush. Notice your "if" ... Which programs actually
do understand any *general* concepts of orientation? SHRDLU, I will gladly
bet, didn't... and neither do any others.

The very word "orientation" indicates the reality that every picture has a
point of view, and refers to an observer. And there is no physical way
around that.

You have been seduced by an illusion - the illusion of the flat, printed
page, existing in a timeless space. And you have accepted implicitly that
there really is such a world - "flatland" - where geometry and geometrical
operations take place, utterly independent of you the viewer and puppeteer,
and of the solid world of real objects to which they refer. It demonstrably
isn't true.

Remove your eyes from the page and walk around in the world - your room,
say. Hey, it's not flat... and neither are any of the objects in it.
Triangular objects in the world are different from triangles on the page,
fundamentally different.

But it is so difficult to shed yourself of this illusion. You need to look
at the history of culture and realise that the imposition on the world/
environment of first geometrical figures, and then, more than a thousand
years later, the fixed point of view and projective geometry, were - and
remain - a SUPREME TRIUMPH OF THE HUMAN IMAGINATION.  They don't exist,
Jiri. They're just one of many possible frameworks (albeit very useful) to
impose on the physical world. Nomadic tribes couldn't conceive of squares
and enclosed spaces. Future generations will invent new frameworks.

A simple example of how persuasive the illusion is: I didn't understand
until yesterday what the "introduction of a fixed point of view" really
meant - it was that word "fixed". What was the big deal? I couldn't
understand. Isn't it a fact of life, almost?

Then it clicked. Your natural POV is "mobile" - your head/eyes keep moving -
even when reading. It is an artificial invention to posit a fixed POV. And
the geometric POV is doubly artificial, because it is "one-eyed", no?, not
stereoscopic. But once you get used to reading pages/screens you come to
assume that an artificial fixed POV is *natural*.

[Stan Franklin was interested in a speculative paper suggesting that the
evolutionary brain's "stabilisation of vision" (a software triumph, because
organisms are so mobile) may have led to the development of consciousness.]

You have to understand the difference between 1) the page, or medium, and
2) the real world it depicts, and 3) you, the observer, reading/looking at
the page. Your idea of AGI is just one big page [or screen] that apparently
exists in splendid self-contained isolation.

It's an illusion, and it just doesn't *work* vis-a-vis programs.  Do you
want to cling to "excessive optimism" and a simple POV or 

Re: AW: [agi] Re: Defining AGI

2008-10-19 Thread Mark Waser
Domain effectiveness (a.k.a. intelligence) is predicated upon having an 
effective internal model of that domain.

Language production is the extraction and packaging of applicable parts of the 
internal model for transmission to others.
Conversely, language understanding is for the reception (and integration) of 
model portions developed by others (i.e. learning from a teacher).

The better your internal models, the more effective/intelligent you are.

BUT!  This also holds true for language!  Concrete unadorned statements convey 
a lot less information than statements loaded with adjectives, adverbs, or even 
more markedly analogies (or innuendos or . . . ).
A child cannot pick up the same amount of information from a sentence that they 
think they understand (and do understand to some degree) as an adult can.
Language is a knowledge domain like any other and high intelligences can use it 
far more effectively than lower intelligences.

** Or, in other words, I am disagreeing with the statement that "the process 
itself needs not much intelligence".

Saying that the understanding of language itself is simple is like saying that 
chess is simple because you understand the rules of the game.
Godel's Incompleteness Theorem can be used to show that there is no upper bound 
on the complexity of language and the intelligence necessary to pack and 
extract meaning/knowledge into/from language.

Language is *NOT* just a top-level communications protocol because it is not 
fully-specified and because it is tremendously context-dependent (not to 
mention entirely Godellian).  These two reasons are why it *IS* inextricably 
tied into intelligence.

I *might* agree that the concrete language of lower primates and young children 
is separate from intelligence, but there is far more going on in adult language 
than a simple communications protocol.

E-mail programs are simply point-to-point repeaters of language (NOT meaning!)  
Intelligences generally don't exactly repeat language but *try* to repeat 
meaning.  The game of telephone is a tremendous example of why language *IS* 
tied to intelligence (or look at the results of translating simple phrases into 
another language and back -- "The drink is strong but the meat is rotten").  
Translating language to and from meaning (i.e. your domain model) is the 
essence of intelligence.

How simple is the understanding of the above?  How much are you having to fight 
to relate it to your internal model (assuming that it's even compatible :-)?

I don't believe that intelligence is dependent upon language EXCEPT that 
language is necessary to convey knowledge/meaning (in order to build 
intelligence in a reasonable timeframe) and that language is influenced by and 
influences intelligence since it is basically the core of the critical 
meta-domains of teaching, learning, discovery, and alteration of your internal 
model (the effectiveness of which *IS* intelligence).  Future AGI and humans 
will undoubtedly not only have a much richer language but also a much richer 
repertoire of second-order (and higher) features expressed via language.

** Or, in other words, I am strongly disagreeing that "intelligence is 
separated from language understanding".  I believe that language understanding 
is the necessary tool that intelligence is built with since it is what puts the 
*contents* of intelligence (i.e. the domain model) into intelligence.  Trying 
to build an intelligence without language understanding is like trying to build 
it with just machine language, or with only observable data points, rather than 
building up to more complex entities -- third-, fourth-, and fifth-generation 
programming languages instead of machine language, and knowledge instead of 
bare data points.

BTW -- Please note, however, that the above does not imply that I believe that 
NLU is the place to start in developing AGI.  Quite the contrary -- NLU rests 
upon such a large domain model that I believe that it is counter-productive to 
start there.  I believe that we need to start with limited domains and learn 
about language, internal models, and grounding without brittleness in tractable 
domains before attempting to extend that knowledge to larger domains.
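
(To make the production/understanding framing at the top of this message concrete, here is a toy sketch; the fact representation, names, and data are entirely my own illustrative assumptions, not anyone's actual system.)

# Toy sketch: a "domain model" as a set of (subject, relation, object) facts.
model_teacher = {("dog", "is", "animal"), ("dog", "can", "bark")}
model_child   = {("dog", "is", "animal")}

def produce(model, topic):
    """Language production: extract and package the part of the model about a topic."""
    return [f"{s} {r} {o}" for (s, r, o) in model if s == topic]

def understand(model, sentences):
    """Language understanding: unpack received sentences and integrate them into the model."""
    for sentence in sentences:
        s, r, o = sentence.split(" ", 2)
        model.add((s, r, o))   # learning from a teacher: the listener's model grows

understand(model_child, produce(model_teacher, "dog"))
print(model_child)   # now also contains ("dog", "can", "bark")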

  - Original Message - 
  From: David Hart 
  To: agi@v2.listbox.com 
  Sent: Sunday, October 19, 2008 5:30 AM
  Subject: Re: AW: [agi] Re: Defining AGI



  An excellent post, thanks!

  IMO, it raises the bar for discussion of language and AGI, and should be 
carefully considered by the authors of future posts on the topic of language 
and AGI. If the AGI list were a forum, Matthias's post should be pinned!

  -dave


  On Sun, Oct 19, 2008 at 6:58 PM, Dr. Matthias Heger <[EMAIL PROTECTED]> wrote:

The process of outwardly expressing meaning may be fundamental to any social
intelligence but the process itself needs not much intelligence.

Every email program can receive meaning, store meanin

Re: [agi] Words vs Concepts [ex Defining AGI]

2008-10-19 Thread Mark Waser

There is no creation of new patterns and there is no intelligent algorithm
which manipulates patterns. It is just translating, sending, receiving and
retranslating.


This is what I disagree entirely with.  If nothing else, humans are 
constantly building and updating their mental model of what other people 
believe and how they communicate it.  Only in routine, pre-negotiated 
conversations can language be entirely devoid of learning.  Unless a 
conversation is entirely concrete and based upon something like shared 
physical experiences, it can't be any other way.  You're only paying 
attention to the absolutely simplest things that language does (i.e. the tip 
of the iceberg).



- Original Message - 
From: "Dr. Matthias Heger" <[EMAIL PROTECTED]>

To: 
Sent: Sunday, October 19, 2008 10:31 AM
Subject: AW: [agi] Words vs Concepts [ex Defining AGI]


For the discussion of the subject the details of the pattern 
representation

are not important at all. It is sufficient if you agree that a spoken
sentence represents a certain set of patterns which are translated into the
sentence. The receiving agent retranslates the sentence and matches the
content with its model by activating similar patterns.

The activation of patterns is extremely fast and happens in real time. The
brain even predicts patterns if it just hears the first syllable of a 
word:


http://www.rochester.edu/news/show.php?id=3244

There is no creation of new patterns and there is no intelligent algorithm
which manipulates patterns. It is just translating, sending, receiving and
retranslating.

From the ambiguities of natural language you obtain some hints about the
structure of the patterns. But you cannot even expect to obtain all detail
of these patterns by understanding the process of language understanding.
There will be probably many details within these patterns which are only
necessary for internal calculations.
These details will be not visible from the linguistic point of view. Just
think about communicating computers and you will know what I mean.


- Matthias

Mike Tintner [mailto:[EMAIL PROTECTED] wrote:

Matthias,

You seem - correct me - to be going a long way round saying that words are
different from concepts - they're just sound-and-letter labels for concepts,
which have a very different form. And the processing of words/language is
distinct from and relatively simple compared to the processing of the
underlying concepts.

So take

THE CAT SAT ON THE MAT

or

THE MIND HAS ONLY CERTAIN PARTS WHICH ARE SENTIENT

or

THE US IS THE HOME OF THE FINANCIAL CRISIS

the words "c-a-t" or "m-i-n-d" or "U-S"  or "f-i-n-a-n-c-i-a-l 
c-r-i-s-i-s"

are distinct from the underlying concepts. The question is: What form do
those concepts take? And what is happening in our minds (and what has to
happen in any mind) when we process those concepts?

You talk of "patterns". What patterns, do you think, form the concept of
"mind" that are engaged in thinking about sentence 2? Do you think that
concepts like "mind" or "the US" might involve something much more complex
still? "Models"? Or is that still way too simple? "Spaces"?

Equally, of course, we can say that each *sentence* above is not just a
"verbal composition" but a "conceptual composition" - and the question then
is what form does such a composition take? Do sentences form, say, a
"pattern of patterns", or something like a "picture"? Or a "blending of
spaces"?

Or are concepts like *money*?

YOU CAN BUY A LOT WITH A MILLION DOLLARS

Does every concept function somewhat like money, e.g. "a million dollars" -
something that we know can be cashed in, in an infinite variety of ways, but
that we may not have to start "cashing in" (when processing), unless
really called for - or only cash in so far?

P.S. BTW this is the sort of psycho-philosophical discussion that I would
see as central to AGI, but that most of you don't want to talk about?





Matthias: What the computer makes with the data it receives depends on the
information

of the transferred data, its internal algorithms and its internal data.
This is the same with humans and natural language.


Language understanding would be useful to teach the AGI with existing
knowledge already represented in natural language. But natural language
understanding suffers from the problem of ambiguities. These ambiguities
can
be solved by having similar knowledge as humans have. But then you have a
recursive problem because first there has to be solved the problem to
obtain
this knowledge.

Nature solves this problem with embodiment. Different people have similar
experiences since the laws of nature do not depend on space and time.
Therefore we can all imagine a dog which is angry. Since we have experienced
angry dogs but we haven't experienced angry trees, we can resolve the
linguistic ambiguity of my former example and answer the question: Who was
angry?

The way to obtain knowledge with embodiment is hard and long even in
virtu

Re: [agi] Words vs Concepts [ex Defining AGI]

2008-10-19 Thread Mark Waser

These details will be not visible from the linguistic point of view. Just
think about communicating computers and you will know what I mean.


Read Pinker's The Stuff of Thought.  Actually, a lot of these details *are* 
visible from a linguistic point of view.


- Original Message - 
From: "Dr. Matthias Heger" <[EMAIL PROTECTED]>

To: 
Sent: Sunday, October 19, 2008 10:31 AM
Subject: AW: [agi] Words vs Concepts [ex Defining AGI]


For the discussion of the subject the details of the pattern 
representation

are not important at all. It is sufficient if you agree that a spoken
sentence represents a certain set of patterns which are translated into the
sentence. The receiving agent retranslates the sentence and matches the
content with its model by activating similar patterns.

The activation of patterns is extremely fast and happens in real time. The
brain even predicts patterns if it just hears the first syllable of a 
word:


http://www.rochester.edu/news/show.php?id=3244

There is no creation of new patterns and there is no intelligent algorithm
which manipulates patterns. It is just translating, sending, receiving and
retranslating.

From the ambiguities of natural language you obtain some hints about the
structure of the patterns. But you cannot even expect to obtain all detail
of these patterns by understanding the process of language understanding.
There will be probably many details within these patterns which are only
necessary for internal calculations.
These details will be not visible from the linguistic point of view. Just
think about communicating computers and you will know what I mean.


- Matthias

Mike Tintner [mailto:[EMAIL PROTECTED] wrote:

Matthias,

You seem - correct me - to be going a long way round saying that words are
different from concepts - they're just sound-and-letter labels for concepts,
which have a very different form. And the processing of words/language is
distinct from and relatively simple compared to the processing of the
underlying concepts.

So take

THE CAT SAT ON THE MAT

or

THE MIND HAS ONLY CERTAIN PARTS WHICH ARE SENTIENT

or

THE US IS THE HOME OF THE FINANCIAL CRISIS

the words "c-a-t" or "m-i-n-d" or "U-S"  or "f-i-n-a-n-c-i-a-l 
c-r-i-s-i-s"

are distinct from the underlying concepts. The question is: What form do
those concepts take? And what is happening in our minds (and what has to
happen in any mind) when we process those concepts?

You talk of "patterns". What patterns, do you think, form the concept of
"mind" that are engaged in thinking about sentence 2? Do you think that
concepts like "mind" or "the US" might involve something much more complex
still? "Models"? Or is that still way too simple? "Spaces"?

Equally, of course, we can say that each *sentence* above is not just a
"verbal composition" but a "conceptual composition" - and the question then
is what form does such a composition take? Do sentences form, say, a
"pattern of patterns", or something like a "picture"? Or a "blending of
spaces"?

Or are concepts like *money*?

YOU CAN BUY A LOT WITH A MILLION DOLLARS

Does every concept function somewhat like money, e.g. "a million dollars" -
something that we know can be cashed in, in an infinite variety of ways, but
that we may not have to start "cashing in" (when processing), unless
really called for - or only cash in so far?

P.S. BTW this is the sort of psycho-philosophical discussion that I would
see as central to AGI, but that most of you don't want to talk about?





Matthias: What the computer makes with the data it receives depends on the
information

of the transferred data, its internal algorithms and its internal data.
This is the same with humans and natural language.


Language understanding would be useful to teach the AGI with existing
knowledge already represented in natural language. But natural language
understanding suffers from the problem of ambiguities. These ambiguities
can
be solved by having similar knowledge as humans have. But then you have a
recursive problem because first there has to be solved the problem to
obtain
this knowledge.

Nature solves this problem with embodiment. Different people have similar
experiences since the laws of nature do not depend on space and time.
Therefore we can all imagine a dog which is angry. Since we have experienced
angry dogs but we haven't experienced angry trees, we can resolve the
linguistic ambiguity of my former example and answer the question: Who was
angry?

The way to obtain knowledge with embodiment is hard and long even in virtual
worlds. If the AGI is to understand natural language it would be necessary
for it to have similar experiences to those humans have in the real world.
But this would need a very, very sophisticated and rich virtual world. At
least, there have to be angry dogs in the virtual world ;-)

As I have already said, I do not think the ratio between the utility of this
approach and its costs would be positive for a first AGI.






--

Re: [agi] Words vs Concepts [ex Defining AGI]

2008-10-19 Thread Mark Waser

The process of changing the internal model does not belong to language
understanding.
Language understanding ends if the matching process is finished.


What if the matching process is not finished?

This is overly simplistic for several reasons since you're apparently 
assuming that the matching process is crisp, unambiguous, and irreversible 
(and ask Stephen Reed how well that works for TexAI).


It *must* be remembered that "the internal model" for natural language 
includes such critically entwined and constantly changing information as 
what this particular conversation is about, what the speaker knows, and what 
the speaker's motivations are.  The meaning of sentences can change 
tremendously based upon the currently held beliefs about these questions. 
Suddenly realizing that the speaker is being sarcastic generally reverses 
the meaning of statements.  Suddenly realizing that the speaker is using an 
analogy can open up tremendous vistas for interpretation and analysis.  Look 
at all the problems that people have parsing sentences.



Language
understanding can be strictly separated conceptually from creation and
manipulation of patterns as you can separate the process of communication
with the process of manipulating the database in a computer.


The reason why you can separate the process of communication with the 
process of manipulating data in a computer is because *data* is crisp and 
unambiguous.  It is concrete and completely specified as I suggested in my 
initial e-mail.  The model is entirely known and the communication process 
is entirely specified.  None of these things are true of unstructured 
knowledge.


Language understanding emphatically does not meet these requirements so your 
analogy doesn't hold.



You can see it differently but then everything is only a discussion about
definitions.


No, and claiming that everything is just a discussion about definitions is a 
strawman.  Your analogies are not accurate and your model is incomplete. 
You are focusing only on the tip of the iceberg (concrete language as spoken 
by a two-year-old) and missing the essence of NLP.



- Original Message - 
From: "Dr. Matthias Heger" <[EMAIL PROTECTED]>

To: 
Sent: Sunday, October 19, 2008 1:42 PM
Subject: AW: [agi] Words vs Concepts [ex Defining AGI]



The process of changing the internal model does not belong to language
understanding.
Language understanding ends if the matching process is finished. Language
understanding can be strictly separated conceptually from creation and
manipulation of patterns as you can separate the process of communication
with the process of manipulating the database in a computer.
You can see it differently but then everything is only a discussion about
definitions.

- Matthias




Mark Waser [mailto:[EMAIL PROTECTED] wrote

Gesendet: Sonntag, 19. Oktober 2008 19:00
An: agi@v2.listbox.com
Betreff: Re: [agi] Words vs Concepts [ex Defining AGI]

There is no creation of new patterns and there is no intelligent 
algorithm
which manipulates patterns. It is just translating, sending, receiving 
and

retranslating.


This is what I disagree entirely with.  If nothing else, humans are
constantly building and updating their mental model of what other people
believe and how they communicate it.  Only in routine, pre-negotiated
conversations can language be entirely devoid of learning.  Unless a
conversation is entirely concrete and based upon something like shared
physical experiences, it can't be any other way.  You're only paying
attention to the absolutely simplest things that language does (i.e. the 
tip


of the iceberg).














Re: [agi] Words vs Concepts [ex Defining AGI]

2008-10-19 Thread Mark Waser
If there are some details of the internal structure of patterns visible 
then

this is no proof at all that there are not also details of the structure
which are completely hidden from the linguistic point of view.


True, but visible patterns offer clues for interpretation and analysis.  The 
more that is visible and clear, the less that is ambiguous and needs to be 
guessed at.  This is where your analogy splitting computer communications 
and data updates is accurate because the internal structures have been 
communicated and are shared to the nth degree.



Since in many communicating technical systems there are so much details
which are not transferred I would bet that this is also the case in 
humans.


Details that don't need to be transferred are those which are either known 
by or unnecessary to the recipient.  The former is a guess (unless the 
details were transmitted previously) and the latter is an assumption based 
upon partial knowledge of the recipient.  In a perfect, infinite world, 
details could and should always be transferred.  In the real world, time and 
computational constraints mean that trade-offs need to occur.  This is 
where the essence of intelligence comes into play -- determining which of 
the trade-offs to take to get optimal performance (a.k.a. domain competence).



As long as we have no proof this remains an open question.


What remains an open question?  Obviously there are details which can be 
teased out by behavior and details that can't be easily teased out because 
we have insufficient data to do so.  This is like any other scientific 
examination of any other complex phenomenon.



An AGI which may
have internal features for its patterns would have less restrictions and 
is

thus far easier to build.


Sorry, but I can't interpret this.  An AGI without internal features and 
regularities is an oxymoron and completely nonsensical.  What are you trying 
to convey here?








Re: AW: [agi] Re: Defining AGI

2008-10-19 Thread Mark Waser
g. Meaning is a mapping from a 
linguistic string to patterns.

   

  Email programs are not just point to point repeaters.

  They receive data in a certain communication protocol. They translate these 
data into an internal representation and store the data. And they can translate 
their internal data into a linguistic representation to send the data to 
another email client. This process of communication is conceptually the same 
as the one we observe with humans.

  The word "meaning" was badly chosen by me. But brains do not transfer meaning 
either. They also just transfer data. Meaning is a mapping. 

   

  You *believe* that language cannot be separated from intelligence. I don't, 
and I have described a model with a strict separation. Neither of us has a 
proof.

   

  - Matthias

   

  >>> 

  Mark Waser [mailto:[EMAIL PROTECTED]  wrote



   

   

  BUT!  This also holds true for language!  Concrete unadorned statements 
convey a lot less information than statements loaded with adjectives, adverbs, 
or even more markedly analogies (or innuendos or . . . ).

  A child cannot pick up the same amount of information from a sentence that 
they think they understand (and do understand to some degree) as an adult can.

  Language is a knowledge domain like any other and high intelligences can use 
it far more effectively than lower intelligences.

   

  ** Or, in other words, I am disagreeing with the statement that "the process 
itself needs not much intelligence".

   

  Saying that the understanding of language itself is simple is like saying 
that chess is simple because you understand the rules of the game.

  Godel's Incompleteness Theorem can be used to show that there is no upper 
bound on the complexity of language and the intelligence necessary to pack and 
extract meaning/knowledge into/from language.

   

  Language is *NOT* just a top-level communications protocol because it is not 
fully-specified and because it is tremendously context-dependent (not to 
mention entirely Godellian).  These two reasons are why it *IS* inextricably 
tied into intelligence.

   

  I *might* agree that the concrete language of lower primates and young 
children is separate from intelligence, but there is far more going on in adult 
language than a simple communications protocol.

   

  E-mail programs are simply point-to-point repeaters of language (NOT 
meaning!)  Intelligences generally don't exactly repeat language but *try* to 
repeat meaning.  The game of telephone is a tremendous example of why language 
*IS* tied to intelligence (or look at the results of translating simple phrases 
into another language and back -- "The drink is strong but the meat is 
rotten").  Translating language to and from meaning (i.e. your domain model) is 
the essence of intelligence.

   

  How simple is the understanding of the above?  How much are you having to 
fight to relate it to your internal model (assuming that it's even compatible 
:-)?

   

  I don't believe that intelligence is dependent upon language EXCEPT that 
language is necessary to convey knowledge/meaning (in order to build 
intelligence in a reasonable timeframe) and that language is influenced by and 
influences intelligence since it is basically the core of the critical 
meta-domains of teaching, learning, discovery, and alteration of your internal 
model (the effectiveness of which *IS* intelligence).  Future AGI and humans 
will undoubtedly not only have a much richer language but also a much richer 
repertoire of second-order (and higher) features expressed via language.

   

  ** Or, in other words, I am strongly disagreeing that "intelligence is 
separated from language understanding".  I believe that language understanding 
is the necessary tool that intelligence is built with since it is what puts the 
*contents* of intelligence (i.e. the domain model) into intelligence.  Trying 
to build an intelligence without language understanding is like trying to build 
it with just machine language, or with only observable data points, rather than 
building up to more complex entities -- third-, fourth-, and fifth-generation 
programming languages instead of machine language, and knowledge instead of 
bare data points.

   

  BTW -- Please note, however, that the above does not imply that I believe 
that NLU is the place to start in developing AGI.  Quite the contrary -- NLU 
rests upon such a large domain model that I believe that it is 
counter-productive to start there.  I believe that we need to start with limited 
domains and learn about language, internal models, and grounding without 
brittleness in tractable domains before attempting to extend that knowledge to 
larger domains.

   

- Original Message - 

From: David Hart 

To: agi@v2.listbox.com 

Sent: Sunday, October 19, 2008 5:30 AM

Subject: Re: AW: [agi] R

Re: AW: AW: [agi] Re: Defining AGI

2008-10-19 Thread Mark Waser
I don't think that learning of language is the entire point. If I have only
learned language I still cannot create anything. A human who can understand
language is by far still no good scientist. Intelligence means the ability
to solve problems. Which problems can a system solve if it can do nothing
else than language understanding?


Many or most people on this list believe that learning language is an 
AGI-complete task.  What this means is that the skills necessary for 
learning a language are necessary and sufficient for learning any other 
task.  It is not that language understanding gives general intelligence 
capabilities, but that the pre-requisites for language understanding are 
general intelligence (or, that language understanding is isomorphic to 
general intelligence in the same fashion that all NP-complete problems are 
isomorphic).  Thus, the argument actually is that a system that "can do 
nothing else than language understanding" is an oxymoron.


*Any* human who can understand language beyond a certain point (say, that of 
a slightly sub-average human IQ) can easily be taught to be a good scientist 
if they are willing to play along.  Science is a rote process that can be 
learned and executed by anyone -- as long as their beliefs and biases don't 
get in the way.



Deaf people speak in sign language, which is only different from spoken
language in superficial ways. This does not tell us much about language
that we didn't already know. But it is a proof that *natural* language
understanding is not necessary for human-level intelligence.


This is a bit of disingenuous side-track that I feel that I must address. 
When people say "natural language", the important features are extensibility 
and ambiguity.  If you can handle one extensible and ambiguous language, you 
should have the capabilities to handle all of them.  It's yet another 
definition of GI-complete.  Just look at it as yet another example of 
dealing competently with ambiguous and incomplete data (which is, at root, 
all that intelligence is).


If you can speak two languages then you can make an easy test: Try to think
in the foreign language. It works. If language were inherently involved
in the process of thought then thinking alternately in two languages
would cost many resources of the brain. In fact you just need to use the
other module for language translation. This is a big hint that language and
thoughts do not have much in common.


One thought module, two translation modules -- except that all the 
translation modules really are is label appliers and grammar re-arrangers. 
The heavy lifting is all in the thought module.  The problem is that you are 
claiming that language lies entirely in the translation modules while I'm 
arguing that a large percentage of it is in the thought module.  The fact 
that the translation module has to go to the thought module for 
disambiguation and interpretation (and numerous other things) should make it 
quite clear that language is *not* simply translation.


Further, if you read Pinker's book, you will find that languages have a lot 
more in common than you would expect if language truly were independent of 
and separate from thought (as you are claiming).  Language is built on top 
of the thinking/cognitive architecture (not beside it and not independent of 
it) and could not exist without it.  That is why language is AGI-complete. 
Language also gives an excellent window into many of the features of that 
cognitive architecture and determining what is necessary for language also 
determines what is in that cognitive architecture.  Another excellent window 
is how humans perform moral judgments (try reading Marc Hauser -- either his 
numerous scientific papers or the excellent Moral Minds).  Or, yet another, 
is examining the structure of human biases.




- Original Message - 
From: "Dr. Matthias Heger" <[EMAIL PROTECTED]>

To: 
Sent: Sunday, October 19, 2008 2:52 PM
Subject: AW: AW: AW: [agi] Re: Defining AGI




Terren wrote:



Isn't the *learning* of language the entire point? If you don't have an
answer for how an AI learns language, you haven't solved anything.  The
understanding of language only seems simple from the point of view of a
fluent speaker. Fluency however should not be confused with a lack of
intellectual effort - rather, it's a state in which the effort involved is
automatic and beyond awareness.

I don't think that learning of language is the entire point. If I have only
learned language I still cannot create anything. A human who can understand
language is by far still no good scientist. Intelligence means the ability
to solve problems. Which problems can a system solve if it can do nothing
else than language understanding?


Einstein had to express his (non-linguistic) internal insights in natural
language and in mathematical language.  In both modalities he had to use
his intelligence to make the translation from his mental

Re: [agi] Words vs Concepts [ex Defining AGI]

2008-10-19 Thread Mark Waser
You have given no reason why the process of communication can only be
separated from the process of manipulating data if the knowledge is
structured. In fact there is no reason.


How do you communicate something for which you have no established 
communications protocol?  If you can answer that, you have solved the 
natural language problem.


- Original Message - 
From: "Dr. Matthias Heger" <[EMAIL PROTECTED]>

To: 
Sent: Sunday, October 19, 2008 3:10 PM
Subject: AW: [agi] Words vs Concepts [ex Defining AGI]



Mark Waser wrote


What if the matching process is not finished?
This is overly simplistic for several reasons since you're apparently
assuming that the matching process is crisp, unambiguous, and irreversible
(and ask Stephen Reed how well that works for TexAI).


I do not assume this. Why should I?


It *must* be remembered that "the internal model" for natural language
includes such critically entwined and constantly changing information as
what this particular conversation is about, what the speaker knows, and what
the speaker's motivations are.  The meaning of sentences can change
tremendously based upon the currently held beliefs about these questions.
Suddenly realizing that the speaker is being sarcastic generally reverses
the meaning of statements.  Suddenly realizing that the speaker is using an
analogy can open up tremendous vistas for interpretation and analysis.  Look
at all the problems that people have parsing sentences.


If I suddenly realize that the speaker is sarcastic then I change my
mappings from linguistic entities to pattern entities. Where is the problem?




The reason why you can separate the process of communication with the
process of manipulating data in a computer is because *data* is crisp and
unambiguous.  It is concrete and completely specified as I suggested in my
initial e-mail.  The model is entirely known and the communication process
is entirely specified.  None of these things are true of unstructured
knowledge.


You have given no reason why the process of communication can only be
separated from the process of manipulating data if the knowledge is
structured. In fact there is no reason.




Language understanding emphatically does not meet these requirements so your
analogy doesn't hold.


There are no special requirements.

- Matthias











Re: [agi] Words vs Concepts [ex Defining AGI]

2008-10-19 Thread Mark Waser

We can assume that the speaking human itself is not aware of every detail
of its patterns. At least these details would probably be hidden from
communication.


Absolutely.  We are not aware of most of our assumptions that are based in 
our common heritage, culture, and embodiment.  But an external observer 
could easily notice them and tease out an awful lot of information about us 
by doing so.


- Original Message - 
From: "Dr. Matthias Heger" <[EMAIL PROTECTED]>

To: 
Sent: Sunday, October 19, 2008 3:18 PM
Subject: AW: [agi] Words vs Concepts [ex Defining AGI]



We can assume that the speaking human itself is not aware of every detail
of its patterns. At least these details would probably be hidden from
communication.

-Matthias

Mark Waser wrote


Details that don't need to be transferred are those which are either known
by or unnecessary to the recipient.  The former is a guess (unless the
details were transmitted previously) and the latter is an assumption based
upon partial knowledge of the recipient.  In a perfect, infinite world,
details could and should always be transferred.  In the real world, time and
computational constraints mean that trade-offs need to occur.  This is
where the essence of intelligence comes into play -- determining which of
the trade-offs to take to get optimal performance (a.k.a. domain competence).













Re: [agi] Re: Meaning, communication and understanding

2008-10-19 Thread Mark Waser

The language model does not need interaction with the environment when the
language model is already complete which is possible for formal languages
but nearly impossible for natural language. That is the reason why formal
language need much less cost.


Yes!  But the formal languages need to be efficiently extensible as well 
(and ambiguity plays a large part in extensibility which then leads to . . . 
.  :-)


If the language must be learned then things are completely different and 
you
are right that the interaction with the environment is necessary to learn 
L.


How do you go from a formal language to a competent description of a messy, 
ambiguous, data-deficient world?  *That* is the natural language question.


What happens if I say that language extensibility is exactly analogous to 
learning which is exactly analogous to internal model improvement?



But in any case there is a complete distinction between D and L. The brain
never sends entities of D to its output region but it sends entities of L.
Therefore there must be a strict separation between language model and D.


I disagree with a complete distinction between D and L.  L is a very small 
fraction of D translated for transmission.  However, instead of arguing that 
there must be a strict separation between language model and D, I would 
argue that the more similar the two could be (i.e. the less translation from 
D to L) the better.  Analyzing L in that case could tell you more about D 
than you might think (which is what Pinker and Hauser argue).  It's like 
looking at data to determine an underlying cause for a phenomenon.  Even 
noticing what does and does not vary (and what covaries) tells you a lot 
about the underlying cause (D).



- Original Message - 
From: "Dr. Matthias Heger" <[EMAIL PROTECTED]>

To: 
Sent: Sunday, October 19, 2008 3:50 PM
Subject: AW: [agi] Re: Meaning, communication and understanding



The language model does not need interaction with the environment when the
language model is already complete which is possible for formal languages
but nearly impossible for natural language. That is the reason why formal
language need much less cost.

If the language must be learned then things are completely different and 
you
are right that the interaction with the environment is necessary to learn 
L.


But in any case there is a complete distinction between D and L. The brain
never sends entities of D to its output region but it sends entities of L.
Therefore there must be a strict separation between language model and D.

- Matthias




Vladimir Nesov wrote

I think that this model is overly simplistic, overemphasizing an
artificial divide between domains within AI's cognition (L and D), and
externalizing communication domain from the core of AI. Both world
model and language model support interaction with environment, there
is no clear cognitive distinction between them. As a given,
interaction happens at the narrow I/O interface, and anything else is
a design decision for a specific AI (even invariability of I/O is, a
simplifying assumption that complicates semantics of time and more
radical self-improvement). Sufficiently flexible cognitive algorithm
should be able to integrate facts about any "domain", becoming able to
generate appropriate behavior in corresponding contexts.












Re: AW: [agi] Re: Defining AGI

2008-10-19 Thread Mark Waser
> Manipulating of patterns needs reading and writing operations. Data 
> structures will be changed. Translation needs just reading operations to the 
> patterns of the internal model.



So translation is a pattern manipulation where the result isn't stored?
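
(In code terms, the distinction being claimed is roughly the following; this is purely my own toy sketch with hypothetical names, not either of our actual models:)

# The internal model: pattern id -> content (toy illustration).
model = {"p1": ("dog", "angry"), "p2": ("tree", "green")}

def translate(model, pattern_ids):
    """Read-only: render patterns as a sentence without touching the model."""
    return " and ".join("%s is %s" % model[p] for p in pattern_ids)

def manipulate(model, pattern_id, new_content):
    """Read-write: updating a pattern changes the model's data structures."""
    model[pattern_id] = new_content

print(translate(model, ["p1"]))             # "dog is angry"; model unchanged
manipulate(model, "p2", ("tree", "tall"))   # the model itself is modified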



> I disagree that AGI must have some process for learning language. If we 
> concentrate just on the domain of mathematics we could give AGI all the rules 
> for a sufficient language to express its results and to understand our 
> questions.



The domain of mathematics is complete and unambiguous.  A mathematics AI is not 
a GI in my book.  It won't generalize to the real world until it handles 
incompleteness and ambiguity (which is my objection to your main analogy).



(Note:  I'm not saying that it might not be a good first step . . . . but I 
don't believe that it is on the shortest path to GI).



> New definitions makes communication more comfortable but they are not 
> necessary.

 

Wrong.  Methane is not a new definition, it is a new label.  New definitions 
that combine lots of raw data into much more manipulable knowledge are 
necessary exactly as much as a third-, fourth-, or fifth- generation language 
is necessary instead of machine language.



> I don't know the telephone game. The details are essential. It is not 
> essential where the data comes from and where it ends. Just the process of 
> translating internal data into a certain language and vice versa is important.



Start with a circle of people.  Tell the first person a reasonable length 
phrase, have them tell the next, and so on.  The end result is fascinating and 
very similar to what happens when an incompetent pair of translators attempt to 
translate from one language to another and back again.
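
(A toy simulation of the effect, if it helps; the noise model, vocabulary and error rate below are entirely my own illustrative assumptions:)

import random

def noisy_repeat(phrase, n_people, p_error=0.2, seed=1):
    """Pass a phrase around a circle; each person garbles each word with probability p_error."""
    random.seed(seed)
    vocabulary = ["drink", "strong", "meat", "rotten", "spirit", "willing", "flesh", "weak"]
    words = phrase.split()
    for _ in range(n_people):
        words = [random.choice(vocabulary) if random.random() < p_error else w
                 for w in words]
    return " ".join(words)

print(noisy_repeat("the spirit is willing but the flesh is weak", 10))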



> It is clear that an AGI needs an interface for human beings. But the question 
> in this discussion is whether the language interface is a key point in AGI or 
> not. In my opinion it is no key point. It is just a communication protocol. 
> The real intelligence has nothing to do with language understanding. 
> Therefore we should use a simple formal hard coded language for first AGI.



The communication protocol needs to be extensible to handle output after 
learning or transition into a new domain.  How do you ground new concepts?  
More importantly, it needs to be extensible to support teaching the AGI.  As I 
keep saying, how are you going to make your communication protocol extensible?  
Real GENERAL intelligence has EVERYTHING to do with extensibility.



> I don't see any problems with my model and I do not see any flaws which I 
> don't have answered.

> I haven't seen any point where my analogy comes short.



I keep pointing out that your model separating communication and database 
updating depends upon a fully specified model and does not tolerate ambiguity 
(i.e. it lacks extensibility and doesn't handle ambiguity).  You continue not 
to answer these points.



Unless you can handle valid objections by showing why they aren't valid, your 
model is disproven by counter-example.



  - Original Message - 
  From: Dr. Matthias Heger 
  To: agi@v2.listbox.com 
  Sent: Sunday, October 19, 2008 4:53 PM
  Subject: AW: AW: [agi] Re: Defining AGI


  Mark Waser wrote:

   

  >How is translating patterns into language different from manipulating 
patterns? 

  > It seems to me that they are *exactly* the same thing.  How do you believe 
that they differ?

   

  Manipulating of patterns needs reading and writing operations. Data 
structures will be changed. Translation needs just reading operations to the 
patterns of the internal model.

   

   

  >Do you really believe that if A is easier than B then that makes A easy? 

  > How about if A is leaping a tall building in a single bound and B is 
jumping to the moon?

   

  The word *easy*  is not exactly definable.

   

   

  > Do you believe that language is fully specified?  That we can program 
English into an AGI by hand?

   

  No. That's the reason why I would not use human language for the first AGI.

   

  >Yes, I imagine that an AGI must have some process for learning language 
because language is necessary for 

  >learning knowledge and knowledge is necessary for intelligence.  

  >What part of that do you disagree with?  Please be specific.

   

  I disagree that AGI must have some process for learning language. If we 
concentrate just on the domain of mathematics we could give AGI all the rules 
for a sufficient language to express its results and to understand our 
questions.

   

   

   

   >>>

  >And this is where we are not communicating.  Since language is not fully 
specified, then the participants in many conversations are *constantly* 
creating and learning language as a part of the process of communication.

Re: AW: AW: [agi] Re: Defining AGI

2008-10-19 Thread Mark Waser
Funny, Ben.

So . . . . could you clearly state why science can't be done by anyone willing 
to simply follow the recipe?

Is it really anything other than the fact that they are stopped by their 
unconscious beliefs and biases?  If so, what?

Instead of a snide comment, defend your opinion with facts, explanations, and 
examples of what it really is.

I can give you all sorts of examples where someone is capable of doing 
something "by the numbers" until they are told that they can't.

What do you believe is so difficult about science other than overcoming the 
sub/unconscious?

Your statement is obviously spoken by someone who has lectured as opposed to 
taught.
  - Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Sunday, October 19, 2008 5:26 PM
  Subject: Re: AW: AW: [agi] Re: Defining AGI






>>>
*Any* human who can understand language beyond a certain point (say, that of

a slightly sub-average human IQ) can easily be taught to be a good scientist

if they are willing to play along.  Science is a rote process that can be
learned and executed by anyone -- as long as their beliefs and biases don't
get in the way.
<<

  This is obviously spoken by someone who has never been a professional teacher 
;-p

  ben g







Re: [agi] Words vs Concepts [ex Defining AGI]

2008-10-19 Thread Mark Waser

I have given the example with the dog next to a tree.
There is an ambiguity. It can be resolved because the pattern for dog has a
stronger relation to the pattern for angry than is the case for the pattern
for tree.


So, are the relationships between the various patterns in your translation 
module or in your cognitive module?


I would argue that they are in your cognitive module.  If you disagree, then 
I'll just agree to disagree, because if they are in your translation module 
then you'll have to be constantly updating that module, which contradicts 
what you said previously about the translation module being static.
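
(For concreteness, the association-strength disambiguation being described might look like the sketch below; the weights and names are entirely hypothetical, and the point of contention is precisely which module such a table would live in:)

# Hypothetical association strengths between patterns (toy values).
association = {("dog", "angry"): 0.9, ("tree", "angry"): 0.1}

def resolve(candidates, predicate):
    """Pick the candidate whose pattern is most strongly associated with the predicate."""
    return max(candidates, key=lambda c: association.get((c, predicate), 0.0))

# "The dog next to the tree ... Who was angry?"
print(resolve(["dog", "tree"], "angry"))   # -> "dog"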



- Original Message - 
From: "Dr. Matthias Heger" <[EMAIL PROTECTED]>

To: 
Sent: Sunday, October 19, 2008 5:38 PM
Subject: AW: [agi] Words vs Concepts [ex Defining AGI]



I have given the example with the dog next to a tree.
There is an ambiguity. It can be resolved because the pattern for dog has a
stronger relation to the pattern for angry than is the case for the pattern
for tree.

You don't have to manipulate any patterns and can do the translation.

- Matthias

Marc Walser wrote:

How do you communicate something for which you have no established
communications protocol?  If you can answer that, you have solved the
natural language problem.












Re: AW: AW: [agi] Re: Defining AGI

2008-10-19 Thread Mark Waser

Interesting how you always only address half my points . . .

I keep hammering extensibility and you focus on ambiguity which is merely 
the result of extensibility.  You refuse to address extensibility.  Maybe 
because it really is the secret sauce of intelligence and the one thing that 
you can't handle?


And after a long explanation, I get comments like "> It is still just 
translation" with no further explanation and "visual thought" nonsense 
worthy of Mike Tintner.


So, I give up.  I can't/won't debate someone who won't follow scientific 
methods of inquiry.



- Original Message - 
From: "Dr. Matthias Heger" <[EMAIL PROTECTED]>

To: 
Sent: Sunday, October 19, 2008 5:21 PM
Subject: AW: AW: AW: [agi] Re: Defining AGI



Marc Walser wrote:



*Any* human who can understand language beyond a certain point (say, that of
a slightly sub-average human IQ) can easily be taught to be a good scientist
if they are willing to play along.  Science is a rote process that can be
learned and executed by anyone -- as long as their beliefs and biases don't
get in the way.
<<<


This is just an opinion and I  strongly disagree with your opinion.
Obviously you overestimate language understanding a lot.





This is a bit of disingenuous side-track that I feel that I must address.
When people say "natural language", the important features are extensibility
and ambiguity.  If you can handle one extensible and ambiguous language, you
should have the capabilities to handle all of them.  It's yet another
definition of GI-complete.  Just look at it as yet another example of
dealing competently with ambiguous and incomplete data (which is, at root,
all that intelligence is).
<<<

You use your personal definition of natural language. I don't think that
humans are intelligent because they use an ambiguous language. They would
also be intelligent if their language did not suffer from ambiguities.




One thought module, two translation modules -- except that all the
translation modules really are is label appliers and grammar re-arrangers.
The heavy lifting is all in the thought module.  The problem is that you are
claiming that language lies entirely in the translation modules while I'm
arguing that a large percentage of it is in the thought module.  The fact
that the translation module has to go to the thought module for
disambiguation and interpretation (and numerous other things) should make it
quite clear that language is *not* simply translation.
<<<

It is still just translation.




Further, if you read Pinker's book, you will find that languages have a lot
more in common than you would expect if language truly were independent of
and separate from thought (as you are claiming).  Language is built on top
of the thinking/cognitive architecture (not beside it and not independent of
it) and could not exist without it.  That is why language is AGI-complete.
Language also gives an excellent window into many of the features of that
cognitive architecture and determining what is necessary for language also
determines what is in that cognitive architecture.  Another excellent window
is how humans perform moral judgments (try reading Marc Hauser -- either his
numerous scientific papers or the excellent Moral Minds).  Or, yet another,
is examining the structure of human biases.
<<<

There are also visual thoughts. You can imagine objects moving. The
principle is the same as with thoughts you perceive in your language: there
is an internal representation of patterns which is completely hidden from
your consciousness. The brain compresses and translates your visual thoughts
and routes the results to its own visual input regions.

As long as there is no real evidence against the model that thoughts are
separated from the way I perceive thoughts (e.g. by language) I do not see
any reason to change my opinion.

- Matthias











Re: AW: AW: [agi] Re: Defining AGI

2008-10-19 Thread Mark Waser
Actually, I should have drawn a distinction . . . . there is a major difference 
between performing discovery as a scientist and evaluating data as a scientist. 
 I was referring to the latter (which is similar to understanding Einstein) as 
opposed to the former (which is being Einstein).  You clearly are referring to 
the creative act of discovery (Programming is also a discovery operation).

So let me rephrase my statement -- Can a stupid person do good scientific 
evaluation if taught the rules and willing to abide by them?  Why or why not?
  - Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Sunday, October 19, 2008 5:52 PM
  Subject: Re: AW: AW: [agi] Re: Defining AGI



  Mark,

  It is not the case that I have merely lectured rather than taught.  I've 
lectured (math, CS, psychology and futurology) at university, it's true ... but 
I've also done extensive one-on-one math tutoring with students at various 
levels ... and I've also taught small groups of kids aged 7-12, hands-on (math 
& programming), and I've taught retirees various skills (mostly computer 
related).

  Why can't a stupid person do good science?  Doing science in reality seems to 
require a whole bunch of implicit, hard-to-verbalize knowledge that stupid 
people just don't seem to be capable of learning.  A stupid person can possibly 
be trained to be a good lab assistant, in some areas of science but not others 
(it depends on how flaky and how complex the lab technology involved is in that 
area).  But, being a scientist involves a lot of judgment, a lot of heuristic, 
uncertain reasoning drawing on a wide variety of knowledge.

  Could a stupid person learn to be a good scientist given, say, a thousand 
years of training?  Maybe.  But I doubt it, because by the time they had moved 
on to learning the second half of what they need to know, they would have 
already forgotten the first half ;-p

  You work in software engineering -- do you think a stupid person could be 
trained to be a really good programmer?  Again, I very much doubt it ... though 
they could be (and increasingly are ;-p) trained to do routine programming 
tasks.  

  Inevitably, in either of these cases, the person will encounter some 
situation not directly covered in their training, and will need to 
intelligently analogize to their experience, and will fail at this because they 
are not very intelligent...

  -- Ben G


  On Sun, Oct 19, 2008 at 5:43 PM, Mark Waser <[EMAIL PROTECTED]> wrote:

Funny, Ben.

So . . . . could you clearly state why science can't be done by anyone 
willing to simply follow the recipe?

Is it really anything other than the fact that they are stopped by their 
unconscious beliefs and biases?  If so, what?

Instead of a snide comment, defend your opinion with facts, explanations, 
and examples of what it really is.

I can give you all sorts of examples where someone is capable of doing 
something "by the numbers" until they are told that they can't.

What do you believe is so difficult about science other than overcoming the 
sub/unconscious?

Your statement is obviously spoken by someone who has lectured as opposed 
to taught.
  - Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Sunday, October 19, 2008 5:26 PM
  Subject: Re: AW: AW: [agi] Re: Defining AGI






>>>
*Any* human who can understand language beyond a certain point (say, that of
a slightly sub-average human IQ) can easily be taught to be a good scientist
if they are willing to play along.  Science is a rote process that can be
learned and executed by anyone -- as long as their beliefs and biases don't
get in the way.
<<<

  This is obviously spoken by someone who has never been a professional 
teacher ;-p

  ben g






  -- 
  Ben Goertzel, PhD
  CEO, Novamente LLC and Biomind LLC
  Director of Research, SIAI
  [EMAIL PROTECTED]

  "Nothing will ever be attempted if all possible objections must be first 
overcome "  - Dr Samuel Johnson






Re: AW: AW: [agi] Re: Defining AGI

2008-10-19 Thread Mark Waser
>> Whether a stupid person can do good scientific evaluation "if taught the 
>> rules" is a badly-formed question, because no one knows what the rules are.  
>>  They are learned via experience just as much as by explicit teaching

Wow!  I'm sorry but that is a very scary, incorrect opinion.  There's a really 
good book called "The Game of Science" by McCain and Segal that clearly 
explains all of the rules.  I'll get you a copy.

I understand that most "scientists" aren't trained properly -- but that is no 
reason to continue the problem by claiming that they can't be trained properly.

You make my point with your explanation of your example of biology referees.  
And the Feynman example, if it is the story that I've heard before, was 
actually an example of good science in action because the outlier was 
eventually overruled AFTER ENOUGH GOOD DATA WAS COLLECTED to prove that the 
outlier was truly an outlier and not just a mere inconvenience to someone's 
theory.  Feynman's exceptional intelligence allowed him to discover a 
possibility that might have been correct if the point was an outlier, but good 
scientific evaluation relies on data, data, and more data.  Using that story as 
an example shows that you don't understand how to properly run a scientific 
evaluative process.

  - Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Sunday, October 19, 2008 6:07 PM
  Subject: Re: AW: AW: [agi] Re: Defining AGI



  Whether a stupid person can do good scientific evaluation "if taught the 
rules" is a badly-formed question, because no one knows what the rules are.   
They are learned via experience just as much as by explicit teaching

  Furthermore, as anyone who has submitted a lot of science papers to journals 
knows, even smart scientists can be horrendously bad at scientific evaluation.  
I've had some really good bioscience papers rejected from journals, by 
presumably intelligent referees, for extremely bad reasons (and these papers 
were eventually published in good journals).

  Evaluating research is not much easier than doing it.  When is someone's 
supposed test of statistical validity really the right test?  Too many biology 
referees just look for the magic number of p<.05 rather than understanding what 
test actually underlies that number, because they don't know the math or don't 
know how to connect the math to the experiment in a contextually appropriate 
way.
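
For concreteness, here is a toy illustration of that point.  The data below
are invented, and the two scipy calls are just standard ways of computing
these tests: the p-value pops out either way, but nothing in the number tells
a referee whether the test's assumptions actually fit the data.

# Toy sketch: a p-value alone does not validate the choice of test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
control = rng.normal(loc=10.0, scale=2.0, size=30)
treated = rng.normal(loc=11.5, scale=2.0, size=30)

# Student's t-test assumes roughly normal data with comparable variances;
# the number it returns does not check those assumptions for you.
t_stat, p_t = stats.ttest_ind(control, treated)
print(f"t-test:        p = {p_t:.4f}")

# A nonparametric alternative makes weaker assumptions; knowing when each is
# the contextually appropriate test is the judgment being discussed above.
u_stat, p_u = stats.mannwhitneyu(control, treated, alternative="two-sided")
print(f"Mann-Whitney:  p = {p_u:.4f}")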

  As another example: When should a data point be considered an outlier 
(meaning: probably due to equipment error or some other quirk) rather than a 
genuine part of the data?  Tricky.  I recall Feynman noting that he was held 
back in making a breakthrough discovery for some time, because of an outlier on 
someone else's published data table, which turned out to be spurious but had 
been accepted as valid by the community.  In this case, Feynman's exceptional 
intelligence allowed him to carry out scientific evaluation more effectively 
than others, intelligent but less so than him, had done...

  -- Ben G


  On Sun, Oct 19, 2008 at 6:00 PM, Mark Waser <[EMAIL PROTECTED]> wrote:

Actually, I should have drawn a distinction . . . . there is a major 
difference between performing discovery as a scientist and evaluating data as a 
scientist.  I was referring to the latter (which is similar to understanding 
Einstein) as opposed to the former (which is being Einstein).  You clearly are 
referring to the creative act of discovery (Programming is also a discovery 
operation).

So let me rephrase my statement -- Can a stupid person do good scientific 
evaluation if taught the rules and willing to abide by them?  Why or why not?
  - Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Sunday, October 19, 2008 5:52 PM
  Subject: Re: AW: AW: [agi] Re: Defining AGI



  Mark,

  It is not the case that I have merely lectured rather than taught.  I've 
lectured (math, CS, psychology and futurology) at university, it's true ... but 
I've also done extensive one-on-one math tutoring with students at various 
levels ... and I've also taught small groups of kids aged 7-12, hands-on (math 
& programming), and I've taught retirees various skills (mostly computer 
related).

  Why can't a stupid person do good science?  Doing science in reality 
seems to require a whole bunch of implicit, hard-to-verbalize knowledge that 
stupid people just don't seem to be capable of learning.  A stupid person can 
possibly be trained to be a good lab assistant, in some areas of science but 
not others (it depends on how flaky and how complex the lab technology involved 
is in that area).  But, being a scientist involves a lot of judgment, a lot of 
heuristic, uncertain reasoning drawing on a wide variety of knowledge.

Re: AW: AW: [agi] Re: Defining AGI

2008-10-19 Thread Mark Waser
>> It is really not true that there is a set of simple rules adequate to tell 
>> people how to evaluate scientific results effectively.

Get the book and then speak from a position of knowledge by telling me 
something that you believe it is missing.  When I cite a specific example that 
you can go and verify or disprove, it is not an opinion but a valid data point 
(and your perception of my vehemence and/or confidence and your personal 
reaction to it are totally irrelevant).  The fact that you can make a statement 
like this from a position of total ignorance when I cite a specific example is 
a clear example of not following basic scientific principles.  You can be 
insulted all you like but that is not what a good scientist would do on a good 
day -- it is simply lazy and bad science.

>> As often occurs, there may be rules that tell you how to handle 80% of cases 
>> (or whatever), but then the remainder of the cases are harder and require 
>> actual judgment.

Is it that the rules don't have 100% coverage, or is it that it isn't always 
clear how to appropriately apply the rules, and that is where the questions come in?  
There is a huge difference between the two cases -- and your statement "no one 
knows what the rules are" argues for the former not the latter.  I'd be more 
than willing to accept the latter -- but the former is an embarrassment.  Do 
you really mean to contend the former?

>> It is possible I inaccurately remembered an anecdote from Feynman's book, 
>> but that's irrelevant to my point.

No, you accurately remembered the anecdote.  As I recall, Feynman was 
expressing frustration at the slowness of the process -- particularly because 
no one would consider his hypothesis enough to perform the experiments 
necessary to determine whether the point was an outlier or not.  Not performing 
the experiment was an unfortunate choice of trade-offs (since I'm sure that 
they were doing something else that they deemed more likely to produce 
worthwhile results) but accepting his theory without first proving that the 
outlier was indeed an outlier (regardless of his "intelligence") would have 
been far worse and directly contrary to the scientific method.

>>>> Using that story as an example shows that you don't understand how to 
>>>> properly run a scientific evaluative process.
>> Wow, that is quite an insult.  So you're calling me an incompetent in my 
>> profession now.  

It depends.  Are you going to continue promoting something as inexcusable as 
saying that theory should trump data (because of the source of the theory)?  I 
was quite clear that I was criticizing a very specific action.  Are you going 
to continue to defend that improper action?  

And why don't we keep this on the level of scientific debate rather than 
arguing insults and vehemence and confidence?  That's not particularly good 
science either.  

  - Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Sunday, October 19, 2008 6:31 PM
  Subject: Re: AW: AW: [agi] Re: Defining AGI



  Sorry Mark, but I'm not going to accept your opinion on this just because you 
express it with vehemence and confidence.

  I didn't argue much previously when you told me I didn't understand 
engineering ... because, although I've worked with a lot of engineers, I 
haven't been one.

  But, I grew up around scientists, I've trained scientists, and I am currently 
(among other things) working as a scientist.

  It is really not true that there is a set of simple rules adequate to tell 
people how to evaluate scientific results effectively.  As often occurs, there 
may be rules that tell you how to handle 80% of cases (or whatever), but then 
the remainder of the cases are harder and require actual judgment.

  This is, by the way, the case with essentially every complex human process 
that people have sought to cover via "expert rules."  The rules cover many 
cases ... but as one seeks to extend them to cover all relevant cases, one 
winds up adding more and more and more specialized rules...

  It is possible I inaccurately remembered an anecdote from Feynman's book, but 
that's irrelevant to my point.

  ***
  Using that story as an example shows that you don't understand how to 
properly run a scientific evaluative process.
  ***

  Wow, that is quite an insult.  So you're calling me an incompetent in my 
profession now.  

I don't have particularly "thin skin", but I have to say that I'm getting 
really tired of being attacked and insulted on this email list. 

  -- Ben G



  On Sun, Oct 19, 2008 at 6:18 PM, Mark Waser <[EMAIL PROTECTED]> wrote:

>> Whether a stupid person can do good scientific evaluation "if taught the 
rules" is a badly-formed

Re: [agi] Words vs Concepts [ex Defining AGI]

2008-10-19 Thread Mark Waser

I disagree with a complete distinction between D and L.  L is a very small
fraction of D translated for transmission.  However, instead of arguing that
there must be a strict separation between the language model and D, I would
argue that the more similar the two could be (i.e., the less translation from
D to L) the better.  Analyzing L in that case could tell you more about D
than you might think (which is what Pinker and Hauser argue).  It's like
looking at data to determine an underlying cause for a phenomenon.  Even
noticing what does and does not vary (and what covaries) tells you a lot
about the underlying cause (D).
<<<

This is just an assumption of yours. No facts. My opinion remains: D and L
are separated.



Geez.  What is it with this list?  Read Pinker.  Tons of facts.  Take them 
into account and then form an opinion.


Any algorithm in your computer is written in a formal well defined language.

If you agree that AGI is possible with current programming languages then
you have to agree that the ambiguous, data-deficient world can be managed by
formal languages.


Once we figure out how to program the process of automatically extending 
formal languages -- yes, absolutely.  That's the path to AGI.



If you say mathematics is not GI then the following must be true for you:
The universe cannot be modeled by mathematics.
I disagree.


with Gödel?  That's impressive.


Furthermore,
language extension would be a nice feature but it is not necessary.


Cool.  And this is where we agree to disagree (and it does seem to be at the 
root of all the other arguments).  If I believed this, I would agree with 
most of your other stuff.  I just don't see how you're going to stretch any 
non-extensible language to *effectively* cover an infinite universe.  Gödel 
argues against it.


- Original Message - 
From: "Dr. Matthias Heger" <[EMAIL PROTECTED]>

To: 
Sent: Sunday, October 19, 2008 6:53 PM
Subject: AW: [agi] Words vs Concepts [ex Defining AGI]





Absolutely.  We are not aware of most of our assumptions that are based in
our common heritage, culture, and embodiment.  But an external observer
could easily notice them and tease out an awful lot of information about us
by doing so.


You do not understand what I mean.
There will be a lot of implementation details (e.g., temporary variables)
within the patterns which will never be sent by linguistic messages.




I disagree with a complete distinction between D and L.  L is a very small
fraction of D translated for transmission.  However, instead of arguing that
there must be a strict separation between the language model and D, I would
argue that the more similar the two could be (i.e., the less translation from
D to L) the better.  Analyzing L in that case could tell you more about D
than you might think (which is what Pinker and Hauser argue).  It's like
looking at data to determine an underlying cause for a phenomenon.  Even
noticing what does and does not vary (and what covaries) tells you a lot
about the underlying cause (D).
<<<

This is just an assumption of yours. No facts. My opinion remains: D and L
are separated.




How do you go from a formal language to a competent description of a messy,
ambiguous, data-deficient world?  *That* is the natural language question.
<<<

Any algorithm in your computer is written in a formal well defined language.

If you agree that AGI is possible with current programming languages then
you have to agree that the ambiguous, data-deficient world can be managed by
formal languages.





What happens if I say that language extensibility is exactly analogous to
learning which is exactly analogous to internal model improvement?
<<<

What happens? I disagree.




So translation is a pattern manipulation where the result isn't stored?
<<<

The result isn't stored in D



The domain of mathematics is complete and unambiguous.  A mathematics AI is
not a GI in my book.  It won't generalize to the real world until it handles
incompleteness and ambiguity (which is my objection to your main analogy).
<<<

If you say mathematics is not GI then the following must be true for you:
The universe cannot be modeled by mathematics.
I disagree.




The communication protocol needs to be extensible to handle output after
learning or transition into a new domain.  How do you ground new concepts?
More importantly, it needs to be extensible to support teaching the AGI.  As
I keep saying, how are you going to make your communication protocol
extensible?  Real GENERAL intelligence has EVERYTHING to do with
extensibility.
<<<

For mathematics you just need a few axioms. There are an infinite number of
expressions which can be written with a finite set of symbols and a finite
formal language.

But extensibility is not a crucial point in this discussion at all. You can
have extensibility with a strict separation of D and L. For a first AGI with
mathematics I would hardcode an algorithm which manages an open list of
axioms

Re: AW: AW: [agi] Re: Defining AGI

2008-10-20 Thread Mark Waser
There is a wide area between moderation and complete laissez-faire.

Also, since you are the list owner, people tend to pay attention to what you 
say/request and also to what you do.

If you regularly point to references and ask others to do the same, they are 
likely to follow.  If you were to gently chastise people for saying that there 
are no facts when references were provided, people might get the hint.  
Instead, you generally feed the trolls and "humorously" insult the people who 
are trying to keep it on a scientific basis.  That's a pretty clear message all 
by itself.

You don't need to spend more time but, as a serious role model for many of the 
people on the list, you do need to pay attention to the effects of what you say 
and do.  I can't help but go back to my perceived summary of the most recent 
issue -- "Ben Goertzel says that there is no true defined method to the 
scientific method (and Mark Waser is clueless for thinking that there is)."


  - Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Monday, October 20, 2008 6:53 AM
  Subject: Re: AW: AW: [agi] Re: Defining AGI





It would also be nice if this mailing list could be operate on a bit more 
of a scientific basis.  I get really tired of pointing to specific references 
and then being told that I have no facts or that it was solely my opinion.



  This really has to do with the culture of the community on the list, rather 
than the "operation" of the list per se, I'd say.

  I have also often been frustrated by the lack of inclination of some list 
members to read the relevant literature.  Admittedly, there is a lot of it to 
read.  But on the other hand, it's not reasonable to expect folks who *have* 
read a certain subset of the literature, to summarize that subset in emails for 
individuals who haven't taken the time.  Creating such summaries carefully 
takes a lot of effort.

  I agree that if more careful attention were paid to the known science related 
to AGI ... and to the long history of prior discussions on the issues discussed 
here ... this list would be a lot more useful.

  But, this is not a structured discussion setting -- it's an Internet 
discussion group, and even if I had the inclination to moderate more carefully 
so as to try to encourage a more carefully scientific mode of discussion, I 
wouldn't have the time...

  ben g






Re: AW: AW: [agi] Re: Defining AGI

2008-10-21 Thread Mark Waser
If MW would be scientific then he would not have asked Ben to prove that 
MWs hypothesis is wrong.


Science is done by comparing hypotheses to data.  Frequently, the fastest 
way to handle a hypothesis is to find a counter-example so that it can be 
discarded (or extended appropriately to handle the new case).  How is asking 
for a counter-example unscientific?



The person who has to prove something is the person who creates the 
hypothesis.


Ah.  Like the theory of evolution is conclusively proved?  The scientific 
method is about predictive power not proof.  Try reading the reference that 
I gave Ben.  (And if you've got something to prove, maybe the scientific 
method isn't so good for you.  :-)



And MW has given not a tiny argument for his hypothesis that a natural 
language understanding system can easily be a scientist.


First, I'd appreciate it if you'd drop the strawman.  You are the only one 
who keeps insisting that anything is "easy".


Second, my hypothesis is more correctly stated that the pre-requisites for a 
natural language understanding system are necessary and sufficient for a 
scientist because both are AGI-complete.  Again, I would appreciate it if 
you could correctly represent it in the future.


Third, while I haven't given a tiny argument, I have given a reasonably 
short logical chain which I'll attempt to rephrase yet again.


Science is all about modeling the world and predicting future data.
The scientific method simply boils down to making a theory (of how to change 
or enhance your world model) and seeing if it is supported (not proved!) or 
disproved by future data.
Ben's and my disagreement initially came down to whether a scientist was an 
Einstein (his view) or merely capable of competently reviewing data to see 
if it supports, disproves, or isn't relevant to the predictive power of a 
theory (my view).
Later, he argued that most humans aren't even competent to review data and 
can't be made competent.
I agreed with his assessment that many scientists don't competently review 
data (inappropriate over-reliance on the heuristic p < 0.05 without 
understanding what it truly means) but disagreed as to whether the average 
human could be *taught*
Ben's argument was that the scientific method couldn't be codified well 
enough to be taught.  My argument was that the method was codified 
sufficiently but that the application of the method was clearly context 
dependent and could be unboundedly complex.


But this is actually a distraction from some more important arguments . . . 
.
The $1,000,000 question is "If a human can't be taught something, is that 
human a general intelligence?"
The $5,000,000 question is "If a human can't competently follow a recipe in 
a cookbook, do they have natural language understanding?"


Fundamentally, this either comes down to a disagreement about what a general 
intelligence is and/or what understanding and meaning are.
Currently, I'm using the definition that a general intelligence is one that 
can achieve competence in any domain in a reasonable length of time.

To achieve competence in a domain, you have to "understand" that domain
My definition of understanding is that you have a mental model of that 
domain that has predictive power in that domain and which you can update as 
you learn about that domain.

(You could argue with this definition if you like)
Or, in other words, you have to be a competent scientist in that domain --  
or else, you don't truly "understand" that domain
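
If one wanted to pin that definition down, a minimal sketch might look like
the following (the DomainModel/competence names and the bare accuracy measure
are illustrative choices only, not a worked-out theory):

# Sketch of "understanding = an updatable model with predictive power".
from abc import ABC, abstractmethod
from typing import Any, Iterable, Tuple

class DomainModel(ABC):
    @abstractmethod
    def predict(self, situation: Any) -> Any:
        """Return the model's prediction for a situation in the domain."""

    @abstractmethod
    def update(self, situation: Any, outcome: Any) -> None:
        """Revise the model in light of an observed outcome."""

def competence(model: DomainModel,
               observations: Iterable[Tuple[Any, Any]]) -> float:
    """Fraction of observed (situation, outcome) pairs the model predicts."""
    observations = list(observations)
    if not observations:
        return 0.0
    hits = sum(1 for s, o in observations if model.predict(s) == o)
    return hits / len(observations)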


So, for simplicity, why don't we just say
   scientist = understanding

Now, for a counter-example to my initial hypothesis, why don't you explain 
how you can have natural language understanding without understanding (which 
equals scientist ;-).





- Original Message - 
From: "Dr. Matthias Heger" <[EMAIL PROTECTED]>

To: 
Sent: Monday, October 20, 2008 5:00 PM
Subject: AW: AW: AW: [agi] Re: Defining AGI


If MW would be scientific then he would not have asked Ben to prove that MWs
hypothesis is wrong.
The person who has to prove something is the person who creates the
hypothesis.
And MW has given not a tiny argument for his hypothesis that a natural
language understanding system can easily be a scientist.

-Matthias

-Original Message-
From: Eric Burton [mailto:[EMAIL PROTECTED]
Sent: Monday, 20 October 2008 22:48
To: agi@v2.listbox.com
Subject: Re: AW: AW: [agi] Re: Defining AGI


You and MW are clearly as philosophically ignorant, as I am in AI.


But MW and I have not agreed on anything.


Hence the wiki entry on scientific method:
"Scientific method is not a recipe: it requires intelligence, >imagination,

and creativity"

http://en.wikipedia.org/wiki/Scientific_method
This is basic stuff.


And this is fundamentally what I was trying to say.

I don't think of myself as "philosophically ignorant". I believe
you've reversed the intention of my post. It's probably my fault for
choosing my words poorly. I could have conveyed the nuances of the
argument better as I understood them. Next time!

Re: AW: AW: [agi] Re: Defining AGI

2008-10-21 Thread Mark Waser

Oh, and I *have* to laugh . . . .


Hence the wiki entry on scientific method:
"Scientific method is not a recipe: it requires intelligence, >imagination,

and creativity"

http://en.wikipedia.org/wiki/Scientific_method
This is basic stuff.


In the cited wikipedia entry, the phrase "Scientific method is not a recipe: 
it requires intelligence, imagination, and creativity" is immediately 
followed by just such a recipe for the scientific method


A linearized, pragmatic scheme of the four points above is sometimes offered 
as a guideline for proceeding:[25]

 1. Define the question
 2. Gather information and resources (observe)
 3. Form hypothesis
 4. Perform experiment and collect data
 5. Analyze data
 6. Interpret data and draw conclusions that serve as a starting point for
    new hypothesis
 7. Publish results
 8. Retest (frequently done by other scientists)



- Original Message - 
From: "Dr. Matthias Heger" <[EMAIL PROTECTED]>

To: 
Sent: Monday, October 20, 2008 5:00 PM
Subject: AW: AW: AW: [agi] Re: Defining AGI


If MW would be scientific then he would not have asked Ben to prove that MWs
hypothesis is wrong.
The person who has to prove something is the person who creates the
hypothesis.
And MW has given not a tiny argument for his hypothesis that a natural
language understanding system can easily be a scientist.

-Matthias

-Original Message-
From: Eric Burton [mailto:[EMAIL PROTECTED]
Sent: Monday, 20 October 2008 22:48
To: agi@v2.listbox.com
Subject: Re: AW: AW: [agi] Re: Defining AGI


You and MW are clearly as philosophically ignorant, as I am in AI.


But MW and I have not agreed on anything.


Hence the wiki entry on scientific method:
"Scientific method is not a recipe: it requires intelligence, >imagination,

and creativity"

http://en.wikipedia.org/wiki/Scientific_method
This is basic stuff.


And this is fundamentally what I was trying to say.

I don't think of myself as "philosophically ignorant". I believe
you've reversed the intention of my post. It's probably my fault for
choosing my words poorly. I could have conveyed the nuances of the
argument better as I understood them. Next time!




Re: AW: AW: [agi] Re: Defining AGI

2008-10-21 Thread Mark Waser
>> Yes, but each of those steps is very vague, and cannot be boiled down to a 
>> series of precise instructions sufficient for a stupid person to 
>> consistently carry them out effectively...

So -- are those stupid people still general intelligences?  Or are they only 
general intelligences to the degree to which they *can* carry them out?  
(because I assume that you'd agree that general intelligence is a spectrum like 
any other type).

There also remains the distinction (that I'd like to highlight and emphasize) 
between a discoverer and a learner.  The cognitive skills/intelligence 
necessary to design questions, hypotheses, experiments, etc. are far in excess 
of the cognitive skills/intelligence necessary to evaluate/validate those 
things.  My argument was meant to be that a general intelligence needs to be a 
learner-type rather than a discoverer-type, although the discoverer type is 
clearly more effective.

So -- If you can't correctly evaluate data, are you a general intelligence?  
How do you get an accurate and effective domain model to achieve competence in 
a domain if you don't know who or what to believe?  If you don't believe in 
evolution, does that mean that you aren't a general intelligence in that 
particular realm/domain (biology)?

>> Also, those steps are heuristic and do not cover all cases.  For instance 
>> step 4 requires experimentation, yet there are sciences such as cosmology 
>> and paleontology that are not focused on experimentation.

I disagree.  They may be based upon thought experiments rather than physical 
experiments but it's still all about predictive power.  What is that next 
star/dinosaur going to look like?  What is it *never* going to look like (or 
else we need to expand or correct our theory)?  Is there anything that we can 
guess that we haven't tested/seen yet that we can verify?  What else is science?

My *opinion* is that the following steps are pretty inviolable.  
A.  Observe
B.  Form Hypotheses
C.  Observe More (most efficiently performed by designing competent 
experiments including actively looking for disproofs)
D.  Evaluate Hypotheses
E.  Add Evaluation to Knowledge-Base (Tentatively) but continue to test
F.  Return to step A with additional leverage

If you were forced to codify the "hard core" of the scientific method, how 
would you do it?
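
For what it's worth, a bare skeleton of that A-F loop might look like the
sketch below.  Only the control flow is codified; the four callables are
hypothetical placeholders for exactly the judgment-laden steps under debate,
so this is an illustration of the "hard core", not a claim that science has
been reduced to code.

# Sketch of the A-F loop above; observe/hypothesize/design_experiment/evaluate
# are placeholders for the genuinely hard, context-dependent steps.
from typing import Callable, List, Tuple

def scientific_loop(observe: Callable[[], List[float]],
                    hypothesize: Callable[[List[float]], object],
                    design_experiment: Callable[[object], List[float]],
                    evaluate: Callable[[object, List[float]], float],
                    threshold: float = 0.95,
                    max_cycles: int = 100) -> List[Tuple[object, float]]:
    knowledge: List[Tuple[object, float]] = []   # E. tentatively accepted
    data = observe()                             # A. observe
    for _ in range(max_cycles):                  # F. loop with more leverage
        hypothesis = hypothesize(data)           # B. form hypotheses
        data = data + design_experiment(hypothesis)   # C. observe more
        support = evaluate(hypothesis, data)     # D. evaluate hypotheses
        if support >= threshold:
            knowledge.append((hypothesis, support))   # E. add, keep testing
    return knowledge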

>> As you asked for references I will give you two:

Thank you for setting a good example by including references but the contrast 
between the two is far better drawn in For and Against Method (ISBN 
0-226-46774-0).
Also, I would add in Polya, Popper, Russell, and Kuhn for completeness for 
those who wish to educate themselves in the fundamentals of Philosophy of 
Science 
(you didn't really forget that my undergraduate degree was a dual major of 
Biochemistry and Philosophy of Science, did you? :-).

My view is basically that of Lakatos to the extent that I would challenge you 
to find anything in Lakatos that promotes your view over the one that I've 
espoused here.  Feyerabend's rants alternate between criticisms ultimately 
based upon the fact that what society frequently calls science is far more 
politics (see sociology of scientific knowledge); a Tintnerian/Anarchist rant 
against structure and formalism; and incorrect portrayals/extensions of Lakatos 
(just like this list ;-).  Where he is correct is in the first case where 
society is not doing science correctly (i.e. where he provided examples 
regarded as indisputable instances of progress and showed how the political 
structures of the time fought against or suppressed them).  But his rants 
against structure and formalism (or, purportedly, for freedom and 
humanitarianism ) are simply garbage in my opinion (though I'd guess 
that they appeal to you ;-).




  - Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Tuesday, October 21, 2008 10:41 AM
  Subject: Re: AW: AW: [agi] Re: Defining AGI





  On Tue, Oct 21, 2008 at 10:38 AM, Mark Waser <[EMAIL PROTECTED]> wrote:

Oh, and I *have* to laugh . . . .



  Hence the wiki entry on scientific method:
  "Scientific method is not a recipe: it requires intelligence, 
>imagination,

and creativity"

  http://en.wikipedia.org/wiki/Scientific_method
  This is basic stuff.



In the cited wikipedia entry, the phrase "Scientific method is not a 
recipe: it requires intelligence, imagination, and creativity" is immediately 
followed by just such a recipe for the scientific method

A linearized, pragmatic scheme of the four points above is sometimes 
offered as a guideline for proceeding:[25]

  Yes, but each of those steps is very vague, and cannot be boiled down to a 
series of precise instructions sufficient for a stupid person to consistently 
carry them out effectively...

Re: AW: AW: [agi] Re: Defining AGI

2008-10-21 Thread Mark Waser

Marc Walser wrote


Try to get the name right.  It's just common competence and courtesy.

Before you ask for counter examples you should *first* give some 
arguments which supports your hypothesis. This was my point.


And I believe that I did.  And I note that you didn't even address the fact 
that I did so again in the e-mail you are quoting.  You seem to want to 
address trivia rather than the meat of the argument.  Why don't you address 
the core instead of throwing up a smokescreen?



Regarding your example with Darwin:


What example with Darwin?

First, I'd appreciate it if you'd drop the strawman.  You are the only 
one who keeps insisting that anything is "easy".
 Is this a scientific discussion from you? No. You use rhetoric and 
nothing else.


And baseless statements like "You use rhetoric and nothing else" are a 
scientific discussion.  Again with the smokescreen.



I don't say that anything is easy.


Direct quote cut and paste from *your* e-mail . . . .
--
From: Dr. Matthias Heger
To: agi@v2.listbox.com
Sent: Sunday, October 19, 2008 2:19 PM
Subject: AW: AW: [agi] Re: Defining AGI


The process of translating patterns into language should be easier than the 
process of creating patterns or manipulating patterns. Therefore I say that 
language understanding is easy.


--





Clearly you DO say that language understanding is easy.








This is the first time you speak about pre-requisites.


Direct quote cut and paste from *my* e-mail . . . . .
----
- Original Message - 
From: "Mark Waser" <[EMAIL PROTECTED]>

To: 
Sent: Sunday, October 19, 2008 4:01 PM
Subject: Re: AW: AW: [agi] Re: Defining AGI



I don't think that learning of language is the entire point. If I have only
learned language I still cannot create anything. A human who can understand
language is by far still no good scientist. Intelligence means the ability
to solve problems. Which problems can a system solve if it can do nothing
else than language understanding?


Many or most people on this list believe that learning language is an
AGI-complete task.  What this means is that the skills necessary for
learning a language are necessary and sufficient for learning any other
task.  It is not that language understanding gives general intelligence
capabilities, but that the pre-requisites for language understanding are
general intelligence (or, that language understanding is isomorphic to
general intelligence in the same fashion that all NP-complete problems are
isomorphic).  Thus, the argument actually is that a system that "can do
nothing else than language understanding" is an oxymoron.


-




Clearly I DO talk about the pre-requisites for language understanding.






Dude.  Seriously.

First you deny your own statements and then claim that I didn't previously 
mention something that it is easily provable that I did (at the top of an 
e-mail).  Check the archives.  It's all there in bits and bytes.


Then you end with a funky pseudo-definition that "Understanding does not 
imply the ability to create something new or to apply knowledge."   What 
*does* understanding mean if you can't apply it?  What value does it have?







Re: AW: AW: [agi] Re: Defining AGI

2008-10-21 Thread Mark Waser
>> But, by the time she overcame every other issue in the way of really 
>> understanding science, her natural lifespan would have long been overspent...

You know, this is a *really* interesting point.  Effectively what you're saying 
(I believe) is that the difficulty isn't in learning but in UNLEARNING 
incorrect things that actively prevent you (via conflict) from learning correct 
things.  Is this a fair interpretation?

It's also particularly interesting when you compare it to information theory 
where the sole cost is in erasing a bit, not in setting it.
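
(The information-theory result being gestured at is Landauer's principle:
erasing a bit dissipates at least kT ln 2 of energy, while setting or copying
one can in principle be done reversibly.  A quick back-of-the-envelope check,
with room temperature as an assumed value:)

# Landauer bound: minimum energy dissipated per erased bit is k_B * T * ln(2).
import math

k_B = 1.380649e-23        # Boltzmann constant in J/K (exact in the 2019 SI)
T = 300.0                 # roughly room temperature, in kelvin

energy_per_erased_bit = k_B * T * math.log(2)
print(f"~{energy_per_erased_bit:.2e} J per erased bit at {T:.0f} K")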

  - Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Tuesday, October 21, 2008 2:56 PM
  Subject: Re: AW: AW: [agi] Re: Defining AGI



  Hmm...

  I think that non-retarded humans are fully general intelligences in the 
following weak sense: for any fixed t and l, for any human there are some 
numbers M and T so that if the human is given amount M of external memory (e.g. 
notebooks to write on), that human could be taught to emulate AIXItl

  [see 
http://www.amazon.com/Universal-Artificial-Intelligence-Algorithmic-Probability/dp/3540221395/ref=sr_1_1?ie=UTF8&s=books&qid=1224614995&sr=1-1
 , or the relevant papers on Marcus Hutter's website]

  where each single step of AIXItl might take up to T seconds.

  This is a kind of generality that I think no animals but humans have.  So, in 
that sense, we seem to be the first evolved general intelligences.

  But, that said, there are limits to what any one of us can learn in a fixed 
finite amount of time.   If you fix T realistically then our intelligence 
decreases dramatically.

  And for the time-scales relevant in human life, it may not be possible to 
teach some people to do science adequately.

  I am thinking for instance of a 40 yr old student I taught at the University 
of Nevada way back when (normally I taught advanced math, but in summers I 
sometimes taught remedial stuff for extra $$).  She had taken elementary 
algebra 7 times before ... and had had extensive tutoring outside of class ... 
but I still was unable to convince her of the incorrectness of the following 
reasoning: "The variable a always stands for 1.  The variable b always stands 
for 2. ... The variable z always stands for 26."   She was not retarded.  She 
seemed to have a mental block against algebra.  She could discuss politics and 
other topics with seeming intelligence.  Eventually I'm sure she could have 
been taught to overcome this block.  But, by the time she overcame every other 
issue in the way of really understanding science, her natural lifespan would 
have long been overspent...

  -- Ben G



  On Tue, Oct 21, 2008 at 12:33 PM, Mark Waser <[EMAIL PROTECTED]> wrote:

>> Yes, but each of those steps is very vague, and cannot be boiled down to 
a series of precise instructions sufficient for a stupid person to consistently 
carry them out effectively...

So -- are those stupid people still general intelligences?  Or are they 
only general intelligences to the degree to which they *can* carry them out?  
(because I assume that you'd agree that general intelligence is a spectrum like 
any other type).

There also remains the distinction (that I'd like to highlight and 
emphasize) between a discoverer and a learner.  The cognitive 
skills/intelligence necessary to design questions, hypotheses, experiments, 
etc. are far in excess of the cognitive skills/intelligence necessary to 
evaluate/validate those things.  My argument was meant to be that a general 
intelligence needs to be a learner-type rather than a discoverer-type although 
the discoverer type is clearly more effective.

So -- If you can't correctly evaluate data, are you a general intelligence? 
 How do you get an accurate and effective domain model to achieve competence in 
a domain if you don't know who or what to believe?  If you don't believe in 
evolution, does that mean that you aren't a general intelligence in that 
particular realm/domain (biology)?

>> Also, those steps are heuristic and do not cover all cases.  For 
instance step 4 requires experimentation, yet there are sciences such as 
cosmology and paleontology that are not focused on experimentation.

I disagree.  They may be based upon thought experiments rather than 
physical experiments but it's still all about predictive power.  What is that 
next star/dinosaur going to look like?  What is it *never* going to look like 
(or else we need to expand or correct our theory)?  Is there anything that we 
can guess that we haven't tested/seen yet that we can verify?  What else is 
science?

My *opinion* is that the following steps are pretty inviolable.  
A.  Observe
B.  Form Hypotheses
C.  Observe More (most efficiently performed by designing competent 
experiments including actively 

Re: AW: AW: [agi] Re: Defining AGI

2008-10-21 Thread Mark Waser
Wow!  Way too much good stuff to respond to in one e-mail.  I'll try to respond 
to more in a later e-mail but . . . . (and I also want to get your reaction to 
a few things first :-)

>> However, I still don't think that a below-average-IQ human can pragmatically 
>> (i.e., within the scope of the normal human lifetime) be taught to 
>> effectively carry out statistical evaluation of theories based on data, 
>> given the realities of how theories are formulated and how data is obtained 
>> and presented, at the present time...

Hmmm.  After some thought, I have to start by saying that it looks like you're 
equating science with statistics and I've got all sorts of negative reactions 
to that.

First -- Sure.  I certainly have to agree for a below-average-IQ human and 
could even be easily convinced for an average IQ human if they had to do it all 
themselves.  And then, statistical packages quickly turn into a two-edged sword 
where people blindly use heuristics without understanding them (p < .05 
anyone?).

A more important point, though, is that humans natively do *NOT* use statistics 
but innately use very biased, non-statistical methods that *arguably* function 
better than statistics in real world data environments.   That alone would 
convince me that I certainly don't want to say that science = statistics.

>> I am not entirely happy with Lakatos's approach either.  I find it 
>> descriptively accurate yet normatively inadequate.

Hmmm.  (again)  To me that seems to be an interesting way of rephrasing our 
previous disagreement except that you're now agreeing with me.  (Gotta love it 
:-)

You find Lakatos's approach descriptively accurate?  Fine, that's the 
scientific method.  

You find it normatively inadequate?  Well, duh (but meaning no offense :-) . . 
. . you can't codify the application of the scientific method to all cases.  I 
easily agreed to that before.

What were we disagreeing on again?


>> My own take is that science normatively **should** be based on a Bayesian 
>> approach to evaluating theories based on data

That always leads me personally to the question "Why do humans operate on the 
biases that they do rather than Bayesian statistics?"  MY *guess*  is that 
evolution COULD have implemented Bayesian methods but that the current methods 
are more efficient/effective under real world conditions (i.e. because of the 
real-world realities of feature extraction under dirty and incomplete or 
contradictory data and the fact that the Bayesian approach really does need to 
operate in an incredibly data-rich world where the features have already been 
extracted and ambiguities, other than occurrence percentages, are basically 
resolved).

**And adding different research programmes and/or priors always seems like such 
a kludge . . . . . 
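
For readers who want the Bayesian bookkeeping spelled out: in the simplest
two-theory case, evaluating theories against data is just repeated application
of Bayes' rule.  The sketch below uses invented numbers; the genuinely hard
scientific work (choosing the theory space, the priors, and the likelihood
models) is exactly what it leaves as given.

# Minimal Bayesian theory evaluation: two rival theories, one data stream.
def posterior(prior_a, likelihood_a, likelihood_b):
    """P(A | datum) from prior P(A) and the datum's likelihood under A and B."""
    evidence = prior_a * likelihood_a + (1.0 - prior_a) * likelihood_b
    return prior_a * likelihood_a / evidence

p_a = 0.5                                   # start indifferent between A and B
observations = [(0.8, 0.3), (0.7, 0.4), (0.9, 0.2)]   # (P(obs|A), P(obs|B))
for like_a, like_b in observations:
    p_a = posterior(p_a, like_a, like_b)
    print(f"P(A | data so far) = {p_a:.3f}")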






  - Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Tuesday, October 21, 2008 4:15 PM
  Subject: Re: AW: AW: [agi] Re: Defining AGI



  Mark,


>> As you asked for references I will give you two:

Thank you for setting a good example by including references but the 
contrast between the two is far better drawn in For and Against Method (ISBN 
0-226-46774-0).

  I read that book but didn't like it as much ... but you're right, it may be 
an easier place for folks to start...
   
Also, I would add in Polya, Popper, Russell, and Kuhn for completeness for 
those who wish to educate themselves in the fundamentals of Philosophy of 
Science 

  All good stuff indeed.
   
My view is basically that of Lakatos to the extent that I would challenge 
you to find anything in Lakatos that promotes your view over the one that I've 
espoused here.  Feyerabend's rants alternate between criticisms ultimately 
based upon the fact that what society frequently calls science is far more 
politics (see sociology of scientific knowledge); a Tintnerian/Anarchist rant 
against structure and formalism; and incorrect portrayals/extensions of Lakatos 
(just like this list ;-).  Where he is correct is in the first case where 
society is not doing science correctly (i.e. where he provided examples 
regarded as indisputable instances of progress and showed how the political 
structures of the time fought against or suppressed them).  But his rants 
against structure and formalism (or, purportedly, for freedom and 
humanitarianism ) are simply garbage in my opinion (though I'd guess 
that they appeal to you ;-).

  Feyerabend appeals to my sense of humor ... I liked the guy.  I had some 
correspondence with him when I was 18.  I wrote him a letter outlining some of 
my ideas on philosophy of mind and asking his advice on where I should go to 
grad school to study philosophy.  He replied telling me that if I wanted to be 
a real philosopher I should **not** study philosophy academically nor become a 
philosophy professor, but should study science or arts and then pursue 
philosophy independently.  We chatted back and forth a 

Re: AW: AW: [agi] Re: Defining AGI

2008-10-21 Thread Mark Waser

AI!   :-)

This is what I was trying to avoid.   :-)

My objection starts with "How is a Bayes net going to do feature 
extraction?"


A Bayes net may be part of a final solution but as you even indicate, it's 
only going to be part . . . .
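
For concreteness, here is a toy version of the loop Eric describes below, with
bit-strings standing in for candidate blueprints and a fixed scoring function
standing in for the Bayes-net evaluator.  Every name and number is invented,
and note that it dodges the feature-extraction objection entirely: the
"features" are simply given.

# Toy GA scored by a stand-in evaluator; score() plays the role of the
# "signal filter on a stream of candidate blueprints".
import random

random.seed(0)
TARGET = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]      # a pretend "good design" profile

def score(candidate):
    """Stand-in evaluator: fraction of positions matching the target profile."""
    return sum(c == t for c, t in zip(candidate, TARGET)) / len(TARGET)

def mutate(candidate, rate=0.1):
    return [bit ^ 1 if random.random() < rate else bit for bit in candidate]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(20)]
for generation in range(30):
    population.sort(key=score, reverse=True)
    parents = population[:10]                # keep the better-scoring half
    population = parents + [mutate(random.choice(parents)) for _ in range(10)]

print("best score:", score(max(population, key=score)))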


- Original Message - 
From: "Eric Burton" <[EMAIL PROTECTED]>

To: 
Sent: Tuesday, October 21, 2008 4:51 PM
Subject: Re: AW: AW: [agi] Re: Defining AGI



I think I see what's on the table here. Does all this mean a Bayes
net, properly motivated, could be capable of performing scientific
inquiry? Maybe in combination with a GA that tunes itself to maximize
adaptive mutations in the input based on scores from the net, which
seeks superior product designs? A Bayes net could be a sophisticated
tool for evaluating technological merit, while really just a signal
filter on a stream of candidate blueprints if what you're saying is
true.




Re: AW: AW: [agi] Re: Defining AGI

2008-10-21 Thread Mark Waser

Incorrect things are wrapped up with correct things in peoples' minds



Mark seems to be thinking of something like the checklist that the ISP
technician walks through when you call with a problem.


Um.  No.

I'm thinking that in order to integrate a new idea into your world model, 
you first have to resolve all the conflicts that it has with the existing 
model.  That could be incredibly expensive.


(And intelligence is emphatically not linear)

But Ben is saying that for evaluating science, there ain't no such 
checklist.

The circumstances are too variable, you would need checklists to infinity.


I'm sure that Ben was saying that for doing discovery . . . . and I agree.

For evaluation, I'm not sure that we've come to closure on what either of us 
think . . . .   :-)




- Original Message - 
From: "BillK" <[EMAIL PROTECTED]>

To: 
Sent: Tuesday, October 21, 2008 5:50 PM
Subject: Re: AW: AW: [agi] Re: Defining AGI



On Tue, Oct 21, 2008 at 10:31 PM, Ben Goertzel wrote:


Incorrect things are wrapped up with correct things in peoples' minds

However, pure slowness at learning is another part of the problem ...




Mark seems to be thinking of something like the checklist that the ISP
technician walks through when you call with a problem. Even when you
know what the problem is, the tech won't listen. He insists on working
through his checklist, making you do all the irrelevant checks,
eventually by a process of elimination, ending up with what you knew
was wrong all along. Very little GI required.

But Ben is saying that for evaluating science, there ain't no such 
checklist.

The circumstances are too variable, you would need checklists to infinity.

I go along with Ben.

BillK




Re: [agi] constructivist issues

2008-10-22 Thread Mark Waser
> You may not like "Therefore, we cannot understand the math needed to define
> our own intelligence.", but I'm rather convinced that it's correct. 

Do you mean to say that there are parts that we can't understand or that the 
totality is too large to fit and that it can't be cleanly and completely 
decomposed into pieces (i.e. it's a complex system ;-).

Personally, I believe that the foundational pieces necessary to
construct/boot-strap an intelligence are eminently understandable (if not even
fairly simple) but that the resulting intelligence -- which a) organically
grows from its interaction with an environment from which it can only extract
partial, dirty, and ambiguous data and b) does not have the time, computational
capability, or data to make itself even remotely consistent past a certain
level -- IS large and complex enough that you will never truly understand it
(which is where I have sympathy with Richard Loosemore's arguments -- but I
don't buy that the interaction of the pieces is necessarily so complex that we
can't make broad predictions that are accurate enough to be able to "engineer"
intelligence).

If you say "parts we can't understand", how do you reconcile that with your 
statements of yesterday about what general intelligences can learn?




Re: AW: AW: [agi] Re: Defining AGI

2008-10-22 Thread Mark Waser
>> However, the point I took issue with was your claim that a stupid person 
>> could be taught to effectively do science ... or (your later modification) 
>> evaluation of scientific results.
>> At the time I originally took exception to your claim, I had not read the 
>> earlier portion of the thread, and I still haven't; so I still do not know 
>> why you made the claim in the first place.

In brief --> You've agreed that even a stupid person is a general intelligence. 
 By "do science", I (originally and still) meant the amalgamation that is 
probably best expressed as a combination of critical thinking and/or the 
scientific method.  My point was a combination of both a) to be a general 
intelligence, you really must have a domain model and the rudiments of critical 
thinking/scientific methodology in order to be able to competently/effectively 
update it and b) if you're a general intelligence, even if you don't need it, 
you should be able to be taught the rudiments of critical thinking/scientific 
methodology.  

Are those points that you would agree with?  (A serious question -- and, in 
particular, if you don't agree, I'd be very interested in why since I'm trying 
to arrive at a reasonable set of distinctions that define a general 
intelligence).

In typical list fashion, rather than asking what I meant (or, in your case, 
even having the courtesy to read what came before -- so that you might have 
*some* chance of understanding what I was trying to get at -- in case my 
immediate/proximate phrasing was as awkward as I'll freely admit that it was 
;-), it effectively turned into an argument past each other when your immediate 
concept/interpretation of *science = advanced statistical interpretation* hit 
the blindingly obvious shoals of "it's not easy teaching stupid people 
complicated things" (I mean -- seriously, dude -- do you *really* think that 
I'm going to be that far off base?  And, if not, why disrupt the conversation 
so badly by coming in in such a fashion?).

(And I have to say --> As list owner, it would be helpful if you would set a 
good example of reading threads and trying to understand what people meant 
rather than immediately coming in and flinging insults and accusations of 
ignorance e.g.  "This is obviously spoken by someone who has never . . . . ").

So . . . . can you agree with the claim as phrased above?  (i.e. What were we 
disagreeing on again? ;-)

Oh, and the original point was part of a discussion about the necessary and 
sufficient pre-requisites for general intelligence so it made sense to 
(awkwardly :-) say that a domain model and the rudiments of critical 
thinking/scientific methodology are a (major but not complete) part of that.

  - Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Tuesday, October 21, 2008 8:51 PM
  Subject: Re: AW: AW: [agi] Re: Defining AGI



  Mark W wrote:


What were we disagreeing on again?


  This conversation has drifted into interesting issues in the philosophy of 
science, most of which you and I seem to substantially agree on.

  However, the point I took issue with was your claim that a stupid person 
could be taught to effectively do science ... or (your later modification) 
evaluation of scientific results.

  At the time I originally took exception to your claim, I had not read the 
earlier portion of the thread, and I still haven't; so I still do not know why 
you made the claim in the first place.

  ben






Re: [agi] constructivist issues

2008-10-22 Thread Mark Waser
> It doesn't, because **I see no evidence that humans can
> understand the semantics of formal system in X in any sense that
> a digital computer program cannot**

I just argued that humans can't understand the totality of any formal system X 
due to Gödel's Incompleteness Theorem but the rest of this is worth addressing 
. . . . 

> Whatever this mysterious "understanding" is that you believe you
> possess, **it cannot be communicated to me in language or
> mathematics**.  Because any series of symbols you give me, could
> equally well be produced by some being without this mysterious
> "understanding".

Excellent!  Except for the fact that the probability of the being *continuing* 
to emit those symbols without this "mysterious understanding" rapidly 
approaches zero.  So I'm going to argue that understanding *can* effectively be 
communicated/determined.  Arguing otherwise is effectively arguing for 
vanishingly small probabilities in infinities (and why I hate most arguments 
involving AIXI as proving *anything* except absolute limits c.f. Matt Mahoney 
and compression = intelligence).
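
To put a number on "rapidly approaches zero": even if an emitter with no
underlying model had, say, a 99% chance of producing each successive symbol
acceptably by sheer luck, the chance of sustaining a long coherent exchange
collapses.  The probabilities below are invented purely for illustration.

# Chance of n consecutive lucky-but-unmodeled symbols, at per-symbol odds p.
p = 0.99
for n in (100, 1_000, 10_000):
    print(f"n = {n:>6}: probability = {p ** n:.3e}")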

I'm going to continue arguing that understanding exactly equates to having a 
competent domain model and being able to communicate about it (i.e. that there 
is no mystery about understanding -- other than not understanding it ;-).

> Can you describe any possible finite set of finite-precision observations
> that could provide evidence in favor of the hypothesis that you possess
> this posited "understanding", and against the hypothesis that you are
> something equivalent to a digital computer?

> I think you cannot.

But I would argue that this is because a digital computer can have 
understanding (and must and will in order to be an AGI).

>> So, your belief in this posited "understanding" has nothing to do with 
>> science, it's
>> basically a kind of religious faith, it seems to me... '-)

If you're assuming that humans have it and computers can't, then I have to 
strenuously agree.  There is no data (that I am aware of) to support this 
conclusion so it's pure faith, not science.






Re: [agi] constructivist issues

2008-10-22 Thread Mark Waser
>> I don't want to diss the personal value of logically inconsistent thoughts.  
>> But I doubt their scientific and engineering value.

It doesn't seem to make sense that something would have personal value and then 
not have scientific or engineering value.

I can sort of understand the claim for science if you're interpreting science 
as a search for the final correct/optimal answer, but engineering generally 
goes for either "good enough" or "the best of the currently known available 
options", and anything that really/truly has personal value would seem to have 
engineering value.







Re: AW: AW: [agi] Re: Defining AGI

2008-10-22 Thread Mark Waser

I'm also confused. This has been a strange thread. People of average
and around-average intelligence are trained as lab technicians or
database architects every day. Many of them are doing real science.
Perhaps a person with down's syndrome would do poorly in one of these
largely practical positions. Perhaps.

The consensus seems to be that there is no way to make a fool do a
scientist's job. But he can do parts of it. A scientist with a dozen
fools at hand could be a great deal more effective than a rival with
none, whereas a dozen fools on their own might not be expected to do
anything at all. So it is complicated.


Or maybe another way to rephrase it is to combine it with another thread . . . .


Any individual piece of science is understandable/teachable to (or my 
original point -- verifiable or able to be validated by) any general 
intelligence but the totality of science combined with the world is far too 
large to . . . . (which is effectively Ben's point) 







Re: [agi] constructivist issues

2008-10-22 Thread Mark Waser

(1) We humans understand the semantics of formal system X.


No.  This is the root of your problem.  For example, replace "formal system 
X" with "XML".  Saying that "We humans understand the semantics of XML" 
certainly doesn't work, which is why I would argue that natural language 
understanding is AGI-complete (i.e. by the time all the RDF, OWL, and other 
ontology work is completed -- you'll have an AGI).  Any formal system can 
always be extended *within its defined syntax* to have any meaning.  That 
is the essence of Godel's Incompleteness Theorem.


It's also sort of the basis for my argument with Dr. Matthias Heger. 
Semantics are never finished except when your model of the world is finished 
(including all possibilities and infinitudes) so language understanding 
can't be simple and complete.


Personally, rather than starting with NLP, I think that we're going to need 
to start with a formal language that is a disambiguated subset of English 
and figure out how to use our world model/knowledge to translate English to 
this disambiguated subset -- and then we can build from there.  (or maybe 
this makes Heger's argument for him . . . .  ;-)







Re: [agi] constructivist issues

2008-10-22 Thread Mark Waser
>> Well, if you are a computable system, and if by "think" you mean "represent 
>> accurately and internally" then you can only think that odd thought via 
>> being logically inconsistent... ;-)

True -- but why are we assuming *internally*?  Drop that assumption as Charles 
clearly did and there is no problem.

It's like infrastructure . . . . I don't have to know all the details of 
something to use it under normal circumstances, though I frequently need to know 
the details if I'm doing something odd with it or looking for extreme 
performance, and I definitely need to know the details if I'm 
diagnosing/fixing/debugging it -- but I can always learn them as I go . . . . 


  - Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Tuesday, October 21, 2008 11:26 PM
  Subject: Re: [agi] constructivist issues



  Well, if you are a computable system, and if by "think" you mean "represent 
accurately and internally" then you can only think that odd thought via being 
logically inconsistent... ;-)




  On Tue, Oct 21, 2008 at 11:23 PM, charles griffiths <[EMAIL PROTECTED]> wrote:

  I disagree, and believe that I can think X: "This is a thought (T) 
that is way too complex for me to ever have."

  Obviously, I can't think T and then think X, but I might represent T 
as a combination of myself plus a notebook or some other external media. Even 
if I only observe part of T at once, I might appreciate that it is one thought 
and believe (perhaps in error) that I could never think it.

  I might even observe T in action, if T is the result of billions of 
measurements, comparisons and calculations in a computer program.

  Isn't it just like thinking "This is an image that is way too 
detailed for me to ever see"?

  Charles Griffiths

  --- On Tue, 10/21/08, Ben Goertzel <[EMAIL PROTECTED]> wrote:

From: Ben Goertzel <[EMAIL PROTECTED]>
Subject: Re: [agi] constructivist issues
To: agi@v2.listbox.com
Date: Tuesday, October 21, 2008, 7:56 PM



I am a Peircean pragmatist ...

I have no objection to using infinities in mathematics ... they can 
certainly be quite useful.  I'd rather use differential calculus to do 
calculations, than do everything using finite differences.

It's just that, from a science perspective, these mathematical 
infinities have to be considered finite formal constructs ... they don't exist 
except in this way ...

I'm not going to claim the pragmatist perspective is the only 
subjectively meaningful one.  But so far as I can tell it's the only useful one 
for science and engineering...

To take a totally different angle, consider the thought X = "This 
is a thought that is way too complex for me to ever have"

Can I actually think X?

Well, I can understand the *idea* of X.  I can manipulate it 
symbolically and formally.  I can reason about it and empathize with it by 
analogy to "A thought that is way too complex for my three-year-old past-self 
to have ever had" , and so forth.

But it seems I can't ever really think X, except by being logically 
inconsistent within that same thought ... this is the Godel limitation applied 
to my own mind...

I don't want to diss the personal value of logically inconsistent 
thoughts.  But I doubt their scientific and engineering value.

-- Ben G




On Tue, Oct 21, 2008 at 10:43 PM, Abram Demski <[EMAIL PROTECTED]> 
wrote:

  Ben,

  How accurate would it be to describe you as a finitist or
  ultrafinitist? I ask because your view about restricting 
quantifiers
  seems to reject even the infinities normally allowed by
  constructivists.

  --Abram







-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"Nothing will ever be attempted if all possible objections must be 
first overcome "  - Dr Samuel Johnson









  -- 
  Ben Goertzel, PhD
  CEO, Novamente LLC and Biomind LLC
  Director of Research, SIAI
  [EMAIL PROTECTED]

  "Nothing will ever be attempted if all possible objections

Re: [agi] constructivist issues

2008-10-22 Thread Mark Waser
>> I disagree, and believe that I can think X: "This is a thought (T) that is 
>> way too complex for me to ever have."
>> Obviously, I can't think T and then think X, but I might represent T as a 
>> combination of myself plus a notebook or some other external media. Even if 
>> I only observe part of T at once, I might appreciate that it is one thought 
>> and believe (perhaps in error) that I could never think it.
>> I might even observe T in action, if T is the result of billions of 
>> measurements, comparisons and calculations in a computer program.
>> Isn't it just like thinking "This is an image that is way too detailed for 
>> me to ever see"?

Excellent!  This is precisely how I feel about intelligence . . . .  (and why 
we *can* understand it even if we can't hold the totality of it -- or fully 
predict it -- sort of like the weather :-)






Re: [agi] constructivist issues

2008-10-22 Thread Mark Waser
>> You have not convinced me that you can do anything a computer can't do.
>> And, using language or math, you never will -- because any finite set of 
>> symbols
>> you can utter, could also be uttered by some computational system.
>> -- Ben G

Can we pin this somewhere?

(Maybe on Penrose?  ;-)




Re: [agi] constructivist issues

2008-10-22 Thread Mark Waser
>> IMHO that is an almost hopeless approach, ambiguity is too integral to 
>> English or any natural language ... e.g preposition ambiguity

Actually, I've been making pretty good progress.  You just always use big words 
and never use small words and/or you use a specific phrase as a "word".  
Ambiguous prepositions just disambiguate to one of three/four/five/more 
possible unambiguous words/phrases.
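
(To make that concrete, here is a toy sketch of the kind of sense table I have 
in mind -- the entries and names below are invented for illustration, not taken 
from any existing resource:)

# Toy illustration: each ambiguous preposition fans out to only a handful of
# unambiguous replacement phrases.  All entries are made up for the example.
PREPOSITION_SENSES = {
    "with": ["accompanied by", "by means of", "possessing"],
    "for":  ["intended to benefit", "in exchange for", "for a duration of"],
    "in":   ["inside of", "located in the region of", "during"],
}

def candidate_rewrites(sentence, word):
    """List the unambiguous rewrites of one ambiguous word in a sentence."""
    return [sentence.replace(word, phrase, 1)
            for phrase in PREPOSITION_SENSES.get(word, [word])]

for s in candidate_rewrites("I paid for the book", "for"):
    print(s)
# The semantically-driven translator's only job is then to pick the right
# candidate -- "I paid in exchange for the book" in this case.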

The problem is that most previous subsets (Simplified English, Basic English) 
actually *favored* the small tremendously over-used/ambiguous words (because 
you got so much more "bang for the buck" with them).

Try only using big unambiguous words and see if you still have the same 
opinion.  

>> If you want to take this sort of approach, you'd better start with Lojban 
>> instead  Learning Lojban is a pain but far less pain than you'll have 
>> trying to make a disambiguated subset of English.

My first reaction is . . . . Take a Lojban dictionary and see if you can come 
up with an unambiguous English word or very short phrase for each Lojban word.  
If you can do it, my approach will work and will have the advantage that the 
output can be read by anyone (i.e. it's the equivalent of me having done it in 
Lojban and then added a Lojban -> English translation on the end) though the 
input is still *very* problematical (thus the need for a semantically-driven 
English->subset translator).  If you can't do it, then my approach won't work.

Can you do it?  Why or why not?  If you can, do you still believe that my 
approach won't work?  Oh, wait . . . . a Lojban-to-English dictionary *does* 
attempt to come up with an unambiguous English word or very short phrase for 
each Lojban word.  :-)

Actually, hmm . . . . a Lojban dictionary would probably help me focus my 
efforts a bit better and highlight things that I may have missed . . . . do you 
have a preferred dictionary or resource?  (Google has too many for me to do a 
decent perusal quickly)



  - Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Wednesday, October 22, 2008 11:11 AM
  Subject: Re: [agi] constructivist issues







Personally, rather than starting with NLP, I think that we're going to need 
to start with a formal language that is a disambiguated subset of English 


  IMHO that is an almost hopeless approach, ambiguity is too integral to 
English or any natural language ... e.g preposition ambiguity

  If you want to take this sort of approach, you'd better start with Lojban 
instead  Learning Lojban is a pain but far less pain than you'll have 
trying to make a disambiguated subset of English.

  ben g 






Re: [agi] constructivist issues

2008-10-22 Thread Mark Waser
(joke)

What?  You don't love me any more?  


  - Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Wednesday, October 22, 2008 11:11 AM
  Subject: Re: [agi] constructivist issues



  (joke)


  On Wed, Oct 22, 2008 at 11:11 AM, Ben Goertzel <[EMAIL PROTECTED]> wrote:




On Wed, Oct 22, 2008 at 10:51 AM, Mark Waser <[EMAIL PROTECTED]> wrote:

  >> I don't want to diss the personal value of logically inconsistent 
thoughts.  But I doubt their scientific and engineering value.

  It doesn't seem to make sense that something would have personal value and 
then not have scientific or engineering value.

Come by the house, we'll drop some acid together and you'll be convinced ;-)
 





  -- 
  Ben Goertzel, PhD
  CEO, Novamente LLC and Biomind LLC
  Director of Research, SIAI
  [EMAIL PROTECTED]

  "A human being should be able to change a diaper, plan an invasion, butcher a 
hog, conn a ship, design a building, write a sonnet, balance accounts, build a 
wall, set a bone, comfort the dying, take orders, give orders, cooperate, act 
alone, solve equations, analyze a new problem, pitch manure, program a 
computer, cook a tasty meal, fight efficiently, die gallantly. Specialization 
is for insects."  -- Robert Heinlein






Re: [agi] constructivist issues

2008-10-22 Thread Mark Waser
>> Come by the house, we'll drop some acid together and you'll be convinced ;-)

Been there, done that.  Just because some logically inconsistent thoughts have 
no value doesn't mean that all logically inconsistent thoughts have no value.

Not to mention the fact that hallucinogens, if not the subsequently warped 
thoughts, do have the serious value of raising your mental Boltzmann 
temperature.

  - Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Wednesday, October 22, 2008 11:11 AM
  Subject: Re: [agi] constructivist issues





  On Wed, Oct 22, 2008 at 10:51 AM, Mark Waser <[EMAIL PROTECTED]> wrote:

>> I don't want to diss the personal value of logically inconsistent 
thoughts.  But I doubt their scientific and engineering value.

It doesn't seem to make sense that something would have personal value and 
then not have scientific or engineering value.

  Come by the house, we'll drop some acid together and you'll be convinced ;-)
   





Re: [agi] constructivist issues

2008-10-22 Thread Mark Waser

What I meant was, it seems like humans are
"logically complete" in some sense. In practice we are greatly limited
by memory and processing speed and so on; but I *don't* think we're
limited by lacking some important logical construct. It would be like
us discovering some alien species whose mathematicians were able to
understand each individual case of mathematical induction, but were
unable to comprehend the argument for accepting it as a general
principle, because they lack the abstraction. Something like that is
what I find implausible.


I like the phrase "logically complete".

The way that I like to think about it is that we have the necessary seed of 
intelligence/competence, whatever it is, that can be logically extended to 
cover all circumstances.


We may not have the personal time or resources to do so but given infinite 
time and resources there is no block on the path from what we have to 
getting there.


Note, however, that it is my understanding that a number of people on this 
list do not agree with this statement (feel free to chime in with your 
reasons why, folks).



- Original Message - 
From: "Abram Demski" <[EMAIL PROTECTED]>

To: 
Sent: Wednesday, October 22, 2008 12:20 PM
Subject: Re: [agi] constructivist issues



Too many responses for me to comment on everything! So, sorry to those
I don't address...

Ben,

When I claim a mathematical entity exists, I'm saying loosely that
meaningful statements can be made using it. So, I think "meaning" is
more basic. I mentioned already what my current definition of meaning
is: a statement is meaningful if it is associated with a computable
rule of deduction that it can use to operate on other (meaningful)
statements. This is in contrast to positivist-style definitions of
meaning, that would instead require a computable test of truth and/or
falsehood.

So, a statement is meaningful if it has procedural deductive meaning.
We *understand* a statement if we are capable of carrying out the
corresponding deductive procedure. A statement is *true* if carrying
out that deductive procedure only produces more true statements. We
*believe* a statement if we not only understand it, but proceed to
apply its deductive procedure.

There is of course some basic level of meaningful statements, such as
sensory observations, so that this is a working recursive definition
of meaning and truth.

By this definition of meaning, any statement in the arithmetical
hierarchy is meaningful (because each statement can be represented by
computable consequences on other statements in the arithmetical
hierarchy). I think some hyperarithmetical truths are captured as
well. I am more doubtful about it capturing anything beyond the first
level of the analytic hierarchy, and general set-theoretic discourse
seems far beyond its reach. Regardless, the definition of meaning
makes a very large number of uncomputable truths nonetheless
meaningful.

Russel,

I think both Ben and I would approximately agree with everything you
said, but that doesn't change our disagreeing with each other :).

Mark,

Good call... I shouldn't be talking like I think it is terrifically
unlikely that some more-intelligent alien species would find humans
mathematically crude. What I meant was, it seems like humans are
"logically complete" in some sense. In practice we are greatly limited
by memory and processing speed and so on; but I *don't* think we're
limited by lacking some important logical construct. It would be like
us discovering some alien species whose mathematicians were able to
understand each individual case of mathematical induction, but were
unable to comprehend the argument for accepting it as a general
principle, because they lack the abstraction. Something like that is
what I find implausible.

--Abram










Re: [OpenCog] Re: [agi] constructivist issues

2008-10-22 Thread Mark Waser
>> I think this would be a relatively pain-free way to communicate with an AI 
>> that lacks the common sense to carry out disambiguation and reference 
>> resolution reliably.   Also, the log of communication would provide a nice 
>> training DB for it to use in studying disambiguation.

Awesome.  Like I said, it's a piece of something that I'm trying currently.  If 
I get positive results, I'm certainly not going to hide the fact.  ;-)

(or, it could turn into a learning experience like my attempts with Simplified 
English and Basic English :-)
  - Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Cc: [EMAIL PROTECTED] 
  Sent: Wednesday, October 22, 2008 12:27 PM
  Subject: [OpenCog] Re: [agi] constructivist issues



  This is the standard Lojban dictionary

  http://jbovlaste.lojban.org/

  I am not so worried about word meanings, they can always be handled via 
reference to WordNet via usages like run_1, run_2, etc. ... or as you say by 
using rarer, less ambiguous words

  Prepositions are more worrisome, however, I suppose they can be handled in a 
similar way, e.g. by defining an ontology of preposition meanings like with_1, 
with_2, with_3, etc.

  In fact we had someone spend a couple months integrating existing resources 
into a preposition-meaning ontology like this a while back ... the so-called 
PrepositionWordNet ... or as it eventually came to be called the LARDict or 
LogicalArgumentRelationshipDictionary ...

  I think it would be feasible to tweak RelEx to recognize these sorts of 
subscripts, and in this way to recognize a highly controlled English that would 
be unproblematic to map semantically...

  We would then say e.g.

  I ate dinner with_2 my fork

  I live in_2 Maryland

  I have lived_6 for_3 41 years

  (where I suppress all _1's, so that e.g. ate means ate_1)

  Because, RelEx already happily parses the syntax of all simple sentences, so 
the only real hassle to deal with is disambiguation.   We could use similar 
hacking for reference resolution, temporal sequencing, etc.

  The terrorists_v1 robbed_v2 my house.   After that_v2, the jerks_v1 urinated 
in_3 my yard.  

  I think this would be a relatively pain-free way to communicate with an AI 
that lacks the common sense to carry out disambiguation and reference 
resolution reliably.   Also, the log of communication would provide a nice 
training DB for it to use in studying disambiguation.

  -- Ben G



  On Wed, Oct 22, 2008 at 12:00 PM, Mark Waser <[EMAIL PROTECTED]> wrote:

>> IMHO that is an almost hopeless approach, ambiguity is too integral to 
English or any natural language ... e.g preposition ambiguity

Actually, I've been making pretty good progress.  You just always use big 
words and never use small words and/or you use a specific phrase as a "word".  
Ambiguous prepositions just disambiguate to one of three/four/five/more 
possible unambiguous words/phrases.

The problem is that most previous subsets (Simplified English, Basic 
English) actually *favored* the small tremendously over-used/ambiguous words 
(because you got so much more "bang for the buck" with them).

Try only using big unambiguous words and see if you still have the same 
opinion.  

>> If you want to take this sort of approach, you'd better start with 
Lojban instead  Learning Lojban is a pain but far less pain than you'll 
have trying to make a disambiguated subset of English.

My first reaction is . . . . Take a Lojban dictionary and see if you can 
come up with an unambiguous English word or very short phrase for each Lojban 
word.  If you can do it, my approach will work and will have the advantage that 
the output can be read by anyone (i.e. it's the equivalent of me having done it 
in Lojban and then added a Lojban -> English translation on the end) though the 
input is still *very* problematical (thus the need for a semantically-driven 
English->subset translator).  If you can't do it, then my approach won't work.

Can you do it?  Why or why not?  If you can, do you still believe that my 
approach won't work?  Oh, wait . . . . a Lojban-to-English dictionary *does* 
attempt to come up with an unambiguous English word or very short phrase for 
each Lojban word.  :-)

Actually, hmm . . . . a Lojban dictionary would probably help me focus my 
efforts a bit better and highlight things that I may have missed . . . . do you 
have a preferred dictionary or resource?  (Google has too many for me to do a 
decent perusal quickly)



  - Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Wednesday, October 22, 2008 11:11 AM
  Subject: Re: [agi] constructivist issues







Personally, rather than starting with NLP, I think that we're going to 
need to start with a formal language that is a

Re: [agi] constructivist issues

2008-10-22 Thread Mark Waser
Douglas Hofstadter's newest book I Am A Strange Loop (currently available 
from Amazon for $7.99 - 
http://www.amazon.com/Am-Strange-Loop-Douglas-Hofstadter/dp/B001FA23HM) has 
an excellent chapter showing Godel in syntax and semantics.  I highly 
recommend it.


The upshot is that while it is easily possible to define a complete formal 
system of syntax, that formal system can always be used to convey something 
(some semantics) that is (are) outside/beyond the system -- OR, to 
paraphrase -- meaning is always incomplete because it can always be added to 
even inside a formal system of syntax.
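
(For reference, the standard formal statement being paraphrased here, in my own 
wording rather than Hofstadter's:)

\textbf{G\"odel's first incompleteness theorem.}  Let $T$ be a consistent,
effectively axiomatizable formal system that interprets basic arithmetic.
Then there is a sentence $G_T$ that $T$ proves equivalent to the assertion of
its own unprovability,
\[
  T \vdash \bigl( G_T \leftrightarrow \lnot\,\mathrm{Prov}_T(\ulcorner G_T \urcorner) \bigr),
\]
and consequently $T \nvdash G_T$; if $T$ is moreover $\omega$-consistent, then
$T \nvdash \lnot G_T$ as well.  So the system's own syntax expresses a truth
(namely $G_T$) that the system itself cannot prove.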


This is why I contend that language translation ends up being AGI-complete 
(although bounded subsets clearly don't need to be -- the question is 
whether you get a usable/useful subset more easily with or without first 
creating a seed AGI).


- Original Message - 
From: "Abram Demski" <[EMAIL PROTECTED]>

To: 
Sent: Wednesday, October 22, 2008 12:38 PM
Subject: Re: [agi] constructivist issues



Mark,

The way you invoke Godel's Theorem is strange to me... perhaps you
have explained your argument more fully elsewhere, but as it stands I
do not see your reasoning.

--Abram

On Wed, Oct 22, 2008 at 12:20 PM, Mark Waser <[EMAIL PROTECTED]> wrote:

It looks like all this "disambiguation" by moving to a more formal
language is about sweeping the problem under the rug, removing the
need for uncertain reasoning from surface levels of syntax and
semantics, to remember about it 10 years later, retouch the most
annoying holes with simple statistical techniques, and continue as
before.


That's an excellent criticism but not the intent.

Godel's Incompleteness Theorem means that you will be forever building . . . .

All that disambiguation does is provide a solid, commonly-agreed upon
foundation to build from.

English and all natural languages are *HARD*.  They are not optimal for
simple understanding particularly given the realms we are currently in 
and

ambiguity makes things even worse.

Languages have so many ambiguities because of the way that they (and
concepts) develop.  You see something new, you grab the nearest analogy 
and

word/label and then modify it to fit.  That's why you then later need the
much longer words and very specific scientific terms and names.

Simple language is what you need to build the more specific complex
language.  Having an unambiguous constructed language is simply a 
template

or mold that you can use as scaffolding while you develop NLU.  Children
start out very unambiguous and concrete and so should we.

(And I don't believe in statistical techniques unless you have the 
resources

of Google or AIXI)















Re: [OpenCog] Re: [agi] constructivist issues

2008-10-22 Thread Mark Waser
>> Well, I am confident my approach with subscripts to handle disambiguation 
>> and reference resolution would work, in conjunction with the existing 
>> link-parser/RelEx framework...
>> If anyone wants to implement it, it seems like "just" some hacking with the 
>> open-source Java RelEx code...

Like what I called a "semantically-driven English->subset translator"?

Oh, I'm pretty confident that it will work as well . . . . after the La Brea tar 
pit of implementations . . . . (exactly how little semantic-related coding do 
you think will be necessary? ;-)
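
(For the curious, a minimal sketch of the kind of subscript expansion being 
discussed -- the sense inventory is hypothetical and none of this comes from 
the actual RelEx code:)

import re

# Hypothetical sense inventory: (word, subscript) -> unambiguous gloss.
# Real entries would come from WordNet / a preposition ontology.
SENSES = {
    ("with", 1): "accompanied_by",
    ("with", 2): "using_as_instrument",
    ("in", 1): "inside_of",
    ("in", 2): "located_in_region",
}

TOKEN = re.compile(r"([A-Za-z]+)(?:_(\d+))?")

def expand(sentence):
    """Replace each subscripted token with its gloss; bare words default to sense 1."""
    out = []
    for word, sub in TOKEN.findall(sentence):
        sense = int(sub) if sub else 1
        out.append(SENSES.get((word.lower(), sense), word))
    return " ".join(out)

print(expand("I ate dinner with_2 my fork"))
# -> "I ate dinner using_as_instrument my fork"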



  - Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Cc: [EMAIL PROTECTED] 
  Sent: Wednesday, October 22, 2008 1:06 PM
  Subject: Re: [OpenCog] Re: [agi] constructivist issues



  Well, I am confident my approach with subscripts to handle disambiguation and 
reference resolution would work, in conjunction with the existing 
link-parser/RelEx framework...

  If anyone wants to implement it, it seems like "just" some hacking with the 
open-source Java RelEx code...

  ben g


  On Wed, Oct 22, 2008 at 12:59 PM, Mark Waser <[EMAIL PROTECTED]> wrote:

>> I think this would be a relatively pain-free way to communicate with an 
AI that lacks the common sense to carry out disambiguation and reference 
resolution reliably.   Also, the log of communication would provide a nice 
training DB for it to use in studying disambiguation.

Awesome.  Like I said, it's a piece of something that I'm trying currently. 
 If I get positive results, I'm certainly not going to hide the fact.  ;-)

(or, it could turn into a learning experience like my attempts with 
Simplified English and Basic English :-)
  - Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Cc: [EMAIL PROTECTED] 
  Sent: Wednesday, October 22, 2008 12:27 PM
  Subject: [OpenCog] Re: [agi] constructivist issues



  This is the standard Lojban dictionary

  http://jbovlaste.lojban.org/

  I am not so worried about word meanings, they can always be handled via 
reference to WordNet via usages like run_1, run_2, etc. ... or as you say by 
using rarer, less ambiguous words

  Prepositions are more worrisome, however, I suppose they can be handled 
in a similar way, e.g. by defining an ontology of preposition meanings like 
with_1, with_2, with_3, etc.

  In fact we had someone spend a couple months integrating existing 
resources into a preposition-meaning ontology like this a while back ... the 
so-called PrepositionWordNet ... or as it eventually came to be called the 
LARDict or LogicalArgumentRelationshipDictionary ...

  I think it would be feasible to tweak RelEx to recognize these sorts of 
subscripts, and in this way to recognize a highly controlled English that would 
be unproblematic to map semantically...

  We would then say e.g.

  I ate dinner with_2 my fork

  I live in_2 Maryland

  I have lived_6 for_3 41 years

  (where I suppress all _1's, so that e.g. ate means ate_1)

  Because, RelEx already happily parses the syntax of all simple sentences, 
so the only real hassle to deal with is disambiguation.   We could use similar 
hacking for reference resolution, temporal sequencing, etc.

  The terrorists_v1 robbed_v2 my house.   After that_v2, the jerks_v1 
urinated in_3 my yard.  

  I think this would be a relatively pain-free way to communicate with an 
AI that lacks the common sense to carry out disambiguation and reference 
resolution reliably.   Also, the log of communication would provide a nice 
training DB for it to use in studying disambiguation.

  -- Ben G



  On Wed, Oct 22, 2008 at 12:00 PM, Mark Waser <[EMAIL PROTECTED]> wrote:

>> IMHO that is an almost hopeless approach, ambiguity is too integral 
to English or any natural language ... e.g preposition ambiguity

Actually, I've been making pretty good progress.  You just always use 
big words and never use small words and/or you use a specific phrase as a 
"word".  Ambiguous prepositions just disambiguate to one of 
three/four/five/more possible unambiguous words/phrases.

The problem is that most previous subsets (Simplified English, Basic 
English) actually *favored* the small tremendously over-used/ambiguous words 
(because you got so much more "bang for the buck" with them).

Try only using big unambiguous words and see if you still have the same 
opinion.  

>> If you want to take this sort of approach, you'd better start with 
Lojban instead  Learning Lojban is a pain but far less pain than you'll 
have trying to make a disambiguated subset of English.

My first reaction is . . . . Take a Lojban dictionary and see if you 
can come up with an unambiguous Engl

Re: AW: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-22 Thread Mark Waser
A couple of distinctions that I think would be really helpful for this 
discussion . . . . 

There is a profound difference between learning to play chess legally and 
learning to play chess well.

There is an equally profound difference between discovering how to play chess 
well and being taught to play chess well.

Personally, I think that a minimal AGI should be able to be taught to play 
chess reasonably well (i.e. about how well an average human would play after 
being taught the rules and playing a reasonable number of games with 
hints/pointers/tutoring provided) at about the same rate as a human when given 
the same assistance as that human.

Given that grandmasters don't learn solely from chess-only examples without 
help or without analogies and strategies from other domains, I don't see why an 
AGI should be forced to operate under those constraints.  Being taught is much 
faster and easier than discovering on your own.  Translating an analogy or 
transferring a strategy from another domain is much faster than discovering 
something new or developing something from scratch.  Why are we crippling our 
AGI in the name of simplicity?

(And Go is obviously the same)





RE: AW: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-22 Thread Mark Waser
>> Still, using chess as a test case may not be useless; a system that produces 
>> a convincing story about concept formation in the chess domain (that is, 
>> that invents concepts for pinning, pawn chains, speculative sacrifices in 
>> exchange for piece mobility, zugzwang, and so on without an identifiable 
>> bias toward these things) would at least be interesting to those interested 
>> in AGI.

I believe that generic concept formation and explanation is an AGI-complete 
problem.  Would you agree or disagree?


>> Mathematics, though, is interesting in other ways.  I don't believe that 
>> much of mathematics involves the logical transformations performed in proof 
>> steps.  A system that invents new fields of mathematics, new terms, new 
>> mathematical "ideas" -- that is truly interesting.  Inference control is 
>> boring, but inventing mathematical induction, complex numbers, or ring 
>> theory -- THAT is AGI-worthy.
 
Is this different from generic concept formulation and explanation (just in a 
slightly different domain)?




Re: AW: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-22 Thread Mark Waser
>> No system can make those kinds of inventions without sophisticated inference 
>> control.  Concept creation of course is required also, though.

I'd argue that this is bad phrasing.  

Sure, effective control is necessary to create the concepts that you need to 
fulfill your goals (as opposed to far too many random unuseful concepts) . . . 
. 

But it isn't "Concept creation of course is required also", it really is 
"Effective control is necessary for effective concept creation which is 
necessary for effective goal fulfillment."

And assuming that control must be sophisticated and necessarily entirely in the 
realm of inference are just assumptions . . . .   :-)

  - Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Wednesday, October 22, 2008 3:54 PM
  Subject: **SPAM** Re: AW: [agi] If your AGI can't learn to play chess it is 
no AGI







>> Mathematics, though, is interesting in other ways.  I don't believe that 
much of mathematics involves the logical transformations performed in proof 
steps.  A system that invents new fields of mathematics, new terms, new 
mathematical "ideas" -- that is truly interesting.  Inference control is 
boring, but inventing mathematical induction, complex numbers, or ring theory 
-- THAT is AGI-worthy.
 
Is this different from generic concept formulation and explanation (just in 
a slightly different domain)?


  No system can make those kinds of inventions without sophisticated inference 
control.  Concept creation of course is required also, though.

  -- Ben
   





Re: [agi] constructivist issues

2008-10-22 Thread Mark Waser
Most of what I was thinking of and referring to is in Chapter 10, "Gödel's 
Quintessential Strange Loop" (pages 125-145 in my version), but I would 
suggest that you really need to read the shorter Chapter 9, "Pattern and 
Provability" (pages 113-122), first.


I actually had them conflated into a single chapter in my memory.

I think that you'll enjoy them tremendously.

- Original Message - 
From: "Abram Demski" <[EMAIL PROTECTED]>

To: 
Sent: Wednesday, October 22, 2008 4:19 PM
Subject: Re: [agi] constructivist issues



Mark,

Chapter number please?

--Abram

On Wed, Oct 22, 2008 at 1:16 PM, Mark Waser <[EMAIL PROTECTED]> wrote:

Douglas Hofstadter's newest book I Am A Strange Loop (currently available
from Amazon for $7.99 -
http://www.amazon.com/Am-Strange-Loop-Douglas-Hofstadter/dp/B001FA23HM) 
has

an excellent chapter showing Godel in syntax and semantics.  I highly
recommend it.

The upshot is that while it is easily possible to define a complete 
formal
system of syntax, that formal system can always be used to convey 
something

(some semantics) that is (are) outside/beyond the system -- OR, to
paraphrase -- meaning is always incomplete because it can always be added 
to

even inside a formal system of syntax.

This is why I contend that language translation ends up being 
AGI-complete

(although bounded subsets clearly don't need to be -- the question is
whether you get a usable/useful subset more easily with or without first
creating a seed AGI).

- Original Message - From: "Abram Demski" <[EMAIL PROTECTED]>
To: 
Sent: Wednesday, October 22, 2008 12:38 PM
Subject: Re: [agi] constructivist issues



Mark,

The way you invoke Godel's Theorem is strange to me... perhaps you
have explained your argument more fully elsewhere, but as it stands I
do not see your reasoning.

--Abram

On Wed, Oct 22, 2008 at 12:20 PM, Mark Waser <[EMAIL PROTECTED]> 
wrote:


It looks like all this "disambiguation" by moving to a more formal
language is about sweeping the problem under the rug, removing the
need for uncertain reasoning from surface levels of syntax and
semantics, to remember about it 10 years later, retouch the most
annoying holes with simple statistical techniques, and continue as
before.


That's an excellent criticism but not the intent.

Godel's Incompleteness Theorem means that you will be forever building . . . .

All that disambiguation does is provide a solid, commonly-agreed upon
foundation to build from.

English and all natural languages are *HARD*.  They are not optimal for
simple understanding particularly given the realms we are currently in
and
ambiguity makes things even worse.

Languages have so many ambiguities because of the way that they (and
concepts) develop.  You see something new, you grab the nearest analogy
and
word/label and then modify it to fit.  That's why you then later need 
the

much longer words and very specific scientific terms and names.

Simple language is what you need to build the more specific complex
language.  Having an unambiguous constructed language is simply a
template
or mold that you can use as scaffolding while you develop NLU. 
Children

start out very unambiguous and concrete and so should we.

(And I don't believe in statistical techniques unless you have the
resources
of Google or AIXI)

























Re: AW: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-22 Thread Mark Waser
My point was meant to be that control is part of 
effective concept creation.  You had phrased it as if concept creation was an 
additional necessity on top of inference control.

But I think we're reaching the point of silliness here . . . .   
  - Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Wednesday, October 22, 2008 6:35 PM
  Subject: **SPAM** Re: AW: [agi] If your AGI can't learn to play chess it is 
no AGI



  all these words ...  "inference", "control", "concept", "creation" ... are 
inadequately specified in natural language so misunderstandings will be easy to 
come by.  However, I don't have time to point out the references to my 
particular intended definitions..

  I did not mean to imply that the control involved would be entirely in the 
domain of inference, even when inference is broadly construed... just that 
control of inference, broadly construed, is a key aspect...

  ben g


  On Wed, Oct 22, 2008 at 5:41 PM, Mark Waser <[EMAIL PROTECTED]> wrote:

>> No system can make those kinds of inventions without sophisticated 
inference control.  Concept creation of course is required also, though.

I'd argue that this is bad phrasing.  

Sure, effective control is necessary to create the concepts that you need 
to fulfill your goals (as opposed to far too many random unuseful concepts) . . 
. . 

But it isn't "Concept creation of course is required also", it really is 
"Effective control is necessary for effective concept creation which is 
necessary for effective goal fulfillment."

And assuming that control must be sophisticated and necessarily entirely in 
the realm of inference are just assumptions . . . .   :-)

  - Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Wednesday, October 22, 2008 3:54 PM
  Subject: **SPAM** Re: AW: [agi] If your AGI can't learn to play chess it 
is no AGI







>> Mathematics, though, is interesting in other ways.  I don't believe 
that much of mathematics involves the logical transformations performed in 
proof steps.  A system that invents new fields of mathematics, new terms, new 
mathematical "ideas" -- that is truly interesting.  Inference control is 
boring, but inventing mathematical induction, complex numbers, or ring theory 
-- THAT is AGI-worthy.
 
Is this different from generic concept formulation and explanation 
(just in a slightly different domain)?


  No system can make those kinds of inventions without sophisticated 
inference control.  Concept creation of course is required also, though.

  -- Ben
   







  -- 
  Ben Goertzel, PhD
  CEO, Novamente LLC and Biomind LLC
  Director of Research, SIAI
  [EMAIL PROTECTED]

  "A human being should be able to change a diaper, plan an invasion, butcher a 
hog, conn a ship, design a building, write a sonnet, balance accounts, build a 
wall, set a bone, comfort the dying, take orders, give orders, cooperate, act 
alone, solve equations, analyze a new problem, pitch manure, program a 
computer, cook a tasty meal, fight efficiently, die gallantly. Specialization 
is for insects."  -- Robert Heinlein






Re: AW: AW: [agi] Language learning (was Re: Defining AGI)

2008-10-23 Thread Mark Waser

I have already proved something stronger


What would you consider your best reference/paper outlining your arguments? 
Thanks in advance.


- Original Message - 
From: "Matt Mahoney" <[EMAIL PROTECTED]>

To: 
Sent: Wednesday, October 22, 2008 8:55 PM
Subject: Re: AW: AW: [agi] Language learning (was Re: Defining AGI)



--- On Wed, 10/22/08, Dr. Matthias Heger <[EMAIL PROTECTED]> wrote:


You make the implicit assumption that a natural language
understanding system will pass the turing test. Can you prove this?


If you accept that a language model is a probability distribution over 
text, then I have already proved something stronger. A language model 
exactly duplicates the distribution of answers that a human would give. 
The output is indistinguishable by any test. In fact a judge would have 
some uncertainty about other people's language models. A judge could be 
expected to attribute some errors in the model to normal human variation.



Furthermore,  it is just an assumption that the ability to
have and to apply
the rules are really necessary to pass the turing test.

For these two reasons, you still haven't shown 3a and
3b.


I suppose you are right. Instead of encoding mathematical rules as a 
grammar, with enough training data you can just code all possible 
instances that are likely to be encountered. For example, instead of a 
grammar rule to encode the commutative law of addition,


 5 + 3 = a + b = b + a = 3 + 5

a model with a much larger training data set could just encode instances 
with no generalization:


 12 + 7 = 7 + 12
 92 + 0.5 = 0.5 + 92
 etc.

I believe this is how Google gets away with brute force n-gram statistics 
instead of more sophisticated grammars. Its language model is probably 
10^5 times larger than a human model (10^14 bits vs 10^9 bits). Shannon 
observed in 1949 that random strings generated by n-gram models of English 
(where n is the number of either letters or words) look like natural 
language up to length 2n. For a typical human sized model (1 GB text), n 
is about 3 words. To model strings longer than 6 words we would need more 
sophisticated grammar rules. Google can model 5-grams (see 
http://googleresearch.blogspot.com/2006/08/all-our-n-gram-are-belong-to-you.html ) 
, so it is able to generate and recognize (thus appear to understand) 
sentences up to about 10 words.
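
(A minimal sketch of the kind of word-level n-gram generator Shannon's 
observation refers to -- toy corpus and invented code, nothing to do with 
Google's actual system:)

import random
from collections import defaultdict

def train_ngram(words, n):
    """Map each (n-1)-word context to the words observed to follow it."""
    model = defaultdict(list)
    for i in range(len(words) - n + 1):
        context, nxt = tuple(words[i:i + n - 1]), words[i + n - 1]
        model[context].append(nxt)
    return model

def generate(model, context, length):
    """Sample a continuation; output looks locally fluent out to roughly 2n words."""
    out = list(context)
    for _ in range(length):
        followers = model.get(tuple(out[-len(context):]))
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

corpus = "the cat sat on the mat and the dog sat on the rug".split()
model = train_ngram(corpus, 3)                 # a toy trigram model
print(generate(model, ("the", "cat"), 8))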



By the way:
The turing test must convince 30% of the people.
Today there is a system which can already convince 25%

http://www.sciencedaily.com/releases/2008/10/081013112148.htm


It would be interesting to see a version of the Turing test where the 
human confederate, machine, and judge all have access to a computer with 
an internet connection. I wonder if this intelligence augmentation would 
make the test easier or harder to pass?




-Matthias


> 3) you apply rules such as 5 * 7 = 35 -> 35 / 7 = 5
but
> you have not shown that
> 3a) that a language understanding system
necessarily(!) has
> this rules
> 3b) that a language understanding system
necessarily(!) can
> apply such rules

It must have the rules and apply them to pass the Turing
test.

-- Matt Mahoney, [EMAIL PROTECTED]



-- Matt Mahoney, [EMAIL PROTECTED]











Re: [agi] constructivist issues

2008-10-23 Thread Mark Waser
But, I still do not agree with the way you are using the incompleteness 
theorem.


Um.  OK.  Could you point to a specific example where you disagree?  I'm a 
little at a loss here . . . .


It is important to distinguish between two different types of 
incompleteness.
1. Normal Incompleteness-- a logical theory fails to completely specify 
something.
2. Godelian Incompleteness-- a logical theory fails to completely specify 
something, even though we want it to.


I'm also not getting this.  If I read the words, it looks like the 
difference between Normal and Godelian incompleteness is based upon our 
desires.  I think I'm having a complete disconnect with your intended 
meaning.



However, it seems like all you need is type 1 completeness for what

you are saying.

So, Godel's theorem is way overkill here in my opinion.


Um.  OK.  So I used a bazooka on a fly?  If it was a really pesky fly and I 
didn't destroy anything else, is that wrong?  :-)


It seems as if you're not arguing with my conclusion but saying that my 
arguments were way better than they needed to be (like I'm being 
over-efficient?) . . . .


= = = = =

Seriously though, I'm having a complete disconnect here.  Maybe I'm just 
having a bad morning but . . .  huh?   :-)
If I read the words, all I'm getting is that you disagree with the way that 
I am using the theory because the theory is overkill for what is necessary.


- Original Message - 
From: "Abram Demski" <[EMAIL PROTECTED]>

To: 
Sent: Wednesday, October 22, 2008 9:05 PM
Subject: Re: [agi] constructivist issues


Mark,

I own and have read the book-- but my first introduction to Godel's
Theorem was Douglas Hofstadter's earlier work, Godel Escher Bach.
Since I had already been guided through the details of the proof (and
grappled with the consequences), to be honest chapter 10 you refer to
was a little boring :).

But, I still do not agree with the way you are using the incompleteness 
theorem.


It is important to distinguish between two different types of 
incompleteness.


1. Normal Incompleteness-- a logical theory fails to completely
specify something.
2. Godelian Incompleteness-- a logical theory fails to completely
specify something, even though we want it to.

Logicians always mean type 2 incompleteness when they use the term. To
formalize the difference between the two, the measuring stick of
"semantics" is used. If a logic's provably-true statements don't match
up to its semantically-true statements, it is incomplete.

However, it seems like all you need is type 1 completeness for what
you are saying. Nobody claims that there is a complete, well-defined
semantics for natural language against which we could measure the
"provably-true" (whatever THAT would mean).

So, Godel's theorem is way overkill here in my opinion.

--Abram

On Wed, Oct 22, 2008 at 7:48 PM, Mark Waser <[EMAIL PROTECTED]> wrote:

Most of what I was thinking of and referring to is in Chapter 10.  Gödel's
Quintessential Strange Loop (pages 125-145 in my version) but I would
suggest that you really need to read the shorter Chapter 9. Pattern and
Provability (pages 113-122) first.

I actually had them conflated into a single chapter in my memory.

I think that you'll enjoy them tremendously.

- Original Message - From: "Abram Demski" <[EMAIL PROTECTED]>
To: 
Sent: Wednesday, October 22, 2008 4:19 PM
Subject: Re: [agi] constructivist issues



Mark,

Chapter number please?

--Abram

On Wed, Oct 22, 2008 at 1:16 PM, Mark Waser <[EMAIL PROTECTED]> wrote:


Douglas Hofstadter's newest book I Am A Strange Loop (currently 
available

from Amazon for $7.99 -
http://www.amazon.com/Am-Strange-Loop-Douglas-Hofstadter/dp/B001FA23HM)
has
an excellent chapter showing Godel in syntax and semantics.  I highly
recommend it.

The upshot is that while it is easily possible to define a complete
formal
system of syntax, that formal system can always be used to convey
something
(some semantics) that is (are) outside/beyond the system -- OR, to
paraphrase -- meaning is always incomplete because it can always be 
added

to
even inside a formal system of syntax.

This is why I contend that language translation ends up being
AGI-complete
(although bounded subsets clearly don't need to be -- the question is
whether you get a usable/useful subset more easily with or without first
creating a seed AGI).

- Original Message - From: "Abram Demski" 
<[EMAIL PROTECTED]>

To: 
Sent: Wednesday, October 22, 2008 12:38 PM
Subject: Re: [agi] constructivist issues



Mark,

The way you invoke Godel's Theorem is strange to me... perhaps you
have explained your argument more fully elsewhere, but as it stands I
do not see your reasoning.

--Abram

On Wed, Oct 22, 2008 at 12:20 PM, Mark Waser <[EMAIL PROTECTED]>
wrote:


It looks like a

Re: Lojban (was Re: [agi] constructivist issues)

2008-10-23 Thread Mark Waser
Hi.  I don't understand the following statements.  Could you explain them some 
more?

- Natural language can be learned from examples. Formal language can not.

I think that you're basing this upon the methods that *you* would apply to each 
of the types of language.  It makes sense to me that, because of the 
regularities of a formal language, you would be able to use more effective 
methods -- but it doesn't mean that the methods used on natural language 
wouldn't work (just that they would be as inefficient as they are on natural 
languages).

I would also argue that the same argument applies to the first of the following 
two statements.

- Formal language must be parsed before it can be understood. Natural language 
must be understood before it can be parsed.


  - Original Message - 
  From: Matt Mahoney 
  To: agi@v2.listbox.com 
  Sent: Wednesday, October 22, 2008 9:23 PM
  Subject: Lojban (was Re: [agi] constructivist issues)


Why would anyone use a simplified or formalized English (with regular 
grammar and no ambiguities) as a path to natural language understanding? Formal 
language processing has nothing to do with natural language processing other 
than sharing a common lexicon that makes them appear superficially similar.

- Natural language can be learned from examples. Formal language can 
not.
- Formal language has an exact grammar and semantics. Natural language 
does not.
- Formal language must be parsed before it can be understood. Natural 
language must be understood before it can be parsed.
- Formal language is designed to be processed efficiently on a fast, 
reliable, sequential computer that neither makes nor tolerates errors, between 
systems that have identical, fixed language models. Natural language evolved to 
be processed efficiently by a slow, unreliable, massively parallel computer 
with enormous memory in a noisy environment between systems that have different 
but adaptive language models.

So how does yet another formal language processing system help us 
understand natural language? This route has been a dead end for 50 years, in 
spite of the ability to always make some initial progress before getting stuck.

-- Matt Mahoney, [EMAIL PROTECTED]

--- On Wed, 10/22/08, Ben Goertzel <[EMAIL PROTECTED]> wrote:

  From: Ben Goertzel <[EMAIL PROTECTED]>
  Subject: Re: [agi] constructivist issues
  To: agi@v2.listbox.com
  Cc: [EMAIL PROTECTED]
  Date: Wednesday, October 22, 2008, 12:27 PM



  This is the standard Lojban dictionary

  http://jbovlaste.lojban.org/

  I am not so worried about word meanings, they can always be handled 
via reference to WordNet via usages like run_1, run_2, etc. ... or as you say 
by using rarer, less ambiguous words

  Prepositions are more worrisome, however, I suppose they can be 
handled in a similar way, e.g. by defining an ontology of preposition meanings 
like with_1, with_2, with_3, etc.

  In fact we had someone spend a couple months integrating existing 
resources into a preposition-meaning ontology like this a while back ... the 
so-called PrepositionWordNet ... or as it eventually came to be called the 
LARDict or LogicalArgumentRelationshipDictionary ...

  I think it would be feasible to tweak RelEx to recognize these sorts 
of subscripts, and in this way to recognize a highly controlled English that 
would be unproblematic to map semantically...

  We would then say e.g.

  I ate dinner with_2 my fork

  I live in_2 Maryland

  I have lived_6 for_3 41 years

  (where I suppress all _1's, so that e.g. ate means ate_1)

  RelEx already happily parses the syntax of all simple 
sentences, so the only real hassle to deal with is disambiguation.   We could 
use similar hacking for reference resolution, temporal sequencing, etc.

  The terrorists_v1 robbed_v2 my house.   After that_v2, the jerks_v1 
urinated in_3 my yard.  

  I think this would be a relatively pain-free way to communicate with 
an AI that lacks the common sense to carry out disambiguation and reference 
resolution reliably.   Also, the log of communication would provide a nice 
training DB for it to use in studying disambiguation.
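
A minimal Python sketch of how such subscripted controlled English might be
tokenized into (lemma, sense) pairs; the regex, the default of sense 1 for
bare words, and the function name here are illustrative assumptions, not part
of RelEx or the LARDict:

import re

# A token is a lemma optionally followed by a numeric sense subscript,
# e.g. "with_2"; bare words are read as sense 1 (so "ate" means "ate_1").
TOKEN = re.compile(r"^([A-Za-z]+)(?:_(\d+))?$")

def read_controlled_sentence(sentence):
    """Split a controlled-English sentence into (lemma, sense) pairs."""
    pairs = []
    for token in sentence.split():
        match = TOKEN.match(token)
        if not match:
            raise ValueError("unrecognized token: " + token)
        lemma, sense = match.group(1), match.group(2)
        pairs.append((lemma.lower(), int(sense) if sense else 1))
    return pairs

print(read_controlled_sentence("I ate dinner with_2 my fork"))
# [('i', 1), ('ate', 1), ('dinner', 1), ('with', 2), ('my', 1), ('fork', 1)]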

  -- Ben G



      On Wed, Oct 22, 2008 at 12:00 PM, Mark Waser <[EMAIL PROTECTED]> 
wrote:

>> IMHO that is an almost hopeless approach, ambiguity is too 
integral to English or any natural language ... e.g preposition ambiguity

Actually, I've been making pretty good progress.  You just always 
use big words and never use small words and/or you use a specific phrase as a 
"word".  Ambiguous prepositions just disambiguate to one of 
three/four/five/more possible unambiguous words/phrases.

 

Re: [agi] Understanding and Problem Solving

2008-10-23 Thread Mark Waser
I like that.  NLU isn't AGI-complete but achieving it is (if you've got a 
vaguely mammalian-brain-like architecture   :-)
  - Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Thursday, October 23, 2008 10:18 AM
  Subject: **SPAM** Re: [agi] Understanding and Problem Solving



  On whether NLU is AGI-complete, it really depends on the particulars of the 
definition of NLU ... but according to my own working definition of NLU I agree 
that it isn't ... 

  However, as I stated before, within any vaguely mammalian-brain-like AI 
architecture, I do suspect that achieving NLU is AGI-complete...

  -- Ben G



  On Thu, Oct 23, 2008 at 10:12 AM, Dr. Matthias Heger <[EMAIL PROTECTED]> 
wrote:

I do not agree. Understanding a domain does not imply the ability to solve 
problems in that domain.

And the ability to solve problems in a domain does not even imply a generally 
deeper understanding of that domain.



Once again, consider my example of the problem of finding a path within a graph 
from node A to node B:

Program p1 (= problem solver) can find a path.

Program p2  (= expert in understanding) can verify and analyze paths.



For instance, p2 could be able to compare the length of the path for the first 
half of the nodes with the length of the path for the second half of the nodes. 
It is not necessary that p1 can do this as well.



p2 cannot necessarily find a path, and p1 cannot necessarily analyze its 
solution.
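
A minimal Python sketch of the p1/p2 distinction described above: find_path
plays the role of p1 (a breadth-first problem solver) and is_valid_path the
role of p2 (a verifier that can check a path without being able to construct
one). The toy graph, node names, and function names are chosen purely for
illustration:

from collections import deque

def find_path(graph, start, goal):
    """p1, the problem solver: breadth-first search for a path."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbour in graph.get(node, ()):
            if neighbour not in visited:
                visited.add(neighbour)
                frontier.append(path + [neighbour])
    return None

def is_valid_path(graph, path, start, goal):
    """p2, the verifier: checks a proposed path step by step."""
    if not path or path[0] != start or path[-1] != goal:
        return False
    return all(b in graph.get(a, ()) for a, b in zip(path, path[1:]))

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
path = find_path(graph, "A", "D")
print(path, is_valid_path(graph, path, "A", "D"))   # ['A', 'B', 'D'] True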



Understanding and problem solving are different things which might have a 
common subset, but it is wrong to say that the one implies the other or vice versa.



And that's the main reason why natural language understanding is not 
necessarily AGI-complete.



-Matthias





Terren Suydam [mailto:[EMAIL PROTECTED]  wrote:






  Once again, there is a depth to understanding - it's not simply a 
binary proposition.

  Don't you agree that a grandmaster understands chess better than you 
do, even if his moves are understandable to you in hindsight?

  If I'm not good at math, I might not be able to solve y=3x+4 for x, 
but I might understand that y equals 3 times x plus four. My understanding is 
superficial compared to someone who can solve for x. 

  Finally, don't you agree that understanding natural language requires 
solving problems? If not, how would you account for an AI's ability to 
understand novel metaphor? 

  Terren

  --- On Thu, 10/23/08, Dr. Matthias Heger <[EMAIL PROTECTED]> wrote:

  From: Dr. Matthias Heger <[EMAIL PROTECTED]>
  Subject: [agi] Understanding and Problem Solving
  To: agi@v2.listbox.com
  Date: Thursday, October 23, 2008, 1:47 AM

  Terren Suydam wrote:

  >>>  

  Understanding goes far beyond mere knowledge - understanding *is* the 
ability to solve problems. One's understanding of a situation or problem is 
only as deep as one's (theoretical) ability to act in such a way as to achieve 
a desired outcome. 

  <<<  



  I disagree. A grandmaster of chess can explain his decisions and I 
will understand them. Einstein could explain his theory to other physicists (at 
least a subset) and they could understand it.



  I can read a proof in mathematics and I will understand it – because 
I only have to understand (= check) every step of the proof.



  Problem solving is much, much more than understanding alone.

  Problem solving is the ability to *create* a sequence of actions to 
change a system's state from A to a desired state B.



  For example, consider the problem: find a path from A to B within a graph.

  An algorithm which can check a solution and can answer questions about 
the solution is not necessarily able to find a solution.



  -Matthias










  -- 
  Ben Goertzel, PhD
  CEO, Novamente LLC and Biomind LLC
  Director of Research, SIAI
  [EMAIL PROTECTED]

  "A human being should be able to change a diaper, plan an invasion, butcher a 
hog, conn a ship, design a building, write a sonnet, balance accounts, build a 
wall, set a bone, comfort the dying, take orders, give orders, cooperate, act 
alone, solve equations, analyze a new problem, pitch manure, program a 
computer, cook a tasty meal, fight efficiently, die gallantly. Specialization 
is for insects."  -- Robert Heinlein




--

Re: [agi] constructivist issues

2008-10-23 Thread Mark Waser

So to sum up, while you think linguistic vagueness comes from Godelian
incompleteness, I think Godelian incompleteness can't even be defined
in this context, due to linguistic vagueness.


OK.  Personally, I think that you did a good job of defining Godelian 
Incompleteness this time but arguably you did it by reference and by 
building a new semantic structure as you went along.


On the other hand, you now seem to be arguing that my thinking that 
linguistic vagueness comes from Godelian incompleteness is wrong because 
Godelian incompleteness can't be defined . . . .


I'm sort of at a loss as to how to proceed from here.  If Godelian 
Incompleteness can't be defined, then by definition I can't prove anything 
but you can't disprove anything.


This is nicely Escheresque and very Hofstadterian but . . . .


- Original Message - 
From: "Abram Demski" <[EMAIL PROTECTED]>

To: 
Sent: Thursday, October 23, 2008 11:54 AM
Subject: Re: [agi] constructivist issues



Mark,

My type 1 & 2 are probably the source of your confusion, since I
phrased them so that (as you said) they depend on "intention".
Logicians  codify the intension using semantics, so it is actually
well defined, even though it sounds messy. But, since that explanation
did not work well, let me try to put it a completely different way
rather than trying to better explain the difference between 1 and 2.

Godel's incompleteness theorem says that any logic with a sufficiently
strong semantics will be syntactically incomplete; there will be
sentences that are true according to the semantics but that, based on
the allowed proofs, are neither provable nor refutable. So Godel's
theorem is about an essential lack of match-up between proof and
truth, or as is often said, syntax and semantics.
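
For reference, the standard statement being paraphrased here (the textbook
first incompleteness theorem, not anything specific to this thread): if $T$ is
a sound, recursively axiomatizable theory containing enough arithmetic, then
there is an arithmetic sentence $G_T$ that is true in the standard model yet
satisfies

\[
  T \nvdash G_T \quad\text{and}\quad T \nvdash \lnot G_T ,
\]

so semantic truth outruns provability in the proof system.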

To apply the theorem to natural language, we've got to identify the
syntax and semantics: the notions of "proof" and "truth"
that apply. But in attempting to define these, we will run into some
serious problems: proof and truth in natural language are only
partially defined. Furthermore, those "serious problems" are (it seems
to me) precisely what you are referring to.

So to sum up, while you think linguistic vagueness comes from Godelian
incompleteness, I think Godelian incompleteness can't even be defined
in this context, due to linguistic vagueness.

--Abram

On Thu, Oct 23, 2008 at 9:54 AM, Mark Waser <[EMAIL PROTECTED]> wrote:

But, I still do not agree with the way you are using the incompleteness
theorem.


Um.  OK.  Could you point to a specific example where you disagree?  I'm a 
little at a loss here . . . .


It is important to distinguish between two different types of
incompleteness.
1. Normal Incompleteness-- a logical theory fails to completely specify
something.
2. Godelian Incompleteness-- a logical theory fails to completely specify
something, even though we want it to.


I'm also not getting this.  If I read the words, it looks like the
difference between Normal and Godelian incompleteness is based upon our
desires.  I think I'm having a complete disconnect with your intended
meaning.


However, it seems like all you need is type 1 completeness for what
you are saying.

So, Godel's theorem is way overkill here in my opinion.


Um.  OK.  So I used a bazooka on a fly?  If it was a really pesky fly and I 
didn't destroy anything else, is that wrong?  :-)

It seems as if you're not arguing with my conclusion but saying that my
arguments were way better than they needed to be (like I'm being
over-efficient?) . . . .

= = = = =

Seriously though, I'm having a complete disconnect here.  Maybe I'm just
having a bad morning but . . .  huh?   :-)
If I read the words, all I'm getting is that you disagree with the way that
I am using the theory because the theory is overkill for what is necessary.


- Original Message - From: "Abram Demski" <[EMAIL PROTECTED]>
To: 
Sent: Wednesday, October 22, 2008 9:05 PM
Subject: Re: [agi] constructivist issues


Mark,

I own and have read the book-- but my first introduction to Godel's
Theorem was Douglas Hofstadter's earlier work, Godel Escher Bach.
Since I had already been guided through the details of the proof (and
grappled with the consequences), to be honest chapter 10 you refer to
was a little boring :).

But, I still do not agree with the way you are using the incompleteness
theorem.

It is important to distinguish between two different types of
incompleteness.

1. Normal Incompleteness-- a logical theory fails to completely
specify something.
2. Godelian Incompleteness-- a logical theory fails to completely
specify something, even though we want it to.

Logicians always mean type 2 incompleteness when they use the term. To
formalize the difference between the two, the measuring stick of
"semantics" is 

Re: [agi] constructivist issues

2008-10-24 Thread Mark Waser

I'm saying Godelian completeness/incompleteness can't be easily
defined in the context of natural language, so it shouldn't be applied
there without providing justification for that application
(specifically, unambiguous definitions of "provably true" and
"semantically true" for natural language). Does that make sense, or am
I still confusing?


It makes sense but I'm arguing that you're making my point for me . . . .


agree with. Godel's incompleteness theorem tells us important
limitations of the logical approach to AI (and, indeed, any approach
that can be implemented on normal computers). It *has* however been
overused and abused throughout the years... which is one reason I
jumped on Mark...


Godel's incompleteness theorem tells us important limitations of all formal 
*and complete* approaches and systems (like logic).  It clearly means that 
any approach to AI is going to have to be open-ended (Godellian-incomplete? 
;-)


It emphatically does *not* tell us anything about "any approach that can be 
implemented on normal computers" and this is where all the people who insist 
that "because computers operate algorithmically, they will never achieve 
true general intelligence" are going wrong.


The latter argument is similar to saying that because an inductive 
mathematical proof always operates only on just the next number, it will 
never successfully prove anything about infinity.  I'm a firm believer in 
inductive proofs and in the fact that general intelligences can be implemented 
on the computers that we have today.
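
For reference, the induction schema alluded to here; every proof step handles
only the passage from n to n+1, yet the conclusion quantifies over all natural
numbers:

\[
  \bigl( P(0) \;\land\; \forall n\,\bigl( P(n) \rightarrow P(n+1) \bigr) \bigr)
  \;\rightarrow\; \forall n\, P(n)
\]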


You are correct in saying that Godel's theorem has been improperly overused 
and abused over the years, but my point was merely that AGI is Godellian 
Incomplete, natural language is Godellian Incomplete, and effectively 
AGI-Complete most probably means pretty much exactly Godellian-Incomplete. 
(Yes, that is a radically new phrasing and not necessarily quite what I 
mean/meant but . . . . )



- Original Message - 
From: "Abram Demski" <[EMAIL PROTECTED]>

To: 
Sent: Thursday, October 23, 2008 11:42 PM
Subject: Re: [agi] constructivist issues



Mark,

I'm saying Godelian completeness/incompleteness can't be easily
defined in the context of natural language, so it shouldn't be applied
there without providing justification for that application
(specifically, unambiguous definitions of "provably true" and
"semantically true" for natural language). Does that make sense, or am
I still confusing?

Matthias,

I agree with your point in this context, but I think you also mean to
imply that Godel's incompleteness theorem isn't of any importance for
artificial intelligence, which (probably pretty obviously) I wouldn't
agree with. Godel's incompleteness theorem tells us important
limitations of the logical approach to AI (and, indeed, any approach
that can be implemented on normal computers). It *has* however been
overused and abused throughout the years... which is one reason I
jumped on Mark...

--Abram

On Thu, Oct 23, 2008 at 4:07 PM, Mark Waser <[EMAIL PROTECTED]> wrote:

So to sum up, while you think linguistic vagueness comes from Godelian
incompleteness, I think Godelian incompleteness can't even be defined
in this context, due to linguistic vagueness.


OK.  Personally, I think that you did a good job of defining Godelian
Incompleteness this time but arguably you did it by reference and by
building a new semantic structure as you went along.

On the other hand, you now seem to be arguing that my thinking that
linguistic vagueness comes from Godelian incompleteness is wrong because
Godelian incompleteness can't be defined . . . .

I'm sort of at a loss as to how to proceed from here.  If Godelian
Incompleteness can't be defined, then by definition I can't prove anything
but you can't disprove anything.

This is nicely Escheresque and very Hofstadterian but . . . .


- Original Message - From: "Abram Demski" <[EMAIL PROTECTED]>
To: 
Sent: Thursday, October 23, 2008 11:54 AM
Subject: Re: [agi] constructivist issues













AW: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-24 Thread Mark Waser

No Mike. AGI must be able to discover regularities of all kind in all
domains.


Must it be able to *discover* regularities or must it be able to be taught 
and subsequently effectively use regularities?  I would argue the latter. 
(Can we get a show of hands of those who believe the former?  I think that 
it's a small minority but . . . )



If you can find a single domain where your AGI fails, it is no AGI.


Failure is an interesting evaluation.  Ben's made it quite clear that 
advanced science is a domain that stupid (if not non-exceptional) humans 
fail at.  Does that mean that most humans aren't general intelligences?



Chess is broad and narrow at the same time.
It is easy programmable and testable and humans can solve problems of this
domain using abilities which are essential for AGI. Thus chess is a good
milestone.


Chess is a good milestone because of its very difficulty.  The reason why 
humans learn chess so easily (and that is a relative term) is because they 
already have an excellent spatial domain model in place, a ton of strategy 
knowledge available from other learned domains, and the immense array of 
mental tools that we're going to need to bootstrap an AI.  Chess as a GI 
task (or, via a GI approach) is emphatically NOT easily programmable.



- Original Message - 
From: "Dr. Matthias Heger" <[EMAIL PROTECTED]>

To: 
Sent: Friday, October 24, 2008 4:09 AM
Subject: **SPAM** AW: [agi] If your AGI can't learn to play chess it is no 
AGI





No Mike. AGI must be able to discover regularities of all kind in all
domains.
If you can find a single domain where your AGI fails, it is no AGI.

Chess is broad and narrow at the same time.
It is easy programmable and testable and humans can solve problems of this
domain using abilities which are essential for AGI. Thus chess is a good
milestone.

Of course it is not sufficient for AGI. But before you think about
sufficient features, necessary abilities are good milestones to verify
whether your roadmap towards AGI will not go into a dead-end after a long
way of vague hope that future embodied experience will solve the problems
which you cannot solve today.

- Matthias



Mike wrote
P.S. Matthias seems to be cheerfully cutting his own throat here. The idea
of a single domain AGI  or pre-AGI is a contradiction in terms every which
way - not just in terms of domains/subjects or fields, but also sensory
domains.















Re: [agi] constructivist issues

2008-10-24 Thread Mark Waser
The limitations of Godelian completeness/incompleteness are a subset of 
the much stronger limitations of finite automata.


Can we get a listing of what you believe these limitations are and whether 
or not you believe that they apply to humans?


I believe that humans are constrained by *all* the limits of finite automata 
yet are general intelligences so I'm not sure of your point.


- Original Message - 
From: "Dr. Matthias Heger" <[EMAIL PROTECTED]>

To: 
Sent: Friday, October 24, 2008 4:09 AM
Subject: AW: [agi] constructivist issues


The limitations of Godelian completeness/incompleteness are a subset of the
much stronger limitations of finite automata.

If you want to build a spaceship to go to Mars, it is of no practical
relevance to think about whether it is theoretically possible to move through
wormholes in the universe.

I think this comparison is adequate to evaluate the role of Gödel's theorem
for AGI.

- Matthias




Abram Demski [mailto:[EMAIL PROTECTED] wrote


I agree with your point in this context, but I think you also mean to
imply that Godel's incompleteness theorem isn't of any importance for
artificial intelligence, which (probably pretty obviously) I wouldn't
agree with. Godel's incompleteness theorem tells us important
limitations of the logical approach to AI (and, indeed, any approach
that can be implemented on normal computers). It *has* however been
overused and abused throughout the years... which is one reason I
jumped on Mark...

--Abram










AW: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-24 Thread Mark Waser
This does not imply that people usually do not use visual patterns to solve 
chess.
It only implies that visual patterns are not necessary.


So . . . wouldn't dolphins and bats use sonar patterns to play chess?

So . . . is it *vision* or is it the most developed (for the individual), 
highest bandwidth sensory modality that allows the creation and update of a 
competent domain model?


Humans usually do use vision . . . . Sonar may prove to be more easily 
implemented for AGI.



- Original Message - 
From: "Dr. Matthias Heger" <[EMAIL PROTECTED]>

To: 
Sent: Friday, October 24, 2008 4:30 AM
Subject: **SPAM** AW: [agi] If your AGI can't learn to play chess it is no 
AGI



This does not imply that people usually do not use visual patterns to solve 
chess.
It only implies that visual patterns are not necessary.

Since I do not know any good blind chess players, I would suspect that visual 
patterns are better for chess than those patterns which are used by blind 
people.

http://www.psych.utoronto.ca/users/reingold/publications/Reingold_Charness_Pomplun_&_Stampe_press/

http://www.psychology.gatech.edu/create/pubs/reingold&charness_perception-in-chess_2005_underwood.pdf


From: Trent Waddington [mailto:[EMAIL PROTECTED] wrote

http://www.eyeway.org/inform/sp-chess.htm

Trent












Re: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-24 Thread Mark Waser
>> E.g. according to this, AIXI (with infinite computational power) but not 
>> AIXItl
>> would have general intelligence, because the latter can only find 
>> regularities
>> expressible using programs of length bounded by l and runtime bounded
>> by t



I hate AIXI because not only does it have infinite computational power but 
people also unconsciously assume that it has infinite data (or, at least, 
sufficient data to determine *everything*).

AIXI is *not* a general intelligence by any definition that I would use.  It is 
omniscient and need only be a GLUT (giant look-up table) and I argue that that 
is emphatically *NOT* intelligence.  

AIXI may have the problem-solving capabilities of general intelligence but does 
not operate under the constraints that *DEFINE* a general intelligence.  If it 
had to operate under those constraints, it would fail, fail, fail.

AIXI is useful for determining limits but horrible for drawing other types of 
conclusions about GI.




  - Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Friday, October 24, 2008 5:02 AM
  Subject: **SPAM** Re: [agi] If your AGI can't learn to play chess it is no AGI





  On Fri, Oct 24, 2008 at 4:09 AM, Dr. Matthias Heger <[EMAIL PROTECTED]> wrote:


No Mike. AGI must be able to discover regularities of all kind in all
domains.
If you can find a single domain where your AGI fails, it is no AGI.


  According to this definition **no finite computational system can be an AGI**,
  so this definition is obviously overly strong for any practical purposes

  E.g. according to this, AIXI (with infinite computational power) but not 
AIXItl
  would have general intelligence, because the latter can only find regularities
  expressible using programs of length bounded by l and runtime bounded
  by t

  Unfortunately, the pragmatic notion of AGI we need to use as researchers is
  not as simple as the above ... but fortunately, it's more achievable ;-)

  One could view the pragmatic task of AGI as being able to discover all
  regularities expressible as programs with length bounded by l and runtime
  bounded by t ... [and one can add a restriction about the resources used to
  make this discovery], but the thing is, this depends highly on the underlying
  computational model, which then can be used to encode some significant
  "domain bias."
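
A toy Python sketch of the "length bounded by l, runtime bounded by t" idea.
This is only an illustration of the bound, not AIXItl itself; the
three-operation toy language, the function names, and the example data are
assumptions chosen purely for illustration:

from itertools import product

OPS = {"inc": lambda x: x + 1, "dbl": lambda x: x * 2, "neg": lambda x: -x}

def run(program, x, max_steps):
    # Execute at most max_steps operations (the runtime bound t).
    for step, op in enumerate(program):
        if step >= max_steps:
            break
        x = OPS[op](x)
    return x

def bounded_search(pairs, max_len, max_steps):
    # Return the first program of length <= max_len (the bound l) that maps
    # every input to its output within max_steps steps, or None if none exists.
    for length in range(max_len + 1):
        for program in product(OPS, repeat=length):
            if all(run(program, x, max_steps) == y for x, y in pairs):
                return program
    return None

print(bounded_search([(1, 4), (3, 8)], max_len=3, max_steps=3))  # ('inc', 'dbl')

The only point of the sketch is that what counts as a discoverable regularity
is fixed by the chosen operation set and the l/t bounds -- the "underlying
computational model" mentioned above.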

  -- Ben G
   








