Re: AW: AW: [agi] Re: Defining AGI

2008-10-22 Thread Mark Waser

I'm also confused. This has been a strange thread. People of average
and around-average intelligence are trained as lab technicians or
database architects every day. Many of them are doing real science.
Perhaps a person with down's syndrome would do poorly in one of these
largely practical positions. Perhaps.

The consensus seems to be that there is no way to make a fool do a
scientist's job. But he can do parts of it. A scientist with a dozen
fools at hand could be a great deal more effective than a rival with
none, whereas a dozen fools on their own might not be expected to do
anything at all. So it is complicated.


Or maybe another way to rephrase it is combine it with another thread . . . 
.


Any individual piece of science is understandable/teachable to (or my 
original point -- verifiable or able to be validated by) any general 
intelligence but the totally of science combined with the world is far too 
large to . . . . (which is effectively Ben's point) 





---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


Re: AW: AW: [agi] Re: Defining AGI

2008-10-22 Thread Ben Goertzel
In brief --> You've agreed that even a stupid person is a general
> intelligence.  By "do science", I (originally and still) meant the
> amalgamation that is probably best expressed as a combination of critical
> thinking and/or the scientific method.  My point was a combination of both
> a) to be a general intelligence, you really must have a domain model and the
> rudiments of critical thinking/scientific methodology in order to be able to
> competently/effectively update it and b) if you're a general intelligence,
> even if you don't need it, you should be able to be taught the rudiments of
> critical thinking/scientific methodology.
>
> Are those points that you would agree with?
>


The rudiments, yes.

But the rudiments are not enough to perform effectively by accepted
standards ... e.g. they are not enough to avoid getting fired from your job
as a scientist... unless it's a government job ;-)

ben



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


Re: AW: AW: [agi] Re: Defining AGI

2008-10-22 Thread Mark Waser
>> However, the point I took issue with was your claim that a stupid person 
>> could be taught to effectively do science ... or (your later modification) 
>> evaluation of scientific results.
>> At the time I originally took exception to your claim, I had not read the 
>> earlier portion of the thread, and I still haven't; so I still do not know 
>> why you made the claim in the first place.

In brief --> You've agreed that even a stupid person is a general intelligence. 
 By "do science", I (originally and still) meant the amalgamation that is 
probably best expressed as a combination of critical thinking and/or the 
scientific method.  My point was a combination of both a) to be a general 
intelligence, you really must have a domain model and the rudiments of critical 
thinking/scientific methodology in order to be able to competently/effectively 
update it and b) if you're a general intelligence, even if you don't need it, 
you should be able to be taught the rudiments of critical thinking/scientific 
methodology.  

Are those points that you would agree with?  (A serious question -- and, in 
particular, if you don't agree, I'd be very interested in why since I'm trying 
to arrive at a reasonable set of distinctions that define a general 
intelligence).

In typical list fashion, rather than asking what I meant (or, in your case, 
even having the courtesy to read what came before -- so that you might have 
*some* chance of understanding what I was trying to get at -- in case my 
immediate/proximate phrasing was as awkward as I'll freely admit that it was 
;-), it effectively turned into an argument past each other when your immediate 
concept/interpretation of *science = advanced statistical interpretation* hit 
the blindingly obvious shoals of it's not easy teaching stupid people 
complicated things (I mean -- seriously, dude --do you *really* think that I'm 
going to be that far off base?  And, if not, why disrupt the conversation so 
badly by coming in in such a fashion?)..

(And I have to say --> As list owner, it would be helpful if you would set a 
good example of reading threads and trying to understand what people meant 
rather than immediately coming in and flinging insults and accusations of 
ignorance e.g.  "This is obviously spoken by someone who has never . . . . ").

So . . . . can you agree with the claim as phrased above?  (i.e. What were we 
disagreeing on again? ;-)

Oh, and the original point was part of a discussion about the necessary and 
sufficient pre-requisites for general intelligence so it made sense to 
(awkwardly :-) say that a domain model and the rudiments of critical 
thinking/scientific methodology are a (major but not complete) part of that.

  - Original Message ----- 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Tuesday, October 21, 2008 8:51 PM
  Subject: Re: AW: AW: [agi] Re: Defining AGI



  Mark W wrote:


What were we disagreeing on again?


  This conversation has drifted into interesting issues in the philosophy of 
science, most of which you and I seem to substantially agree on.

  However, the point I took issue with was your claim that a stupid person 
could be taught to effectively do science ... or (your later modification) 
evaluation of scientific results.

  At the time I originally took exception to your claim, I had not read the 
earlier portion of the thread, and I still haven't; so I still do not know why 
you made the claim in the first place.

  ben




--
agi | Archives  | Modify Your Subscription  



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


Re: AW: AW: [agi] Re: Defining AGI

2008-10-21 Thread Eric Burton
I'm also confused. This has been a strange thread. People of average
and around-average intelligence are trained as lab technicians or
database architects every day. Many of them are doing real science.
Perhaps a person with down's syndrome would do poorly in one of these
largely practical positions. Perhaps.

The consensus seems to be that there is no way to make a fool do a
scientist's job. But he can do parts of it. A scientist with a dozen
fools at hand could be a great deal more effective than a rival with
none, whereas a dozen fools on their own might not be expected to do
anything at all. So it is complicated.

Eric B


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


Re: AW: AW: [agi] Re: Defining AGI

2008-10-21 Thread Ben Goertzel
Mark W wrote:

What were we disagreeing on again?
>


This conversation has drifted into interesting issues in the philosophy of
science, most of which you and I seem to substantially agree on.

However, the point I took issue with was your claim that a stupid person
could be taught to effectively do science ... or (your later modification)
evaluation of scientific results.

At the time I originally took exception to your claim, I had not read the
earlier portion of the thread, and I still haven't; so I still do not know
why you made the claim in the first place.

ben



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


Re: AW: AW: [agi] Re: Defining AGI

2008-10-21 Thread Eric Burton
Post #101 :V


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


Re: AW: AW: [agi] Re: Defining AGI

2008-10-21 Thread Eric Burton
"Post #101 :V"

Somehow this hit the wrong thread :|


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


Re: AW: AW: [agi] Re: Defining AGI

2008-10-21 Thread Mark Waser

Incorrect things are wrapped up with correct things in peoples' minds



Mark seems to be thinking of something like the checklist that the ISP
technician walks through when you call with a problem.


Um.  No.

I'm thinking that in order to integrate a new idea into your world model, 
you first have to resolve all the conflicts that it has with the existing 
model.  That could be incredibly expensive.


(And intelligence is emphatically not linear)

But Ben is saying that for evaluating science, there ain't no such 
checklist.

The circumstances are too variable, you would need checklists to infinity.


I'm sure that Ben was saying that for doing discovery . . . . and I agree.

For evaluation, I'm not sure that we've come to closure on what either of us 
think . . . .   :-)




- Original Message - 
From: "BillK" <[EMAIL PROTECTED]>

To: 
Sent: Tuesday, October 21, 2008 5:50 PM
Subject: Re: AW: AW: [agi] Re: Defining AGI



On Tue, Oct 21, 2008 at 10:31 PM, Ben Goertzel wrote:


Incorrect things are wrapped up with correct things in peoples' minds

However, pure slowness at learning is another part of the problem ...




Mark seems to be thinking of something like the checklist that the ISP
technician walks through when you call with a problem. Even when you
know what the problem is, the tech won't listen. He insists on working
through his checklist, making you do all the irrelevant checks,
eventually by a process of elimination, ending up with what you knew
was wrong all along. Very little GI required.

But Ben is saying that for evaluating science, there ain't no such 
checklist.

The circumstances are too variable, you would need checklists to infinity.

I go along with Ben.

BillK


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?&;

Powered by Listbox: http://www.listbox.com






---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


Re: AW: AW: [agi] Re: Defining AGI

2008-10-21 Thread Mark Waser

AI!   :-)

This is what I was trying to avoid.   :-)

My objection starts with "How is a Bayes net going to do feature 
extraction?"


A Bayes net may be part of a final solution but as you even indicate, it's 
only going to be part . . . .


- Original Message - 
From: "Eric Burton" <[EMAIL PROTECTED]>

To: 
Sent: Tuesday, October 21, 2008 4:51 PM
Subject: Re: AW: AW: [agi] Re: Defining AGI



I think I see what's on the table here. Does all this mean a Bayes
net, properly motivated, could be capable of performing scientific
inquiry? Maybe in combination with a GA that tunes itself to maximize
adaptive mutations in the input based on scores from the net, which
seeks superior product designs? A Bayes net could be a sophisticated
tool for evaluating technological merit, while really just a signal
filter on a stream of candidate blueprints if what you're saying is
true.


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?&;

Powered by Listbox: http://www.listbox.com






---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


Re: AW: AW: [agi] Re: Defining AGI

2008-10-21 Thread Mark Waser
Wow!  Way too much good stuff to respond to in one e-mail.  I'll try to respond 
to more in a later e-mail but . . . . (and I also want to get your reaction to 
a few things first :-)

>> However, I still don't think that a below-average-IQ human can pragmatically 
>> (i.e., within the scope of the normal human lifetime) be taught to 
>> effectively carry out statistical evaluation of theories based on data, 
>> given the realities of how theories are formulated and how data is obtained 
>> and presented, at the present time...

Hmmm.  After some thought, I have to start by saying that it looks like you're 
equating science with statistics and I've got all sorts of negative reactions 
to that.

First -- Sure.  I certainly have to agree for a below-average-IQ human and 
could even be easily convinced for an average IQ human if they had to do it all 
themselves.  And then, statistical packages quickly turn into a two-edged sword 
where people blindly use heuristics without understanding them (p < .05 
anyone?).

A more important point, though, is that humans natively do *NOT* use statistics 
but innately use very biased, non-statistical methods that *arguably* function 
better than statistics in real world data environments.   That alone would 
convince me that I certainly don't want to say that science = statistics.

>> I am not entirely happy with Lakatos's approach either.  I find it 
>> descriptively accurate yet normatively inadequate.

Hmmm.  (again)  To me that seems to be an interesting way of rephrasing our 
previous disagreement except that you're now agreeing with me.  (Gotta love it 
:-)

You find Lakatos's approach descriptively accurate?  Fine, that's the 
scientific method.  

You find it normatively inadequate?  Well, duh (but meaning no offense :-) . . 
. . you can't codify the application of the scientific method to all cases.  I 
easily agreed to that before.

What were we disagreeing on again?


>> My own take is that science normatively **should** be based on a Bayesian 
>> approach to evaluating theories based on data

That always leads me personally to the question "Why do humans operate on the 
biases that they do rather than Bayesian statistics?"  MY *guess*  is that 
evolution COULD have implemented Bayesian methods but that the current methods 
are more efficient/effective under real world conditions (i.e. because of the 
real-world realities of feature extraction under dirty and incomplete or 
contradictory data and the fact that the Bayesian approach really does need to 
operate in an incredibly data-rich world where the features have already been 
extracted and ambiguities, other than occurrence percentages, are basically 
resolved).

**And adding different research programmes and/or priors always seems like such 
a kludge . . . . . 






  - Original Message ----- 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Tuesday, October 21, 2008 4:15 PM
  Subject: Re: AW: AW: [agi] Re: Defining AGI



  Mark,


>> As you asked for references I will give you two:

Thank you for setting a good example by including references but the 
contrast between the two is far better drawn in For and Against Method (ISBN 
0-226-46774-0).

  I read that book but didn't like it as much ... but you're right, it may be 
an easier place for folks to start...
   
Also, I would add in Polya, Popper, Russell, and Kuhn for completeness for 
those who wish to educate themselves in the fundamentals of Philosophy of 
Science 

  All good stuff indeed.
   
My view is basically that of Lakatos to the extent that I would challenge 
you to find anything in Lakatos that promotes your view over the one that I've 
espoused here.  Feyerabend's rants alternate between criticisms ultimately 
based upon the fact that what society frequently calls science is far more 
politics (see sociology of scientific knowledge); a Tintnerian/Anarchist rant 
against structure and formalism; and incorrect portrayals/extensions of Lakatos 
(just like this list ;-).  Where he is correct is in the first case where 
society is not doing science correctly (i.e. where he provided examples 
regarded as indisputable instances of progress and showed how the political 
structures of the time fought against or suppressed them).  But his rants 
against structure and formalism (or, purportedly, for freedom and 
humanitarianism ) are simply garbage in my opinion (though I'd guess 
that they appeal to you ;-).

  Feyerabend appeals to my sense of humor ... I liked the guy.  I had some 
correspondence with him when I was 18.  I wrote him a letter outlining some of 
my ideas on philosophy of mind and asking his advice on where I should go to 
grad school to study philosophy.  He replied telling me that if I wanted to be 
a real philosopher I should **not** study philosophy acade

Re: AW: AW: [agi] Re: Defining AGI

2008-10-21 Thread BillK
On Tue, Oct 21, 2008 at 10:31 PM, Ben Goertzel wrote:
>
> Incorrect things are wrapped up with correct things in peoples' minds
>
> However, pure slowness at learning is another part of the problem ...
>


Mark seems to be thinking of something like the checklist that the ISP
technician walks through when you call with a problem. Even when you
know what the problem is, the tech won't listen. He insists on working
through his checklist, making you do all the irrelevant checks,
eventually by a process of elimination, ending up with what you knew
was wrong all along. Very little GI required.

But Ben is saying that for evaluating science, there ain't no such checklist.
The circumstances are too variable, you would need checklists to infinity.

I go along with Ben.

BillK


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


Re: AW: AW: [agi] Re: Defining AGI

2008-10-21 Thread Ben Goertzel
On Tue, Oct 21, 2008 at 5:20 PM, Mark Waser <[EMAIL PROTECTED]> wrote:

>  >> But, by the time she overcame every other issue in the way of really
> understanding science, her natural lifespan would have long been
> overspent...
> You know, this is a *really* interesting point.  Effectively what you're
> saying (I believe) is that the difficulty isn't in learning but in
> UNLEARNING incorrect things that actively prevent you (via conflict) from
> learning correct things.  Is this a fair interpretation?
>

I think that's a large part of it

Incorrect things are wrapped up with correct things in peoples' minds

However, pure slowness at learning is another part of the problem ...



>
> It's also particularly interesting when you compare it to information
> theory where the sole cost is in erasing a bit, not in setting it.
>
>
> - Original Message -
> *From:* Ben Goertzel <[EMAIL PROTECTED]>
> *To:* agi@v2.listbox.com
> *Sent:* Tuesday, October 21, 2008 2:56 PM
> *Subject:* Re: AW: AW: [agi] Re: Defining AGI
>
>
> Hmm...
>
> I think that non-retarded humans are fully general intelligences in the
> following weak sense: for any fixed t and l, for any human there are some
> numbers M and T so that if the human is given amount M of external memory
> (e.g. notebooks to write on), that human could be taught to emulate AIXItl
>
> [see
> http://www.amazon.com/Universal-Artificial-Intelligence-Algorithmic-Probability/dp/3540221395/ref=sr_1_1?ie=UTF8&s=books&qid=1224614995&sr=1-1,
>  or the relevant papers on Marcus Hutter's website]
>
> where each single step of AIXItl might take up to T seconds.
>
> This is a kind of generality that I think no animals but humans have.  So,
> in that sense, we seem to be the first evolved general intelligences.
>
> But, that said, there are limits to what any one of us can learn in a fixed
> finite amount of time.   If you fix T realistically then our intelligence
> decreases dramatically.
>
> And for the time-scales relevant in human life, it may not be possible to
> teach some people to do science adequately.
>
> I am thinking for instance of a 40 yr old student I taught at the
> University of Nevada way back when (normally I taught advanced math, but in
> summers I sometimes taught remedial stuff for extra $$).  She had taken
> elementary algebra 7 times before ... and had had extensive tutoring outside
> of class ... but I still was unable to convince her of the incorrectness of
> the following reasoning: "The variable a always stands for 1.  The variable
> b always stands for 2. ... The variable z always stands for 26."   She was
> not retarded.  She seemed to have a mental block against algebra.  She could
> discuss politics and other topics with seeming intelligence.  Eventually I'm
> sure she could have been taught to overcome this block.  But, by the time
> she overcame every other issue in the way of really understanding science,
> her natural lifespan would have long been overspent...
>
> -- Ben G
>
>
> On Tue, Oct 21, 2008 at 12:33 PM, Mark Waser <[EMAIL PROTECTED]> wrote:
>
>>  >> Yes, but each of those steps is very vague, and cannot be boiled down
>> to a series of precise instructions sufficient for a stupid person to
>> consistently carry them out effectively...
>> So -- are those stupid people still general intelligences?  Or are they
>> only general intelligences to the degree to which they *can* carry them
>> out?  (because I assume that you'd agree that general intelligence is a
>> spectrum like any other type).
>>
>> There also remains the distinction (that I'd like to highlight and
>> emphasize) between a discoverer and a learner.  The cognitive
>> skills/intelligence necessary to design questions, hypotheses, experiments,
>> etc. are far in excess the cognitive skills/intelligence necessary to
>> evaluate/validate those things.  My argument was meant to be that a general
>> intelligence needs to be a learner-type rather than a discoverer-type
>> although the discoverer type is clearly more effective.
>>
>> So -- If you can't correctly evaluate data, are you a general
>> intelligence?  How do you get an accurate and effective domain model to
>> achieve competence in a domain if you don't know who or what to believe?  If
>> you don't believe in evolution, does that mean that you aren't a general
>> intelligence in that particular realm/domain (biology)?
>>
>> >> Also, those steps are heuristic and do not cover all cases.  For
>> instance step 4 requires experimentation, yet there are sciences such as
>>

Re: AW: AW: [agi] Re: Defining AGI

2008-10-21 Thread Mark Waser
>> But, by the time she overcame every other issue in the way of really 
>> understanding science, her natural lifespan would have long been overspent...

You know, this is a *really* interesting point.  Effectively what you're saying 
(I believe) is that the difficulty isn't in learning but in UNLEARNING 
incorrect things that actively prevent you (via conflict) from learning correct 
things.  Is this a fair interpretation?

It's also particularly interesting when you compare it to information theory 
where the sole cost is in erasing a bit, not in setting it.

  - Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Tuesday, October 21, 2008 2:56 PM
  Subject: Re: AW: AW: [agi] Re: Defining AGI



  Hmm...

  I think that non-retarded humans are fully general intelligences in the 
following weak sense: for any fixed t and l, for any human there are some 
numbers M and T so that if the human is given amount M of external memory (e.g. 
notebooks to write on), that human could be taught to emulate AIXItl

  [see 
http://www.amazon.com/Universal-Artificial-Intelligence-Algorithmic-Probability/dp/3540221395/ref=sr_1_1?ie=UTF8&s=books&qid=1224614995&sr=1-1
 , or the relevant papers on Marcus Hutter's website]

  where each single step of AIXItl might take up to T seconds.

  This is a kind of generality that I think no animals but humans have.  So, in 
that sense, we seem to be the first evolved general intelligences.

  But, that said, there are limits to what any one of us can learn in a fixed 
finite amount of time.   If you fix T realistically then our intelligence 
decreases dramatically.

  And for the time-scales relevant in human life, it may not be possible to 
teach some people to do science adequately.

  I am thinking for instance of a 40 yr old student I taught at the University 
of Nevada way back when (normally I taught advanced math, but in summers I 
sometimes taught remedial stuff for extra $$).  She had taken elementary 
algebra 7 times before ... and had had extensive tutoring outside of class ... 
but I still was unable to convince her of the incorrectness of the following 
reasoning: "The variable a always stands for 1.  The variable b always stands 
for 2. ... The variable z always stands for 26."   She was not retarded.  She 
seemed to have a mental block against algebra.  She could discuss politics and 
other topics with seeming intelligence.  Eventually I'm sure she could have 
been taught to overcome this block.  But, by the time she overcame every other 
issue in the way of really understanding science, her natural lifespan would 
have long been overspent...

  -- Ben G



  On Tue, Oct 21, 2008 at 12:33 PM, Mark Waser <[EMAIL PROTECTED]> wrote:

>> Yes, but each of those steps is very vague, and cannot be boiled down to 
a series of precise instructions sufficient for a stupid person to consistently 
carry them out effectively...

So -- are those stupid people still general intelligences?  Or are they 
only general intelligences to the degree to which they *can* carry them out?  
(because I assume that you'd agree that general intelligence is a spectrum like 
any other type).

There also remains the distinction (that I'd like to highlight and 
emphasize) between a discoverer and a learner.  The cognitive 
skills/intelligence necessary to design questions, hypotheses, experiments, 
etc. are far in excess the cognitive skills/intelligence necessary to 
evaluate/validate those things.  My argument was meant to be that a general 
intelligence needs to be a learner-type rather than a discoverer-type although 
the discoverer type is clearly more effective.

So -- If you can't correctly evaluate data, are you a general intelligence? 
 How do you get an accurate and effective domain model to achieve competence in 
a domain if you don't know who or what to believe?  If you don't believe in 
evolution, does that mean that you aren't a general intelligence in that 
particular realm/domain (biology)?

>> Also, those steps are heuristic and do not cover all cases.  For 
instance step 4 requires experimentation, yet there are sciences such as 
cosmology and paleontology that are not focused on experimentation.

I disagree.  They may be based upon thought experiments rather than 
physical experiments but it's still all about predictive power.  What is that 
next star/dinosaur going to look like?  What is it *never* going to look like 
(or else we need to expand or correct our theory)?  Is there anything that we 
can guess that we haven't tested/seen yet that we can verify?  What else is 
science?

My *opinion* is that the following steps are pretty inviolable.  
A.  Observe
B.  Form Hypotheses
C.  Observe More (most efficiently performed by designing competent 
experiments including actively 

Re: AW: AW: [agi] Re: Defining AGI

2008-10-21 Thread Eric Burton
I think I see what's on the table here. Does all this mean a Bayes
net, properly motivated, could be capable of performing scientific
inquiry? Maybe in combination with a GA that tunes itself to maximize
adaptive mutations in the input based on scores from the net, which
seeks superior product designs? A Bayes net could be a sophisticated
tool for evaluating technological merit, while really just a signal
filter on a stream of candidate blueprints if what you're saying is
true.


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


Re: AW: AW: [agi] Re: Defining AGI

2008-10-21 Thread Ben Goertzel
Mark,

>> As you asked for references I will give you two:
> Thank you for setting a good example by including references but the
> contrast between the two is far better drawn in *For and Against 
> Method*(ISBN
> 0-226-46774-0
> ).
>

I read that book but didn't like it as much ... but you're right, it may be
an easier place for folks to start...


> Also, I would add in Polya, Popper, Russell, and Kuhn for completeness for
> those who wish to educate themselves in the fundamentals of Philosophy of
> Science
>

All good stuff indeed.


> My view is basically that of Lakatos to the extent that I would challenge
> you to find anything in Lakatos that promotes your view over the one that
> I've espoused here.  Feyerabend's rants alternate between criticisms
> ultimately based upon the fact that what society frequently calls science
> is far more politics (see sociology of scientific knowledge); a 
> Tintnerian/Anarchist
> rant against structure and formalism; and incorrect portrayals/extensions of
> Lakatos (just like this list ;-).  Where he is correct is in the first
> case where society is not doing science correctly (i.e. where he provided
> examples regarded as indisputable instances of progress and showed how the
> political structures of the time fought against or suppressed them).  But
> his rants against structure and formalism (or, purportedly, for freedom and
> humanitarianism ) are simply garbage in my opinion (though I'd guess
> that they appeal to you ;-).
>

Feyerabend appeals to my sense of humor ... I liked the guy.  I had some
correspondence with him when I was 18.  I wrote him a letter outlining some
of my ideas on philosophy of mind and asking his advice on where I should go
to grad school to study philosophy.  He replied telling me that if I wanted
to be a real philosopher I should **not** study philosophy academically nor
become a philosophy professor, but should study science or arts and then
pursue philosophy independently.  We chatted back and forth a little after
that.

I think Feyerabend did a good job of poking holes in some simplistic
accounts of scientific process, but ultimately I found Lakatos's arguments
mostly more compelling...

Lakatos did not argue for any one scientific method, as I recall.  Rather he
argued that different research programmes come with different methods, and
that the evaluation of a given piece of data is meaningful only within a
research programme, not generically.  He argued that comparative evaluation
of scientific theories is well-defined only for theories within the same
programme, and otherwise one has to talk about comparative evaluation of
whole scientific research programmes.

I am not entirely happy with Lakatos's approach either.  I find it
descriptively accurate yet normatively inadequate.

My own take is that science normatively **should** be based on a Bayesian
approach to evaluating theories based on data, and that different research
programmes then may be viewed as corresponding to different **priors** to be
used in doing Bayesian statistical evaluations.  I think this captures a lot
of Lakatos's insights but within a sound statistical framework.  This is my
"social computational probabilistic" philosophy of science.  The "social"
part is that each social group, corresponding to a different research
programme, has its own prior distribution.

I have also, more recently, posited a sort of "universal prior", defined as
**simplicity of communication in natural language within a certain
community**.  This, I suggest, provides a baseline prior apart from any
particular research programme.

However, I still don't think that a below-average-IQ human can pragmatically
(i.e., within the scope of the normal human lifetime) be taught to
effectively carry out statistical evaluation of theories based on data,
given the realities of how theories are formulated and how data is obtained
and presented, at the present time...

-- Ben



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


Re: AW: AW: [agi] Re: Defining AGI

2008-10-21 Thread Ben Goertzel
l:BookSources/0226467740>
> ).
> Also, I would add in Polya, Popper, Russell, and Kuhn for completeness for
> those who wish to educate themselves in the fundamentals of Philosophy of
> Science
> (you didn't really forget that my undergraduate degree was a dual major of
> Biochemistry and Philosophy of Science, did you? :-).
>
> My view is basically that of Lakatos to the extent that I would challenge
> you to find anything in Lakatos that promotes your view over the one that
> I've espoused here.  Feyerabend's rants alternate between criticisms
> ultimately based upon the fact that what society frequently calls science
> is far more politics (see sociology of scientific knowledge); a 
> Tintnerian/Anarchist
> rant against structure and formalism; and incorrect portrayals/extensions of
> Lakatos (just like this list ;-).  Where he is correct is in the first
> case where society is not doing science correctly (i.e. where he provided
> examples regarded as indisputable instances of progress and showed how the
> political structures of the time fought against or suppressed them).  But
> his rants against structure and formalism (or, purportedly, for freedom and
> humanitarianism ) are simply garbage in my opinion (though I'd guess
> that they appeal to you ;-).
>
>
>
>
>
> - Original Message -
> *From:* Ben Goertzel <[EMAIL PROTECTED]>
> *To:* agi@v2.listbox.com
> *Sent:* Tuesday, October 21, 2008 10:41 AM
> *Subject:* Re: AW: AW: [agi] Re: Defining AGI
>
>
>
> On Tue, Oct 21, 2008 at 10:38 AM, Mark Waser <[EMAIL PROTECTED]> wrote:
>
>> Oh, and I *have* to laugh . . . .
>>
>> Hence the wiki entry on scientific method:
>>> "Scientific method is not a recipe: it requires intelligence,
>>> >imagination,
>>>
>> and creativity"
>>
>>> http://en.wikipedia.org/wiki/Scientific_method
>>> This is basic stuff.
>>>
>>
>> In the cited wikipedia entry, the phrase "Scientific method is not a
>> recipe: it requires intelligence, imagination, and creativity" is
>> immediately followed by just such a recipe for the scientific method
>>
>> A linearized, pragmatic scheme of the four points above is sometimes
>> offered as a guideline for proceeding:[25]
>
>
> Yes, but each of those steps is very vague, and cannot be boiled down to a
> series of precise instructions sufficient for a stupid person to
> consistently carry them out effectively...
>
> Also, those steps are heuristic and do not cover all cases.  For instance
> step 4 requires experimentation, yet there are sciences such as cosmology
> and paleontology that are not focused on experimentation.
>
> As you asked for references I will give you two:
>
> Paul Feyerabend, Against Method (a polemic I don't fully agree with, but
> his points need to be understood by those who will talk about scientific
> method)
>
> Imre Lakatos, The Methodology of Scientific Research Programmes (which I do
> largely agree with ... he's a very subtle thinker...)
>
>
>
> ben g
>
>
>>  1.. Define the question
>>  2.. Gather information and resources (observe)
>>  3.. Form hypothesis
>>  4.. Perform experiment and collect data
>>  5.. Analyze data
>>  6.. Interpret data and draw conclusions that serve as a starting point
>> for new hypothesis
>>  7.. Publish results
>>  8.. Retest (frequently done by other scientists)
>>
>>
>>
>> ----- Original Message - From: "Dr. Matthias Heger" <[EMAIL PROTECTED]>
>>
>> To: 
>> Sent: Monday, October 20, 2008 5:00 PM
>> Subject: AW: AW: AW: [agi] Re: Defining AGI
>>
>>
>> If MW would be scientific then he would not have asked Ben to prove that
>> MWs
>> hypothesis is wrong.
>> The person who has to prove something is the person who creates the
>> hypothesis.
>> And MW has given not a tiny argument for his hypothesis that a natural
>> language understanding system can easily be a scientist.
>>
>> -Matthias
>>
>> -Ursprüngliche Nachricht-
>> Von: Eric Burton [mailto:[EMAIL PROTECTED]
>> Gesendet: Montag, 20. Oktober 2008 22:48
>> An: agi@v2.listbox.com
>> Betreff: Re: AW: AW: [agi] Re: Defining AGI
>>
>>
>> You and MW are clearly as philosophically ignorant, as I am in AI.
>>>
>>
>> But MW and I have not agreed on anything.
>>
>> Hence the wiki entry on scientific method:
>>> "Scientific method is not a recipe: it requires intelligence,
>>> >imagination,
>>>
>> and c

AW: AW: AW: [agi] Re: Defining AGI

2008-10-21 Thread Dr. Matthias Heger
Mark Waser answered to 

>> I don't say that anything is easy.

:

Direct quote cut and paste from *your* e-mail . . . .
--
From: Dr. Matthias Heger
To: agi@v2.listbox.com
Sent: Sunday, October 19, 2008 2:19 PM
Subject: AW: AW: [agi] Re: Defining AGI


The process of translating patterns into language should be easier than the 
process of creating patterns or manipulating patterns. Therefore I say that 
language understanding is easy.

--





Clearly you DO say that language understanding is easy.
<<<<<<<<

Your claim was that I have said that *anything* is easy.
This is a wrong generalization which is usually known in rhetoric.


I think, often you are less scientific than those people who you blame not
to be scientific.
I will give up to discuss with you.









---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


Re: AW: AW: [agi] Re: Defining AGI

2008-10-21 Thread Mark Waser

Marc Walser wrote


Try to get the name right.  It's just common competence and courtesy.

Before you ask for counter examples you should *first* give some 
arguments which supports your hypothesis. This was my point.


And I believe that I did.  And I note that you didn't even address the fact 
that I did so again in the e-mail you are quoting.  You seem to want to 
address trivia rather than the meat of the argument.  What don't you address 
the core instead of throwing up a smokescreen?



Regarding your example with Darwin:


What example with Darwin?

First, I'd appreciate it if you'd drop the strawman.  You are the only 
one who keeps insisting that anything is "easy".
 Is this a scientific discussion from you? No. You use rhetoric and 
nothing else.


And baseless statements like "You use rhetoric and nothing else" are a 
scientific discussion.  Again with the smokescreen.



I don't say that anything is easy.


Direct quote cut and paste from *your* e-mail . . . .
--
From: Dr. Matthias Heger
To: agi@v2.listbox.com
Sent: Sunday, October 19, 2008 2:19 PM
Subject: AW: AW: [agi] Re: Defining AGI


The process of translating patterns into language should be easier than the 
process of creating patterns or manipulating patterns. Therefore I say that 
language understanding is easy.


--





Clearly you DO say that language understanding is easy.








This is the first time you speak about pre-requisites.


Direct quote cut and paste from *my* e-mail . . . . .

- Original Message - 
From: "Mark Waser" <[EMAIL PROTECTED]>

To: 
Sent: Sunday, October 19, 2008 4:01 PM
Subject: Re: AW: AW: [agi] Re: Defining AGI



I don't think that learning of language is the entire point. If I have
only
learned language I still cannot create anything. A human who can
understand
language is by far still no good scientist. Intelligence means the 
ability

to solve problems. Which problems can a system solve if it can nothing
else
than language understanding?


Many or most people on this list believe that learning language is an
AGI-complete task.  What this means is that the skills necessary for
learning a language are necessary and sufficient for learning any other
task.  It is not that language understanding gives general intelligence
capabilities, but that the pre-requisites for language understanding are
general intelligence (or, that language understanding is isomorphic to
general intelligence in the same fashion that all NP-complete problems are
isomorphic).  Thus, the argument actually is that a system that "can do
nothing else than language understanding" is an oxymoron.


-




Clearly I DO talk about the pre-requisites for language understanding.






Dude.  Seriously.

First you deny your own statements and then claim that I didn't previously 
mention something that it is easily provable that I did (at the top of an 
e-mail).  Check the archives.  It's all there in bits and bytes.


Then you end with a funky pseudo-definition that "Understanding does not 
imply the ability to create something new or to apply knowledge."   What 
*does* understanding mean if you can't apply it?  What value does it have?





---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


Re: AW: AW: [agi] Re: Defining AGI

2008-10-21 Thread Mark Waser
>> Yes, but each of those steps is very vague, and cannot be boiled down to a 
>> series of precise instructions sufficient for a stupid person to 
>> consistently carry them out effectively...

So -- are those stupid people still general intelligences?  Or are they only 
general intelligences to the degree to which they *can* carry them out?  
(because I assume that you'd agree that general intelligence is a spectrum like 
any other type).

There also remains the distinction (that I'd like to highlight and emphasize) 
between a discoverer and a learner.  The cognitive skills/intelligence 
necessary to design questions, hypotheses, experiments, etc. are far in excess 
the cognitive skills/intelligence necessary to evaluate/validate those things.  
My argument was meant to be that a general intelligence needs to be a 
learner-type rather than a discoverer-type although the discoverer type is 
clearly more effective.

So -- If you can't correctly evaluate data, are you a general intelligence?  
How do you get an accurate and effective domain model to achieve competence in 
a domain if you don't know who or what to believe?  If you don't believe in 
evolution, does that mean that you aren't a general intelligence in that 
particular realm/domain (biology)?

>> Also, those steps are heuristic and do not cover all cases.  For instance 
>> step 4 requires experimentation, yet there are sciences such as cosmology 
>> and paleontology that are not focused on experimentation.

I disagree.  They may be based upon thought experiments rather than physical 
experiments but it's still all about predictive power.  What is that next 
star/dinosaur going to look like?  What is it *never* going to look like (or 
else we need to expand or correct our theory)?  Is there anything that we can 
guess that we haven't tested/seen yet that we can verify?  What else is science?

My *opinion* is that the following steps are pretty inviolable.  
A.  Observe
B.  Form Hypotheses
C.  Observe More (most efficiently performed by designing competent 
experiments including actively looking for disproofs)
D.  Evaluate Hypotheses
E.  Add Evaluation to Knowledge-Base (Tentatively) but continue to test
F.  Return to step A with additional leverage

If you were forced to codify the "hard core" of the scientific method, how 
would you do it?

>> As you asked for references I will give you two:

Thank you for setting a good example by including references but the contrast 
between the two is far better drawn in For and Against Method (ISBN 
0-226-46774-0).
Also, I would add in Polya, Popper, Russell, and Kuhn for completeness for 
those who wish to educate themselves in the fundamentals of Philosophy of 
Science 
(you didn't really forget that my undergraduate degree was a dual major of 
Biochemistry and Philosophy of Science, did you? :-).

My view is basically that of Lakatos to the extent that I would challenge you 
to find anything in Lakatos that promotes your view over the one that I've 
espoused here.  Feyerabend's rants alternate between criticisms ultimately 
based upon the fact that what society frequently calls science is far more 
politics (see sociology of scientific knowledge); a Tintnerian/Anarchist rant 
against structure and formalism; and incorrect portrayals/extensions of Lakatos 
(just like this list ;-).  Where he is correct is in the first case where 
society is not doing science correctly (i.e. where he provided examples 
regarded as indisputable instances of progress and showed how the political 
structures of the time fought against or suppressed them).  But his rants 
against structure and formalism (or, purportedly, for freedom and 
humanitarianism ) are simply garbage in my opinion (though I'd guess 
that they appeal to you ;-).




  - Original Message ----- 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Tuesday, October 21, 2008 10:41 AM
  Subject: Re: AW: AW: [agi] Re: Defining AGI





  On Tue, Oct 21, 2008 at 10:38 AM, Mark Waser <[EMAIL PROTECTED]> wrote:

Oh, and I *have* to laugh . . . .



  Hence the wiki entry on scientific method:
  "Scientific method is not a recipe: it requires intelligence, 
>imagination,

and creativity"

  http://en.wikipedia.org/wiki/Scientific_method
  This is basic stuff.



In the cited wikipedia entry, the phrase "Scientific method is not a 
recipe: it requires intelligence, imagination, and creativity" is immediately 
followed by just such a recipe for the scientific method

A linearized, pragmatic scheme of the four points above is sometimes 
offered as a guideline for proceeding:[25]

  Yes, but each of those steps is very vague, and cannot be boiled down to a 
series of precise instructions sufficient for a stupid person to consistently 
carry them out effectively...

AW: AW: AW: [agi] Re: Defining AGI

2008-10-21 Thread Dr. Matthias Heger
Marc Walser wrote

>>>
Science is done by comparing hypotheses to data.  Frequently, the fastest 
way to handle a hypothesis is to find a counter-example so that it can be 
discarded (or extended appropriately to handle the new case).  How is asking

for a counter-example unscientific?
<<<

Before you ask for counter examples you should *first* give some arguments
which supports your hypothesis. This was my point. If everyone would make
wild hypotheses and ask other scientists to find counter-examples then we
would end up in a explosion of number of hypotheses. But if you would first
show some arguments which support your hypothesis then you give reasons to
the scientific community why it is worth to use some time to think about the
hypothesis. Regarding your example with Darwin: Darwin had gathered signs of
evidence which supports his hypothesis *first* .


>>>
First, I'd appreciate it if you'd drop the strawman.  You are the only one 
who keeps insisting that anything is "easy".
<<<

Is this a scientific discussion from you? No. You use rhetoric and nothing
else.
I don't say that anything is easy. 

>>>
Second, my hypothesis is more correctly stated that the pre-requisites for a

natural language understanding system are necessary and sufficient for a 
scientist because both are AGI-complete.  Again, I would appreciate it if 
you could correctly represent it in the future.
<<<

This is the first time you speak about pre-requisites. Your whole hypothesis
changes with these pre-requisites. If you would be scientific you would
qualify what are your pre-requisites.

>>>
So, for simplicity, why don't we just say
scientist = understanding
<<<

Understanding does not imply the ability to create something new or to apply
knowledge. 
Furthermore natural language understanding does not imply understanding
*general* domains. There is much evidence that the ability to understand
natural language does not imply to the understanding of mathematics. Not to
speak from the ability to create mathematics.

>>>
Now, for a counter-example to my initial hypothesis, why don't you explain 
how you can have natural language understanding without understanding (which

equals scientist ;-).
<<<

Understanding does not equal scientist. 
The claim that natural language understanding needs understanding is
trivial. This wasn't your initial hypothesis.






- Original Message - 
From: "Dr. Matthias Heger" <[EMAIL PROTECTED]>
To: 
Sent: Monday, October 20, 2008 5:00 PM
Subject: AW: AW: AW: [agi] Re: Defining AGI


If MW would be scientific then he would not have asked Ben to prove that MWs
hypothesis is wrong.
The person who has to prove something is the person who creates the
hypothesis.
And MW has given not a tiny argument for his hypothesis that a natural
language understanding system can easily be a scientist.

-Matthias

-Ursprüngliche Nachricht-
Von: Eric Burton [mailto:[EMAIL PROTECTED]
Gesendet: Montag, 20. Oktober 2008 22:48
An: agi@v2.listbox.com
Betreff: Re: AW: AW: [agi] Re: Defining AGI

> You and MW are clearly as philosophically ignorant, as I am in AI.

But MW and I have not agreed on anything.

>Hence the wiki entry on scientific method:
>"Scientific method is not a recipe: it requires intelligence, >imagination,
and creativity"
>http://en.wikipedia.org/wiki/Scientific_method
>This is basic stuff.

And this is fundamentally what I was trying to say.

I don't think of myself as "philosophically ignorant". I believe
you've reversed the intention of my post. It's probably my fault for
choosing my words poorly. I could have conveyed the nuances of the
argument better as I understood them. Next time!


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription:
https://www.listbox.com/member/?&;
Powered by Listbox: http://www.listbox.com



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?&;
Powered by Listbox: http://www.listbox.com




---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription:
https://www.listbox.com/member/?&;
Powered by Listbox: http://www.listbox.com



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


Re: AW: AW: [agi] Re: Defining AGI

2008-10-21 Thread Ben Goertzel
On Tue, Oct 21, 2008 at 10:38 AM, Mark Waser <[EMAIL PROTECTED]> wrote:

> Oh, and I *have* to laugh . . . .
>
>  Hence the wiki entry on scientific method:
>> "Scientific method is not a recipe: it requires intelligence,
>> >imagination,
>>
> and creativity"
>
>> http://en.wikipedia.org/wiki/Scientific_method
>> This is basic stuff.
>>
>
> In the cited wikipedia entry, the phrase "Scientific method is not a
> recipe: it requires intelligence, imagination, and creativity" is
> immediately followed by just such a recipe for the scientific method
>
> A linearized, pragmatic scheme of the four points above is sometimes
> offered as a guideline for proceeding:[25]


Yes, but each of those steps is very vague, and cannot be boiled down to a
series of precise instructions sufficient for a stupid person to
consistently carry them out effectively...

Also, those steps are heuristic and do not cover all cases.  For instance
step 4 requires experimentation, yet there are sciences such as cosmology
and paleontology that are not focused on experimentation.

As you asked for references I will give you two:

Paul Feyerabend, Against Method (a polemic I don't fully agree with, but his
points need to be understood by those who will talk about scientific method)

Imre Lakatos, The Methodology of Scientific Research Programmes (which I do
largely agree with ... he's a very subtle thinker...)



ben g


>  1.. Define the question
>  2.. Gather information and resources (observe)
>  3.. Form hypothesis
>  4.. Perform experiment and collect data
>  5.. Analyze data
>  6.. Interpret data and draw conclusions that serve as a starting point for
> new hypothesis
>  7.. Publish results
>  8.. Retest (frequently done by other scientists)
>
>
>
> ----- Original Message - From: "Dr. Matthias Heger" <[EMAIL PROTECTED]>
> To: 
> Sent: Monday, October 20, 2008 5:00 PM
> Subject: AW: AW: AW: [agi] Re: Defining AGI
>
>
> If MW would be scientific then he would not have asked Ben to prove that
> MWs
> hypothesis is wrong.
> The person who has to prove something is the person who creates the
> hypothesis.
> And MW has given not a tiny argument for his hypothesis that a natural
> language understanding system can easily be a scientist.
>
> -Matthias
>
> -Ursprüngliche Nachricht-
> Von: Eric Burton [mailto:[EMAIL PROTECTED]
> Gesendet: Montag, 20. Oktober 2008 22:48
> An: agi@v2.listbox.com
> Betreff: Re: AW: AW: [agi] Re: Defining AGI
>
>
>  You and MW are clearly as philosophically ignorant, as I am in AI.
>>
>
> But MW and I have not agreed on anything.
>
>  Hence the wiki entry on scientific method:
>> "Scientific method is not a recipe: it requires intelligence,
>> >imagination,
>>
> and creativity"
>
>> http://en.wikipedia.org/wiki/Scientific_method
>> This is basic stuff.
>>
>
> And this is fundamentally what I was trying to say.
>
> I don't think of myself as "philosophically ignorant". I believe
> you've reversed the intention of my post. It's probably my fault for
> choosing my words poorly. I could have conveyed the nuances of the
> argument better as I understood them. Next time!
>
>
> ---
> agi
> Archives: https://www.listbox.com/member/archive/303/=now
> RSS Feed: https://www.listbox.com/member/archive/rss/303/
> Modify Your Subscription:
> https://www.listbox.com/member/?&;
> Powered by Listbox: http://www.listbox.com
>
>
>
> ---
> agi
> Archives: https://www.listbox.com/member/archive/303/=now
> RSS Feed: https://www.listbox.com/member/archive/rss/303/
> Modify Your Subscription: https://www.listbox.com/member/?&;
>
> Powered by Listbox: http://www.listbox.com
>
>
>
>
> ---
> agi
> Archives: https://www.listbox.com/member/archive/303/=now
> RSS Feed: https://www.listbox.com/member/archive/rss/303/
> Modify Your Subscription:
> https://www.listbox.com/member/?&;
> Powered by Listbox: http://www.listbox.com
>



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"Nothing will ever be attempted if all possible objections must be first
overcome "  - Dr Samuel Johnson



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


Re: AW: AW: [agi] Re: Defining AGI

2008-10-21 Thread Mark Waser

Oh, and I *have* to laugh . . . .


Hence the wiki entry on scientific method:
"Scientific method is not a recipe: it requires intelligence, >imagination,

and creativity"

http://en.wikipedia.org/wiki/Scientific_method
This is basic stuff.


In the cited wikipedia entry, the phrase "Scientific method is not a recipe: 
it requires intelligence, imagination, and creativity" is immediately 
followed by just such a recipe for the scientific method


A linearized, pragmatic scheme of the four points above is sometimes offered 
as a guideline for proceeding:[25]

 1.. Define the question
 2.. Gather information and resources (observe)
 3.. Form hypothesis
 4.. Perform experiment and collect data
 5.. Analyze data
 6.. Interpret data and draw conclusions that serve as a starting point for 
new hypothesis

 7.. Publish results
 8.. Retest (frequently done by other scientists)



- Original Message - 
From: "Dr. Matthias Heger" <[EMAIL PROTECTED]>

To: 
Sent: Monday, October 20, 2008 5:00 PM
Subject: AW: AW: AW: [agi] Re: Defining AGI


If MW would be scientific then he would not have asked Ben to prove that MWs
hypothesis is wrong.
The person who has to prove something is the person who creates the
hypothesis.
And MW has given not a tiny argument for his hypothesis that a natural
language understanding system can easily be a scientist.

-Matthias

-Ursprüngliche Nachricht-
Von: Eric Burton [mailto:[EMAIL PROTECTED]
Gesendet: Montag, 20. Oktober 2008 22:48
An: agi@v2.listbox.com
Betreff: Re: AW: AW: [agi] Re: Defining AGI


You and MW are clearly as philosophically ignorant, as I am in AI.


But MW and I have not agreed on anything.


Hence the wiki entry on scientific method:
"Scientific method is not a recipe: it requires intelligence, >imagination,

and creativity"

http://en.wikipedia.org/wiki/Scientific_method
This is basic stuff.


And this is fundamentally what I was trying to say.

I don't think of myself as "philosophically ignorant". I believe
you've reversed the intention of my post. It's probably my fault for
choosing my words poorly. I could have conveyed the nuances of the
argument better as I understood them. Next time!


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription:
https://www.listbox.com/member/?&;
Powered by Listbox: http://www.listbox.com



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?&;

Powered by Listbox: http://www.listbox.com




---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


Re: AW: AW: [agi] Re: Defining AGI

2008-10-21 Thread Eric Burton
> My apologies if I've misconstrued you. Regardless of any fault, the "basic"
> point was/is important. Even if a considerable percentage of science's
> conclusions are v. hard, there is no definitive scientific method for
> reaching them .

I think I understand.


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


Re: AW: AW: [agi] Re: Defining AGI

2008-10-21 Thread Mark Waser
If MW would be scientific then he would not have asked Ben to prove that 
MWs hypothesis is wrong.


Science is done by comparing hypotheses to data.  Frequently, the fastest 
way to handle a hypothesis is to find a counter-example so that it can be 
discarded (or extended appropriately to handle the new case).  How is asking 
for a counter-example unscientific?



The person who has to prove something is the person who creates the 
hypothesis.


Ah.  Like the theory of evolution is conclusively proved?  The scientific 
method is about predictive power not proof.  Try reading the reference that 
I gave Ben.  (And if you've got something to prove, maybe the scientific 
method isn't so good for you.  :-)



And MW has given not a tiny argument for his hypothesis that a natural 
language understanding system can easily be a scientist.


First, I'd appreciate it if you'd drop the strawman.  You are the only one 
who keeps insisting that anything is "easy".


Second, my hypothesis is more correctly stated that the pre-requisites for a 
natural language understanding system are necessary and sufficient for a 
scientist because both are AGI-complete.  Again, I would appreciate it if 
you could correctly represent it in the future.


Third, while I haven't given a tiny argument, I have given a reasonably 
short logical chain which I'll attempt to rephrase yet again.


Science is all about modeling the world and predicting future data.
The scientific method simply boils down to making a theory (of how to change 
or enhance your world model) and seeing if it is supported (not proved!) or 
disproved by future data.
Ben's and my disagreement initially came down to whether a scientist was an 
Einstein (his view) or merely capable of competently reviewing data to see 
if it supports, disproves, or isn't relevant to the predictive power of a 
theory (my view).
Later, he argued that most humans aren't even competent to review data and 
can't be made competent.
I agreed with his assessment that many scientists don't competently review 
data (inappropriate over-reliance on the heuristic p < 0.5 without 
understanding what it truly means) but disagreed as to whether the average 
human could be *taught*
Ben's argument was that the scientific method couldn't be codified well 
enough to be taught.  My argument was that the method was codified 
sufficiently but that the application of the method was clearly context 
dependent and could be unboundedly complex.


But this is actually a distraction from some more important arguments . . . 
.
The $1,000,000 question is "If a human can't be taught something, is that 
human a general intelligence?"
The $5,000,000 question is "If a human can't competently follow a recipe in 
a cookbook, do they have natural language understanding?"


Fundamentally, this either comes down to a disagreement about what a general 
intelligence is and/or what understanding and meaning are.
Currently, I'm using the definition that a general intelligence is one that 
can achieve competence in any domain in a reasonable length of time.

To achieve competence in a domain, you have to "understand" that domain
My definition of understanding is that you have a mental model of that 
domain that has predictive power in that domain and which you can update as 
you learn about that domain.

(You could argue with this definition if you like)
Or, in other words, you have to be a competent scientist in that domain --  
or else, you don't truly "understand" that domain


So, for simplicity, why don't we just say
   scientist = understanding

Now, for a counter-example to my initial hypothesis, why don't you explain 
how you can have natural language understanding without understanding (which 
equals scientist ;-).





----- Original Message - 
From: "Dr. Matthias Heger" <[EMAIL PROTECTED]>

To: 
Sent: Monday, October 20, 2008 5:00 PM
Subject: AW: AW: AW: [agi] Re: Defining AGI


If MW would be scientific then he would not have asked Ben to prove that MWs
hypothesis is wrong.
The person who has to prove something is the person who creates the
hypothesis.
And MW has given not a tiny argument for his hypothesis that a natural
language understanding system can easily be a scientist.

-Matthias

-----Ursprüngliche Nachricht-
Von: Eric Burton [mailto:[EMAIL PROTECTED]
Gesendet: Montag, 20. Oktober 2008 22:48
An: agi@v2.listbox.com
Betreff: Re: AW: AW: [agi] Re: Defining AGI


You and MW are clearly as philosophically ignorant, as I am in AI.


But MW and I have not agreed on anything.


Hence the wiki entry on scientific method:
"Scientific method is not a recipe: it requires intelligence, >imagination,

and creativity"

http://en.wikipedia.org/wiki/Scientific_method
This is basic stuff.


And this is fundamentally what I was trying to s

Re: AW: AW: [agi] Re: Defining AGI

2008-10-20 Thread Mike Tintner


Eric: I could have conveyed the nuances of the

argument better as I understood them.



Eric,

My apologies if I've misconstrued you. Regardless of any fault, the "basic" 
point was/is important. Even if a considerable percentage of science's 
conclusions are v. hard, there is no definitive scientific method for 
reaching them . 





---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


AW: AW: AW: AW: AW: [agi] Re: Defining AGI

2008-10-20 Thread Dr. Matthias Heger


>>>

A conceptual framework starts with knowledge representation. Thus a symbol S 
refers to a persistent pattern P which is, in some way or another, a reflection 
of the agent's environment and/or a composition of other symbols. Symbols are 
related to each other in various ways. These relations (such as, "is a property 
of", "contains", "is associated with") are either given or emerge in some kind 
of self-organizing dynamic.

A causal model M is a set of symbols such that the activation of symbols 
S1...Sn are used to infer the future activation of symbol S'. The rules of 
inference are either given or emerge in some kind of self-organizing dynamic.

A conceptual framework refers to the whole set of symbols and their relations, 
which includes all causal models and rules of inference.

Such a framework is necessary for language comprehension because meaning is 
grounded in that framework. For example, the word 'flies' has at least two 
totally distinct meanings, and each is unambiguously evoked only when given the 
appropriate conceptual context, as in the classic example "time flies like an 
arrow; fruit flies like a banana."  "time" and "fruit" have very different sets 
of relations to other patterns, and these relations can in principle be 
employed to disambiguate the intended meaning of "flies" and "like".

If you think language comprehension is possible with just statistical methods, 
perhaps you can show how they would work to disambiguate the above example.
<<<



I agree with your framework but it is in my approach a part of nonlinguistic D 
which is separated from L. D and L interact only during the process of 
translation but even in this process D and L are separated.




>>>
OK, let's look at all 3 cases:

1. Primitive language *causes* reduced abstraction faculties
2. Reduced abstraction faculties *causes* primitive language
3. Primitive language and reduced abstraction faculties are merely correlated; 
neither strictly causes the other

I've been arguing for (1), saying that language and intelligence are 
inseparable (for social intelligences). The sophistication of one's language 
bounds the sophistication of one's conceptual framework. 

In (2), one must be saying with the Piraha that they are cognitively deficient 
for another reason, and their language is primitive as a result of that 
deficiency. Professor Daniel Everett, the anthropological linguist who first 
described the Piraha grammar, dismissed this possibility in his paper "Cultural 
Constraints on Grammar and Cognition in Piraha˜" (see 
http://www.eva.mpg.de/psycho/pdf/Publications_2005_PDF/Commentary_on_D.Everett_05.pdf):

"... [the idea that] the Piraha˜ are sub-
standard mentally—is easily disposed of. The source
of this collective conceptual deficit could only be ge-
netics, health, or culture. Genetics can be ruled out
because the Piraha˜ people (according to my own ob-
servations and Nimuendajú’s have long intermarried
with outsiders. In fact, they have intermarried to the
extent that no well-defined phenotype other than stat-
ure can be identified. Piraha˜s also enjoy a good and
varied diet of fish, game, nuts, legumes, and fruits, so
there seems to be no dietary basis for any inferiority.
We are left, then, with culture, and here my argument
is exactly that their grammatical differences derive
from cultural values. I am not, however, making a
claim about Piraha˜ conceptual abilities but about their
expression of certain concepts linguistically, and this
is a crucial difference."

This quote thus also addresses (3), that the language and the conceptual 
deficiency are merely correlated. Everett seems to be arguing for this point, 
that their language and conceptual abilities are both held back by their 
culture. There are questions about the dynamic between culture and language, 
but that's all speculative.

I realize this leaves the issue unresolved. I include it because I raised the 
Piraha example and it would be disingenuous of me to not mention Everett's 
interpretation.

<<<

Everett's interpretation is that culture is responsible for reduced abstraction 
facilities. I agree with this. But this does not imply your claim (1) that 
language causes the reduced facilities. The reduced number of cultural 
experiences in which abstraction is important is responsible for the reduced 
abstraction facilities.

 

>>>
Of course, but our opinions have consequences, and in debating the consequences 
we may arrive at a situation in which one of our positions appears absurd, 
contradictory, or totally improbable. That is why we debate about what is 
ultimately speculative, because sometimes we can show the falsehood of a 
position without empirical facts.

On to your example. The ability to do algebra is hardly a test of general 
intelligence, as software like Mathematica can do it. One could say that the 
ability to be *taught* how to do algebra reflects general intelligence, but 
again, that involves learning the *language

Re: AW: AW: AW: AW: [agi] Re: Defining AGI

2008-10-20 Thread Terren Suydam

--- On Mon, 10/20/08, Dr. Matthias Heger <[EMAIL PROTECTED]> wrote:
> Conceptual framework is not well defined. Therefore I
> can't agree or
> disagree.
> What do you mean with causal model?

A conceptual framework starts with knowledge representation. Thus a symbol S 
refers to a persistent pattern P which is, in some way or another, a reflection 
of the agent's environment and/or a composition of other symbols. Symbols are 
related to each other in various ways. These relations (such as, "is a property 
of", "contains", "is associated with") are either given or emerge in some kind 
of self-organizing dynamic.

A causal model M is a set of symbols such that the activation of symbols 
S1...Sn are used to infer the future activation of symbol S'. The rules of 
inference are either given or emerge in some kind of self-organizing dynamic.

A conceptual framework refers to the whole set of symbols and their relations, 
which includes all causal models and rules of inference.

Such a framework is necessary for language comprehension because meaning is 
grounded in that framework. For example, the word 'flies' has at least two 
totally distinct meanings, and each is unambiguously evoked only when given the 
appropriate conceptual context, as in the classic example "time flies like an 
arrow; fruit flies like a banana."  "time" and "fruit" have very different sets 
of relations to other patterns, and these relations can in principle be 
employed to disambiguate the intended meaning of "flies" and "like".

If you think language comprehension is possible with just statistical methods, 
perhaps you can show how they would work to disambiguate the above example.

> In this example we observe two phenomena:
> 1. primitive language compared to all modern languages
> 2. and as a people they exhibit barely any of the hallmarks
> of abstract
> reasoning
> 
> From this we can neither conclude that 1 causes 2 nor that
> 2 causes 1.

OK, let's look at all 3 cases:

1. Primitive language *causes* reduced abstraction faculties
2. Reduced abstraction faculties *causes* primitive language
3. Primitive language and reduced abstraction faculties are merely correlated; 
neither strictly causes the other

I've been arguing for (1), saying that language and intelligence are 
inseparable (for social intelligences). The sophistication of one's language 
bounds the sophistication of one's conceptual framework. 

In (2), one must be saying with the Piraha that they are cognitively deficient 
for another reason, and their language is primitive as a result of that 
deficiency. Professor Daniel Everett, the anthropological linguist who first 
described the Piraha grammar, dismissed this possibility in his paper "Cultural 
Constraints on Grammar and Cognition in Piraha˜" (see 
http://www.eva.mpg.de/psycho/pdf/Publications_2005_PDF/Commentary_on_D.Everett_05.pdf):

"... [the idea that] the Piraha˜ are sub-
standard mentally—is easily disposed of. The source
of this collective conceptual deficit could only be ge-
netics, health, or culture. Genetics can be ruled out
because the Piraha˜ people (according to my own ob-
servations and Nimuendajú’s have long intermarried
with outsiders. In fact, they have intermarried to the
extent that no well-defined phenotype other than stat-
ure can be identified. Piraha˜s also enjoy a good and
varied diet of fish, game, nuts, legumes, and fruits, so
there seems to be no dietary basis for any inferiority.
We are left, then, with culture, and here my argument
is exactly that their grammatical differences derive
from cultural values. I am not, however, making a
claim about Piraha˜ conceptual abilities but about their
expression of certain concepts linguistically, and this
is a crucial difference."

This quote thus also addresses (3), that the language and the conceptual 
deficiency are merely correlated. Everett seems to be arguing for this point, 
that their language and conceptual abilities are both held back by their 
culture. There are questions about the dynamic between culture and language, 
but that's all speculative.

I realize this leaves the issue unresolved. I include it because I raised the 
Piraha example and it would be disingenuous of me to not mention Everett's 
interpretation.

> >>>
> I'm saying that if an AI understands & speaks
> natural language, you've
> solved AGI - your Nobel will be arriving soon.  
> <<<
> 
> This is just your opinion. I disagree that natural language
> understanding
> necessarily implies AGI. For instance, I doubt that anyone
> can prove that
> any system which understands natural language is
> necessarily able to solve
> the simple equation x *3 = y for a given y.
> And if this is not proven then we shouldn't assume that
> natural language
> understanding without hidden further assumptions implies
> AGI.

Of course, but our opinions have consequences, and in debating the consequences 
we may arrive at a situation in which one of our positions appears absurd, 
contrad

Re: AW: AW: [agi] Re: Defining AGI

2008-10-20 Thread David Hart
On Tue, Oct 21, 2008 at 12:56 AM, Dr. Matthias Heger <[EMAIL PROTECTED]>wrote:

>  Any argument of the kind "you should better first read xxx + yyy +… " is
> very weak. It is a pseudo killer argument against everything with no content
> at all.
>
> If  xxx , yyy … contains  really relevant information for the discussion
> then it should be possible to quote the essential part with few lines of
> text.
>
> If someone is not able to do this he should himself better read xxx, yyy, …
> once again.
>
>
I disagree. Books and papers are places to make complex multi-part
arguments. Dragging out those arguments through a series of email-based
soundbites in many cases will not help someone to grok the higher levels of
those arguments, and will constantly miss out on smaller points that fuel
countless unecessary misunderstandings. We witness these problems and others
(practically daily) on the AGI list.

-dave



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


AW: AW: AW: [agi] Re: Defining AGI

2008-10-20 Thread Dr. Matthias Heger
If MW would be scientific then he would not have asked Ben to prove that MWs
hypothesis is wrong.
The person who has to prove something is the person who creates the
hypothesis.
And MW has given not a tiny argument for his hypothesis that a natural
language understanding system can easily be a scientist.

-Matthias

-Ursprüngliche Nachricht-
Von: Eric Burton [mailto:[EMAIL PROTECTED] 
Gesendet: Montag, 20. Oktober 2008 22:48
An: agi@v2.listbox.com
Betreff: Re: AW: AW: [agi] Re: Defining AGI

> You and MW are clearly as philosophically ignorant, as I am in AI.

But MW and I have not agreed on anything.

>Hence the wiki entry on scientific method:
>"Scientific method is not a recipe: it requires intelligence, >imagination,
and creativity"
>http://en.wikipedia.org/wiki/Scientific_method
>This is basic stuff.

And this is fundamentally what I was trying to say.

I don't think of myself as "philosophically ignorant". I believe
you've reversed the intention of my post. It's probably my fault for
choosing my words poorly. I could have conveyed the nuances of the
argument better as I understood them. Next time!


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription:
https://www.listbox.com/member/?&;
Powered by Listbox: http://www.listbox.com



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


Re: AW: AW: [agi] Re: Defining AGI

2008-10-20 Thread Eric Burton
> I could have conveyed the nuances of the
> argument better as I understood them.

s/as I/inasmuch as I/

,_,


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


Re: AW: AW: [agi] Re: Defining AGI

2008-10-20 Thread Eric Burton
> You and MW are clearly as philosophically ignorant, as I am in AI.

But MW and I have not agreed on anything.

>Hence the wiki entry on scientific method:
>"Scientific method is not a recipe: it requires intelligence, >imagination, 
>and creativity"
>http://en.wikipedia.org/wiki/Scientific_method
>This is basic stuff.

And this is fundamentally what I was trying to say.

I don't think of myself as "philosophically ignorant". I believe
you've reversed the intention of my post. It's probably my fault for
choosing my words poorly. I could have conveyed the nuances of the
argument better as I understood them. Next time!


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


Re: AW: AW: [agi] Re: Defining AGI

2008-10-20 Thread Mike Tintner


Eric:

"Ben Goertzel says that there is no true defined method
to the scientific method (and Mark Waser is clueless for thinking that 
there

is)."


This is pretty profound. I never saw Ben Goertzel abolish the
scientific method. I think he explained that its implementation is
intractable, with reference to expert systems whose domain knowledge
necessarily extrapolates massively to cover fringe cases. A strong AI
would produce its own expert system and could follow the same general
scientific method as a human. Can you quote the claim that there is no
such thing


Eric,

You and MW are clearly as philosophically ignorant, as I am in AI. The 
reason there is an extensive discipline called philosophy of science, (as 
with every other branch of knowledge), is that there are conflicting 
opinions and arguments about virtually every aspect of science.


Yes, there is a very broad consensus that science - the scientific method - 
generally involves a reliance on evidence, experiment and measurement  But 
exactly what constitutes evidence, and how much is required, and what 
constitutes experiment, either generally or in any particular field, and 
what form theories should take, are open to, and receiving, endless 
discussion. Plus new kinds of all of these are continually being invented.


Hence the wiki entry on scientific method:

"Scientific method is not a recipe: it requires intelligence, imagination, 
and creativity"


http://en.wikipedia.org/wiki/Scientific_method

This is basic stuff.






---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


Re: AW: AW: [agi] Re: Defining AGI

2008-10-20 Thread Ben Goertzel
On Mon, Oct 20, 2008 at 4:04 PM, Eric Burton <[EMAIL PROTECTED]> wrote:

> > "Ben Goertzel says that there is no true defined method
> > to the scientific method (and Mark Waser is clueless for thinking that
> there
> > is)."
>


That is not what I said.

My views on the philosophy of science are given here:

http://www.goertzel.org/dynapsyc/2004/PhilosophyOfScience_v2.htm

with an addition here

http://multiverseaccordingtoben.blogspot.com/2008/10/reflections-on-religulous-and.html

The argument with Mark* *was about his claim that a below-intelligence human
could be trained to be a good scientist ... then modified to the claim that
a below-intelligence human could be trained to be good at evaluating (rather
than discovering) scientific results.  I said I doubted this was true.
*
*I still doubt it's true.  Given the current state of scientific
experimental and statistical tools, and scientific theory, I don't think a
below-average-intelligence person can be trained to be good (as opposed to,
say, barely passable) at discovering or evaluating scientific results.*   *This
is because I don't think the scientific method as currently practiced has
been formalized fully enough that it can be practiced by a person without a
fair amount of intelligence and common sense.

My feeling is that, if someone needs to use a cash register with little
pictures of burgers and fries on it rather than numbers, it's probably not
going to work out to teach them to effectively discover or evaluate
scientific theories.*

*Again, I don't understand what this argument has to do with AGI in the
first place.  I'm just continuing this dialogue to avoid having my
statements publicly misrepresented (I'm sure this misrepresentation is
inadvertent, but still).*
*

>
>
> This is pretty profound. I never saw Ben Goertzel abolish the
> scientific method. I think he explained that its implementation is
> intractable, with reference to expert systems whose domain knowledge
> necessarily extrapolates massively to cover fringe cases. A strong AI
> would produce its own expert system and could follow the same general
> scientific method as a human. Can you quote the claim that there is no
> such thing
>
>
> ---
> agi
> Archives: https://www.listbox.com/member/archive/303/=now
> RSS Feed: https://www.listbox.com/member/archive/rss/303/
> Modify Your Subscription:
> https://www.listbox.com/member/?&;
> Powered by Listbox: http://www.listbox.com
>



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"Nothing will ever be attempted if all possible objections must be first
overcome "  - Dr Samuel Johnson



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


Re: AW: AW: [agi] Re: Defining AGI

2008-10-20 Thread Eric Burton
> "Ben Goertzel says that there is no true defined method
> to the scientific method (and Mark Waser is clueless for thinking that there
> is)."

This is pretty profound. I never saw Ben Goertzel abolish the
scientific method. I think he explained that its implementation is
intractable, with reference to expert systems whose domain knowledge
necessarily extrapolates massively to cover fringe cases. A strong AI
would produce its own expert system and could follow the same general
scientific method as a human. Can you quote the claim that there is no
such thing


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


AW: AW: AW: AW: [agi] Re: Defining AGI

2008-10-20 Thread Dr. Matthias Heger
Terren wrote

>>>
Language understanding requires a sophisticated conceptual framework
complete with causal models, because, whatever "meaning" means, it must be
captured somehow in an AI's internal models of the world.
<<<

Conceptual framework is not well defined. Therefore I can't agree or
disagree.
What do you mean with causal model?


>>>
The Piraha tribe in the Amazon basin has a very primitive language compared
to all modern languages - it has no past or future tenses, for example - and
as a people they exhibit barely any of the hallmarks of abstract reasoning
that are so common to the rest of humanity, such as story-telling, artwork,
religion... see http://en.wikipedia.org/wiki/Pirah%C3%A3_people. 


How do you explain that?
<<<

In this example we observe two phenomena:
1. primitive language compared to all modern languages
2. and as a people they exhibit barely any of the hallmarks of abstract
reasoning

>From this we can neither conclude that 1 causes 2 nor that 2 causes 1.


>>>
I'm saying that if an AI understands & speaks natural language, you've
solved AGI - your Nobel will be arriving soon.  
<<<

This is just your opinion. I disagree that natural language understanding
necessarily implies AGI. For instance, I doubt that anyone can prove that
any system which understands natural language is necessarily able to solve
the simple equation x *3 = y for a given y.
And if this is not proven then we shouldn't assume that natural language
understanding without hidden further assumptions implies AGI.


>>>
The difference between AI1 that understands Einstein, and any AI currently
in existence, is much greater then the difference between AI1 and Einstein.
<<<

This might be true but what does this  show?



>>>
Sorry, I don't see that, can you explain the proof?  Are you saying that
sign language isn't natural language?  That would be patently false. (see
http://crl.ucsd.edu/signlanguage/)
<<<

Yes. In my opinion, sign language is no natural language as it is usually
understood.



>>>
So you're agreeing that language is necessary for self-reflectivity. In your
models, then, self-reflectivity is not important to AGI, since you say AGI
can be realized without language, correct?
<<<

No. Self-reflectifity needs just a feedback loop for  own processes. I do
not say that AGI can be realized without language. AGI must produce outputs
and AGI must obtain inputs. For inputs and outputs there must be protocols.
These protocols are not fixed but  depend on the input devices on output
devices. For instance the AGI could use the hubble telescope or a microscope
or both. 
For the domain of mathematics a formal language which is specified by humans
would be 
the best for input and output. 

>>>
I'm not saying that language is inherently involved in thinking, but it is
crucial for the development of *sophisticated* causal models of the world -
the kind of models that can support self-reflectivity. Word-concepts form
the basis of abstract symbol manipulation.

That gets the ball rolling for humans, but the conceptual framework that
emerges is not necessarily tied to linguistics, especially as humans get
feedback from the world in ways that are not linguistic (scientific
experimentation/tinkering, studying math, art, music, etc).
<<<

That is just your opinion again. I tolerate your opinion. But I have a
different opinion. The future will show which approach is successful.

- Matthias



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


Re: AW: AW: AW: [agi] Re: Defining AGI

2008-10-20 Thread Terren Suydam

Matthias, still awaiting a response to this post, quoted below.

Thanks,
Terren


Matthias wrote:
> I don't think that learning of language is the entire
> point. If I have only
> learned language I still cannot create anything. A human
> who can understand
> language is by far still no good scientist. Intelligence
> means the ability
> to solve problems. Which problems can a system solve if it
> can nothing else
> than language understanding?

Language understanding requires a sophisticated conceptual framework complete 
with causal models, because, whatever "meaning" means, it must be captured 
somehow in an AI's internal models of the world.

The Piraha tribe in the Amazon basin has a very primitive language compared to 
all modern languages - it has no past or future tenses, for example - and as a 
people they exhibit barely any of the hallmarks of abstract reasoning that are 
so common to the rest of humanity, such as story-telling, artwork, religion... 
see http://en.wikipedia.org/wiki/Pirah%C3%A3_people. 

How do you explain that?

> >Einstein had to express his (non-linguistic) internal
> insights in natural
> language >and in mathematical language.  In both
> modalities he had to use
> his intelligence to >make the translation from his
> mental models.
>
> The point is that someone else could understand Einstein
> even if he haven't
> had the same intelligence. This is a proof that
> understanding AI1 does not
> necessarily imply to have the intelligence of AI1.

I'm saying that if an AI understands & speaks natural language, you've solved 
AGI - your Nobel will be arriving soon.  The difference between AI1 that 
understands Einstein, and any AI currently in existence, is much greater then 
the difference between AI1 and Einstein.

> >Deaf people speak in sign language, which is only
> different from spoken
> language in >superficial ways. This does not tell us
> much about language
> that we didn't already >know.
>
> But it is a proof that *natural* language understanding is
> not necessary for
> human-level intelligence.

Sorry, I don't see that, can you explain the proof?  Are you saying that sign 
language isn't natural language?  That would be patently false. (see 
http://crl.ucsd.edu/signlanguage/)

> I have already outlined the process of self-reflectivity:
> Internal patterns
> are translated into language.

So you're agreeing that language is necessary for self-reflectivity. In your 
models, then, self-reflectivity is not important to AGI, since you say AGI can 
be realized without language, correct?

> This is routed to the
> brain's own input
> regions. You *hear* your own thoughts and have the illusion
> that you think
> linguistically.
> If you can speak two languages then you can make an easy
> test: Try to think
> in the foreign language. It works. If language would be
> inherently involved
> in the process of thoughts then thinking alternatively in
> two languages
> would cost many resources of the brain. In fact you need
> just use the other
> module for language translation. This is a big hint that
> language and
> thoughts do not have much in common.
>
> -Matthias

I'm not saying that language is inherently involved in thinking, but it is 
crucial for the development of *sophisticated* causal models of the world - the 
kind of models that can support self-reflectivity. Word-concepts form the basis 
of abstract symbol manipulation.

That gets the ball rolling for humans, but the conceptual framework that 
emerges is not necessarily tied to linguistics, especially as humans get 
feedback from the world in ways that are not linguistic (scientific 
experimentation/tinkering, studying math, art, music, etc).

Terren

__
Do You Yahoo!?
Tired of spam?  Yahoo! Mail has the best spam protection around 
http://mail.yahoo.com 


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


AW: AW: AW: [agi] Re: Defining AGI

2008-10-20 Thread Dr. Matthias Heger
Any argument of the kind "you should better first read xxx + yyy +. " is
very weak. It is a pseudo killer argument against everything with no content
at all.

If  xxx , yyy . contains  really relevant information for the discussion
then it should be possible to quote the essential part with few lines of
text.

If someone is not able to do this he should himself better read xxx, yyy, .
once again.

 

-Matthias

 

 

Ben wrote

 

It would also be nice if this mailing list could be operate on a bit more of
a scientific basis.  I get really tired of pointing to specific references
and then being told that I have no facts or that it was solely my opinion.

 


This really has to do with the culture of the community on the list, rather
than the "operation" of the list per se, I'd say.

I have also often been frustrated by the lack of inclination of some list
members to read the relevant literature.  Admittedly, there is a lot of it
to read.  But on the other hand, it's not reasonable to expect folks who
*have* read a certain subset of the literature, to summarize that subset in
emails for individuals who haven't taken the time.  Creating such summaries
carefully takes a lot of effort.

I agree that if more careful attention were paid to the known science
related to AGI ... and to the long history of prior discussions on the
issues discussed here ... this list would be a lot more useful.

But, this is not a structured discussion setting -- it's an Internet
discussion group, and even if I had the inclination to moderate more
carefully so as to try to encourage a more carefully scientific mode of
discussion, I wouldn't have the time...

ben g



  _  


agi |   Archives
 |
 Modify Your Subscription

  

 




---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


Re: AW: AW: [agi] Re: Defining AGI

2008-10-20 Thread Mark Waser
There is a wide area between moderation and complete laissez-faire.

Also, as list owner, people tend to pay attention to what you say/request and 
also what you do.

If you regularly point to references and ask others to do the same, they are 
likely to follow.  If you were to gently chastise people for saying that there 
are no facts when references were provided, people might get the hint.  
Instead, you generally feed the trolls and "humorously" insult the people who 
are trying to keep it on a scientific basis.  That's a pretty clear message all 
by itself.

You don't need to spend more time but, as a serious role model for many of the 
people on the list, you do need to pay attention to the effects of what you say 
and do.  I can't help but go back to my perceived summary of the most recent 
issue -- "Ben Goertzel says that there is no true defined method to the 
scientific method (and Mark Waser is clueless for thinking that there is)."


  - Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Monday, October 20, 2008 6:53 AM
  Subject: Re: AW: AW: [agi] Re: Defining AGI





It would also be nice if this mailing list could be operate on a bit more 
of a scientific basis.  I get really tired of pointing to specific references 
and then being told that I have no facts or that it was solely my opinion.



  This really has to do with the culture of the community on the list, rather 
than the "operation" of the list per se, I'd say.

  I have also often been frustrated by the lack of inclination of some list 
members to read the relevant literature.  Admittedly, there is a lot of it to 
read.  But on the other hand, it's not reasonable to expect folks who *have* 
read a certain subset of the literature, to summarize that subset in emails for 
individuals who haven't taken the time.  Creating such summaries carefully 
takes a lot of effort.

  I agree that if more careful attention were paid to the known science related 
to AGI ... and to the long history of prior discussions on the issues discussed 
here ... this list would be a lot more useful.

  But, this is not a structured discussion setting -- it's an Internet 
discussion group, and even if I had the inclination to moderate more carefully 
so as to try to encourage a more carefully scientific mode of discussion, I 
wouldn't have the time...

  ben g




--
agi | Archives  | Modify Your Subscription  



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


Re: AW: AW: [agi] Re: Defining AGI

2008-10-20 Thread Ben Goertzel
It would also be nice if this mailing list could be operate on a bit more of
> a scientific basis.  I get really tired of pointing to specific references
> and then being told that I have no facts or that it was solely my opinion.
>
>
This really has to do with the culture of the community on the list, rather
than the "operation" of the list per se, I'd say.

I have also often been frustrated by the lack of inclination of some list
members to read the relevant literature.  Admittedly, there is a lot of it
to read.  But on the other hand, it's not reasonable to expect folks who
*have* read a certain subset of the literature, to summarize that subset in
emails for individuals who haven't taken the time.  Creating such summaries
carefully takes a lot of effort.

I agree that if more careful attention were paid to the known science
related to AGI ... and to the long history of prior discussions on the
issues discussed here ... this list would be a lot more useful.

But, this is not a structured discussion setting -- it's an Internet
discussion group, and even if I had the inclination to moderate more
carefully so as to try to encourage a more carefully scientific mode of
discussion, I wouldn't have the time...

ben g



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


Re: AW: AW: [agi] Re: Defining AGI

2008-10-19 Thread Ben Goertzel
OK, well, I'm not going to formally kill this irrelevant-to-AGI thread as
moderator, but I'm going to abandon it as participant...

Time to get some work done tonight, enough time spent on email ;-p

ben g

On Sun, Oct 19, 2008 at 7:52 PM, Eric Burton <[EMAIL PROTECTED]> wrote:

> No, surely this is mostly outside the purview of the AGI list. I'm
> reading some of this material and not getting a lot out of it. There
> are channels on freenode for this stuff. But we have got to agree on
> something if we are going to do anything. Can animals do science? They
> can not.
>
>
> ---
> agi
> Archives: https://www.listbox.com/member/archive/303/=now
> RSS Feed: https://www.listbox.com/member/archive/rss/303/
> Modify Your Subscription:
> https://www.listbox.com/member/?&;
> Powered by Listbox: http://www.listbox.com
>



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"Nothing will ever be attempted if all possible objections must be first
overcome "  - Dr Samuel Johnson



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


Re: AW: AW: [agi] Re: Defining AGI

2008-10-19 Thread Eric Burton
No, surely this is mostly outside the purview of the AGI list. I'm
reading some of this material and not getting a lot out of it. There
are channels on freenode for this stuff. But we have got to agree on
something if we are going to do anything. Can animals do science? They
can not.


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


Re: AW: AW: [agi] Re: Defining AGI

2008-10-19 Thread Ben Goertzel
> And why don't we keep this on the level of scientific debate rather than
> arguing insults and vehemence and confidence?  That's not particularly good
> science either.
>


Right ... being unnecessarily nasty is not either good or bad science, it's
just irritating for others to deal with

ben g



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


Re: AW: AW: [agi] Re: Defining AGI

2008-10-19 Thread Ben Goertzel
Mark, I did not say that "theory should trump data."  When theory should
trump data is a very complex question.

I don't mind reading the book you suggested eventually but I have a long
list of other stuff to read that seems to have higher priority.

I don't believe there exists a complete, well-defined set of rules for
evaluating scientific evidence in real-world cases, sorry.

If you want to say "there is a complete set of rules, but there is no
complete set of rules for how to apply these rules" -- well, I still doubt
it, but it seems like a less absurd contention.  But in that case, so what?
In that case the rules don't actually tell you how to evaluate scientific
evidence.

In bioinformatics, it seems to me that evaluating complex datasets gets into
tricky questions of applied statistics, on which expert biostatisticians
don't always agree (and they write papers about their arguments).  Clearly
this is not a pursuit for dumbheads ;-p ... Perhaps you would classify this
as a dispute about "how to apply the rules" and not about the rules
themselves?  I don't really understand the distinction you're drawing
there...

ben g

On Sun, Oct 19, 2008 at 7:15 PM, Mark Waser <[EMAIL PROTECTED]> wrote:

>  >> It is really not true that there is a set of simple rules adequate to
> tell people how to evaluate scientific results effectively.
>
> Get the book and then speak from a position of knowledge by telling
> me something that you believe it is missing.  When I cite a specific example
> that you can go and verify or disprove, it is not an opinion but a valid
> data point (and your perception of my vehemence and/or confidence and your
> personal reaction to it are totally irrelevant).  The fact that you can
> make a statement like this from a position of total ignorance when I cite a
> specific example is a clear example of not following basic scientific
> principles.  You can be insulted all you like but that is not what a good
> scientist would do on a good day -- it is simply lazy and bad science.
>
> >> As often occurs, there may be rules that tell you how to handle 80% of
> cases (or whatever), but then the remainder of the cases are harder and
> require actual judgment.
> Is it that the rules don't have 100% coverage or is that it isn't always
> clear how to appropriately apply the rules and that is where the questions
> come in?  There is a huge difference between the two cases -- and your
> statement "no one knows what the rules are" argues for the former not the
> latter.  I'd be more than willing to accept the latter -- but the former is
> an embarrassment.  Do you really mean to contend the former?
>
> >> It is possible I inaccurately remembered an anecdote from Feynman's
> book, but that's irrelevant to my point.
> No, you accurately remembered the anecdote.  As I recall, Feynman was
> expressing frustration at the slowness of the process -- particularly
> because no one would consider his hypothesis enough to perform the
> experiments necessary to determine whether the point was an outlier or not.
> Not performing the experiment was an unfortunate choice of trade-offs (since
> I'm sure that they were doing something else that they deemed more likely to
> produce worthwhile results) but accepting his theory without first proving
> that the outlier was indeed an outlier (regardless of his
> "intelligence") would have been far worse and directly contrary to the
> scientific method.
>
> >>>> Using that story as an example shows that you don't understand how to
> properly run a scientific evaluative process.
> >> Wow, that is quite an insult.  So you're calling me an incompetent in my
> profession now.
>
> It depends.  Are you going to continue promoting something as inexcusable
> as saying that theory should trump data (because of the source of the
> theory)?  I was quite clear that I was criticizing a very specific action.
> Are you going to continue to defend that improper action?
>
> And why don't we keep this on the level of scientific debate rather than
> arguing insults and vehemence and confidence?  That's not particularly good
> science either.
>
>
> - Original Message -
> *From:* Ben Goertzel <[EMAIL PROTECTED]>
> *To:* agi@v2.listbox.com
> *Sent:* Sunday, October 19, 2008 6:31 PM
> *Subject:* Re: AW: AW: [agi] Re: Defining AGI
>
>
> Sorry Mark, but I'm not going to accept your opinion on this just because
> you express it with vehemence and confidence.
>
> I didn't argue much previously when you told me I didn't understand
> engineering ... because, although I've worked with a lot of engineers, I
> ha

Re: AW: AW: [agi] Re: Defining AGI

2008-10-19 Thread Mark Waser
>> It is really not true that there is a set of simple rules adequate to tell 
>> people how to evaluate scientific results effectively.

Get the book and then speak from a position of knowledge by telling me 
something that you believe it is missing.  When I cite a specific example that 
you can go and verify or disprove, it is not an opinion but a valid data point 
(and your perception of my vehemence and/or confidence and your personal 
reaction to it are totally irrelevant).  The fact that you can make a statement 
like this from a position of total ignorance when I cite a specific example is 
a clear example of not following basic scientific principles.  You can be 
insulted all you like but that is not what a good scientist would do on a good 
day -- it is simply lazy and bad science.

>> As often occurs, there may be rules that tell you how to handle 80% of cases 
>> (or whatever), but then the remainder of the cases are harder and require 
>> actual judgment.

Is it that the rules don't have 100% coverage or is that it isn't always clear 
how to appropriately apply the rules and that is where the questions come in?  
There is a huge difference between the two cases -- and your statement "no one 
knows what the rules are" argues for the former not the latter.  I'd be more 
than willing to accept the latter -- but the former is an embarrassment.  Do 
you really mean to contend the former?

>> It is possible I inaccurately remembered an anecdote from Feynman's book, 
>> but that's irrelevant to my point.

No, you accurately remembered the anecdote.  As I recall, Feynman was 
expressing frustration at the slowness of the process -- particularly because 
no one would consider his hypothesis enough to perform the experiments 
necessary to determine whether the point was an outlier or not.  Not performing 
the experiment was an unfortunate choice of trade-offs (since I'm sure that 
they were doing something else that they deemed more likely to produce 
worthwhile results) but accepting his theory without first proving that the 
outlier was indeed an outlier (regardless of his "intelligence") would have 
been far worse and directly contrary to the scientific method.

>>>> Using that story as an example shows that you don't understand how to 
>>>> properly run a scientific evaluative process.
>> Wow, that is quite an insult.  So you're calling me an incompetent in my 
>> profession now.  

It depends.  Are you going to continue promoting something as inexcusable as 
saying that theory should trump data (because of the source of the theory)?  I 
was quite clear that I was criticizing a very specific action.  Are you going 
to continue to defend that improper action?  

And why don't we keep this on the level of scientific debate rather than 
arguing insults and vehemence and confidence?  That's not particularly good 
science either.  

  ----- Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Sunday, October 19, 2008 6:31 PM
  Subject: Re: AW: AW: [agi] Re: Defining AGI



  Sorry Mark, but I'm not going to accept your opinion on this just because you 
express it with vehemence and confidence.

  I didn't argue much previously when you told me I didn't understand 
engineering ... because, although I've worked with a lot of engineers, I 
haven't been one.

  But, I grew up around scientists, I've trained scientists, and I am currently 
(among other things) working as a scientist.

  It is really not true that there is a set of simple rules adequate to tell 
people how to evaluate scientific results effectively.  As often occurs, there 
may be rules that tell you how to handle 80% of cases (or whatever), but then 
the remainder of the cases are harder and require actual judgment.

  This is, by the way, the case with essentially every complex human process 
that people have sought to cover via "expert rules."  The rules cover many 
cases ... but as one seeks to extend them to cover all relevant cases, one 
winds up adding more and more and more specialized rules...

  It is possible I inaccurately remembered an anecdote from Feynman's book, but 
that's irrelevant to my point.

  ***
  Using that story as an example shows that you don't understand how to 
properly run a scientific evaluative process.
  ***

  Wow, that is quite an insult.  So you're calling me an incompetent in my 
profession now.  

I don't have particularly "thin skin", but I have to say that I'm getting 
really tired of being attacked and insulted on this email list. 

  -- Ben G



  On Sun, Oct 19, 2008 at 6:18 PM, Mark Waser <[EMAIL PROTECTED]> wrote:

>> Whether a stupid person can do good scientific evaluation "if taught the 
rules" is a badly-formed

Re: AW: AW: [agi] Re: Defining AGI

2008-10-19 Thread Ben Goertzel
Sorry Mark, but I'm not going to accept your opinion on this just because
you express it with vehemence and confidence.

I didn't argue much previously when you told me I didn't understand
engineering ... because, although I've worked with a lot of engineers, I
haven't been one.

But, I grew up around scientists, I've trained scientists, and I am
currently (among other things) working as a scientist.

It is really not true that there is a set of simple rules adequate to tell
people how to evaluate scientific results effectively.  As often occurs,
there may be rules that tell you how to handle 80% of cases (or whatever),
but then the remainder of the cases are harder and require actual judgment.

This is, by the way, the case with essentially every complex human process
that people have sought to cover via "expert rules."  The rules cover many
cases ... but as one seeks to extend them to cover all relevant cases, one
winds up adding more and more and more specialized rules...

It is possible I inaccurately remembered an anecdote from Feynman's book,
but that's irrelevant to my point.

***
Using that story as an example shows that you don't understand how to
properly run a scientific evaluative process.
***

Wow, that is quite an insult.  So you're calling me an incompetent in my
profession now.

  I don't have particularly "thin skin", but I have to say that I'm getting
really tired of being attacked and insulted on this email list.

-- Ben G


On Sun, Oct 19, 2008 at 6:18 PM, Mark Waser <[EMAIL PROTECTED]> wrote:

>  >> Whether a stupid person can do good scientific evaluation "if taught
> the rules" is a badly-formed question, because no one knows what the rules
> are.   They are learned via experience just as much as by explicit teaching
> Wow!  I'm sorry but that is a very scary, incorrect opinion.  There's a
> really good book called "The Game of Science" by McCain and Segal that
> clearly explains all of the rules.  I'll get you a copy.
>
> I understand that most "scientists" aren't trained properly -- but that is
> no reason to continue the problem by claiming that they can't be trained
> properly.
>
> You make my point with your explanation of your example of biology
> referees.  And the Feynman example, if it is the story that I've heard
> before, was actually an example of good science in action because the
> outlier was eventually overruled AFTER ENOUGH GOOD DATA WAS COLLECTED to
> prove that the outlier was truly an outlier and not just a mere
> inconvenience to someone's theory.  Feynman's exceptional intelligence
> allowed him to discover a possibility that might have been correct if the
> point was an outlier, but good scientific evaluation relies on data, data,
> and more data.  Using that story as an example shows that you don't
> understand how to properly run a scientific evaluative process.
>
>
> - Original Message -
> *From:* Ben Goertzel <[EMAIL PROTECTED]>
> *To:* agi@v2.listbox.com
> *Sent:* Sunday, October 19, 2008 6:07 PM
> *Subject:* Re: AW: AW: [agi] Re: Defining AGI
>
>
> Whether a stupid person can do good scientific evaluation "if taught the
> rules" is a badly-formed question, because no one knows what the rules
> are.   They are learned via experience just as much as by explicit teaching
>
> Furthermore, as anyone who has submitted a lot of science papers to
> journals knows, even smart scientists can be horrendously bad at scientific
> evaluation.  I've had some really good bioscience papers rejected from
> journals, by presumably intelligent referees, for extremely bad reasons (and
> these papers were eventually published in good journals).
>
> Evaluating research is not much easier than doing it.  When is someone's
> supposed test of statistical validity really the right test?  Too many
> biology referees just look for the magic number of p<.05 rather than
> understanding what test actually underlies that number, because they don't
> know the math or don't know how to connect the math to the experiment in a
> contextually appropriate way.
>
> As another example: When should a data point be considered an outlier
> (meaning: probably due to equipment error or some other quirk) rather than a
> genuine part of the data?  Tricky.  I recall Feynman noting that he was held
> back in making a breakthrough discovery for some time, because of an outlier
> on someone else's published data table, which turned out to be spurious but
> had been accepted as valid by the community.  In this case, Feyman's
> exceptional intelligence allowed him to carry out scientific evaluation more
> effectively than 

Re: AW: AW: [agi] Re: Defining AGI

2008-10-19 Thread Mark Waser
>> Whether a stupid person can do good scientific evaluation "if taught the 
>> rules" is a badly-formed question, because no one knows what the rules are.  
>>  They are learned via experience just as much as by explicit teaching

Wow!  I'm sorry but that is a very scary, incorrect opinion.  There's a really 
good book called "The Game of Science" by McCain and Segal that clearly 
explains all of the rules.  I'll get you a copy.

I understand that most "scientists" aren't trained properly -- but that is no 
reason to continue the problem by claiming that they can't be trained properly.

You make my point with your explanation of your example of biology referees.  
And the Feynman example, if it is the story that I've heard before, was 
actually an example of good science in action because the outlier was 
eventually overruled AFTER ENOUGH GOOD DATA WAS COLLECTED to prove that the 
outlier was truly an outlier and not just a mere inconvenience to someone's 
theory.  Feynman's exceptional intelligence allowed him to discover a 
possibility that might have been correct if the point was an outlier, but good 
scientific evaluation relies on data, data, and more data.  Using that story as 
an example shows that you don't understand how to properly run a scientific 
evaluative process.

  - Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Sunday, October 19, 2008 6:07 PM
  Subject: Re: AW: AW: [agi] Re: Defining AGI



  Whether a stupid person can do good scientific evaluation "if taught the 
rules" is a badly-formed question, because no one knows what the rules are.   
They are learned via experience just as much as by explicit teaching

  Furthermore, as anyone who has submitted a lot of science papers to journals 
knows, even smart scientists can be horrendously bad at scientific evaluation.  
I've had some really good bioscience papers rejected from journals, by 
presumably intelligent referees, for extremely bad reasons (and these papers 
were eventually published in good journals).

  Evaluating research is not much easier than doing it.  When is someone's 
supposed test of statistical validity really the right test?  Too many biology 
referees just look for the magic number of p<.05 rather than understanding what 
test actually underlies that number, because they don't know the math or don't 
know how to connect the math to the experiment in a contextually appropriate 
way.

  As another example: When should a data point be considered an outlier 
(meaning: probably due to equipment error or some other quirk) rather than a 
genuine part of the data?  Tricky.  I recall Feynman noting that he was held 
back in making a breakthrough discovery for some time, because of an outlier on 
someone else's published data table, which turned out to be spurious but had 
been accepted as valid by the community.  In this case, Feyman's exceptional 
intelligence allowed him to carry out scientific evaluation more effectively 
than other, intelligent but less-so-than-him, had done...

  -- Ben G


  On Sun, Oct 19, 2008 at 6:00 PM, Mark Waser <[EMAIL PROTECTED]> wrote:

Actually, I should have drawn a distinction . . . . there is a major 
difference between performing discovery as a scientist and evaluating data as a 
scientist.  I was referring to the latter (which is similar to understanding 
Einstein) as opposed to the former (which is being Einstein).  You clearly are 
referring to the creative act of discovery (Programming is also a discovery 
operation).

So let me rephrase my statement -- Can a stupid person do good scientific 
evaluation if taught the rules and willing to abide by them?  Why or why not?
  - Original Message ----- 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Sunday, October 19, 2008 5:52 PM
  Subject: Re: AW: AW: [agi] Re: Defining AGI



  Mark,

  It is not the case that I have merely lectured rather than taught.  I've 
lectured (math, CS, psychology and futurology) at university, it's true ... but 
I've also done extensive one-on-one math tutoring with students at various 
levels ... and I've also taught small groups of kids aged 7-12, hands-on (math 
& programming), and I've taught retirees various skills (mostly computer 
related).

  Why can't a stupid person do good science?  Doing science in reality 
seems to require a whole bunch of implicit, hard-to-verbalize knowledge that 
stupid people just don't seem to be capable of learning.  A stupid person can 
possibly be trained to be a good lab assistant, in some areas of science but 
not others (it depends on how flaky and how complex the lab technology involved 
is in that area).  But, being a scientist involves a lot of judgment, a lot of 
heuristic, uncertain reasoning draw

Re: AW: AW: [agi] Re: Defining AGI

2008-10-19 Thread Ben Goertzel
Whether a stupid person can do good scientific evaluation "if taught the
rules" is a badly-formed question, because no one knows what the rules
are.   They are learned via experience just as much as by explicit teaching

Furthermore, as anyone who has submitted a lot of science papers to journals
knows, even smart scientists can be horrendously bad at scientific
evaluation.  I've had some really good bioscience papers rejected from
journals, by presumably intelligent referees, for extremely bad reasons (and
these papers were eventually published in good journals).

Evaluating research is not much easier than doing it.  When is someone's
supposed test of statistical validity really the right test?  Too many
biology referees just look for the magic number of p<.05 rather than
understanding what test actually underlies that number, because they don't
know the math or don't know how to connect the math to the experiment in a
contextually appropriate way.

As another example: When should a data point be considered an outlier
(meaning: probably due to equipment error or some other quirk) rather than a
genuine part of the data?  Tricky.  I recall Feynman noting that he was held
back in making a breakthrough discovery for some time, because of an outlier
on someone else's published data table, which turned out to be spurious but
had been accepted as valid by the community.  In this case, Feyman's
exceptional intelligence allowed him to carry out scientific evaluation more
effectively than other, intelligent but less-so-than-him, had done...

-- Ben G

On Sun, Oct 19, 2008 at 6:00 PM, Mark Waser <[EMAIL PROTECTED]> wrote:

>  Actually, I should have drawn a distinction . . . . there is a major
> difference between performing discovery as a scientist and evaluating data
> as a scientist.  I was referring to the latter (which is similar to
> understanding Einstein) as opposed to the former (which is being Einstein).
> You clearly are referring to the creative act of discovery (Programming is
> also a discovery operation).
>
> So let me rephrase my statement -- Can a stupid person do good scientific
> evaluation if taught the rules and willing to abide by them?  Why or why
> not?
>
> - Original Message -
> *From:* Ben Goertzel <[EMAIL PROTECTED]>
> *To:* agi@v2.listbox.com
> *Sent:* Sunday, October 19, 2008 5:52 PM
> *Subject:* Re: AW: AW: [agi] Re: Defining AGI
>
>
> Mark,
>
> It is not the case that I have merely lectured rather than taught.  I've
> lectured (math, CS, psychology and futurology) at university, it's true ...
> but I've also done extensive one-on-one math tutoring with students at
> various levels ... and I've also taught small groups of kids aged 7-12,
> hands-on (math & programming), and I've taught retirees various skills
> (mostly computer related).
>
> Why can't a stupid person do good science?  Doing science in reality seems
> to require a whole bunch of implicit, hard-to-verbalize knowledge that
> stupid people just don't seem to be capable of learning.  A stupid person
> can possibly be trained to be a good lab assistant, in some areas of science
> but not others (it depends on how flaky and how complex the lab technology
> involved is in that area).  But, being a scientist involves a lot of
> judgment, a lot of heuristic, uncertain reasoning drawing on a wide variety
> of knowledge.
>
> Could a stupid person learn to be a good scientist given, say, a thousand
> years of training?  Maybe.  But I doubt it, because by the time they had
> moved on to learning the second half of what they need to know, they would
> have already forgotten the first half ;-p
>
> You work in software engineering -- do you think a stupid person could be
> trained to be a really good programmer?  Again, I very much doubt it ...
> though they could be (and increasingly are ;-p) trained to do routine
> programming tasks.
>
> Inevitably, in either of these cases, the person will encounter some
> situation not directly covered in their training, and will need to
> intelligently analogize to their experience, and will fail at this because
> they are not very intelligent...
>
> -- Ben G
>
> On Sun, Oct 19, 2008 at 5:43 PM, Mark Waser <[EMAIL PROTECTED]> wrote:
>
>>  Funny, Ben.
>>
>> So . . . . could you clearly state why science can't be done by anyone
>> willing to simply follow the recipe?
>>
>> Is it really anything other than the fact that they are stopped by their
>> unconscious beliefs and biases?  If so, what?
>>
>> Instead of a snide comment, defend your opinion with facts, explanations,
>> and examples of what it really is.
>>
>> I can give you all sorts of examples

Re: AW: AW: [agi] Re: Defining AGI

2008-10-19 Thread Mark Waser
Actually, I should have drawn a distinction . . . . there is a major difference 
between performing discovery as a scientist and evaluating data as a scientist. 
 I was referring to the latter (which is similar to understanding Einstein) as 
opposed to the former (which is being Einstein).  You clearly are referring to 
the creative act of discovery (Programming is also a discovery operation).

So let me rephrase my statement -- Can a stupid person do good scientific 
evaluation if taught the rules and willing to abide by them?  Why or why not?
  - Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Sunday, October 19, 2008 5:52 PM
  Subject: Re: AW: AW: [agi] Re: Defining AGI



  Mark,

  It is not the case that I have merely lectured rather than taught.  I've 
lectured (math, CS, psychology and futurology) at university, it's true ... but 
I've also done extensive one-on-one math tutoring with students at various 
levels ... and I've also taught small groups of kids aged 7-12, hands-on (math 
& programming), and I've taught retirees various skills (mostly computer 
related).

  Why can't a stupid person do good science?  Doing science in reality seems to 
require a whole bunch of implicit, hard-to-verbalize knowledge that stupid 
people just don't seem to be capable of learning.  A stupid person can possibly 
be trained to be a good lab assistant, in some areas of science but not others 
(it depends on how flaky and how complex the lab technology involved is in that 
area).  But, being a scientist involves a lot of judgment, a lot of heuristic, 
uncertain reasoning drawing on a wide variety of knowledge.

  Could a stupid person learn to be a good scientist given, say, a thousand 
years of training?  Maybe.  But I doubt it, because by the time they had moved 
on to learning the second half of what they need to know, they would have 
already forgotten the first half ;-p

  You work in software engineering -- do you think a stupid person could be 
trained to be a really good programmer?  Again, I very much doubt it ... though 
they could be (and increasingly are ;-p) trained to do routine programming 
tasks.  

  Inevitably, in either of these cases, the person will encounter some 
situation not directly covered in their training, and will need to 
intelligently analogize to their experience, and will fail at this because they 
are not very intelligent...

  -- Ben G


  On Sun, Oct 19, 2008 at 5:43 PM, Mark Waser <[EMAIL PROTECTED]> wrote:

Funny, Ben.

So . . . . could you clearly state why science can't be done by anyone 
willing to simply follow the recipe?

Is it really anything other than the fact that they are stopped by their 
unconscious beliefs and biases?  If so, what?

Instead of a snide comment, defend your opinion with facts, explanations, 
and examples of what it really is.

I can give you all sorts of examples where someone is capable of doing 
something "by the numbers" until they are told that they can't.

What do you believe is so difficult about science other than overcoming the 
sub/unconscious?

Your statement is obviously spoken by someone who has lectured as opposed 
to taught.
  - Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
      Sent: Sunday, October 19, 2008 5:26 PM
  Subject: Re: AW: AW: [agi] Re: Defining AGI






>>>
*Any* human who can understand language beyond a certain point (say, 
that of

a slightly sub-average human IQ) can easily be taught to be a good 
scientist

if they are willing to play along.  Science is a rote process that can 
be
learned and executed by anyone -- as long as their beliefs and biases 
don't
get in the way.
<<

  This is obviously spoken by someone who has never been a professional 
teacher ;-p

  ben g


--
agi | Archives  | Modify Your Subscription   



  agi | Archives  | Modify Your Subscription  




  -- 
  Ben Goertzel, PhD
  CEO, Novamente LLC and Biomind LLC
  Director of Research, SIAI
  [EMAIL PROTECTED]

  "Nothing will ever be attempted if all possible objections must be first 
overcome "  - Dr Samuel Johnson




--
agi | Archives  | Modify Your Subscription  



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


Re: AW: AW: [agi] Re: Defining AGI

2008-10-19 Thread Mark Waser

Interesting how you always only address half my points . . .

I keep hammering extensibility and you focus on ambiguity which is merely 
the result of extensibility.  You refuse to address extensibility.  Maybe 
because it really is the secret sauce of intelligence and the one thing that 
you can't handle?


And after a long explanation, I get comments like "> It is still just 
translation" with no further explanation and "visual thought" nonsense 
worthy of Mike Tintner.


So, I give up.  I can't/won't debate someone who won't follow scientific 
methods of inquiry.



- Original Message - 
From: "Dr. Matthias Heger" <[EMAIL PROTECTED]>

To: 
Sent: Sunday, October 19, 2008 5:21 PM
Subject: AW: AW: AW: [agi] Re: Defining AGI



Marc Walser wrote:



*Any* human who can understand language beyond a certain point (say, that 
of


a slightly sub-average human IQ) can easily be taught to be a good 
scientist


if they are willing to play along.  Science is a rote process that can be
learned and executed by anyone -- as long as their beliefs and biases 
don't

get in the way.
<<<


This is just an opinion and I  strongly disagree with your opinion.
Obviously you overestimate language understanding a lot.





This is a bit of disingenuous side-track that I feel that I must address.
When people say "natural language", the important features are 
extensibility


and ambiguity.  If you can handle one extensible and ambiguous language, 
you


should have the capabilities to handle all of them.  It's yet another
definition of GI-complete.  Just look at it as yet another example of
dealing competently with ambiguous and incomplete data (which is, at root,
all that intelligence is).
<<<

You use your personal definition of natural language. I don't think that
human's are intelligent because they use an ambiguous language. They also
would be intelligent if their language would not suffer from ambiguities.




One thought module, two translation modules -- except that all the
translation modules really are is label appliers and grammar re-arrangers.
The heavy lifting is all in the thought module.  The problem is that you 
are


claiming that language lies entirely in the translation modules while I'm
arguing that a large percentage of it is in the thought module.  The fact
that the translation module has to go to the thought module for
disambiguation and interpretation (and numerous other things) should make 
it


quite clear that language is *not* simply translation.
<<<

It is still just translation.




Further, if you read Pinker's book, you will find that languages have a 
lot

more in common than you would expect if language truly were independent of
and separate from thought (as you are claiming).  Language is built on top
of the thinking/cognitive architecture (not beside it and not independent 
of


it) and could not exist without it.  That is why language is AGI-complete.
Language also gives an excellent window into many of the features of that
cognitive architecture and determining what is necessary for language also
determine what is in that cognitive architecture.  Another excellent 
window
is how humans perform moral judgments (try reading Marc Hauser -- either 
his


numerous scientific papers or the excellent Moral Minds).  Or, yet 
another,

is examining the structure of human biases.
<<<

There are also visual thoughts. You can imagine objects moving. The
principle is the same as with thoughts you perceive in your language: 
There

is an internal representation of patterns which is completely hidden for
your consciousness. The brain compresses and translates your visual 
thoughts

and routes the results to its own visual input regions.

As long as there is no real evidence against the model that thoughts are
separated from the way I perceive thoughts (e.g. by language )I do not see
any reason to change my opinion.

- Matthias



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?&;

Powered by Listbox: http://www.listbox.com






---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


Re: AW: AW: [agi] Re: Defining AGI

2008-10-19 Thread Ben Goertzel
Mark,

It is not the case that I have merely lectured rather than taught.  I've
lectured (math, CS, psychology and futurology) at university, it's true ...
but I've also done extensive one-on-one math tutoring with students at
various levels ... and I've also taught small groups of kids aged 7-12,
hands-on (math & programming), and I've taught retirees various skills
(mostly computer related).

Why can't a stupid person do good science?  Doing science in reality seems
to require a whole bunch of implicit, hard-to-verbalize knowledge that
stupid people just don't seem to be capable of learning.  A stupid person
can possibly be trained to be a good lab assistant, in some areas of science
but not others (it depends on how flaky and how complex the lab technology
involved is in that area).  But, being a scientist involves a lot of
judgment, a lot of heuristic, uncertain reasoning drawing on a wide variety
of knowledge.

Could a stupid person learn to be a good scientist given, say, a thousand
years of training?  Maybe.  But I doubt it, because by the time they had
moved on to learning the second half of what they need to know, they would
have already forgotten the first half ;-p

You work in software engineering -- do you think a stupid person could be
trained to be a really good programmer?  Again, I very much doubt it ...
though they could be (and increasingly are ;-p) trained to do routine
programming tasks.

Inevitably, in either of these cases, the person will encounter some
situation not directly covered in their training, and will need to
intelligently analogize to their experience, and will fail at this because
they are not very intelligent...

-- Ben G

On Sun, Oct 19, 2008 at 5:43 PM, Mark Waser <[EMAIL PROTECTED]> wrote:

>  Funny, Ben.
>
> So . . . . could you clearly state why science can't be done by anyone
> willing to simply follow the recipe?
>
> Is it really anything other than the fact that they are stopped by their
> unconscious beliefs and biases?  If so, what?
>
> Instead of a snide comment, defend your opinion with facts, explanations,
> and examples of what it really is.
>
> I can give you all sorts of examples where someone is capable of doing
> something "by the numbers" until they are told that they can't.
>
> What do you believe is so difficult about science other than overcoming the
> sub/unconscious?
>
> Your statement is obviously spoken by someone who has lectured as opposed
> to taught.
>
> - Original Message -
> *From:* Ben Goertzel <[EMAIL PROTECTED]>
> *To:* agi@v2.listbox.com
> *Sent:* Sunday, October 19, 2008 5:26 PM
> *Subject:* Re: AW: AW: [agi] Re: Defining AGI
>
>
>
>
>> >>>
>> *Any* human who can understand language beyond a certain point (say, that
>> of
>>
>> a slightly sub-average human IQ) can easily be taught to be a good
>> scientist
>>
>> if they are willing to play along.  Science is a rote process that can be
>> learned and executed by anyone -- as long as their beliefs and biases
>> don't
>> get in the way.
>> <<
>>
>
> This is obviously spoken by someone who has never been a professional
> teacher ;-p
>
> ben g
>  --
>   *agi* | Archives <https://www.listbox.com/member/archive/303/=now>
> <https://www.listbox.com/member/archive/rss/303/> | 
> Modify<https://www.listbox.com/member/?&;>Your Subscription
> <http://www.listbox.com>
>
> --
>   *agi* | Archives <https://www.listbox.com/member/archive/303/=now>
> <https://www.listbox.com/member/archive/rss/303/> | 
> Modify<https://www.listbox.com/member/?&;>Your Subscription
> <http://www.listbox.com>
>



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"Nothing will ever be attempted if all possible objections must be first
overcome "  - Dr Samuel Johnson



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


Re: AW: AW: [agi] Re: Defining AGI

2008-10-19 Thread Mark Waser
Funny, Ben.

So . . . . could you clearly state why science can't be done by anyone willing 
to simply follow the recipe?

Is it really anything other than the fact that they are stopped by their 
unconscious beliefs and biases?  If so, what?

Instead of a snide comment, defend your opinion with facts, explanations, and 
examples of what it really is.

I can give you all sorts of examples where someone is capable of doing 
something "by the numbers" until they are told that they can't.

What do you believe is so difficult about science other than overcoming the 
sub/unconscious?

Your statement is obviously spoken by someone who has lectured as opposed to 
taught.
  - Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Sunday, October 19, 2008 5:26 PM
  Subject: Re: AW: AW: [agi] Re: Defining AGI






>>>
*Any* human who can understand language beyond a certain point (say, that of

a slightly sub-average human IQ) can easily be taught to be a good scientist

if they are willing to play along.  Science is a rote process that can be
learned and executed by anyone -- as long as their beliefs and biases don't
get in the way.
<<

  This is obviously spoken by someone who has never been a professional teacher 
;-p

  ben g


--
agi | Archives  | Modify Your Subscription  



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


Re: AW: AW: [agi] Re: Defining AGI

2008-10-19 Thread Ben Goertzel
> >>>
> *Any* human who can understand language beyond a certain point (say, that
> of
>
> a slightly sub-average human IQ) can easily be taught to be a good
> scientist
>
> if they are willing to play along.  Science is a rote process that can be
> learned and executed by anyone -- as long as their beliefs and biases don't
> get in the way.
> <<
>

This is obviously spoken by someone who has never been a professional
teacher ;-p

ben g



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


AW: AW: AW: [agi] Re: Defining AGI

2008-10-19 Thread Dr. Matthias Heger
Marc Walser wrote:

>>>
*Any* human who can understand language beyond a certain point (say, that of

a slightly sub-average human IQ) can easily be taught to be a good scientist

if they are willing to play along.  Science is a rote process that can be 
learned and executed by anyone -- as long as their beliefs and biases don't 
get in the way.
<<<


This is just an opinion and I  strongly disagree with your opinion.
Obviously you overestimate language understanding a lot.


>>>
This is a bit of disingenuous side-track that I feel that I must address. 
When people say "natural language", the important features are extensibility

and ambiguity.  If you can handle one extensible and ambiguous language, you

should have the capabilities to handle all of them.  It's yet another 
definition of GI-complete.  Just look at it as yet another example of 
dealing competently with ambiguous and incomplete data (which is, at root, 
all that intelligence is).
<<<

You use your personal definition of natural language. I don't think that
human's are intelligent because they use an ambiguous language. They also
would be intelligent if their language would not suffer from ambiguities.

>>>
One thought module, two translation modules -- except that all the 
translation modules really are is label appliers and grammar re-arrangers. 
The heavy lifting is all in the thought module.  The problem is that you are

claiming that language lies entirely in the translation modules while I'm 
arguing that a large percentage of it is in the thought module.  The fact 
that the translation module has to go to the thought module for 
disambiguation and interpretation (and numerous other things) should make it

quite clear that language is *not* simply translation.
<<<

It is still just translation.


>>>
Further, if you read Pinker's book, you will find that languages have a lot 
more in common than you would expect if language truly were independent of 
and separate from thought (as you are claiming).  Language is built on top 
of the thinking/cognitive architecture (not beside it and not independent of

it) and could not exist without it.  That is why language is AGI-complete. 
Language also gives an excellent window into many of the features of that 
cognitive architecture and determining what is necessary for language also 
determine what is in that cognitive architecture.  Another excellent window 
is how humans perform moral judgments (try reading Marc Hauser -- either his

numerous scientific papers or the excellent Moral Minds).  Or, yet another, 
is examining the structure of human biases.
<<<

There are also visual thoughts. You can imagine objects moving. The
principle is the same as with thoughts you perceive in your language: There
is an internal representation of patterns which is completely hidden for
your consciousness. The brain compresses and translates your visual thoughts
and routes the results to its own visual input regions. 

As long as there is no real evidence against the model that thoughts are
separated from the way I perceive thoughts (e.g. by language )I do not see
any reason to change my opinion.

- Matthias



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


AW: AW: [agi] Re: Defining AGI

2008-10-19 Thread Dr. Matthias Heger
Mark Waser wrote:

 

>How is translating patterns into language different from manipulating
patterns? 

> It seems to me that they are *exactly* the same thing.  How do you believe
that they differ?

 

Manipulating of patterns needs reading and writing operations. Data
structures will be changed. Translation needs just reading operations to the
patterns of the internal model.

 

 

>Do you really believe that if A is easier than B then that makes A easy? 

> How about if A is leaping a tall building in a single bound and B is
jumping to the moon?

 

The word *easy*  is not exactly definable.

 

 

> Do you believe that language is fully specified?  That we can program
English into an AGI by hand?

 

No. That’s the reason why I would not use human language for the first AGI.

 

>Yes, I imagine that an AGI must have some process for learning language
because language is necessary for 

>learning knowledge and knowledge is necessary for intelligence.  

>What part of that do you disagree with?  Please be specific.

 

I disagree that AGI must have some process for learning language. If we
concentrate just on the domain of mathematics we could give AGI all the
rules for a sufficient language to express its results and to understand our
questions.

 

 

 

 >>>

>And this is where we are not communicating.  Since language is not fully
specified, then the participants in 

>many conversations are *constantly* creating and learning language as a
part of the process of 

>communication.  This is where Gödel's incompleteness comes in.  To be a
General Intelligence, you must be able to extend beyond what is currently
known and specified into new domains.  Any time that we are teaching or
learning (i.e. modifying our model of the world), we are also necessarily
extending our models of each other and language.  The computer database
analogy you are basing your entire argument upon does not have the necessary
features/complexity to be an accurate or useful analogy.

<<< 

 

Language must only grow if you make new definitions and want to communicate
the definition to another agent. But new definitions are not necessary for
general intelligence. If you define 

Methane := CH4

Then it is your choice whether you say the new word methane or you use the
known expression CH4.

New definitions makes communication more comfortable but they are not
necessary.

 

***Even if you change your model and your language at the same time then
there is still a strict distinction between them.

Language would still only be used for communication and not for the data
structure of the patterns for the world model or the algorithms which
manipulate these patterns.***

 

 

>Again, I disagree.  You added internal details but the end result after the
details are hidden is that e-mail

> programs are just point-to-point repeaters.  That is why I used the
examples (the telephone game and round-

>trip (mis)translations) that I did which you did not address.

 

I don’t know the telephone game. The details are essential. It is not
essential where the data comes from and where it ends. Just the process of
translating internal data into a certain language and vice versa is
important.

 

>> You *believe* that language cannot be separated from intelligence. I
don’t and I have described a model which has a strict separation. We both
have no proof.

 

 

 >>>

Three points. 

1.   My statement was that intelligence can't be built without
language/communication.  That is entirely different from the fact that they
can't be separated.  I also gave reasoning why this was the case that you
haven't addressed.

<<< 

 

The main point in this discussion is whether language /communication can be
separated from intelligence.

It is clear that an AGI needs an interface for human beings. But the
question in this discussion is whether the language interface is a key point
in AGI or not. In my opinion it is no key point. It is just a communication
protocol. The real intelligence has nothing to do with language
understanding. Therefore we should use a simple formal hard coded language
for first AGI.

 

>>> 

2.   Your model has serious flaws that you have not answered.  You are
relying upon an analogy that has points that you have not shown that you are
able to defend.  Until you do so, this invalidates your model.

<<< 

 

I don’t see any problems with my model and I do not see any flaws which I
don’t have answered.

 

>>> 

3.  You have not provided a disproof or counter-example to what I am saying.
I have clearly specified where your analogy comes up short and other
inaccuracies in your statements while you have not done so for any of mine
(other than of the "tis too, tis not" variety).

 <<<

I haven’t seen any point where my analogy comes short.

 

>>> 

I have had the courtesy to directly address your points with clear
counter-examples.  Please return the favor and do not simply drop my
examples without replying to them and revert back

Re: AW: AW: AW: [agi] Re: Defining AGI

2008-10-19 Thread Terren Suydam

Matthias wrote:
> I don't think that learning of language is the entire
> point. If I have only
> learned language I still cannot create anything. A human
> who can understand
> language is by far still no good scientist. Intelligence
> means the ability
> to solve problems. Which problems can a system solve if it
> can nothing else
> than language understanding?

Language understanding requires a sophisticated conceptual framework complete 
with causal models, because, whatever "meaning" means, it must be captured 
somehow in an AI's internal models of the world.

The Piraha tribe in the Amazon basin has a very primitive language compared to 
all modern languages - it has no past or future tenses, for example - and as a 
people they exhibit barely any of the hallmarks of abstract reasoning that are 
so common to the rest of humanity, such as story-telling, artwork, religion... 
see http://en.wikipedia.org/wiki/Pirah%C3%A3_people.  

How do you explain that?

> >Einstein had to express his (non-linguistic) internal
> insights in natural
> language >and in mathematical language.  In both
> modalities he had to use
> his intelligence to >make the translation from his
> mental models. 
> 
> The point is that someone else could understand Einstein
> even if he haven't
> had the same intelligence. This is a proof that
> understanding AI1 does not
> necessarily imply to have the intelligence of AI1. 

I'm saying that if an AI understands & speaks natural language, you've solved 
AGI - your Nobel will be arriving soon.  The difference between AI1 that 
understands Einstein, and any AI currently in existence, is much greater then 
the difference between AI1 and Einstein.

> >Deaf people speak in sign language, which is only
> different from spoken
> language in >superficial ways. This does not tell us
> much about language
> that we didn't already >know.
> 
> But it is a proof that *natural* language understanding is
> not necessary for
> human-level intelligence.

Sorry, I don't see that, can you explain the proof?  Are you saying that sign 
language isn't natural language?  That would be patently false. (see 
http://crl.ucsd.edu/signlanguage/)

> I have already outlined the process of self-reflectivity:
> Internal patterns
> are translated into language. 

So you're agreeing that language is necessary for self-reflectivity. In your 
models, then, self-reflectivity is not important to AGI, since you say AGI can 
be realized without language, correct?

> This is routed to the
> brain's own input
> regions. You *hear* your own thoughts and have the illusion
> that you think
> linguistically.
> If you can speak two languages then you can make an easy
> test: Try to think
> in the foreign language. It works. If language would be
> inherently involved
> in the process of thoughts then thinking alternatively in
> two languages
> would cost many resources of the brain. In fact you need
> just use the other
> module for language translation. This is a big hint that
> language and
> thoughts do not have much in common.
> 
> -Matthias

I'm not saying that language is inherently involved in thinking, but it is 
crucial for the development of *sophisticated* causal models of the world - the 
kind of models that can support self-reflectivity. Word-concepts form the basis 
of abstract symbol manipulation. 

That gets the ball rolling for humans, but the conceptual framework that 
emerges is not necessarily tied to linguistics, especially as humans get 
feedback from the world in ways that are not linguistic (scientific 
experimentation/tinkering, studying math, art, music, etc). 

Terren

__
Do You Yahoo!?
Tired of spam?  Yahoo! Mail has the best spam protection around 
http://mail.yahoo.com 


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


Re: AW: AW: [agi] Re: Defining AGI

2008-10-19 Thread Mark Waser
I don't think that learning of language is the entire point. If I have 
only
learned language I still cannot create anything. A human who can 
understand

language is by far still no good scientist. Intelligence means the ability
to solve problems. Which problems can a system solve if it can nothing 
else

than language understanding?


Many or most people on this list believe that learning language is an 
AGI-complete task.  What this means is that the skills necessary for 
learning a language are necessary and sufficient for learning any other 
task.  It is not that language understanding gives general intelligence 
capabilities, but that the pre-requisites for language understanding are 
general intelligence (or, that language understanding is isomorphic to 
general intelligence in the same fashion that all NP-complete problems are 
isomorphic).  Thus, the argument actually is that a system that "can do 
nothing else than language understanding" is an oxymoron.


*Any* human who can understand language beyond a certain point (say, that of 
a slightly sub-average human IQ) can easily be taught to be a good scientist 
if they are willing to play along.  Science is a rote process that can be 
learned and executed by anyone -- as long as their beliefs and biases don't 
get in the way.



Deaf people speak in sign language, which is only different from spoken
language in superficial ways. This does not tell us much about language
that we didn't already know.
But it is a proof that *natural* language understanding is not necessary 
for

human-level intelligence.


This is a bit of disingenuous side-track that I feel that I must address. 
When people say "natural language", the important features are extensibility 
and ambiguity.  If you can handle one extensible and ambiguous language, you 
should have the capabilities to handle all of them.  It's yet another 
definition of GI-complete.  Just look at it as yet another example of 
dealing competently with ambiguous and incomplete data (which is, at root, 
all that intelligence is).


If you can speak two languages then you can make an easy test: Try to 
think
in the foreign language. It works. If language would be inherently 
involved

in the process of thoughts then thinking alternatively in two languages
would cost many resources of the brain. In fact you need just use the 
other

module for language translation. This is a big hint that language and
thoughts do not have much in common.


One thought module, two translation modules -- except that all the 
translation modules really are is label appliers and grammar re-arrangers. 
The heavy lifting is all in the thought module.  The problem is that you are 
claiming that language lies entirely in the translation modules while I'm 
arguing that a large percentage of it is in the thought module.  The fact 
that the translation module has to go to the thought module for 
disambiguation and interpretation (and numerous other things) should make it 
quite clear that language is *not* simply translation.


Further, if you read Pinker's book, you will find that languages have a lot 
more in common than you would expect if language truly were independent of 
and separate from thought (as you are claiming).  Language is built on top 
of the thinking/cognitive architecture (not beside it and not independent of 
it) and could not exist without it.  That is why language is AGI-complete. 
Language also gives an excellent window into many of the features of that 
cognitive architecture and determining what is necessary for language also 
determine what is in that cognitive architecture.  Another excellent window 
is how humans perform moral judgments (try reading Marc Hauser -- either his 
numerous scientific papers or the excellent Moral Minds).  Or, yet another, 
is examining the structure of human biases.




- Original Message - 
From: "Dr. Matthias Heger" <[EMAIL PROTECTED]>

To: 
Sent: Sunday, October 19, 2008 2:52 PM
Subject: AW: AW: AW: [agi] Re: Defining AGI




Terren wrote:



Isn't the *learning* of language the entire point? If you don't have an

answer for >how an AI learns language, you haven't solved anything.  The
understanding of >language only seems simple from the point of view of a
fluent speaker. Fluency >however should not be confused with a lack of
intellectual effort - rather, it's a >state in which the effort involved 
is

automatic and beyond awareness.

I don't think that learning of language is the entire point. If I have 
only
learned language I still cannot create anything. A human who can 
understand

language is by far still no good scientist. Intelligence means the ability
to solve problems. Which problems can a system solve if it can nothing 
else

than language understanding?


Einstein had to express his (non-linguistic) internal insights in natural

language >and in m

AW: AW: AW: [agi] Re: Defining AGI

2008-10-19 Thread Dr. Matthias Heger

Terren wrote:


>Isn't the *learning* of language the entire point? If you don't have an
answer for >how an AI learns language, you haven't solved anything.  The
understanding of >language only seems simple from the point of view of a
fluent speaker. Fluency >however should not be confused with a lack of
intellectual effort - rather, it's a >state in which the effort involved is
automatic and beyond awareness.

I don't think that learning of language is the entire point. If I have only
learned language I still cannot create anything. A human who can understand
language is by far still no good scientist. Intelligence means the ability
to solve problems. Which problems can a system solve if it can nothing else
than language understanding?

>Einstein had to express his (non-linguistic) internal insights in natural
language >and in mathematical language.  In both modalities he had to use
his intelligence to >make the translation from his mental models. 

The point is that someone else could understand Einstein even if he haven't
had the same intelligence. This is a proof that understanding AI1 does not
necessarily imply to have the intelligence of AI1. 

>Deaf people speak in sign language, which is only different from spoken
language in >superficial ways. This does not tell us much about language
that we didn't already >know.

But it is a proof that *natural* language understanding is not necessary for
human-level intelligence.
 
>It is surely true that much/most of our cognitive processing is not at all
>linguistic, and that there is much that happens beyond our awareness.
However, >language is a necessary tool, for humans at least, to obtain a
competent conceptual >framework, even if that framework ultimately
transcends the linguistic dynamics that >helped develop it. Without language
it is hard to see how humans could develop self->reflectivity. 

I have already outlined the process of self-reflectivity: Internal patterns
are translated into language. This is routed to the brain's own input
regions. You *hear* your own thoughts and have the illusion that you think
linguistically.
If you can speak two languages then you can make an easy test: Try to think
in the foreign language. It works. If language would be inherently involved
in the process of thoughts then thinking alternatively in two languages
would cost many resources of the brain. In fact you need just use the other
module for language translation. This is a big hint that language and
thoughts do not have much in common.

-Matthias





---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


AW: AW: [agi] Re: Defining AGI

2008-10-19 Thread Dr. Matthias Heger
The process of translating patterns into language should be easier than the
process of creating patterns or manipulating patterns. Therefore I say that
language understanding is easy. 

 

When you say that language is not fully specified then you probably imagine
an AGI which learns language.

This is a complete different thing. Learning language is difficult as I
already have mentioned.

 

Language cannot be translated into meaning. Meaning is a mapping from a
linguistic string to patterns.

 

Email programs are not just point to point repeaters.

They receive data in a certain communication protocol. They translate these
data into an internal representation and store the data. And they can
translate their internal data into a linguistic representation to send the
data to another email client. This process  of communication is conceptually
the same as we can observe it with humans.

The word "meaning" was bad chosen from me. But brains do not transfer
meaning as well. They also just transfer  data. Meaning is a mapping. 

 

You *believe* that language cannot be separated from intelligence. I don't
and I have described a model which has a strict separation. We both have no
proof.

 

- Matthias

 

>>> 

Mark Waser [mailto:[EMAIL PROTECTED]  wrote



 

 

BUT!  This also holds true for language!  Concrete unadorned statements
convey a lot less information than statements loaded with adjectives,
adverbs, or even more markedly analogies (or innuendos or . . . ).

A child cannot pick up the same amount of information from a sentence that
they think that they understand (and do understand to some degree) that an
adult can.

Language is a knowledge domain like any other and high intelligences can use
it far more effectively than lower intelligences.

 

** Or, in other words, I am disagreeing with the statement that "the process
itself needs not much intelligence".

 

Saying that the understanding of language itself is simple is like saying
that chess is simple because you understand the rules of the game.

Godel's Incompleteness Theorem can be used to show that there is no upper
bound on the complexity of language and the intelligence necessary to pack
and extract meaning/knowledge into/from language.

 

Language is *NOT* just a top-level communications protocol because it is not
fully-specified and because it is tremendously context-dependent (not to
mention entirely Godellian).  These two reasons are why it *IS* inextricably
tied into intelligence.

 

I *might* agree that the concrete language of lower primates and young
children is separate from intelligence, but there is far more going on in
adult language than a simple communications protocol.

 

E-mail programs are simply point-to-point repeaters of language (NOT
meaning!)  Intelligences generally don't exactly repeat language but *try*
to repeat meaning.  The game of telephone is a tremendous example of why
language *IS* tied to intelligence (or look at the results of translating
simple phrases into another language and back -- "The drink is strong but
the meat is rotten").  Translating language to and from meaning (i.e. your
domain model) is the essence of intelligence.

 

How simple is the understanding of the above?  How much are you having to
fight to relate it to your internal model (assuming that it's even
compatible :-)?

 

I don't believe that intelligence is inherent upon language EXCEPT that
language is necessary to convey knowledge/meaning (in order to build
intelligence in a reasonable timeframe) and that language is influenced by
and influences intelligence since it is basically the core of the critical
meta-domains of teaching, learning, discovery, and alteration of your
internal model (the effectiveness of which *IS* intelligence).  Future AGI
and humans will undoubtedly not only have a much richer language but also a
much richer repertoire of second-order (and higher) features expressed via
language.

 

** Or, in other words, I am strongly disagreeing that "intelligence is
separated from language understanding".  I believe that language
understanding is the necessary tool that intelligence is built with since it
is what puts the *contents* of intelligence (i.e. the domain model) into
intelligence .  Trying to build an intelligence without language
understanding is like trying to build it with just machine language or by
using only observable data points rather than trying to build those things
into more complex entities like third-, fourth-, and fifth-generation
programming languages instead of machine language and/or knowledge instead
of just data points.

 

BTW -- Please note, however, that the above does not imply that I believe
that NLU is the place to start in developing AGI.  Quite the contrary -- NLU
rests upon such a large domain model that I believe that it is
counter-productive to start there.  I believe that we need to star with
limited domains and learn about language, internal models, and grounding
without britt

Re: AW: AW: [agi] Re: Defining AGI

2008-10-19 Thread Terren Suydam

--- On Sun, 10/19/08, Dr. Matthias Heger <[EMAIL PROTECTED]> wrote:
> Every email program can receive meaning, store meaning and
> it can express it
> outwardly in order to send it to another computer. It even
> can do it without
> loss of any information. Regarding this point, it even
> outperforms humans
> already who have no conscious access to the full meaning
> (information) in
> their brains.

Email programs do not store meaning, they store data. The email program has no 
understanding of the stuff it stores, so this is a poor analogy. 
 
> The only thing which needs much intelligence from the
> nowadays point of view
> is the learning of the process of outwardly expressing
> meaning, i.e. the
> learning of language. The understanding of language itself
> is simple.

Isn't the *learning* of language the entire point? If you don't have an answer 
for how an AI learns language, you haven't solved anything.  The understanding 
of language only seems simple from the point of view of a fluent speaker. 
Fluency however should not be confused with a lack of intellectual effort - 
rather, it's a state in which the effort involved is automatic and beyond 
awareness.

> To show that intelligence is separated from language
> understanding I have
> already given the example that a person could have spoken
> with Einstein but
> needed not to have the same intelligence. Another example
> are humans who
> cannot hear and speak but are intelligent. They only have
> the problem to get
> the knowledge from other humans since language is the
> common social
> communication protocol to transfer knowledge from brain to
> brain.

Einstein had to express his (non-linguistic) internal insights in natural 
language and in mathematical language.  In both modalities he had to use his 
intelligence to make the translation from his mental models. 

Deaf people speak in sign language, which is only different from spoken 
language in superficial ways. This does not tell us much about language that we 
didn't already know. 

> In my opinion language is overestimated in AI for the
> following reason:
> When we think we believe that we think in our language.
> From this we
> conclude that our thoughts are inherently structured by
> linguistic elements.
> And if our thoughts are so deeply connected with language
> then it is a small
> step to conclude that our whole intelligence depends
> inherently on language.

It is surely true that much/most of our cognitive processing is not at all 
linguistic, and that there is much that happens beyond our awareness. However, 
language is a necessary tool, for humans at least, to obtain a competent 
conceptual framework, even if that framework ultimately transcends the 
linguistic dynamics that helped develop it. Without language it is hard to see 
how humans could develop self-reflectivity. 

Terren

__
Do You Yahoo!?
Tired of spam?  Yahoo! Mail has the best spam protection around 
http://mail.yahoo.com 


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


AW: AW: [agi] Re: Defining AGI

2008-10-19 Thread Dr. Matthias Heger
What the computer makes with the data it receives depends on the information
of the transferred data, its internal algorithms and its internal data.
This is the same with humans and natural language.


Language understanding would be useful to teach the AGI with existing
knowledge already represented in natural language. But natural language
understanding suffers from the problem of ambiguities. These ambiguities can
be solved by having similar knowledge as humans have. But then you have a
recursive problem because first there has to be solved the problem to obtain
this knowledge.

Nature solves this problem with embodiment. Different people make similar
experiences since the laws of nature do not depend on space and time.
Therefore we all can imagine a dog which is angry. Since we have experienced
angry dogs but we haven't experienced angry trees we can resolve the
linguistic ambiguity of my former example and answer the question: Who was
angry?

The way to obtain knowledge with embodiment is hard and long even in virtual
worlds. 
If the AGI shall understand natural language it would be necessary that it
makes similar experiences as humans make in the real world. But this would
need a very very sophisticated and rich virtual world. At least, there have
to be angry dogs in the virtual world ;-) 

As I have already said I do not think the relation between utility of this
approach and the costs would be positive for first AGI.

- Matthias



>>>
William Pearson [mailto:[EMAIL PROTECTED] wrote


If I specify in a language to a computer that it should do something,
it will do it no matter what (as long as I have sufficient authority).
Telling a human to do something, e.g. wave your hands in the air and
shout, the human will decide to do that based on how much it trusts
you and whether they think it is a good idea. Generally a good idea in
a situation where you are attracting the attention of rescuers,
otherwise likely to make you look silly.

I'm generally in favour of getting some NLU into AIs mainly because a
lot of the information we have about the world is still in that form,
so an AI without access to that information would have to reinvent it,
which I think would take a long time. Even mathematical proofs are
still somewhat in natural language. Other than that you could work on
machine language understanding where information was taken in
selectively and judged on its merits not its security credentials.

  Will Pearson





---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


AW: AW: [agi] Re: Defining AGI

2008-10-19 Thread Dr. Matthias Heger
The process of outwardly expressing meaning may be fundamental to any social
intelligence but the process itself needs not much intelligence.

Every email program can receive meaning, store meaning and it can express it
outwardly in order to send it to another computer. It even can do it without
loss of any information. Regarding this point, it even outperforms humans
already who have no conscious access to the full meaning (information) in
their brains.

The only thing which needs much intelligence from the nowadays point of view
is the learning of the process of outwardly expressing meaning, i.e. the
learning of language. The understanding of language itself is simple.

To show that intelligence is separated from language understanding I have
already given the example that a person could have spoken with Einstein but
needed not to have the same intelligence. Another example are humans who
cannot hear and speak but are intelligent. They only have the problem to get
the knowledge from other humans since language is the common social
communication protocol to transfer knowledge from brain to brain.

In my opinion language is overestimated in AI for the following reason:
When we think we believe that we think in our language. From this we
conclude that our thoughts are inherently structured by linguistic elements.
And if our thoughts are so deeply connected with language then it is a small
step to conclude that our whole intelligence depends inherently on language.

But this is a misconception.
We do not have conscious control over all of our thoughts. Most of the
activities within our brain we cannot be aware of when we think.
Nevertheless it is very useful and even essential for human intelligence
being able to observe at least a subset of the own thoughts. It is this
subset which we usually identify with the whole set of thoughts. But in fact
it is just a tiny subset of all what happens in the 10^11 neurons.
For the top-level observation of the own thoughts the brain uses the learned
language. 
But this is no contradiction to the point that language is just a
communication protocol and nothing else. The brain translates its patterns
into language and routes this information to its own input regions.

The reason why the brain uses language in order to observe its own thoughts
is probably the following:
If a person A wants to communicate some of its patterns to a person B then
it has solve two problems:
1. How to compress the patterns?
2. How to send the patterns to the person B?
The solution for the two problems is language.

If a brain wants to observe its own thoughts it has to solve the same
problems.
The thoughts have to be compressed. If not you would observe every element
of your thoughts and you would end up in an explosion of complexity. So why
not use the same compression algorithm as it is used for communication with
other people? That's the reason why the brain uses language when it observes
its own thoughts. 

This phenomenon leads to the misconception that language is inherently
connected with thoughts and intelligence. In fact it is just a top level
communication protocol between two brains and within a single brain.

Future AGI will have a much broader bandwidth and even for the current
possibilities of technology human language would be a weak communication
protocol for its internal observation of its own thoughts.
 
- Matthias

>>>
Terren Suydam wrote:


Nice post.

I'm not sure language is separable from any kind of intelligence we can
meaningfully interact with.

It's important to note (at least) two ways of talking about language:

1. specific aspects of language - what someone building an NLP module is
focused on (e.g. the rules of English grammar and such).

2. the process of language - the expression of the internal state in some
outward form in such a way that conveys shared meaning. 

If we conceptualize language as in #2, we can be talking about a great many
human activities besides conversing: playing chess, playing music,
programming computers, dancing, and so on. And in each example listed there
is a learning curve that goes from pure novice to halting sufficiency to
masterful fluency, just like learning a language. 

So *specific* forms of language (including the non-linguistic) are not in
themselves important to intelligence (perhaps this is Matthias' point?), but
the process of outwardly expressing meaning is fundamental to any social
intelligence.

Terren




---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


AW: AW: [agi] Re: Defining AGI

2008-10-18 Thread Dr. Matthias Heger
If you can build a system which understands human language you are still far
away from AGI.
Being able to understand the language of someone else does no way imply to
have the same intelligence. I think there were many people who understood
the language of Einstein but they were not able to create the same.

Therefore it is better to build a system which is able to create things
instead of only being able to understand things.

Language understanding is easy for nearly every little child.
But mathematics is hard for most people and for computers today.

If you say mathematics is too narrow then this implies either the world
cannot be modeled by mathematics or the world itself is too narrow.

- Matthias


>>>
Andi wrote:

Matthias wrote:

> There is no big depth in the language. There is only depth in the
> information (i.e. patterns) which is transferred using the language.

This is a claim with which I obviously disagree.  I imagine linguists
would have trouble with it, as well.

And goes on to conclude:
> Therefore I think, the ways towards AGI mainly by studying language
> understanding will be very long and possibly always go in a dead end.

It seems similar to my point, too.  That's really what I see as a
definition of AI-complete as well.  If you had something that could
understand language, it would have to be able to do everything that a full
intelligence would do.  It seems there is a claim here that one could have
something that understands language but doesn't have anything else
underneath it.  Or maybe that language could just be something separated
away from some real intelligence lying underneath, and so studying just
that would be limiting.  And that is a possibility.  There are certainly
specific "language modules" that people have to assist them with their use
of language, but it does seem like intelligence is more integrated with
it.

And somebody suggested that it sounds like Matthias has some kind of
mentalese hidden down in there.  That spoken and written language is not
interesting because it is just a rearrangement of whatever internal
representation system we have.  That is a fairly bold claim, and has
logical problems like a homunculus.  It is natural for a computer person
to think that mental things can be modifiable and transmittable strings,
but it would be hard to see how that would work with people.

Also, I get a whole sense that Matthias is thinking there might be some
small general domain where we can find a shortcut to AGI.  No way. 
Natural language will be a long, hard road.  Any path going to a general
intelligence will be a long, hard road.  I would guess.  It still happens
regularly that people will say they're cooking up the special sauce, but I
have seen that way too many times.

Maybe I'm being too negative.  Ben is trying to push this list to being
more positive with discussions about successful areas of development.  It
certainly would be nice to have some domains where we can explore general
mechanism.  I guess the problem a see with just math as a domain is that
the material could get too narrow a focus.  If we want generality in
intelligence, I think it is helpful to be able to have a possibility that
some bit of knowledge or skill from one domain could be tried in a
different area, and it is my claim that general language use is one of the
few areas where that happens.


andi






---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com