Re: [agi] It is more important how AGI works than what it can do.

2008-10-11 Thread Ben Goertzel
oops, I meant 1895 ... damn that dyslexia ;-) ... though while the other way
was funnier, it was less accurate!!

On Sat, Oct 11, 2008 at 8:55 AM, Ben Goertzel [EMAIL PROTECTED] wrote:


   I'm only pointing out something everybody here knows full well:
 embodiment in various forms has, so far, failed to provide any real help in
 cracking the NLU problem.  Might it in the future?  Sure.  But the key word
 there is might.



 To me, you sound like a guy in 1985 saying, "So far, wings have failed to
 provide any real help in cracking the human flight problem."

 ben g




-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"Nothing will ever be attempted if all possible objections must be first
overcome"   - Dr. Samuel Johnson



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=114414975-3c8e69
Powered by Listbox: http://www.listbox.com


Re: [agi] It is more important how AGI works than what it can do.

2008-10-11 Thread Ben Goertzel
   I'm only pointing out something everybody here knows full well:
 embodiment in various forms has, so far, failed to provide any real help in
 cracking the NLU problem.  Might it in the future?  Sure.  But the key word
 there is might.



To me, you sound like a guy in 1985 saying, "So far, wings have failed to
provide any real help in cracking the human flight problem."

ben g





Re: [agi] It is more important how AGI works than what it can do.

2008-10-11 Thread Brad Paulsen

Dave,

Well, I thought I'd described "how" pretty well.  Even "why."  See my recent 
conversation with Dr. Heger on this list.  I'll be happy to answer specific 
questions based on those explanations but I'm not going to repeat them 
here.  Simply haven't got the time.


Although I have not been asked to do so, I do feel I need to provide an ex 
post facto disclaimer.  Here goes:


I am aware of the approach being taken by Stephen Reed in the Texai 
project.  I am currently associated with that project as a volunteer.  What 
I have said previously (in this regard) is, however, my own interpretation 
and opinion insofar as what I have said concerned tactics or strategies 
that may be similar to those being implemented in the Texai project.  I'm 
pretty sure my interpretations and opinions are highly compatible with 
Steve's views even though they may not agree in every detail.  My comments 
should NOT, however, be taken as an official representation of the Texai 
project's tactics, strategies or goals.  End disclaimer.


I was asked by Dr. Heger to go into some of the specifics of the strategy I 
had in mind.  I honored his request and wrote quite extensively (for a list 
posting -- sorry 'bout that) about that strategy.  I have not argued, nor 
do I intend to argue, that I have an approach to AGI that is better, faster 
or more economical than approach X.  Instead, I have simply pointed out 
that NLU and embodiment problems have proven themselves to be extremely 
difficult (indeed, intractable to date).  I, therefore, on those grounds 
alone, believe (and it's just an OPINION, although I believe a 
well-reasoned one) that we will get to a human-beneficial AGI sooner (and, 
I guess, probably, therefore, cheaper) if we side-step those two proven 
productivity sinks.  For now.  End of argument.


I'm not trying to sell my AGI strategy or agenda to you or anyone else. 
Like many people on this list who have an opinion on these matters, I have 
a background as a practitioner in AI that goes back over twenty years. 
I've designed and written narrow-AI production (expert) system engines 
and been involved in knowledge engineering using those engines.  The 
results of my efforts have saved large corporations millions of dollars (if 
not billions, over time).  I can assure you that most of the humans who saw 
these systems come to life and out-perform their own human experts, were 
pretty sure I'd succeeded in getting a human into the box.  To them, it was 
already AGI.  I'd gotten a computer to do something only a human being 
(their employee) had theretofore been able to do.  And I got the computer 
to do it BETTER and FASTER.  Of course, these were mostly non-technical 
people who didn't understand the technology (in many cases had never even 
heard of it) and so, to them, there was a bit of magic involved.  We, 
here, of course know that was not the case.  While the stuff I built back 
in the 1980s and 1990s may not have been snazzy, whiz-bang AI with 
conversational robots and the whole Sci-Fi thing, it was still damn 
impressive and extremely human-beneficial.  No NLU.  No embodiment.
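As an aside, the core of the kind of production (expert) system engine described above can be sketched as a simple forward-chaining rule loop. This is only an illustrative toy under my own assumptions (the function name and the diagnostic rule set are invented here); real engines of that era added conflict resolution, Rete-style matching, and much more:

```python
# Minimal forward-chaining production-rule engine sketch.
# All names and rules here are illustrative, not from any actual engine.

def forward_chain(facts, rules):
    """Repeatedly fire rules whose conditions all hold until no new facts appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conclusion not in facts and all(c in facts for c in conditions):
                facts.add(conclusion)   # fire the rule, asserting its conclusion
                changed = True
    return facts

# Toy knowledge base: a fragment of a hypothetical diagnostic rule set.
rules = [
    ({"fever", "cough"}, "flu-suspected"),
    ({"flu-suspected", "short-of-breath"}, "refer-to-physician"),
]
print(forward_chain({"fever", "cough", "short-of-breath"}, rules))
```

Note that the second rule only becomes firable after the first one has fired, which is exactly the chaining behavior knowledge engineers exploited when building such systems.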


I don't claim to have a better way to get to AGI, just a less risky way. 
Based on past experience (in the field).  I have never intended to 
criticize any particular AGI approach.  I have not tried to show that my 
approach is conceptually superior to any other approach on any specific 
design point.  Indeed, I firmly believe that a multitude of vastly 
different approaches to this problem is a good thing.  At least initially.


As far as OCP's approach to embodiment is concerned, again it's neither the 
specifics nor the novelty of any particular approach that concerns me.  The 
efficacy of any approach to the embodiment problem can only be determined 
once it has been tried.  I'm only pointing out something everybody here 
knows full well: embodiment in various forms has, so far, failed to provide 
any real help in cracking the NLU problem.  Might it in the future?  Sure. 
 But the key word there is might.  When you go to the track to bet on a 
horse, do you look for the nag that's come in last or nearly last in every 
previous race that season and say to yourself, "Hey, I have a novel betting 
strategy and, regardless of what history shows (and the odds-makers say), I 
think I can make a killing here by betting the farm on that consistent 
loser!"  Probably not.  Why?  Because past performance, while not a 
guarantee of future performance, is really the only thing you have to go 
on, isn't it?


Cheers,
Brad

P.S.  Back in the early 1970's I once paid for a weekend of debauchery in 
Chicago from the proceeds of my $10 bet on a 20-to-1 horse at Arlington 
Park race track because I liked the name, "She's a Dazzler."  So it can 
happen.  The only question is: How much do you want to bet? ;-)



Re: [agi] It is more important how AGI works than what it can do.

2008-10-11 Thread David Hart
Hi Brad,

An interesting point of conceptual agreement between OCP and Texai designs
is that very specifically engineered bootstrapping processes are necessary
to push into AGI territory. Attempting to summarize using my limited
knowledge, Texai hopes to achieve that bootstrapping via reasoning over
commonsense knowledge which has been acquired via a combination of
expert-system data entry and unsupervised learning. OCP hopes to achieve
that bootstrapping via a combination of embodied interactive learning and
reasoning supplemented with narrow-AI NL components (wordnet, RelEx semantic
comprehension, RelEx NLgen, etc.). Of course, each project has its own
reasons for believing that their approach is the most tractable and the
least likely to become stuck in the AI-rabbitholes of the past.

I believe that surface comparisons of most modern AGI-oriented designs
cannot be used to make 'likelihood to proceed faster than others'
predictions with sufficient confidence to weave convincing arguments over an
email medium. So, making assertions about a design being 'better, faster,
cheaper, less risky, etc.' is okay, if those assertions are clearly
opinions (being backed up in writing is good, but that generally requires
paper- or book-length treatment) and agreements to disagree are arrived at
readily (without resorting to digressions about straw men to undermine
others' positions). The goal of this structure for this aspect of list
discussion is to create an atmosphere where everyone can learn as much as
possible about competing AGI designs. I think we're all saying effectively
the same thing here, so we should be able to agree to agree on this point.

IMO, it's more productive to highlight the reasons why your [insert AGI
design here] system might work, rather than obsessing on the flaws of other
designs. E.g., it's really not useful to repeatedly press the fact that past
[grossly insufficient] attempts at NLU and embodiment have been abject
failures, since *ALL* past attempts at AGI have fallen short of the mark,
including knowledge-based expert-system with reasoning-bolted-on approaches.
Furthermore, if all of science and engineering used the conservative logic
that "past performance [...] is really the only thing you have to go on,"
then we'd still be stuck with Victorian-level science and technology, since
all of the great leaps where past performance WASN'T the best indicator
would have been missed.

On to a positive argument for the OCP design: the simple explanation for why
"embodiment in various forms has, so far, failed to provide any real help in
cracking the NLU problem" is that all past attempts at embodiment have
been incredibly crude and grossly insufficient. The technologies that might
allow for fine realtime motor control and perception (including
proprioception, or even hacks like good inverse kinematics, and other
subtleties) in real or virtual settings have simply not yet been
sufficiently developed. Any roboticist or virtual world programmer can
confirm this assertion. One aspect of OCP development focuses on this issue
and is working with the realXtend developers to enhance OpenSim to provide
sufficient functionality to enable ever more sophisticated
perception-action-reasoning loops (we'd also like to work with robot
simulation and control software at some later stage); this work will likely
be written up in a paper sometime next year.
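For concreteness, the perception-action-reasoning loop mentioned above can be sketched generically as follows. This is a minimal skeleton under my own assumptions; the class, method, and field names are purely illustrative and are not OCP or realXtend APIs:

```python
# Generic perception-action-reasoning loop skeleton.
# All names here are illustrative assumptions, not OCP/realXtend code.

class Agent:
    def __init__(self):
        self.memory = []  # stands in for episodic memory

    def perceive(self, world_state):
        # stands in for vision, proprioception, etc.; here we just record it
        self.memory.append(world_state)
        return world_state

    def reason(self, percept):
        # placeholder policy: react to the most recent percept
        return "approach" if percept.get("object_visible") else "explore"

    def act(self, action, world):
        # stands in for motor control; here we just record the chosen action
        world["last_action"] = action
        return world

def run_loop(agent, world, steps=3):
    """Iterate perceive -> reason -> act against a toy world state."""
    for _ in range(steps):
        percept = agent.perceive(world)
        action = agent.reason(percept)
        world = agent.act(action, world)
    return world

world = run_loop(Agent(), {"object_visible": True})
print(world["last_action"])  # -> approach
```

The hard part Dave describes is not this control flow, which is trivial, but making the `perceive` and `act` stand-ins rich enough (fine realtime motor control, proprioception, inverse kinematics) that the `reason` step has something substantive to ground language in.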

-dave


Re: [agi] It is more important how AGI works than what it can do.

2008-10-06 Thread Brad Paulsen



Dr. Matthias Heger wrote:

Brad Paulsen wrote: "The question I'm raising in this thread is more one of
priorities and allocation of scarce resources.  Engineers and scientists
comprise only about 1% of the world's population.  Is human-level NLU
worth the resources it has consumed, and will continue to consume, in
the pre-AGI-1.0 stage? Even if we eventually succeed, would it be worth
the enormous cost? Wouldn't it be wiser to go with the strengths of
both humans and computers during this (or any other) stage of AGI
development?"

I think it is not so important what abilities our first AGIs will have. 
Human language would be a nice feature but it is not necessary.


Agreed.  And nothing in the above quote indicates otherwise.  I'm only 
arguing that we should not spend scarce resources now (or ever, really) 
trying to implement unnecessary features.  Both human-level NLU and 
human-like embodiment are, in my considered opinion, unnecessary for AGI 1.0.



It is more important how it works. We want to develop an intelligent
software which has the potential to solve very different problems in
different domains. This is the main idea of AGI.

Imagine someone thinks he has built an AGI. How can he convince the
community that it is in fact AGI and not AI? If he shows some
applications where his AGI works, then this is an indication for the G in
his AGI, but it is no proof at all.


I agree.

Even a Turing test would be no good

test because given n questions for the AGI I can never be sure whether
it can pass the test for further n questions. 


Ah, I see you've met my friend Mr. David Hume.


AGI is inherently a white-box
problem, not a black-box problem.



A chess playing computer is for many people a stunning machine. But we
know HOW it works and only(!) because we know the HOW we can evaluate
the potential of this approach for general AGI.

Brad, for this reason I think your question about whether the first AGI
should have the ability for human language or not is not so important.
If you can create software which has the ability to solve very
different problems in very different domains, then you have solved the
main problem of AGI.


Actually, I disagree with you here.  There is really no need to create a 
single AGI that can solve problems in multiple domains.  Most humans can't 
do that.  We can, more easily, I believe, coordinate a network of AGI 
agents that are, each, experts in a single domain.  These experts would be 
trained by human experts (as well as be able to learn from experience) and 
would be able to exchange information across domains as needed (need being 
determined, perhaps, by an expert supervisor AGI agent).  None of these 
agents, alone, would qualify as AGI (because they are narrow-domain 
experts).  The system in which these AGI experts function would, however, 
constitute true AGI.


Your reply makes it sound like I have a question about whether the first 
AGI should have human language ability.  I have no question about this. 
What I have is an informed opinion.  It is this: requiring solution of an 
AI-complete problem (human-level NLU) is the kiss of death for the success 
of any AGI concept.  If we let go of this strategy and concentrate on 
making non-human-like intelligences (using already-proven AI strategies 
that do not rely on NLU or embodiment and that leverage the strengths of 
the only non-human intelligence we have at present), I believe we will get 
to much more powerful AGI much sooner.


My concept of AGI holds that creating many different domain experts using 
proven, narrow-AI technology and, then, coordinating a vast network of 
these domain experts to identify/solve complex, cross-domain problems (in 
real-time and concurrently if necessary) will, in fact, result in a system 
that has a problem-solving capability greater than any single human being. 
 Without requiring human-level NLU or embodiment. It will be more robust 
(massive redundancy, such as that found in biologically evolved systems, is 
the key here) than any human being, be quicker than any human being, and be 
more accurate than any human being (or, especially, organization of human 
beings -- have you ever tried to get an error in your HMO medical records 
corrected?).
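The coordination scheme described above (narrow-domain experts plus a supervisor that routes between them) might be sketched roughly like this. The class names and the tiny lookup-table "knowledge" are purely illustrative assumptions standing in for trained narrow-AI systems:

```python
# Sketch of a coordinated network of narrow-AI domain experts.
# "DomainExpert" and "Supervisor" are illustrative names, not a real API.

class DomainExpert:
    """Stands in for a narrow-AI expert trained by human domain experts."""
    def __init__(self, domain, knowledge):
        self.domain = domain
        self.knowledge = knowledge  # toy stand-in: query -> answer table

    def solve(self, query):
        return self.knowledge.get(query)

class Supervisor:
    """Routes each query to the expert for its domain, so no single
    agent ever needs cross-domain generality on its own."""
    def __init__(self, experts):
        self.experts = {e.domain: e for e in experts}

    def solve(self, domain, query):
        expert = self.experts.get(domain)
        return expert.solve(query) if expert else None

gp = DomainExpert("medicine", {"fever+cough": "suspect flu"})
tax = DomainExpert("tax", {"capital gains": "schedule D"})
system = Supervisor([gp, tax])
print(system.solve("medicine", "fever+cough"))  # -> suspect flu
```

The claim in the paragraph above is that the *system* (supervisor plus experts, with cross-domain information exchange) is where the generality lives, even though each expert individually is narrow.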


For example, in the (near, I hope) future when you feel sick, you will sit 
down at your computer and call up a medical practitioner (GP) AGI agent (it 
doesn't really matter from where, but assume from the Internet).  This will 
be the same GP AGI agent anyone else anywhere in the world would call up 
(except, of course, each human is invoking a, localized, instance of the GP 
AGI agent).  Note that you're ahead of the game already.  You didn't have 
to wait two weeks to get an appointment (at 7 AM).  You 
didn't have to go to a remote location (the doctor's office or clinic). The 
visit to your doctor is already much less stressful, a medical benefit in 
and of itself.  Once the GP AGI responds, you will only have 

Re: [agi] It is more important how AGI works than what it can do.

2008-10-06 Thread David Hart
Brad,

Your post describes your position *very* well, thanks.

But, it does not describe *how* or *why* your AI system might achieve domain
expertise any faster/better/cheaper than other narrow-AI systems (NLU
capable, embodied, or otherwise) on its way to achieving networked-AGI. The
list would certainly benefit from any such exposition!

On a smaller point of clarification, the OCP 'embodied' design will not
attempt to simulate deep human behavior, but rather kluge "good enough"
humanesque and non-humanesque embodiment to provide *grounding* for good
enough solutions in a wide variety of situations (sub-adult performance in
some situations and better-than-genius performance in others), including NLU,
types of science that require massive information synthesis and creative
leaps in thinking (including in non-everyday-human contexts such as
nanoscopic quantum scales or macroscopic relativistic scales), plus other
interesting areas such as industry, economics, public policy, arts, etc.

-dave


