In one way you are right, Mike: as a programmer, I can see subtle similarities 
between what I am thinking about and the kinds of program structures that 
non-AGI programs contain.  And of course a text-based IO AGI program will only 
be able to learn about some of the things that can be described in text.  But a 
multi-modal program will likewise only be able to learn some of the things that 
can be 'described' in its various IO modalities.
 
But there is so much wrong with what you say.  Do you really know that much 
about architectural design?  I doubt it.  So can we conclude that you are not 
intelligent?  No, because we know that you are able to learn some things even 
though there are many things that you don't understand very well.
 
Jim Bromer
 
Date: Sun, 4 Aug 2013 18:29:16 +0100
Subject: Re: [agi] A Very Simple AGI Project
From: [email protected]
To: [email protected]

JB: My interest then is in finding the 'right way' to combine some narrow AI 
algorithms to produce AGI.


So if you combine a program that can design Lego houses and another that can 
design skyscrapers, you will get an architectural-designer program that can 
design any structure whatsoever, from mud huts to shanty huts to rock houses to 
columns of toys, like a human?

Why won't this "cognitive synergy" approach (aka Ben) ever work? Why is it 
simple-minded?

Oh, you're going to add an executive level to the program, something say that's 
going to think about "building" in general terms, and that'll make a difference?

Unfortunately, there isn't any program that can think in concepts like 
"building", "dwellings", "put together", "take apart" and truly generalise. 
That's the unsolved conceptualisation problem of AGI. And you're not offering a 
fraction of a new idea for how to solve it. 

So basically your approach adds up to "more of the same" (cognitive synergy) 
plus "magic sauce" (executive conceptual level to program). And nothing new.


On 4 August 2013 18:03, Jim Bromer <[email protected]> wrote:

> From: [email protected]
> No. AGI is unsolved. I am saying that there are machine learning
> algorithms that build on previous learning. For example, the LZW

> compression algorithm builds its dictionary by extending the words it
> has already learned. Of course this is not AGI because it meets none
> of the requirements of solving language, vision, hearing, robotics,

> art, and predicting human behavior all with human level ability. But
> you don't claim to be trying to solve these problems either.
> 
> But you haven't answered my questions. Exactly what will your program

> do? How will you demonstrate that your program "builds on previous
> learning"? What are the tests that you will give it?
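To make the 'builds on previous learning' point concrete, the LZW dictionary 
growth that Matt describes can be sketched in a few lines (a minimal sketch of 
the compression side only, not a full compressor):

```python
# Minimal sketch of LZW dictionary growth: every new entry extends a
# previously learned string by one symbol, i.e. it "builds on previous
# learning".
def lzw_compress(text):
    # Start with single-character "words" already known.
    dictionary = {chr(i): i for i in range(256)}
    next_code = 256
    current = ""
    output = []
    for symbol in text:
        candidate = current + symbol
        if candidate in dictionary:
            current = candidate                 # keep extending a known word
        else:
            output.append(dictionary[current])
            dictionary[candidate] = next_code   # learn the longer word
            next_code += 1
            current = symbol
    if current:
        output.append(dictionary[current])
    return output, dictionary

codes, learned = lzw_compress("abababab")
```

Each learned entry ("ab", then "aba", then "abab") is built directly on an 
earlier one, which is exactly the narrow sense of cumulative learning at issue.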
 
 
 
Complexity is too great a problem. 
1. What I have said is that if I could get my program to work with text-based 
IO, then it could be modified to work with vision, hearing, robotics, etcetera.  
The idea that it could predict human behavior with human-level ability comes 
from your definition, not mine.

2. What I have directly implied is that if I can't get it to work with 
text-based IO, then I wouldn't be able to get it to work with vision, hearing, 
robotics, etcetera.
 
However, to continue getting nearer to answering your questions: since there 
are numerous narrow AI algorithms that can excel at tasks that even we would 
not be able to do, it follows that if I were able to write a simple AGI program 
(one much less able than the majority of human beings), there would be numerous 
AI tasks that my program would not even be able to attempt.  So I would have 
trouble demonstrating that my simple AGI was better than other programs at the 
kinds of tasks they excelled at, even when the other program was narrow AI.  
 If simple narrow AI algorithms that input numerical values from an IO data 
environment and output values in a numerical range were mixed together in the 
right way, we could use the old equivalency argument to conclude that even the 
simplest narrow AI programs could hypothetically be combined to produce AGI.   
This argument suggests that machine learning algorithms are not much more than 
a step up from the earlier AI methods.  My interest, then, is in finding the 
'right way' to combine some narrow AI algorithms to produce AGI.
 But many people in this group agree that narrow AI programs are not AGI, and 
so my goal is to build a simple AGI program that could be pitted against other 
AGI programs.  Even then, another AGI program that included some powerful 
narrow methods could presumably beat my program in those particular challenges. 
 So I started describing conceptual integration and building on previous 
learning as a means to begin finding a working definition of what is needed for 
an AGI approach.  Your responses have given me great hope that I am as far 
along in my theories as I thought I was.

 
I started to answer your question by pointing to a number of ideas.  Think 
about the problem of an executive function (starting with meta-awareness).  
And look at the integration problem, where integration involves more than the 
subsequent numerical or logical inclusion of a simple narrow kind.  The 
executive 'function' won't be involved in every detail, but it will note some 
of the characteristics of the algorithms as they relate to the context of the 
application of the algorithm.  The idea that an executive function should have 
some greater awareness of the context of an application of an algorithm 
sometimes seems to me to be a mandatory requirement of AGI.  The executive 
function will not be able to unerringly tell whether an applied algorithm 
works or not, so it will have to rely on other methods like cross-analysis and 
attention to structural integration methods.  If an analysis leads to a strong 
integration with a model that the program has been building, then the 
application of the algorithm in that situation will be explored further.  But 
obviously, there has to be a guard against self-confirming artificial delusion.
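A toy sketch of what I mean by an executive layer, reduced to its bare bones 
(all of the names here are hypothetical, and 'cross-analysis' is reduced to 
checking whether independent methods agree):

```python
# Toy sketch of an executive layer: it is not involved in every detail of
# the narrow algorithms, but it notes the context each one was applied in
# and cross-checks their results before accepting any of them.
def loop_sum(xs):
    total = 0
    for x in xs:
        total += x
    return total

def executive(task, context, methods):
    log, results = [], []
    for name, method in methods:
        result = method(task)
        # Note characteristics of the algorithm as applied in this context.
        log.append({"method": name, "context": context, "result": result})
        results.append(result)
    # Crude cross-analysis: accept only when the independent methods agree,
    # as a guard against self-confirming artificial delusion.
    agreed = all(r == results[0] for r in results)
    return {"accepted": agreed,
            "result": results[0] if agreed else None,
            "log": log}

report = executive([1, 2, 3], "arithmetic over a small list of integers",
                   [("builtin_sum", sum), ("loop_sum", loop_sum)])
```

The point of the log is that the executive retains something about the context 
of each application, which is exactly what a bare narrow algorithm lacks.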
 So while a narrow AI algorithm could easily defeat my program at the tasks it 
was explicitly designed for, my program, if it works, will be able to place an 
idea that it has learned about in a greater context of ideas.  It will be 
able, if it works, to talk about the idea that it has just learned about.  
Since my program has got to be simple, everything would be at a very primitive 
level, but it would still be better than a calculator function, which does not 
know much of anything about the contexts it is used in.
 Even this simple example, describing how an AGI program with some executive 
meta-awareness of the context of an application of a function is different 
from a calculator function, is more subtle than it might first seem.   
Depending on what you call awareness or a meta-level executive function, one 
might argue (and wisely so, in my opinion) that since certain calculator 
functions can produce error remarks in certain situations, even calculator 
functions have some meta-level executive artificial 'awareness' of the 
application of the function.  The only difference is that the calculator 
'speaks' in, and is programmed in, different kinds of languages.  So the 
meta-awareness and executive functions of an AGI program must be shaped 
partially by previous learning, so that the program can learn to speak 
sensibly about the context and results of an application of a narrower AI 
method in ways that it hadn't before, and in ways that could be different for 
different deployments of the native program.
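The calculator comparison can be made concrete.  A calculator's 
'meta-awareness' amounts to a canned error remark; the kind I have in mind 
would describe the same event in terms drawn from the context of the 
application (the contextual phrasing is hard-coded here for the sketch, but in 
the real program it would come from previous learning):

```python
# A calculator's built-in "awareness" is a fixed error remark, the same
# in every context:
def calculator_divide(a, b):
    if b == 0:
        return "Error"
    return a / b

# An executive with some meta-awareness would report the same event in
# terms of the context the function was applied in:
def aware_divide(a, b, learned_context):
    if b == 0:
        return ("division by zero while " + learned_context +
                "; the result is undefined in this context")
    return a / b

remark = aware_divide(10, 0, "averaging an empty list of scores")
```

The difference is not the arithmetic; it is that the second function's remark 
varies with what the program knows about the situation it is being used in.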
 Of course, since my program has to be extremely simple, this 'sensible 
conversation' is going to be very primitive and open to criticism. 
Jim Bromer
 
> Date: Sat, 3 Aug 2013 19:53:31 -0400
> Subject: Re: [agi] A Very Simple AGI Project
> From: [email protected]
> To: [email protected]
>
> On Sat, Aug 3, 2013 at 6:07 PM, Jim Bromer <[email protected]> wrote:
> >
> > From: [email protected]
> >
> > "Building on previous learning" is kind of vague. Doesn't any machine 
> > learning algorithm do that? How will you test your program, measure the 
> > results, and compare it to other approaches to solving the same problems?
> > -----------
> >
> > Are you saying that there are machine learning algorithms that constitute 
> > working AGI programs?
>
> No. AGI is unsolved. I am saying that there are machine learning
> algorithms that build on previous learning. For example, the LZW
> compression algorithm builds its dictionary by extending the words it
> has already learned. Of course this is not AGI because it meets none
> of the requirements of solving language, vision, hearing, robotics,
> art, and predicting human behavior all with human level ability. But
> you don't claim to be trying to solve these problems either.
>
> But you haven't answered my questions. Exactly what will your program
> do? How will you demonstrate that your program "builds on previous
> learning"? What are the tests that you will give it?
>
> -- 
> -- Matt Mahoney, [email protected]
>
> -------------------------------------------
> AGI
> Archives: https://www.listbox.com/member/archive/303/=now
> RSS Feed: https://www.listbox.com/member/archive/rss/303/24379807-f5817f28
> Modify Your Subscription: https://www.listbox.com/member/?&;
> Powered by Listbox: http://www.listbox.com