One more thing.  If an AGI program worked, then it would effectively be able to 
operate with more than one meta-cognitive level regardless of whether it was 
specifically coded that way.  And even if it were coded to work with only one 
meta-cognitive level, it would still effectively work on more than one level by 
the nature of its specific uses (the runtime instantiation of the code).  So in 
a sense my argument for multiple levels of meta-cognition is not based on a 
necessary specification (even if my reasoning in these messages was correct).

But look at what this is saying: the premise here contains the desired 
conclusion.  If the AGI program worked...  I am saying that in order for a 
program to work as a true AGI program, some process of meta-cognition has to be 
able to work with specific types of derived products.  The program has to be 
able to develop these derived products and turn combinations of simple derived 
products into types, so that it can try variations of these types in similar 
circumstances.  So the methods that it develops to analyze and respond to 
the IO data environment are going to be creative, and it will be able to refer 
to them symbolically.  It will take kinds of events and refer to them as 
symbols, and it will then be able to use these symbolic references in order to 
apply higher abstractions to them.  I am saying that this is the essence of 
creative general intelligence, and since it is, it is very likely going to be 
the secret sauce that can develop insightful ways to analyze data.

So, sure, if a carefully designed system were able to achieve true general 
intelligence, then, if this theory is right, it would be able to do what I am 
saying is necessary for true general intelligence regardless of how the 
programmer thought about it.  But I think that if this theory is right, then 
the programmer who thinks about these things explicitly will be more likely to 
make some progress with them.  The parts of the reasoning processes have to be 
used by meta-cognitive processes explicitly and symbolically.  However, there 
is no reason for the details of these trial-and-error processes to come to the 
surface attention of the mind unless they are recognized as being important in 
some way.
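To make the idea a little more concrete, here is a toy sketch (entirely my own
invention for illustration; none of these class or function names come from any
actual system): simple derived products are combined into named types that a
meta-level process can refer to symbolically and vary in similar circumstances.

```python
# Illustrative sketch only: combinations of simple derived products are
# promoted to named "types", and a meta-level process can refer to those
# types symbolically and try variations of them.

from dataclasses import dataclass

@dataclass
class DerivedProduct:
    name: str      # the symbolic handle the meta-level can use
    parts: tuple   # the simpler products it was built from

class MetaLevel:
    def __init__(self):
        self.types = {}  # symbol -> derived product

    def define_type(self, name, parts):
        """Promote a combination of simple products to a reusable type."""
        self.types[name] = DerivedProduct(name, tuple(parts))
        return name      # the symbol, usable by higher abstractions

    def variation(self, name, replace, with_part):
        """Try a variant of a known type in a similar circumstance."""
        base = self.types[name]
        new_parts = tuple(with_part if p == replace else p
                          for p in base.parts)
        return self.define_type(f"{name}/var", new_parts)

meta = MetaLevel()
edge = meta.define_type("edge", ["contrast", "gradient"])
variant = meta.variation("edge", "gradient", "orientation")
```

The point of the sketch is only that the meta-level manipulates the symbols
("edge", "edge/var"), not the raw data the products were derived from.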
 
So I am making the ego-argument that while many of us have thought about these 
sorts of things before, none of us have thought about them in just this way 
until now.  Since a meta-cognitive process will be instanced, there may be many 
executive functions (meta-cognition) operating, and the most conscious 
executive function does not need to know exactly what the creative instances 
are doing at any one moment.  This means, for example, that many good 
preliminary analytical functions may be independently derived by the program 
rather than be fully specified by the programmer.
Jim Bromer
 
From: [email protected]
To: [email protected]
Subject: RE: [agi] A Very Simple AGI Project
Date: Sun, 21 Jul 2013 13:29:19 -0400

I was trying to say that in my paradigm the parts of the acquired (learned) 
objects of analysis, recognition, and response have to be treated as being 
potentially distinct, so that the run-time (or acquired-learned) meta-cognitive 
functions can use them explicitly.  However, these meta-level cognitive 
functions may not be available directly to the highest level of meta-awareness, 
so the program would not be able to directly report on them.  So I guess I am 
saying that there must be different depths of meta-awareness in my paradigm as 
well.
 
Of course other theories include various functions (or 
partially-defined functions) which can use the data from an observation, or 
derived data, to play various roles, such as being used in a comparison (which 
may itself then be used in different ways).  But my conclusion is that since no 
one has ever discussed this sort of thing with me, they must either have 
concluded that the implied distinctions and utilizations of their theories were 
adequate (because they did not have to consider this as being a part of a 
trial-and-error meta-cognition), or they had never thought of these parts of 
processes as explicit *types* of output that some mechanism of analysis might 
create and then use explicitly and creatively.  Either way it comes to the same 
thing: for some reason people just have not thought of the various internal 
processes as being composed of typed parts which could be used in explicit 
strings or multi-dimensional spaces of meta-cognitive processes.
 
There are many cases where these different parts of the intelligent process 
simply cannot be combined without explicitly recognizing the significance and 
roles of the various parts.
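A small sketch of what an explicit string of typed meta-cognitive steps might
look like (again purely illustrative; the role names are hypothetical): each
internal output carries a type tag, so the next step can check which parts it
may legally combine, and in what role, rather than combining blindly.

```python
# Illustrative only: outputs of internal processes carry explicit type
# tags, so a chain ("string") of meta-cognitive steps can recognize the
# roles of the parts it is combining.

def observe(raw):
    return {"type": "observation", "value": raw}

def compare(a, b):
    # combination is only legal for parts whose roles are recognized
    if a["type"] != "observation" or b["type"] != "observation":
        raise TypeError("compare needs two observation-typed parts")
    return {"type": "comparison", "value": a["value"] - b["value"]}

def judge(c):
    if c["type"] != "comparison":
        raise TypeError("judge needs a comparison-typed part")
    return {"type": "judgement", "value": c["value"] > 0}

# an explicit string of meta-cognitive steps:
result = judge(compare(observe(5), observe(3)))
```

Feeding an observation straight into judge, or a judgement into compare, fails
by design; that refusal is the "explicit recognition of roles" in miniature.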
Jim Bromer
 
From: [email protected]
To: [email protected]
Subject: RE: [agi] A Very Simple AGI Project
Date: Sun, 21 Jul 2013 11:03:10 -0400

I cannot give an easy example of the problem where Bayesian systems are not 
able to distinguish between situations where information from different sources 
can be combined and situations where it can't.  I am not that familiar with the 
language of statistics, so I can't use simple references that might point you 
in the direction of my ideas.
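A toy numerical sketch may still convey the flavor of the problem (the numbers
are hypothetical and mine, not anything from the statistics literature).
Suppose two "independent" evidence streams are in fact copies of one underlying
measurement.  A Bayesian update that treats them as independent double-counts
the evidence, and nothing in the formula itself detects the duplication:

```python
# Toy illustration: combining two evidence sources that are secretly the
# same measurement.  Treating them as independent double-counts the
# evidence; Bayes' rule, in itself, cannot detect the composite source.

def posterior(prior, likelihoods):
    """Posterior P(H | evidence) assuming the likelihoods are independent."""
    p_h, p_not_h = prior, 1.0 - prior
    for l_h, l_not_h in likelihoods:
        p_h *= l_h
        p_not_h *= l_not_h
    return p_h / (p_h + p_not_h)

PRIOR = 0.5
SENSOR = (0.8, 0.3)  # P(e | H), P(e | not H) -- hypothetical numbers

# One genuine observation:
correct = posterior(PRIOR, [SENSOR])

# The same observation arriving through two "different" channels,
# naively combined as if the channels were independent:
naive = posterior(PRIOR, [SENSOR, SENSOR])

print(f"correct posterior: {correct:.3f}")  # ~0.727
print(f"naive posterior:   {naive:.3f}")    # ~0.877 -- overconfident
```

The overconfidence has to be caught by something outside the update rule,
which is the kind of "other way" of finding composition I am pointing at.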

When a human being uses applied statistics, he can combine his general 
knowledge of the world with his specialized knowledge of the science of 
statistics to find ways to make his research more effective and insightful.  
But in statistics-based AGI we are asking our automated computer programs to 
use statistical analysis to effectively learn how to create better models of 
reality.  The belief that this makes sense as a basis for AGI strikes me as 
absurd.  So while Bayesian methods, and weighted reasoning in general, are 
almost certainly important tools for AGI, they do not constitute a sound basis 
- in themselves - for methods that can produce self-improving insights about 
the world.  An analogous criticism may be applied to any kind of mathematical 
or logical method.  I am not saying that math and logic have no place in AGI, 
or anything like that; I am saying that we have to come up with other 
(effective) algorithms to deal with the problem of getting automated learning 
systems to use these algorithms effectively.

Suppose that some composite source of data, which was expressed in or could be 
converted into an appropriate Bayesian form, was used to derive theories about 
the world.  Without knowing that the data source was a composite, there is 
little that the Bayesian formula (theoretical or abstract) could do, in itself, 
to detect it.  So we have to rely on other ways to find the composition of 
extracted streams of data which superficially seem to be integral.  But there 
are also other kinds of problems.  Most AGI data is not in an appropriate 
Bayesian form, so here the Bayesian-AGI guys would typically find a substitute 
characterization of a situation so that the data could be expressed in Bayesian 
forms.  An analogous criticism can be directed at any mathematical or logical 
form used as a presumptive AGI method.  I am amazed at how far AI has advanced 
using these clunky models, but it is clear to me that the fundamental failure 
of narrow methods to serve as the base for AGI can be found right here.

When we try to design an AGI program we are not locked into dealing with the 
IO data environment in just one way.  We can - and do - try various methods to 
enhance that data.  Most of the ways in common use are direct 
recharacterizations of the data; they are not explicitly based on 
program-acquired theoretical recharacterizations.  So if you are dealing with 
images you might try to increase the contrast, or use a Gaussian method to try 
to detect the edges of the shapes in the image.  Or, alternatively, you might 
imagine a neural network which is trained to detect edges.  These are not 
acquired theoretical recharacterizations, because they do not rely on the AGI 
program to create its own theories (with or without outside influences) which 
it might apply to the problem.

When you start thinking of the problem from this basis, a number of familiar 
parts of the problem-solving algorithms start to change.  The programmer does 
not have a clear distinction between the parts of the acquired theory, since 
all the possible theories that might be acquired cannot be foreseen.  And 
particular data from an extraction taken from the IO data environment might be 
applied directly to parts of the extraction in one acquired (or learned) 
method but only indirectly in another acquired method.

Suppose that an AGI program developed a 'theory' that it should use different 
video analysis methods when the light from the camera is bright and when it is 
dim.  This distinction might be derived from the static parts of the 
background of the scene as seen at different times of the day.  So the overall 
light from the static background might not be used as the direct object of 
further analysis (in this particular part of the algorithm) but as a 
conditional.  Now suppose the program subsequently realizes that this method 
does not always work.  Perhaps some of the details of some image objects go 
into shadow and then come out, and the program might act on the observation of 
this apparent change to investigate it further.  At this point, parts of the 
static background might be used in a conditional and parts as comparative 
objects to compare with the target object (to compare shadows, for example).

Many people have pointed out that they do not think that babies create 
theories.  Perhaps the phrase "acquired theoretical recharacterization" does 
not accurately describe the kind of thing that I am thinking about.  I believe 
that human beings develop implicit theories, or theory-like objects of 
thought.  It was once said that neural networks work the way the mind works, 
and you might say that neural networks develop implicit-theory-like relations. 
I believe that Bayesian networks are also able to develop implicit-theory-like 
relations.  What is different in my theory of AGI is that the parts of the 
theory-like object and the implementation of the theory-like functions must in 
some cases be distinct and open to precise activation by the artificial mind, 
even if this internal operation is not fully available to the mind at a level 
of meta-awareness.  In this model, values or references may in some cases be 
combined directly, they might be combined indirectly, they might only be 
combined as distinct parts of a thought-object (or thought-like algorithm), or 
they might not be combinable at all.

To the best of my memory I have never had this conversation with an enthusiast 
of weighted reasoning.  This lack of interest might be because I am working on 
an idea which is still new enough to be a little elusive, or it might be due 
directly to the mistaken belief that weighted reasoning is the solution to the 
inadequacy of discrete reasoning paradigms.
Jim Bromer
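The camera 'theory' described above can be sketched in a few lines (thresholds
and function names are hypothetical, chosen only for illustration): in the
first learned method the background brightness serves purely as a conditional
that selects an analysis method, while in the refined method parts of the
background also serve as comparative objects for shadow comparison.

```python
# Sketch of the camera example: the same data (static-background
# brightness) plays different roles in two learned "theories".

BRIGHT_THRESHOLD = 0.6  # hypothetical value

def analyze_bright(frame):
    return ("bright-method", frame)

def analyze_dim(frame):
    return ("dim-method", frame)

def first_theory(frame, background_brightness):
    # brightness acts only as a conditional, not as an object of analysis
    if background_brightness > BRIGHT_THRESHOLD:
        return analyze_bright(frame)
    return analyze_dim(frame)

def refined_theory(frame, background_patches, target_patch):
    # parts of the background now serve as comparative objects:
    # is the target merely in shadow relative to nearby background?
    nearest = min(background_patches, key=lambda b: abs(b - target_patch))
    in_shadow = target_patch < nearest
    return ("shadow" if in_shadow else "lit", nearest)
```

The structural point is that the refined theory could not be obtained by
tuning the first one; the background data had to be re-typed into a new role.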
 
 
 
Date: Sat, 20 Jul 2013 12:30:41 -0500
From: [email protected]
To: [email protected]
Subject: Re: [agi] A Very Simple AGI Project


  
    
  
  
On 7/20/2013 9:14 AM, Jim Bromer wrote:

    Text seems brittle because it was tried and it did not work.  But neither 
    did visual, robotic, or other sensor-based AGI.  If the brittleness 
    criticism were based on a lack of substantial achievement in spite of the 
    effort, then the brittleness criticism would have to be applied to all AGI 
    modalities.  Of course knowledge that is gathered only through text is 
    going to be brittle in the sense that it would not be able to achieve the 
    range of understanding that human beings can achieve, but the use of cell 
    phones or robotics is not going to create genuine human experiences either.

    The only conclusion, based on the acceptance of a general lack of 
    substantial advancement in the field, is that we do not have basic AGI 
    because computers cannot achieve general intelligence, or general 
    intelligence needs even more advanced hardware than we have, or there has 
    been something important missing in AGI research.

    Something that Bayesian enthusiasts never talk about in these discussion 
    groups is how a mostly independent learning system can make the 
    distinction between those kinds of situations where Bayesian methods can 
    be used to combine different sources of data and those cases where 
    different sources of weighted values can't be combined, or have to be 
    combined in a certain way.

I wonder if you could describe an example of what you mean here?

As you may know, the Microsoft Troubleshooter uses a Bayesian approach...


  
    
      

-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-58d57657
Powered by Listbox: http://www.listbox.com
