No one has yet created an AGI program (a holistic program capable of 
true learning), so the argument that a theory is adequate simply because it is 
holistic does not necessarily hold.  Perhaps, with other 
advances in computer science, these methods might turn out to be adequate.  
I might be wrong.
 
My point of view is that hyper-abstract methods tend to be methods in 
which some overly broad technique is used in the hope that it would become 
feasible given enough computing power and a few details that remain to be worked 
out.  But any AGI paradigm (one that is holistic and capable of some genuine 
learning) could be said to be hyper-abstract in the sense that I meant it.  
 
So then I have to find a better way to construct my criticism of the supposed 
adequacy of some hyper-abstract method.  The only way I can do that is by 
explaining that a hyper-abstract method that is too limited (like one relying 
solely on weighted reasoning) will produce limited results.  What we have seen 
so far is that even when someone has an AGI program that works really well on 
some species of problems, it always turns out to be inadequate to demonstrate 
recognizable, continued general learning.  (The AGI programs that work are never 
incrementally scalable.  They can work on sub-classes of problems, but they are 
not capable of the conceptual integration that is so obvious in human beings.)
 
So I do agree that we use theories which are approximately correct.  But I 
disagree, for example, with a theory that describes all learning as a reduction 
of the range of error of some numerical approximation that represents some 
knowledge.
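To make concrete the kind of view I am disagreeing with, here is a hypothetical 
sketch (my own illustration, not taken from any particular system) of "learning" 
treated purely as numerical error reduction: a running-average estimator whose 
approximation error shrinks as samples accumulate, and nothing more.

```python
import random

# Illustration only: "learning" as shrinking the numerical error of an
# estimate.  The hidden "knowledge" is a single number; the learner just
# averages noisy samples, so its error range narrows as data accumulates.
random.seed(0)
true_value = 3.7          # the knowledge to be learned (hypothetical)
estimate, n = 0.0, 0

errors = []
for _ in range(1000):
    sample = true_value + random.gauss(0, 1.0)   # noisy observation
    n += 1
    estimate += (sample - estimate) / n          # running-mean update
    errors.append(abs(estimate - true_value))

# The error of the approximation shrinks steadily, but nothing resembling
# conceptual integration ever happens -- it is pure numerical refinement.
print(round(errors[-1], 4))
```

On this picture, "learning" is complete once the error range is small enough; 
the objection above is that this leaves out everything else.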
 
Does your theory see all learning as some kind of numerical evaluation problem? 
If so, then I would say that even though your theory might be 
demonstrably general (it can be applied to a variety of problems), it is not 
adequately general (it does not demonstrate continued learning beyond those 
sub-species of problems on which it works well).
 
It may turn out that, with enough computing power, a great many AGI paradigms 
could be finished off to demonstrate true general learning that can actually get 
off the ground.  But even if that is the case, it would still probably turn out 
that something more is needed to make the demonstration work.  I am 
interested in discovering what those missing parts are.
 
Jim Bromer
 
Date: Sun, 30 Jun 2013 14:08:59 -0500
Subject: Re: [agi] Probably Approximately Correct: Nature's Algorithms for 
Learning and Prospering in a Complex World [By Leslie Valiant]
From: [email protected]
To: [email protected]

There are two approaches to programming AI:
1. Reductionist AI, in which the programmer hardwires a reductionist solution 
to a specific kind of problem. This approach is brittle when you change the 
kind of problem to be solved. This is what Narrow AI is all about.

2. Holistic AI, in which the meta-programmer meta-programs a learning network 
capable of adapting its topology to fit the causal hyper-geometries of all 
kinds of problems. In other words, the Holistic AI system does the reduction 
process the programmer is supposed to do in the Narrow AI approach. This is 
what General AI is all about. Holistic AI is the approach of Monica Anderson 
and me.

The book "Probably Approximately Correct: Nature's Algorithms for Learning and 
Prospering in a Complex World" is about Holistic AI.
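Since the book's title names Valiant's PAC ("probably approximately correct") 
framework, a concrete toy instance may help. The sketch below is my own 
illustration, not code from the book: learning a hidden threshold on [0, 1] 
from random labeled examples, using the standard sample-complexity bound.

```python
import math
import random

# A minimal PAC-learning sketch, in the spirit of Valiant's framework
# (illustration only).  Target concept: points x <= t are positive, for
# a hidden threshold t.  Learner: output the largest positive example
# seen.  Sample bound: m >= (1/eps) * ln(1/delta) examples suffice for
# error <= eps with probability >= 1 - delta.
random.seed(1)
t = 0.6                       # hidden target threshold (unknown to learner)
eps, delta = 0.05, 0.05
m = math.ceil((1 / eps) * math.log(1 / delta))   # required sample size

samples = [random.random() for _ in range(m)]
positives = [x for x in samples if x <= t]
h = max(positives) if positives else 0.0         # learned hypothesis

# The hypothesis under-shoots t; its error is the probability mass of
# the interval (h, t], which the sample bound keeps small with high
# probability -- "probably approximately correct", not exactly correct.
error = t - h
print(m, round(error, 4))
```

The learner is "probably" (with probability 1 - delta over the random sample) 
"approximately" (to within error eps) correct, which is exactly the hedge the 
book's title refers to.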


On Sun, Jun 30, 2013 at 8:34 AM, Jim Bromer <[email protected]> wrote:

Date: Sat, 29 Jun 2013 23:17:15 -0500
Subject: [agi] Probably Approximately Correct: Nature's Algorithms for Learning 
and Prospering in a Complex World [By Leslie Valiant]

From: [email protected]
To: [email protected]

Probably Approximately Correct: Nature's Algorithms for Learning and Prospering 
in a Complex World [By Leslie Valiant]

http://www.amazon.com/dp/B00BE650IQ/ref=cm_sw_r_tw_ask_bQunF.0CDBG0V

-------------------------------------------------------
I am just guessing about what the book is about based on the blurb, but the 
idea that we can muddle through without needing to understand what is going on 
is either poorly stated or nonsense.  Although our theories are usually pretty 
weak, they are nonetheless theories.  I do not believe that we are just 
basing our interest on a coincidental correlation between three 
objects which can then be used to create chains and fences of correlations.  I 
believe that the imagination is extremely important, both in discovering objects 
of interest and in generating theories to explain the mechanisms behind those 
objects.  That does not mean that we never rely on the linkages of ternary 
correlations.  It is just that a computational explanation of consciousness 
which argues that, since a computer is not "conscious" of what it is doing, the 
potential for higher computational intelligence must prove that human beings 
are not "conscious" of what they are doing, just does not work for me.  We are 
conscious of some of what we do, even if our theories about it are not very 
good ones.

 
One thing that I have been talking about for a number of years now is the 
importance of the structural integration of concepts.  Even if our theories and 
knowledge about a subject of interest are not that great, we can begin to 
develop different ways to think about the subject and then use these different 
vantages to begin building better responsive insight about the subject.  I 
think this can be done in AGI programs.  Weak theories do not (always) need to 
be disposed of, but their influence in deriving conclusions about a subject 
matter can be modified so that they are used when more appropriate for the 
conditions.

This is only one possible presentation of my theories of structural 
integration; it is one of the ways I have of thinking about the subject.  Parts 
of this presentation should seem very familiar to people who have thought about 
the subject, and I am sure that there are people who would seize on the part 
where I said that the "influence [of weak theories] in deriving conclusions 
about a subject matter can be modified so that they are used when more 
appropriate for the conditions," as referring to exactly the same thing they 
have thought about when they try to design AI methods capable of producing 
improvement over time or after training.  However, this effort to interpret 
what someone else says only on the basis of whether or not -I- have thought 
about things like this before can produce extremely insipid conclusions.  (I do 
it all the time, so I am not claiming some kind of superiority.)  One reason my 
thoughts about this subject are a little different from the typical machine-
learning paradigm of learning-based improvement is that I explicitly emphasize 
the use of theories during learning.  I was not just talking about the theories 
of the people who programmed some learning mechanism, but about the theories 
that the AGI program might generate through artificial imagination.  So, had 
someone misread my statement, believing that his machine-learning theory was 
already imbued with a system where 'conclusions about a subject matter were 
modified so that they would be used when more appropriate for the conditions,' 
he might have missed the point of my message entirely.  That is one of the most 
serious problems with egomaniac-driven theorization.  If you read everything 
only in terms of how it is right or wrong according to your own theories, you 
may end up missing the central points of some reasonable remarks.

 
So even though computers may not be conscious like we are, I believe that we 
have to use meta-awareness in our AGI programs in order to make them act more 
reasonably.  The theories that they generate may not be that great, but by 
using conceptual structural integration I believe it should be feasible 
for them to use imagination and reason to build better analysis and response 
methods.  So, as they learn, some of their weak theories will be strengthened 
by making them more conditional and by extending their range of application 
slightly.  This is only one part of my structural integration theories.

 
Jim Bromer

-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-58d57657
Powered by Listbox: http://www.listbox.com