Date: Sat, 29 Jun 2013 23:17:15 -0500
Subject: [agi] Probably Approximately Correct: Nature's Algorithms for Learning 
and Prospering in a Complex World [By Leslie Valiant]
From: [email protected]
To: [email protected]

Probably Approximately Correct: Nature's Algorithms for Learning and Prospering 
in a Complex World [By Leslie Valiant]
http://www.amazon.com/dp/B00BE650IQ/ref=cm_sw_r_tw_ask_bQunF.0CDBG0V

-------------------------------------------------------
I am only guessing at what the book is about from the blurb, but the idea that 
we can muddle through without needing to understand what is going on is either 
poorly stated or nonsense.  Although our theories are usually pretty weak, they 
are nonetheless theories.  I do not believe that we simply base our interest on 
a coincidental correlation between three objects which can then be used to 
create chains and fences of correlations.  I believe that the imagination is 
extremely important, both in discovering objects of interest and in generating 
theories to explain the mechanisms behind those objects.  That does not mean 
that we never rely on the linkages of ternary correlations.  It is just that a 
certain computational explanation of consciousness does not work for me: the 
argument that, since a computer is not "conscious" of what it is doing, the 
potential for higher computational intelligence must prove that human beings 
are not "conscious" of what they are doing either.  We are conscious of some of 
what we do, even if our theories about it are not very good ones.
 
One thing that I have been talking about for a number of years now is the 
importance of the structural integration of concepts.  Even if our theories and 
knowledge about a subject of interest are not that great, we can begin to 
develop different ways to think about the subject and then use these different 
vantage points to build better, more responsive insight about it.  I think this 
can be done in AGI programs.  Weak theories do not (always) need to be 
discarded; instead, their influence in deriving conclusions about a subject can 
be modified so that they are used when they are more appropriate for the 
conditions.
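To make the idea concrete, here is one minimal way the "keep weak theories but gate their influence by conditions" notion could be sketched in code.  All of the names here (Theory, conclude, the weights) are my own illustration, not anything from Valiant's book or an existing system:

```python
# Sketch (hypothetical names): each "theory" carries a predicate for the
# conditions under which it applies, plus a confidence weight.  Weak
# theories are not discarded; they simply contribute less, and only when
# their conditions hold.
from dataclasses import dataclass
from typing import Callable, Dict, List, Optional

@dataclass
class Theory:
    name: str
    applies: Callable[[Dict], bool]   # condition gate
    predict: Callable[[Dict], str]    # the theory's conclusion
    weight: float                     # strength / confidence

def conclude(theories: List[Theory], situation: Dict) -> Optional[str]:
    """Weigh the conclusions of all theories whose conditions hold."""
    votes: Dict[str, float] = {}
    for t in theories:
        if t.applies(situation):
            p = t.predict(situation)
            votes[p] = votes.get(p, 0.0) + t.weight
    return max(votes, key=votes.get) if votes else None

theories = [
    Theory("strong", lambda s: True, lambda s: "A", 0.9),
    Theory("weak", lambda s: s.get("context") == "special", lambda s: "B", 0.4),
]

print(conclude(theories, {"context": "normal"}))   # -> A (weak theory gated out)
print(conclude(theories, {"context": "special"}))  # -> A (weak theory heard, outvoted)
```

The point of the sketch is only that gating and weighting are separate dials: a weak theory can be silenced outside its conditions without being deleted from the system.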
 
 
- This is only one possible presentation of my theories of structural 
integration; it is one way that I have of thinking about the subject.  Parts of 
this presentation should seem very familiar to people who have thought about 
the subject, and I am sure there are people who would seize on the part where I 
said that the "influence [of weak theories] in deriving conclusions about a 
subject matter are modified so that they can be used when more appropriate for 
the conditions," as referring to exactly the same thing they have thought about 
when trying to design AI methods capable of producing improvement over time or 
after training.  However, this effort to interpret what someone else says only 
on the basis of whether or not -I- have thought about such things before can 
produce extremely insipid conclusions.  (I do it all the time, so I am not 
claiming some kind of superiority.)  One reason my thoughts on this subject 
differ a little from the typical machine learning paradigm of learning-based 
improvement is that I explicitly emphasize the use of theories during learning. 
I was not talking only about the theories of the people who programmed some 
learning mechanism, but about the theories that the AGI program might generate 
through artificial imagination.  So, had someone misread my statement, 
believing that his machine learning theory was already imbued with a system 
where 'conclusions about a subject matter were modified so that they would be 
used when more appropriate for the conditions,' he might have missed the point 
of my message entirely.  That is one of the most serious problems with 
egomaniac-driven theorization.  If you read everything only in terms of how it 
is right or wrong according to your own theories, you may end up missing the 
central points of some reasonable remarks.
 
So even though computers may not be conscious the way we are, I believe that we 
have to use meta-awareness in our AGI programs in order to make them act more 
reasonably.  The theories that they generate may not be that great, but by 
using conceptual structural integration I believe it should be feasible for 
them to use imagination and reason to build better analysis and response 
methods.  So, as they learn, some of their weak theories will be strengthened 
by making them more conditional and by extending their range of application 
slightly.  This is only one part of my structural integration theories.
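One hypothetical reading of "strengthened by making them more conditional and by extending their range slightly" is a per-outcome update rule.  Again, the representation below (a weight plus a set of situations where the theory has worked) is my own illustration, not a mechanism the message specifies:

```python
# Sketch of one learning step (hypothetical): on success, raise the weak
# theory's weight a little and record the situation as part of its range;
# on failure, drop the situation, i.e. make the theory more conditional.
def refine(theory, situation, succeeded, step=0.05):
    """Return an updated (weight, known-good situations) pair."""
    weight, conditions = theory
    key = frozenset(situation.items())
    if succeeded:
        weight = min(1.0, weight + step)   # strengthen slightly, capped
        conditions = conditions | {key}    # extend range of application
    else:
        conditions = conditions - {key}    # narrow the conditions of use
    return weight, conditions

w, conds = refine((0.4, set()), {"context": "special"}, succeeded=True)
print(round(w, 2))   # -> 0.45
print(len(conds))    # -> 1
```

The small step size matches the "slightly" in the text: a weak theory earns influence gradually rather than being promoted wholesale.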
 
Jim Bromer

-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-58d57657
Powered by Listbox: http://www.listbox.com
