Valiant's theory of PAC learning.
http://en.wikipedia.org/wiki/Probably_approximately_correct_learning
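For the curious, the core of PAC learning is a quantitative guarantee: with enough samples, any hypothesis consistent with the data is, with probability at least 1 - delta, within error epsilon of the target. A minimal sketch of the finite-hypothesis-class sample bound, m >= (1/epsilon)(ln|H| + ln(1/delta)) — the toy threshold task and all names below are illustrative, not from Valiant's book:

```python
import math
import random

def pac_sample_size(epsilon, delta, hypothesis_count):
    """Samples sufficient to PAC-learn a finite hypothesis class:
    m >= (1/epsilon) * (ln|H| + ln(1/delta))."""
    return math.ceil((math.log(hypothesis_count) + math.log(1.0 / delta)) / epsilon)

# Toy target concept: label x as 1 iff x >= 0.3.
# Hypothesis class: 1000 candidate thresholds on [0, 1].
random.seed(0)
thresholds = [i / 1000 for i in range(1000)]
target = 0.3
m = pac_sample_size(epsilon=0.05, delta=0.05, hypothesis_count=len(thresholds))
sample = [(x, int(x >= target)) for x in (random.random() for _ in range(m))]

# Pick a hypothesis that minimizes training error on the sample.
def train_error(t):
    return sum(int(x >= t) != y for x, y in sample)

best = min(thresholds, key=train_error)
print(m, best)  # best should land within roughly epsilon of 0.3
```

The bound is distribution-free: nothing about how the x's are drawn matters, which is what makes the "probably approximately correct" guarantee robust.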


On Sun, Jun 30, 2013 at 3:08 PM, Juan Carlos Kuri Pinto <[email protected]> wrote:

> There are two approaches to programming AI:
>
> 1. Reductionist AI, in which the programmer hardwires a reductionist
> solution to a specific kind of problem. This approach is brittle when the
> kind of problem to solve changes. This is what Narrow AI is all about.
>
> 2. Holistic AI, in which the meta-programmer meta-programs a learning
> network capable of adapting its topology to fit the causal hyper-geometries
> of all kinds of problems. In other words, the Holistic AI system does the
> reduction process the programmer is supposed to do in the Narrow AI
> approach. This is what General AI is all about. Holistic AI is the approach
> of Monica Anderson and me.
>
> The book "Probably Approximately Correct: Nature's Algorithms for Learning
> and Prospering in a Complex World" is about Holistic AI.
>
>
> On Sun, Jun 30, 2013 at 8:34 AM, Jim Bromer <[email protected]> wrote:
>
>>
>>
>> ------------------------------
>> Date: Sat, 29 Jun 2013 23:17:15 -0500
>> Subject: [agi] Probably Approximately Correct: Nature's Algorithms for
>> Learning and Prospering in a Complex World [By Leslie Valiant]
>> From: [email protected]
>> To: [email protected]
>>
>>
>> Probably Approximately Correct: Nature's Algorithms for Learning and
>> Prospering in a Complex World [By Leslie Valiant]
>> http://www.amazon.com/dp/B00BE650IQ/ref=cm_sw_r_tw_ask_bQunF.0CDBG0V
>>
>> -------------------------------------------------------
>> I am just guessing about what the book is about based on the blurb, but
>> the idea that we can muddle through without needing to understand what is
>> going on is either poorly stated or nonsense.  Although our theories are
>> usually pretty weak, they are nonetheless theories.  I do not believe
>> that we are just basing our interest on a coincidental correlation
>> between three objects which can then be used to create chains and fences
>> of correlations.  I believe that the imagination is extremely important
>> both in discovering objects of interest and in generating theories to
>> explain the mechanisms behind those objects.  That does not mean that we
>> never rely on the linkages of ternary correlations; it is just that a
>> computational explanation of consciousness which argues that, since a
>> computer is not "conscious" of what it is doing, the potential for higher
>> computational intelligence must prove that human beings are not
>> "conscious" of what they are doing, just does not work for me.  We are
>> conscious of some of what we do, even if our theories about it are not
>> very good ones.
>>
>> One thing that I have been talking about for a number of years now is the
>> importance of structural integration of concepts.  Even if our theories and
>> knowledge about a subject of interest are not that great, we can begin to
>> develop different ways to think about the subject and then use these
>> different vantage points to begin building better responsive insight about
>> the subject.  I think this can be done in AGI programs.  Weak theories do
>> not (always) need to be disposed of, but their influence in deriving
>> conclusions about a subject matter can be modified so that they are used
>> when more appropriate for the conditions.
>>
>> - This is only one possible presentation of my theories of structural
>> integration.  It is one of the ways I have of thinking about the subject.
>> Parts of this presentation should seem very familiar to people who have
>> thought about the subject and I am sure that there are people who would
>> seize on the part where I said, the "influence [of weak theories] in
>> deriving conclusions about a subject matter are modified so that they can
>> be used when more appropriate for the conditions," as referring to the
>> exact same thing as they have thought about when they try to
>> design AI methods capable of producing improvement over time or after
>> training.  However, this effort to interpret what someone else says only on
>> the basis of whether or not -I- have thought about things like this before
>> can produce extremely insipid conclusions.  (I do it all the time so I am
>> not claiming some kind of superiority.)  One reason my thoughts about this
>> subject are a little different from the typical machine learning paradigm
>> of learning-based improvement is that I explicitly emphasize the use of
>> theories during learning.  I was not just talking about the theories
>> of people (who programmed some learning mechanism) but about the theories
>> that the AGI program might generate through artificial imagination.  So,
>> had someone misread my statement believing that his machine learning theory
>> was already imbued with a system where, 'conclusions about a subject matter
>> were modified so that they would be used when more appropriate for the
>> conditions,' he might have missed the point of my message entirely.  That
>> is one of the most serious problems with egomaniac-driven theorization.  If
>> you read everything only in terms of how it is right or wrong according
>> to your own theories, you may end up missing the central points of some
>> reasonable remarks.
>>
>> So even though computers may not be conscious like we are, I believe that
>> we have to use meta-awareness in our AGI programs in order to make them act
>> more reasonably.  The theories that they will generate may not be that
>> great but by using conceptual structural integration I believe that it
>> should be feasible for them to use imagination and reason to build better
>> analysis and response methods.  So as they learn, some of their weak
>> theories will be strengthened by making them more conditional and by
>> extending their range of application slightly.  This is only one part of
>> my structural integration theories.
>>
>> Jim Bromer
>



-- 
-- Matt Mahoney, [email protected]



-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-58d57657
Powered by Listbox: http://www.listbox.com
