Part 2

I believe that it takes a great deal of knowledge to 'understand' one thing.
A statement has to be integrated into a greater collection of knowledge in
order for the relations of understanding to be formed.  And the knowledge
of a single statement has to be integrated into a greater field of
knowledge concerning the central features of the subject for the
intelligent entity to truly understand the statement.  While conceptual
integration, by some name, has always been a primary subject in AI/AGI, I
think it was relegated to a subservient position by those who originally
stressed the formal methods of logic, linguistics, psychology, numerics,
probability, and neural networks.  The details of how ideas work in actual
thinking were treated either as predawn-of-science philosophy or as the
turn-of-the-crank byproduct of successfully applied formal methods, so a
focus on how ideas work in actual problems was seen as naïve.  This
problem, where the smartest thinkers spend their lives pursuing abstract
problems rather than carefully examining many real-world cases, occurs
often in science.  It is amplified by ignorance: if no one knows how to
create a practical application, the experts in the field may become overly
preoccupied with whatever formal methods have been proposed to them.
Formal methods are important, but each is only one kind of thing, and it
takes a great deal of knowledge about many different things to
'understand' one kind of thing.  A reasonable rule of thumb is that formal
methods have to be tried and shaped through exhaustive application to real
world problems.

In order to integrate new knowledge, the new idea being introduced usually
has to be verified in many steps to show that it holds.  Since there is no
absolute insight into truth for this kind of thing, knowledge has to be
integrated in a more thorough trial-and-error manner.  The program has to
create new theories about the statements or reactions it is considering.
The same would extend to interpretations of observations in systems that
use other kinds of sensory input.  A single experiment does not 'prove' a
new theory in science.  A large number of experiments are required, and
most of those experiments have to demonstrate that applying the theory can
lead to a better understanding of other related effects.  It takes
knowledge of a great many things to verify a statement about one thing.
In order for the knowledge represented by a statement to be verified and
comprehended, it has to be related to, and integrated with, a great many
other statements concerning the primary subject matter.  It is necessary
to see how the primary subject matter may be used in many different kinds
of thoughts in order to understand it.
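
To make this a little more concrete, here is a minimal sketch, in Python,
of the kind of trial-and-error integration I have in mind.  The hooks
related_statements and consistent, and the threshold min_support, are
hypothetical placeholders for whatever relations and tests a real program
would use:

    def integrate(candidate, related_statements, consistent, min_support=3):
        """Trial-and-error integration of a candidate statement.

        related_statements(candidate) yields statements already in the
        knowledge field that concern the same subject matter; consistent()
        tests the candidate against one of them.  Both are hypothetical
        hooks, not a committed design.
        """
        support = conflicts = 0
        for statement in related_statements(candidate):
            if consistent(candidate, statement):
                support += 1
            else:
                conflicts += 1
        # One supporting relation is like one experiment: not proof.
        # Accept only on broad support with no outstanding conflicts.
        return support >= min_support and conflicts == 0

The shape is the point, not the details: a statement is accepted because
of its many relations to other statements, never because of a single
check.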


On Sat, Apr 13, 2013 at 6:39 AM, Jim Bromer <[email protected]> wrote:

> Part 1
>
> I feel that complexity is a major problem facing contemporary AGI.  It is
> true that for most human reasoning we do not need to figure out
> complicated problems precisely in order to take the first steps toward
> competency, but so far AGI has not been able to get very far beyond the
> narrow-AI barrier.
>
> I am going to start with a text-based AGI program.  I agree that more
> kinds of IO modalities would make an effective AGI program better.  However,
> I am not aware of any evidence that sensory-based AGI, multi-modal
> sensory-based AGI, or robotics-based AGI has been able to achieve something
> greater than other efforts.  The core of AGI is not going to be found in the
> peripherals.  And it is clear that starting with complicated IO
> accessories would make AGI programming more difficult.  It seems obvious
> that IO is necessary for AI/AGI, and this abstraction is probably a more
> appropriate basis for the requirements of AGI.
>
> My AGI program is going to be based on discrete references.  I feel that
> the argument that only neural networks are able to learn, or are able to
> incorporate different kinds of data objects into an associative field, is
> not accurate.  I do, however, feel that more attention needs to be paid to
> concept integration.  And I think that many of us recognize that a good
> AGI model is going to create an internal reference model that is a kind of
> network.  The discrete reference model more easily allows the program to
> retain the components of an agglomeration in a way that the traditional
> neural network does not.  This means that it is more likely that the
> parts of an associative agglomeration can be detected.  On the other
> hand, since the program will develop its own internal data objects, these
> might be formed in such a way that the original parts might be difficult
> to detect.  With a more conscious effort to better understand concept
> integration, I think that the discrete conceptual network model will prove
> itself fairly easily.
>
> I am going to use weighted reasoning and probability but only to a limited
> extent.
>
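
To make the discrete reference model from Part 1 a little more concrete,
here is a minimal sketch in Python.  The names (Concept, agglomerate,
parts_of) are hypothetical illustrations, not a committed design; the
point is only that each new concept keeps explicit references to its
parts, so the components of an agglomeration remain detectable:

    class Concept:
        def __init__(self, label, components=()):
            self.label = label
            self.components = list(components)  # discrete references to parts

    def agglomerate(label, parts):
        """Combine existing concepts into a new one without losing the parts."""
        return Concept(label, parts)

    def parts_of(concept):
        """Recover the components of an agglomeration, which is the step
        that is hard to perform on a distributed neural-network encoding."""
        return [part.label for part in concept.components]

    # Example: build 'pet' from 'dog' and 'cat', then recover the parts.
    dog, cat = Concept("dog"), Concept("cat")
    pet = agglomerate("pet", [dog, cat])
    assert parts_of(pet) == ["dog", "cat"]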


