Part 6


An AGI program has to be relativistic.  A concept may take on a stable
meaning in the program (at least I hope it will), but the basis for the
meaning and the application of a concept will depend on how it relates
to other concepts.  This means that while concepts will be defined in
terms of other concepts, there does not have to be some set of concepts
which serve as the fundamentals from which all the other concepts are
defined.  There are dependent concepts, but there is no fixed set of
independent concepts, so to speak.  (Some might exist for a while, but
they could subsequently be defined relative to some other concept.)  It
has to be possible for an AGI program to learn more about a subject
matter, so any concept might be further defined or redefined at some
later time.

Now, as a simple example: if Concept A is defined relative to some other
Concept B, and Concept B is later redefined so that it becomes dependent
on Concept C, does this mean that all of the new insights which were
related to Concept B should be passed on to the definition of Concept A?
No, of course not.  Or rather, not necessarily.  In some cases Concept
C, the new insight about Concept B, would be relevant to the definition
of Concept A; in other cases it would not be.  So how would the AGI
program decide which aspects of Concept C are relevant to the definition
of Concept A?  There would be a number of ways.  As I said, to
understand one thing (one small idea) the program needs to have
knowledge about how it relates to many things.  So to know whether some
fact in Concept C concerning Concept B is relevant to Concept A, the
program has to have some other kind of knowledge about the relation.
This might be found through a specific idea that directly relates
Concept A, Concept B, and Concept C.  Or it could be inferred from
generalizations that the concepts belong to.  Or it might be inferred
through other kinds of relations concerning the concepts.  Since the
program will have an artificial imagination, the inferences could get
quite imaginative.
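
To make this concrete, here is a minimal Python sketch of the kind of
structure I have in mind.  The class layout and the relevance test are
only placeholders for illustration, not a finished design:

class Concept:
    def __init__(self, name):
        self.name = name
        self.defined_by = set()       # concepts this one is defined relative to
        self.generalizations = set()  # generalization labels it belongs to
        self.relations = set()        # names of concepts directly related to it

def redefine(concept, basis):
    # Any concept may become dependent on another at some later time.
    concept.defined_by.add(basis.name)

def relevant_to(a, c):
    # Is the new insight (Concept C) relevant to Concept A?  Look for a
    # specific idea directly relating A and C, or a shared generalization.
    direct = c.name in a.relations
    shared = bool(a.generalizations & c.generalizations)
    return direct or shared

The other kinds of relations, and the more imaginative inferences, would
extend a test like relevant_to rather than replace it.
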
So this case, where a relatively independent concept is redefined and
becomes a dependent concept, is similar to any other case where the
program has to learn something new and finds that it can make some
inferences about the new idea as it tries to fit it into the preexisting
insights that it had already acquired.  An inference has to be examined
through a trial and error process to see if it can explain something in
a more powerful way.  And if that explanation can be tied into something
that the program has observed, or will observe, in the Input-Output data
fields, then that will act as a positive reinforcement for the new
insight.

Jim Bromer
 
> Date: Tue, 16 Apr 2013 19:33:06 -0400
> Subject: Fwd: Summary of My Current Theory For an AGI Program.
> From: [email protected]
> To: [email protected]
> 
> ---------- Forwarded message ----------
> From: Jim Bromer <[email protected]>
> Date: Mon, Apr 15, 2013 at 11:40 AM
> Subject: Re: Summary of My Current Theory For an AGI Program.
> To: [email protected]
> 
> 
> Part 5
> 
> Gradual methods seem to be called for.  However, by utilizing
> structural verification and integration, the gradual method can be
> augmented by structural advancements where key pieces of knowledge
> seem to be able to better explain a variety of related fragments of
> knowledge.  Of course even these methods are not absolute so there
> will always be the problem of inaccurate knowledge being mixed in with
> the good.  One of the key problems with contemporary AGI is that
> ineffective knowledge (in some form) will interfere with the effort to
> build even the foundations for an AGI program.  Since I do not believe
> that there is any method that will work often enough to allow for a
> solid foundation to be easily formed, a way to work with and around
> inaccurate and inadequate knowledge has to be found.  Structural
> integration can sometimes enhance a cohesive bunch of inaccurate
> fragments of knowledge.  But I believe that there are a few things
> that can be done to deal with this problem.  First of all, the method
> of (partial) verification through structural knowledge should usually
> work better with effective fragments of knowledge than it would with
> inaccurate fragments.  Secondly, a few kinds of flaws can often be
> found in inaccurate theories.  One is that they are often 'circular'
> or what I call 'loopy'.  Although good paradigms (mini-paradigms) are
> often strongly interdependent, nonsensical paradigms do not fit well
> into systems external to the central features of the paradigm.  This
> fitting can be done through the cross-categorization networks across
> transcendent boundaries and it is an important part of understanding
> how good theories work.  The idea of the transcendent boundary is a
> solvent for the fact that we don't really form our understanding of
> the world based on perfect logic.  So by being able to examine
> cross-categorical relations we should be able to deal with small
> logical systems that can be related to other small systems even though
> they may not be perfectly integrated.  I think most interested people
> should be able to get some idea of what I am saying about this problem
> and they should be able to find examples of methods to find flaws in
> simple systems of theories from real life.  But there is another
> problem that my theory of the transcendent boundary system would tend
> to create.  It would be pretty easy to build small systems that
> overlay an 'insightful' bounded system and these could even be
> integrated with other transcendent systems that were built to overlay
> other insightful bounded systems.  So a well developed fantasy system
> could be created on top of related insightful systems using the
> methods that I have in mind.  This problem does have a solution.
> These systems which overlay the insightful systems can be carefully
> examined to see if a viable method to tie these into some IO
> observations that are directly related to the insightful systems could
> be created.  If a transcendent system is truly insightful, it should
> typically be useful in explaining and predicting some basic
> observations.  Of course systems like this are not perfect and during
> the initial stages of learning the program might create some elaborate
> systems of nonsense.  And an exhaustive search for inaccurate theories
> can interfere with learning since inaccuracies that do not play key
> roles for paradigms can act to support the weight of the paradigm
> while the 'student' is first learning.  For instance, the good student
> will be aware that the fact that he does not fully understand the
> supporting structures (and transcendent relations) of a paradigm does
> not mean that he can use his ignorance to knock the theory down.
> Similarly, the fantasy that a system (like an axiomatic system) is
> sufficient to support an application of the system would not ruin that
> student's work with the system unless he tried to apply it to a field
> where the naive application was not effective (like trying to use
> traditional logic to produce AGI).
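> 
> Here is a rough Python sketch of one way the 'loopy' test could work;
> the data layout is an assumption for illustration:
> 
> def grounded(fragment, support, observed, seen=None):
>     # support: fragment -> fragments it rests on; observed: IO-backed facts.
>     if seen is None:
>         seen = set()
>     if fragment in observed:
>         return True
>     if fragment in seen:  # looped back without reaching an observation
>         return False
>     seen.add(fragment)
>     return any(grounded(s, support, observed, seen)
>                for s in support.get(fragment, ()))
> 
> def loopy(paradigm, support, observed):
>     # A paradigm whose fragments only support each other is circular.
>     return not any(grounded(f, support, observed) for f in paradigm)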
> 
> 
> 
> 
> 
> On Sun, Apr 14, 2013 at 11:14 PM, Jim Bromer <[email protected]> wrote:
> >
> > Part 4
> >
> > Artificial imagination is also necessary for AGI.  Imagination can take 
> > place simply by creating associations between concepts but obviously the 
> > best forms of imagination are going to be based on rational meaningfulness. 
> >  An association between concepts (or concept objects) which cannot be 
> > interpreted as meaningful is not usually very useful. So it seems that if 
> > the relationship is both imaginative and potentially meaningful it would be 
> > advantageous.  An association formed by a categorical substitution is more 
> > likely to be meaningful so I consider this a rational form of imagination.  
> > However, you can find many examples where a categorical substitution does 
> > not produce a meaningful association, so perhaps my claim that it is a 
> > rational process is dependent on the likelihood that the process will turn 
> > up a greater proportion of meaningful relations than purely random 
> > associations.  Some imaginative relations may exist just as entertainment, 
> > but I believe that the application of the imagination is one of the more 
> > important steps toward understanding.  In fact, I believe that all 
> > understanding is essentially a form of imaginative projection, where you 
> > project previously formed ideas onto an ongoing situation which is 
> > recognized or thought to share some characteristics with the projected 
> > ideas.  So from this point of view, the reliance on previously learned 
> > knowledge is really an application of the imagination.  Perhaps it is a 
> > special form of imagination, but imagination nonetheless.  Anyway, 
> > once an imaginative association or relation is created it has to be tested. 
> > I feel that relations of understanding cannot be appreciated out of 
> > context.  The basic rule of thumb is that it takes knowledge of many things 
> > to understand one thing.  This creates a problem when trying to test or 
> > validate an insight which was partially produced by the imagination or 
> > which had to be fitted using imaginative projection.  The only way an AGI 
> > program is going to be able to validate a new idea is by seeing how well it 
> > fits and how well it works in a variety of related contexts.  This is what 
> > I call a structural integration.  It not only represents a single concept 
> > but it also carries a lot of other information with it that can seemingly 
> > explain a lot of other small facts as well.  A new idea seems to make sense 
> > if it fits in with a number of insights that were previously acquired.
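> >
> > A small sketch of categorical substitution as described above; the
> > triple layout is just an assumption for illustration:
> >
> > def imagine_by_substitution(relation, categories):
> >     x, link, y = relation
> >     candidates = []
> >     for members in categories.values():
> >         if x in members:
> >             for m in members:
> >                 if m != x:
> >                     candidates.append((m, link, y))  # untested association
> >     return candidates
> >
> > For example, substituting within a 'tool' category turns ("hammer",
> > "drives", "nail") into ("mallet", "drives", "nail") and ("wrench",
> > "drives", "nail"); testing in context would keep the first and
> > probably discard the second.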
> >
> >
> >
> >
> >
> > On Sun, Apr 14, 2013 at 3:30 PM, Jim Bromer <[email protected]> wrote:
> >>
> >> Part 3
> >>
> >> The program will make extensive use of generalizations and 
> >> cross-generalization. The program will need to be able to discover 
> >> abstractions.  These abstractions typically may be used to develop 
> >> generalizations. A generalization may be formed from a group in which all 
> >> the members share some common characteristics. However, generalizations 
> >> may also be formed by various arbitrary processes. And, if the program 
> >> works, generalizations may be formed in response to some educational 
> >> instruction.  The most typical example of cross-generalization may be the 
> >> consideration of similarities across individual systems of taxonomies or 
> >> classes or subclasses.  In this broad definition of generalization, the 
> >> collections do not have to be grouped by any common characteristic and the 
> >> same can go for cross-categorizations.  Although this might be a misuse of 
> >> the term generalization, the generalizations that my program will create 
> >> may not be trees because they can potentially branch off in different 
> >> directions.  Indexes into data for internal searches may be formed in a 
> >> similar way but I will have to think about whether the variety of 
> >> branching makes sense as I am developing the program.  I believe that 
> >> because of the variety of forms of generalization or categorization that 
> >> the program will use it is necessary for the program to keep track of the 
> >> different kinds of categorization and generalization that it develops.  
> >> And it will put transcendent boundaries around portions of the 
> >> generalizations that it develops as it uses them in particular ways. These 
> >> boundaries are transcendent in that overlapping relations may be 
> >> considered across them (as in cross-generalization or 
> >> cross-categorization). Perhaps the terms relations and categorization are 
> >> more abstract than the terms of generalization.  So the program will be 
> >> able to develop abstractions of relations and then build categorizations 
> >> from these relations.  The categories that I have in mind may be somewhat 
> >> free-wheeling.  Cross-categorization will be important because it will 
> >> help the program find and consider similarities across the categorical 
> >> structures. These categorical structures may need to be bounded, but since 
> >> bounded categories may still be related across a relatively dominant 
> >> categorical relation, they can be transcended by other 
> >> associative relations.
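> >>
> >> Here is a minimal sketch of the non-tree generalization structure I
> >> have in mind (all of the names are illustrative):
> >>
> >> class Generalization:
> >>     def __init__(self, name, kind, boundary):
> >>         self.name = name
> >>         self.kind = kind          # 'shared-trait', 'arbitrary', 'instructed', ...
> >>         self.boundary = boundary  # the transcendent boundary it sits inside
> >>         self.parents = []         # several parents allowed: a graph, not a tree
> >>         self.cross_links = []     # cross-categorizations across boundaries
> >>
> >> def cross_categorize(a, b):
> >>     # Overlapping relations may be considered across transcendent boundaries.
> >>     if a.boundary != b.boundary:
> >>         a.cross_links.append(b)
> >>         b.cross_links.append(a)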
> >>
> >>
> >>
> >>
> >>
> >>
> >>
> >> On Sat, Apr 13, 2013 at 7:34 AM, Jim Bromer <[email protected]> wrote:
> >>>
> >>> Part 2
> >>>
> >>> I believe that it takes a great deal of knowledge to 'understand' one 
> >>> thing.  A statement has to be integrated into a greater collection of 
> >>> knowledge in order for the relations of understanding to be formed.  And 
> >>> the knowledge of a single statement has to be integrated into a greater 
> >>> field of knowledge concerning the central features of the subject for the 
> >>> intelligent entity to truly understand the statement.  While conceptual 
> >>> integration, by some name, has always been a primary subject in AI/AGI, I 
> >>> think it was relegated to a subservient position by those who originally 
> >>> stressed the formal methods of logic, linguistics, psychology, numerics, 
> >>> probability, and neural networks.  Because the details of how ideas 
> >>> work in actual thinking were treated either as part of some 
> >>> predawn-of-science philosophy or as the turn-of-the-crank product of the 
> >>> successful application of formal methods, a focus on the details of how 
> >>> ideas work in actual problems was seen as naïve.  This problem, where the 
> >>> smartest thinkers would spend their lives pursuing abstract problems 
> >>> without 'wasting' their time carefully examining many real world cases, 
> >>> occurs often in science.  It is amplified by ignorance.  If no one knows 
> >>> how to create a practical application then the experts in the field may 
> >>> become overly preoccupied with the proposed formal methods that had been 
> >>> presented to them.  Formal methods are important - but they are each only 
> >>> one kind of thing.  It takes a great deal of knowledge about many 
> >>> different things to 'understand' one kind of thing.  A reasonable rule of 
> >>> thumb is that formal methods have to be tried and shaped based on 
> >>> exhaustive applications of the methods to real world problems.
> >>>
> >>> In order to integrate new knowledge the new idea that is being introduced 
> >>> usually has to be verified using many steps to show that it holds.  Since 
> >>> there is no absolute insight into truth for this kind of thing, knowledge 
> >>> has to be integrated in a more thorough trial and error manner.  The 
> >>> program has to create new theories about statements or reactions it is 
> >>> considering.  This would extend to interpretations of observations for 
> >>> systems where other kinds of sensory systems were used.  A single 
> >>> experiment does not 'prove' a new theory in science.  A large number of 
> >>> experiments are required and most of those experiments have to 
> >>> demonstrate that the application of the theory can lead to better 
> >>> understanding of other related effects.  It takes knowledge of a great 
> >>> many things to verify a statement about one thing.  In order for the 
> >>> knowledge represented by a statement to be verified and comprehended it 
> >>> has to be related to, and integrated with, a great many other statements 
> >>> concerning the primary subject matter.  It is necessary to see how the 
> >>> primary subject matter may be used in many different kinds of thoughts to 
> >>> be able to understand it.
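> >>>
> >>> A minimal sketch of that trial and error scoring, under assumed
> >>> helper predicates:
> >>>
> >>> def integration_score(statement, knowledge, related, fits):
> >>>     # related(s, t): do the statements concern the same subject matter?
> >>>     # fits(s, t):    does s cohere with t when the two are combined?
> >>>     trials = [t for t in knowledge if related(statement, t)]
> >>>     if not trials:
> >>>         return 0.0  # nothing to integrate with yet
> >>>     return sum(1 for t in trials if fits(statement, t)) / len(trials)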
> >>>
> >>>
> >>>
> >>> On Sat, Apr 13, 2013 at 6:39 AM, Jim Bromer <[email protected]> wrote:
> >>>>
> >>>> Part 1
> >>>>
> >>>> I feel that complexity is a major problem facing contemporary AGI.  It 
> >>>> is true that, for most human reasoning, we do not need to figure out 
> >>>> complicated problems precisely in order to take the first steps toward 
> >>>> competency, but so far AGI has not been able to get very far beyond the 
> >>>> narrow-AI barrier.
> >>>>
> >>>> I am going to start with a text-based AGI program.  I agree that more 
> >>>> kinds of IO modalities would make an effective AGI program better.  
> >>>> However, I am not aware of any evidence that sensory-based AGI or 
> >>>> multi-modal sensory based AGI or robotic based AGI has been able to 
> >>>> achieve something greater than other efforts. The core of AGI is not 
> >>>> going to be found in the peripherals.  And it is clear that starting 
> >>>> with complicated IO accessories would make AGI programming more 
> >>>> difficult.  It seems obvious that IO is necessary for AI/AGI and this 
> >>>> abstraction is probably a more appropriate basis for the requirements of 
> >>>> AGI.
> >>>>
> >>>> My AGI program is going to be based on discrete references. I feel that 
> >>>> the argument that only neural networks are able to learn or are able to 
> >>>> incorporate different kinds of data objects into an associative field is 
> >>>> not accurate. I do, however, feel that more attention needs to be paid 
> >>>> to concept integration.  And I think that many of us recognize that a 
> >>>> good AGI model is going to create an internal reference model that is a 
> >>>> kind of network.  The discrete reference model more easily allows the 
> >>>> program to retain the components of an agglomeration in a way in which 
> >>>> the traditional neural network does not.  This means that it is more 
> >>>> likely that the parts of an associative agglomeration can be detected.  
> >>>> On the other hand, since the program will develop its own internal data 
> >>>> objects, these might be formed in such a way that the original parts 
> >>>> might be difficult to detect. With a more conscious effort to better 
> >>>> understand concept integration, I think that the discrete conceptual 
> >>>> network model will prove itself fairly easily.
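> >>>>
> >>>> A minimal sketch of the discrete reference idea; the structure is
> >>>> only an assumption for illustration:
> >>>>
> >>>> class Agglomeration:
> >>>>     def __init__(self, name, parts):
> >>>>         self.name = name
> >>>>         self.parts = list(parts)  # discrete references to the components
> >>>>
> >>>>     def contains(self, item):
> >>>>         # The original parts of an association remain detectable.
> >>>>         return item in self.parts or any(
> >>>>             p.contains(item) for p in self.parts
> >>>>             if isinstance(p, Agglomeration))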
> >>>>
> >>>> I am going to use weighted reasoning and probability but only to a 
> >>>> limited extent.
> >>>
> >>>
> >>
> >