Logan,

Thanks for your comments.

I agree, of course, that concepts and concept integration may be represented by words and sentences. I was trying to say that many of the complications that arise using word-concepts will also arise using other kinds of referential concepts. One of the reasons I am convinced that text-only AGI is a good way to go is that it offers such potential for expressiveness and for representing different kinds of ideas. It is often difficult to express complicated ideas using words because words are not substitutes for the implementations of the things we are talking about. However, that does not mean they cannot be used as representations of ideas. I understand what I am talking about even when other people do not.

I believe that when we acquire a learned habit, the parts of the habit may not be directly understandable; they may only be approached indirectly, by referring to something else. For instance, a learned action may be created by a string of action potentials (for lack of a better name), and it may be that the only way to detect the parts of the string is by noting the whole, more complicated action. Or we may infer the action from some other action or event that is roughly correlated with it. But essentially, when we are capable of reflection (meta-cognition), we are able to 'understand' a concept potential if we know something more about how to use and integrate the concept. So by having some kind of understanding of a concept potential, we can consciously try to use it in different ways based on some kind of reasoning. Now, if we are not explicitly aware of the concept potential, there may be a chance that we can infer something about it indirectly, just as we might infer something about an action potential.

I believe that the theory that it takes many statements to understand one simple statement has a great deal of value. Concepts are relativistic: when a simple concept is used in association with other concepts, its meaning can vary. Concepts are contextual. But there are more problems. Concepts are interdependent. There is not (necessarily) an independent concept and a dependent concept in a conceptual function the way there is in a mathematical function. This means it can be very difficult to determine the meaning of a combination of concepts if the program does not explicitly contain a reference to that particular combination.

One way to work with this problem is to rely on generalization systems, in which the generalizations drawn from a collection of concepts can be used to guide the decoding of a particular string of concepts that has not been seen before. However, when this was tried in the simplistic fashion of the discrete, text-based programs of the '60s, it did not produce intelligence. So in the '70s weighted reasoning became all the rage, because it looked like it might infer subtle differences in the strings that simple discrete substitution could not. That promise did not hold up either. Neither system has, in itself, proven sufficient to resolve the problem. My feeling is that recognizing that it takes many references to a concept to 'understand' that concept is part of the key to resolving these problems without relying on a method that suffers from combinatorial complexity. Another part of the key is to recognize that concept objects may contain numerous lateral similarities to other concept objects, and that these similarities may run across the dominant categories of the concept object being examined.
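
A rough sketch of what I mean, in Python (all of the names here are invented for illustration, not taken from any existing program): a concept is 'understood' through the set of statements that reference it, its meaning in a given context is read off from those references, and lateral similarity can be measured across category boundaries by the overlap of references.

    # Sketch: a concept is 'understood' through many references to it,
    # and similarity can run laterally across category boundaries.
    class Concept:
        def __init__(self, name, category):
            self.name = name
            self.category = category
            self.references = set()   # statements in which the concept appears

        def add_reference(self, statement):
            self.references.add(statement)

    def lateral_similarity(a, b):
        # Overlap of references, computed even when a.category differs
        # from b.category -- that is what makes the similarity 'lateral'.
        if not (a.references or b.references):
            return 0.0
        return len(a.references & b.references) / len(a.references | b.references)

    def meaning_in_context(concept, context_words):
        # The contextual 'meaning': those references that share vocabulary
        # with the current context, so the same concept can read differently
        # in different combinations.
        return [s for s in concept.references if context_words & set(s.split())]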

 

Jim Bromer


Date: Sat, 20 Apr 2013 10:28:44 -0400
Subject: Re: [agi] Summary of My Current Theory For an AGI Program.
From: [email protected]
To: [email protected]




On Sat, Apr 13, 2013 at 6:39 AM, Jim Bromer <[email protected]> wrote:



Part 1


I feel that complexity is a major problem facing contemporary AGI. It is true that for most human reasoning we do not need to figure out complicated problems precisely in order to take the first steps toward competency, but so far AGI has not been able to get very far beyond the narrow-AI barrier.


I am going to start with a text-based AGI program. I agree that more kinds of IO modalities would make an effective AGI program better. However, I am not aware of any evidence that sensory-based AGI, multi-modal sensory-based AGI, or robotics-based AGI has been able to achieve more than other efforts. The core of AGI is not going to be found in the peripherals. And it is clear that starting with complicated IO accessories would make AGI programming more difficult. It seems obvious that IO is necessary for AI/AGI, and this abstraction is probably a more appropriate basis for the requirements of AGI.


My AGI program is going to be based on discrete references. I feel that the argument that only neural networks are able to learn, or are able to incorporate different kinds of data objects into an associative field, is not accurate. I do, however, feel that more attention needs to be paid to concept integration. And I think that many of us recognize that a good AGI model is going to create an internal reference model that is a kind of network. The discrete reference model more easily allows the program to retain the components of an agglomeration in a way that the traditional neural network does not. This means that it is more likely that the parts of an associative agglomeration can be detected. On the other hand, since the program will develop its own internal data objects, these might be formed in such a way that the original parts might be difficult to detect. With a more conscious effort to better understand concept integration, I think the discrete conceptual network model will prove itself fairly easily.

Yep, so you can do concept integration by representing concepts with sentences. It works, it's simple, it can show association, and it maintains the original parts.
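
For instance (a toy sketch; the class and example sentences are made up): a composite concept keeps discrete references to the sentences it was built from, so the parts of an agglomeration stay detectable after integration.

    # Sketch: a composite concept retains discrete references to its parts,
    # so the components of an agglomeration remain detectable.
    class Composite:
        def __init__(self, sentence, parts):
            self.sentence = sentence   # the integrated representation
            self.parts = list(parts)   # the discrete references retained

        def contains(self, part):
            return part in self.parts

    dog = "a dog is a domesticated animal"
    bark = "a bark is a sharp sound a dog makes"
    guard = Composite("a dog barks to guard the house", [dog, bark])
    print(guard.contains(bark))   # True -- the original part is recoverable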


 


I am going to use weighted reasoning and probability, but only to a limited extent.



On Sat, Apr 13, 2013 at 7:34 AM, Jim Bromer <[email protected]> wrote:



Part 2


I believe that it takes a great deal of knowledge to 'understand' one thing. A statement has to be integrated into a greater collection of knowledge in order for the relations of understanding to be formed.
Just like how a sentence can be integrated into a story.
 
And the knowledge of a single statement has to be integrated into a greater field of knowledge concerning the central features of the subject for the intelligent entity to truly understand the statement. While conceptual integration, by some name, has always been a primary subject in AI/AGI, I think it was relegated to a subservient position by those who originally stressed the formal methods of logic, linguistics, psychology, numerics, probability, and neural networks. Because the details of how ideas work in actual thinking were treated as either predawn-of-science philosophy or the turn-of-the-crank product of successfully applied formal methods, a focus on those details in actual problems was seen as naïve.

Many people understand things through story. Indeed, it is the way our brains are designed to operate and to interpret new information, since the cave paintings at the very least.

It's also the most effective way of transmitting information from one person to another, as it often bypasses much of the conscious criticality and simply subsumes into the subconscious background.


Even computers understand things through story, even though the typical programming language may make this hard to see: the setting is where the variables are initially declared, the rising action is the preparation of the variables for interaction, the conflict or change is the actual transmutation of the variables, and the resolution is the returning of the result.
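
Spelled out with a throwaway function (a made-up example, nothing more):

    def story_of_a_sum(numbers):
        # Setting: the variables are declared initially.
        total = 0
        # Rising action: the variables are prepared for interaction.
        cleaned = [n for n in numbers if n is not None]
        # Conflict / change: the actual transmutation of the variables.
        for n in cleaned:
            total += n
        # Resolution: the result is returned.
        return total

    print(story_of_a_sum([1, 2, None, 3]))   # -> 6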

This problem, where the smartest thinkers spend their lives pursuing abstract problems without 'wasting' their time carefully examining many real-world cases, occurs often in science. It is amplified by ignorance. If no one knows how to create a practical application, then the experts in the field may become overly preoccupied with the formal methods that have been proposed to them. Formal methods are important, but each is only one kind of thing. It takes a great deal of knowledge about many different things to 'understand' one kind of thing. A reasonable rule of thumb is that formal methods have to be tried and shaped through exhaustive application of the methods to real-world problems.


In order to integrate new knowledge, the new idea being introduced usually has to be verified using many steps to show that it holds.

Well, there is always parsing and compiling: if it works, it works. Though factual information about the world could be statistically cross-referenced.
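
One crude way to picture that cross-referencing (a toy sketch with made-up data; the function name is hypothetical): score a claim by the fraction of independent sources that assert it.

    # Toy sketch of statistical cross-referencing: a claim is scored by
    # the fraction of independent sources that assert it.
    def cross_reference(claim, sources):
        agree = sum(1 for facts in sources if claim in facts)
        return agree / len(sources) if sources else 0.0

    sources = [
        {"water boils at 100 C", "the earth orbits the sun"},
        {"the earth orbits the sun"},
        {"water boils at 100 C", "the earth orbits the sun"},
    ]
    print(cross_reference("the earth orbits the sun", sources))   # 1.0
    print(cross_reference("water boils at 100 C", sources))       # about 0.67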

 
Since there is no absolute insight into truth for this kind of thing, knowledge has to be integrated in a more thorough, trial-and-error manner.

Truth is personal experience, so each perspective has its own truth. Knowledge is past experience; there is true knowledge, and there is real knowledge.

Real is the set of beliefs that are held in common amongst a group. Exist is anything that can be imagined/compiled.
 

 
The program has to create new theories about the statements or reactions it is considering. This would extend to interpretations of observations for systems where other kinds of sensory systems were used. A single experiment does not 'prove' a new theory in science. A large number of experiments are required, and most of those experiments have to demonstrate that applying the theory can lead to a better understanding of other related effects.

You know, it all depends: are we making little scientists, or AGIs? Is the purpose to "prove" stuff, or simply to "do" stuff?

Sure, they could use some scientific method and statistical verification; however, it's more important to actually get stuff done, i.e. the result of the experiment, than to have it published in a peer-reviewed journal or to get a bunch of peers to redo the experiments.

The experiment and results could be shared with other AGIs and people over the internet, likely in the form of a story; if others come across similar issues, they may wish to try it themselves and comment if it works out for them.

It takes knowledge of a great many things to verify a statement about one thing. In order for the knowledge represented by a statement to be verified and comprehended, it has to be related to, and integrated with, a great many other statements concerning the primary subject matter. It is necessary to see how the primary subject matter may be used in many different kinds of thoughts in order to understand it.

I disagree, as you don't have to know all the ways to add stuff in order to simply add some numbers together. You can easily learn new things later on, like how to perform addition amongst new types of things, for instance arrays or ingredients.

Gotta start somewhere, and arithmetic addition is a sufficient place to do so.
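
Concretely (with a made-up Recipe type, purely for the example): the same '+' learned first on numbers can later be extended to a new kind of thing without re-learning the old cases.

    # Addition learned first on numbers, later extended to a new type
    # of thing ('ingredients'), without changing how numbers or arrays add.
    class Recipe:
        def __init__(self, ingredients):
            self.ingredients = list(ingredients)

        def __add__(self, other):
            # 'Addition' for recipes: combine the ingredient lists.
            return Recipe(self.ingredients + other.ingredients)

    print(2 + 3)                      # plain arithmetic still works: 5
    print([1, 2] + [3])               # addition over arrays: [1, 2, 3]
    soup = Recipe(["water", "salt"]) + Recipe(["carrot"])
    print(soup.ingredients)           # ['water', 'salt', 'carrot']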
 



 





  
    
      

