I forgot about conceptual structure itself.  Conceptual structure is based
on the idea that structure in language is vital to understanding language,
and that the structure of ideas must likewise be understood in order to
understand the ideas.  For instance, temporal structure is often important,
and so is positional structure.  But when you think about it, these two
kinds of relationships are themselves only concepts.  While they seem to
apply widely to many different kinds of things, they are still only
concepts.  This shows that concepts may play different kinds of roles when
used with other concepts.  This insight seems obvious to me, but it also
seems obviously important.  If certain concepts can take on the role of an
abstracting or generalizing agent, doesn't this imply that other concepts
might also take on roles that go beyond their surface characteristics?
For example, the position of an object is what it is.  To recognize that
position and relative position can be used to create highly generalized
principles that have advanced mankind's understanding of matter and
technology is to recognize that a seemingly dull feature of a concept can
be used as an agent of insight.  So I am saying that by exploring the
roles and structures of concepts, I expect to find other activating
principles of insight that may have eluded us so far.
Jim Bromer

On Sun, Oct 7, 2012 at 3:46 PM, Jim Bromer <[email protected]> wrote:

> The schema theory is the closest of the older theories to my ideas.  Since
> the application of schema theory is not strictly defined, my theory can be
> seen as derived from it.  I have more of an idea of how it would be used
> in a proto-AGI program, and that is what makes my theory different from
> the general schema theory.  I will put a lot of demands on the conceptual
> networks.  One of the problems with my theory is that it is too
> complicated.  For example, I don't want to rely on predefined grammatical
> rules, so I will demand that the network solve both the problem of
> conceptual reference and the problem of communicating what it is
> 'thinking' about.
>
> The traditional schema theory was based on representing a system of
> characteristics of a central concept that were defined to fill 'slots',
> which acted like conceptual relations.  The other progenitor of the
> theory was the use of "scripts", which defined the kinds of things that
> might happen in an activity or that might interact with a kind of thing.
> The problem with these two ideas was that they had to be used in stylized
> ways, which tended to make them less adaptable to something like general
> language or general interaction with the world.  So the modern concept of
> a schema is more like the psychological model, which is not defined to be
> an effective program but rather a model that could be used in sketches of
> possible methods (like I am doing now).  From that point, a programmer
> might imagine how these schemas could be applied in an AI program.
> Programmers who are interested in AI tend to think that the undefined
> fringes of the theory, which are left to the imagination, are actually
> part of the theory.
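To make the slots-and-scripts distinction above concrete, here is a minimal sketch of the two classic representations. All of the concept names, slots, and script steps are invented for illustration; no actual schema system is being quoted.

```python
# A schema: a central concept with named slots holding default fillers,
# where the slots behave like conceptual relations.
restaurant_schema = {
    "concept": "restaurant",
    "slots": {
        "serves": "food",
        "has": ["tables", "menu", "staff"],
        "located_in": "building",
    },
}

# A script: the stereotyped sequence of events for an activity.
restaurant_script = [
    "enter", "be seated", "read menu", "order", "eat", "pay", "leave",
]

def fill_slot(schema, slot, value):
    """Instantiate a schema by binding one slot to a specific filler,
    leaving the original schema's defaults untouched."""
    instance = {"concept": schema["concept"], "slots": dict(schema["slots"])}
    instance["slots"][slot] = value
    return instance

diner = fill_slot(restaurant_schema, "serves", "pancakes")
```

The stylization problem is visible even in this toy: the slots and script steps are fixed in advance, which is exactly what makes the representation hard to adapt to general language.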
>
> If you look at the Wikipedia example of the schema for an egg, for
> example, you will find that there are no examples of how it might
> actually be used in a more elaborate story about an egg, only snippets of
> the characteristics of eggs that might be found in some of those stories.
> These peculiarities of the representation reflect the history of the
> development of the theory, and this can also give you some idea of how my
> idea is different and how it is similar.  I see the conceptual networks
> as being much more extensive, with more virtual relations, since a
> feature of a central concept may itself be considered a central concept.
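The "a feature may itself be a central concept" point can be sketched as a single graph in which any node can serve as the center of view. The concepts and relations below are invented examples, not taken from any real network.

```python
# One conceptual network; there is no privileged central concept.
# Each node maps relation names to lists of related concepts.
network = {
    "egg":   {"has": ["shell", "yolk"], "laid_by": ["hen"]},
    "shell": {"property": ["fragile"], "part_of": ["egg"]},
    "hen":   {"is_a": ["bird"], "lays": ["egg"]},
}

def view_as_center(network, concept):
    """Return the relations radiating from any node, so a feature like
    'shell' gets exactly the same treatment as the concept it came from."""
    return network.get(concept, {})
```

In a slot-based schema, "shell" would only ever be a filler; here `view_as_center(network, "shell")` gives it its own relations, which is the extra extensiveness being described.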
>
> The two problems that I have with the employment of this model are both
> due to the potential for complexity.  However, I have come to the
> conclusion that scalability problems could be alleviated with
> intelligence.  (I realize that this is circular, but only because AGI is
> still at such a low level of achievement.)  The second problem is the
> complexity of learning (given that available narrow AI methods are so
> limited in producing higher intelligence).  So what I am thinking of
> doing is creating a test model where I would have a specialized window
> into the conceptual network and a special language to help the program
> examine the kinds of relations that I want it to consider.
>
> One of the most important differences between the schema model and my
> theory is that the conceptual network will be able to define how data is
> used.  It will be more active than is typical for an old data-matching
> scheme.
> Jim Bromer
>
> On Sat, Oct 6, 2012 at 11:32 PM, Dimitry Volfson <[email protected]> wrote:
>
>>  Jim,
>>
>> Ok, so let's say that the prior conversation had been about a
>> train-shaped clock that was bought on eBay and shipped by UPS.  In this
>> case, the clock interpretation and taking a look at UPS Quantum View(tm)
>> (their online tracking system) would be the more valid interpretation.
>> Of course, many jokes are based on this type of ambiguity.
>>
>> What is different as opposed to the old idea of schemas?
>> (e.g. http://sites.wiki.ubc.ca/etec510/Schema_Theory )
>>
>> Thanks,
>> Dimitry
>>
>>
>> On 10/6/2012 9:18 PM, Jim Bromer wrote:
>>
I don't have any details on how it would actually operate because it is a
>> fairly wild model.  I would have to control it using a somewhat precise
>> special language to direct it, so I could test the basic ideas out
>> without having it be a full-fledged AGI program.
>>
>> Let's say that the program was trying to interpret what a sentence meant.
>>
>> "What time is the train arriving?"
>>
>> Suppose that it had recognized the words but was now trying to make sense
>> of them.  (I am not going to write a program that has a vocabulary at the
>> start, by the way.)  It would know that trains depart from and arrive at
>> train stations if those concepts were already associated with the concept
>> of a train (through previous learning).  If it knew that departures and
>> arrivals were made according to a schedule based on time and station, then
>> it should be able to interpret that the sentence was concerned with the
>> arrival time of a train at some station.  It might not be absolutely
>> certain of this interpretation, but it would be able to make that
>> interpretation if those kinds of relations had been associated with the
>> concept of a train.  Other possible interpretations, like an odd one that
>> inferred that a train was a kind of timepiece, would not be confirmed by
>> the knowledge that it had about trains.  Suppose, however, that it had
>> knowledge of a clock that was shaped like a model train.  Then there
>> might be some confusion about what the sentence meant.  However, even in
>> this special case the program could learn that arrival times were a more
>> common issue when talking about trains than the much rarer case of a
>> clock made to look like a train.  So even though the program might be
>> exposed to a lot of odd cases, it could also have a way to designate more
>> common conceptual relations in its conceptual network.
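The disambiguation step described above can be sketched as scoring each candidate interpretation by the learned strength of the conceptual relations that would have to hold for it. The relations and the numeric weights below are entirely hypothetical stand-ins for what the program would learn from experience.

```python
# Relation strengths learned from previous experience (invented numbers):
# common relations get high weights, the train-shaped clock stays rare.
associations = {
    ("train", "arrives_at_station"): 0.9,
    ("train", "runs_on_schedule"):   0.8,
    ("train", "is_a_timepiece"):     0.05,  # the odd train-shaped clock
}

def score(supporting_relations):
    """Sum the learned strengths of the relations an interpretation
    depends on; unknown relations contribute nothing."""
    return sum(associations.get(r, 0.0) for r in supporting_relations)

# Candidate readings of "What time is the train arriving?" and the
# relations each one relies on.
candidates = {
    "asking for a train's arrival time":
        [("train", "arrives_at_station"), ("train", "runs_on_schedule")],
    "asking about a train-shaped clock":
        [("train", "is_a_timepiece")],
}

best = max(candidates, key=lambda c: score(candidates[c]))
```

The point of the sketch is only the ranking: the common reading wins without the rare clock reading being forbidden, so odd cases remain representable.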
>>
>> But this idea goes beyond associating facts with a particular concept.
>> Conceptual relations can also be used to shape how ideas work. In fact,
>> even this simple case demonstrates one way this can occur.
>>
>> Jim Bromer
>>
>>
>



-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-c97d2393