Well, I'm not really clear what you're getting at, mainly because when
talking about intelligence & thinking, the terms we have to use are so
versatile & loosely defined that narrowing down what's being communicated
to a sufficiently small set of interpretations requires saying so much
that the key point becomes a needle in a haystack of contextual
information. I'm sure what you're saying here makes perfect sense to you,
but the words you're using aren't sufficiently grounded (or are grounded
differently for you than for me), so I don't follow.

I get the impression that you're saying (both here & in your previous
emails on Algorithmic Synthesis) that claiming two things are associated
isn't enough -- that the *kind* of association is important too. I agree
with you here. It's not enough to say these are the parts and they go
together; to think productively about things, we must also consider how
they connect. This is directly analogous to the treatment of sentences as
bags of words: It's not enough to just look at the set of words to
determine the sentence's meaning; the way they connect to each other
matters. This is where I'm starting from in my system's design.
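To make the bag-of-words point concrete, here's a toy sketch in Python (the
example sentences and the multiset representation are just illustrative):

```python
# Two sentences with identical word sets but different meanings.
# A "bag" here is an unordered multiset of lowercase tokens.
from collections import Counter

def bag_of_words(sentence):
    """Reduce a sentence to an unordered multiset of tokens."""
    return Counter(sentence.lower().split())

s1 = "the dog bit the man"
s2 = "the man bit the dog"

# The bags are identical, yet the meanings differ: word order
# (how the words connect) carries information the bag discards.
assert bag_of_words(s1) == bag_of_words(s2)
assert s1 != s2
print("identical bags, different meanings")
```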

#1: Figure out how the human mind represents meaning.
#2: Figure out how to work with meaning to produce intelligent thought.

#2 cannot proceed until #1 is effectively implemented. Roger Schank has
been a major source of inspiration for me, with his representation of
meaning as semantic links connecting basic concepts together. From the
natural language perspective, it is relatively easy to see how this can be
implemented. I'm not alone in having successfully built a parser that
extracts from a sentence a semantic network representing that sentence's
meaning with a fair degree of accuracy.
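Here's a minimal sketch of the kind of typed-link network such a parser
might produce; the relation and concept names (agent, patient, tense) are
illustrative placeholders, not my parser's actual output format:

```python
# A semantic network represented as a set of typed links
# (relation, head, dependent). All names are hypothetical.

def parse_toy(sentence):
    """Hand-coded 'parse' for one example sentence, standing in for
    a real semantic parser. Returns typed links between concepts."""
    # "the dog chased the cat" -> role links around the verb concept
    return {
        ("agent",   "chase", "dog"),
        ("patient", "chase", "cat"),
        ("tense",   "chase", "past"),
    }

net = parse_toy("the dog chased the cat")
# The *kind* of link matters: swapping agent and patient yields a
# different network (a different meaning) from the same bag of words.
assert ("agent", "chase", "dog") in net
```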

From the perceptual perspective, it is also fairly easy to see how semantic
networks can be used to represent information. The visual field can be
broken into chunks or fields, each representing an object or a part of an
object. The objects are semantically connected to each other according to
the spatial or behavioral interactions they are participating in, and the
parts of objects are semantically linked to the objects and other parts
according to their arrangement. Nodes representing objects and parts
generated at a particular time can then be interconnected across multiple
time frames, resulting in a narrative description of the field of vision as
a sequence of events unfolds. Other senses can be integrated directly with
vision in the same manner.
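A rough sketch of that structure, with nodes generated per time frame and
linked across frames (the node labels and relation names are invented for
illustration):

```python
# Sketch of a perceptual semantic network: per-frame object/part nodes
# with typed links, plus cross-frame identity links forming a narrative.
from dataclasses import dataclass, field

@dataclass
class Node:
    label: str    # e.g. "cup", "handle"
    frame: int    # time frame the node was generated in
    links: list = field(default_factory=list)  # (relation, Node) pairs

def link(a, relation, b):
    a.links.append((relation, b))

# Frame 0: a cup with a handle sits on a table.
cup0, handle0, table0 = Node("cup", 0), Node("handle", 0), Node("table", 0)
link(handle0, "part-of", cup0)     # part linked to its object
link(cup0, "on-top-of", table0)    # spatial relation between objects

# Frame 1: the same cup, now being lifted by a hand.
cup1, hand1 = Node("cup", 1), Node("hand", 1)
link(hand1, "grasping", cup1)      # behavioral interaction

# A cross-frame identity link turns snapshots into a narrative:
# "the cup that was on the table is now being lifted."
link(cup1, "same-object-as", cup0)
```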

Higher levels of abstraction can be generated by looking at patterns in
objects (just as objects are generated by looking at patterns of parts) and
adding new nodes that group the lower-level nodes into patterns based on
link types. Memory stores only these higher-level nodes (parts, objects, &
upward), not the lower levels that served in their construction, and memory
fades from the lowest levels upward, causing us to lose detail but not
gist.
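The fading-from-the-bottom idea can be sketched like this; the one-level-
per-time-unit decay schedule is an arbitrary choice for the example:

```python
# Sketch of memory fading from the lowest abstraction levels upward.
# Each stored node carries an abstraction level (0 = finest detail);
# as a memory ages, lower levels are dropped first, so detail is lost
# but gist survives. The decay rate here is an assumed placeholder.

def recall(memory, age):
    """Return only the nodes whose level survives at the given age.
    memory: list of (level, label); age: time units since storage."""
    cutoff = age  # assume one level of detail is lost per unit of age
    return [(lvl, label) for (lvl, label) in memory if lvl >= cutoff]

episode = [
    (0, "handle"),            # part-level detail
    (1, "cup"),               # object level
    (2, "drinking-coffee"),   # event-level abstraction (the gist)
]

assert recall(episode, 0) == episode                   # fresh: full detail
assert recall(episode, 2) == [(2, "drinking-coffee")]  # old: gist only
```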

Language (or rather the semantic nets which represent meaning) can then be
treated as predicates which match the upper levels of the perceptual
network, acquiring a non-Boolean or fuzzy truth value based on how well
they match perceptual information retrieved from memory. Thinking is
implemented at this level as well: thinking processes generate truthful
predicates based both on direct observation of higher-level perceptual
subnets and on indirect reasoning over patterns observed in those subnets.
Reasoning can reach as far down the hierarchy of
nodes as was stored in memory, but starts from the top-most level and does
not reach down to these lower levels except when higher-level abstractions
indicate that additional or finer-grained detail is needed. (This is how we
avoid the combinatorial bottleneck.) Predicates generated by observation or
reasoning can be directly read off and converted to natural language using
the same mechanisms as the semantic parser, but in reverse. (I've got much
of this mechanism working, too.)
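As a rough sketch of the matching step (scoring a predicate by the fraction
of its links found in the perceptual subnet is just one simple choice for
illustration, not necessarily the scoring function I use):

```python
# Grading a predicate's fuzzy truth value against perceptual memory.
# A predicate is a set of typed links (relation, head, dependent);
# its truth value is the fraction of those links present in the
# perceptual subnet retrieved from memory.

def fuzzy_truth(predicate_links, perceptual_links):
    if not predicate_links:
        return 1.0
    matched = len(predicate_links & perceptual_links)
    return matched / len(predicate_links)

perception = {
    ("agent",    "chase", "dog"),
    ("patient",  "chase", "cat"),
    ("location", "chase", "yard"),
}

# "The dog chased the cat" matches fully; with the roles reversed,
# none of the role bindings match, so the truth value drops to zero.
assert fuzzy_truth({("agent", "chase", "dog"),
                    ("patient", "chase", "cat")}, perception) == 1.0
assert fuzzy_truth({("agent", "chase", "cat"),
                    ("patient", "chase", "dog")}, perception) == 0.0
```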

I have yet to start work on the perceptual systems, but the semantic
representation of meanings/predicates is rolling along nicely. Perception
is going to take a lot more work, because there's a lot more data to
process, but I'm watching the research as it unfolds, and I see a lot being
done in the direction of object detection. Even if we create a perceptual
system whose representation isn't as detailed as human perception's (i.e.
it represents objects and their interactions, but not their parts or
lower-level abstractions), it should be possible to start work on a reasoning
system that handles higher-level abstractions and is able to communicate
its thoughts verbally or in text. This is the key point at which artificial
general intelligence gains traction as a technology worthy of financial
investment.


On Sat, Oct 13, 2012 at 9:21 PM, Jim Bromer <[email protected]> wrote:

> Well I just remembered why people have been so distracted by the analysis
> of superficial data.  Because it is easy.  Because it is easy for an
> automated program to analyze the superficial features of the input media
> and how the data environment of the medium is affected by the program's
> output but it is hard to figure out how the program would analyze hidden
> meaning.  But, most of the people in this group talk as if their
> ideas would be powerful enough to discover underlying meaning or underlying
> relations in the data environment.  So then what started as a first
> response to a problem description simply became the dogma.  (Yes that is
> really what happened.  Does anyone disagree? (No?!.))
>
>
> So while the rehashing of the first step may have seemed like it was
> an important primitive to explain to the inexperts, as it became the
> reigning focus of all such conventions of presentation it became the dogma
> of the genre.  Because people somehow found a rationalization to avoid
> taking the next step (to explain how deeper relations between ideas,
> concepts or operations in the IO data environment could be integrated and
> discerned) it became a blocking dogma.  In order to join the club, so to
> speak, you had to start by avoiding the next question.
>
> You often feel that you have already thought about an idea because you
> have examined a high-level concept which might be a categorizing principle
> of the idea.  For instance I was interested in 'associations' so when I
> encountered the word 'correlation' I simply felt that I had already
> considered the concept as a kind of association.  A correlation can be
> considered as a type of association so it seemed like I had already had
> handled that relation.  However, it just is not the same thing.  A
> correlation may be a kind of association but it is bound with another
> association as well: the concept that defines the nature of the
> correlation.  So a correlation is not -just- an association.
>
> You have to take it to the next level and it has to start in your mind.
>
> Jim Bromer
>
>
>
>
> On Sat, Oct 13, 2012 at 8:39 PM, Jim Bromer <[email protected]> wrote:
>
>> I think that misunderstandings can occur when one person presents an idea
>> which possesses some features which resemble features of another idea that
>> a listener has already considered. If the resemblance is somewhat
>> superficial, especially if the superficial resemblances lie *at a
>> shallow underlying level*, a person who is listening to the idea may
>> feel certain that he was totally familiar with the idea even though he
>> might not really get what the speaker was saying. The listener may casually
>> miscategorize the presented idea by thinking that it was the same as the
>> similar idea he had already considered. Good ideas are often unoriginal or
>> unsurprising and this vague familiarity can strangely have a
>> non-intuitive effect to further a misunderstanding.  The reason that this
>> can occur is that ideas sometimes need to be emphasized or 'formalized' in
>> some way in order for them to be fully appreciated.
>>
>> I for one would like to be able to understand why people who should be
>> interested in something I have said aren't. The answer to this question has
>> always been somewhat elusive.  A primary characteristic that can produce
>> this kind of misunderstanding is superficiality in the listener.  (Of
>> course the new idea may be poorly presented and we all make a variety of
>> mistakes, but I am often confronted with the experience where I have
>> repeatedly presented a commonsense idea and I can't find anyone who acts
>> like they understand what I am talking about).
>>
>> But is there any way you can verify (at least for yourself) that someone
>> who should be reacting intelligently to what you are saying is actually
>> reacting at a too-superficial level?  I have found that there is a way in
>> this group because we are constantly talking about artificial means of
>> creating "personality" traits.  If someone repeatedly emphasizes
>> superficiality of association as a presumption for the basis of
>> intelligence then there is a chance that he might unintentionally be
>> describing a method that commonly underlies his own thought processes.
>>
>> For example, I have described a process of synthesis where a new idea is
>> formed from the association of two pre-existing ideas based on a reason.
>> The reasons can be superficial, like a superficial co-occurrence (of time
>> or position) or a superficial similarity, but then I also emphasized that
>> ideas may be related by complementary conceptual roles.  Furthermore, I
>> have emphasized the importance of conceptual structure which is a term I
>> use to stress that there may be a greater complexity to putting ideas
>> together than just relying on one superficial feature.
>>
>> So now, if after expressing this and pointing out that the purpose of
>> combining ideas is to create some semantic or operational structure, I see
>> someone restating the insight that co-occurrence and similarity are the
>> basis of correlation and association I will have some substantial evidence
>> that my ideas were not appreciated by that person because he tends to
>> be over reliant on superficial methods of thought.  Co-occurrence,
>> similarity, simple association and analogy are all examples of relations
>> between ideas that are typically shallow. The superficiality may not be at
>> the surface level, but it is usually not going to be that deep.  The
>> declaration of these relations is all ok but I feel that if the presenter
>> is going to explain how intelligence works then he needs to take it to a
>> deeper level.
>>
>> Of course, misunderstanding can also occur when a phrase is taken to
>> refer to a superficial aspect of thought even though the speaker intended
>> it to refer to deeper relations as well.  But I think the declaration that
>> the basis of correlation is co-occurrence, similarity and associativity has
>> just been too over-used to still be considered sufficient as a presentation
>> of the basis of thinking. Thinking gets a little deeper than that.
>>
>> Jim Bromer
>>
>



-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-c97d2393