Aaron,  just two comments:
"Higher levels of abstraction can be generated by looking at patterns in 
objects (just as objects are generated by looking at patterns of parts) and 
adding additional nodes which serve to group together the lower level nodes 
into patterns based on link types. Memory stores only these higher-level nodes 
(parts, objects, & upward), not the lower levels which served in their 
construction, and memory fades from the lowest levels upward, causing us to 
lose detail but not gist."
1. What makes you think memory stores only the higher-level nodes? I contend 
that the lower level nodes are also stored and can be reactivated (or 
"recalled") as needed.
2. I concur with your theory.  How long until the implementation? 
~PM.

Date: Tue, 16 Oct 2012 14:34:37 -0500
Subject: Re: [agi] Re: Superficiality Produces Misunderstanding - Not Good 
Enough
From: [email protected]
To: [email protected]

Well, I'm not really clear what you're getting at, mainly because when talking 
about intelligence & thinking, all the terms we have to use are so versatile & 
loosely defined that narrowing what's being communicated down to a sufficiently 
small set of interpretations requires saying so much that the key point becomes 
a needle in a haystack of contextual information. I'm sure what you're saying 
here makes perfect sense to you, but the words you're using aren't sufficiently 
grounded (or are grounded differently for you than for me) for me to follow.

I get the impression that you're saying (both here & in your previous emails on 
Algorithmic Synthesis) that claiming two things are associated isn't enough -- 
that the *kind* of association is important too. I agree with you here. It's 
not enough to say these are the parts and they go together; we must also 
consider how things connect in order to think productively about them. This is 
directly analogous to treating sentences as bags of words: the set of words 
alone isn't enough to determine a sentence's meaning; the way the words connect 
to each other matters. This is where I'm starting from in my system's design.
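
A minimal sketch of this point (the role names "agent" and "patient" are 
illustrative assumptions, not my system's actual link types): two sentences 
with identical word sets can still differ in meaning once typed connections 
are considered.

```python
def bag_of_words(sentence):
    """Reduce a sentence to its unordered set of words."""
    return frozenset(sentence.lower().split())

s1 = "the dog bit the man"
s2 = "the man bit the dog"

# The bags of words are identical...
assert bag_of_words(s1) == bag_of_words(s2)

# ...but typed links (agent/patient roles) distinguish the two meanings.
links1 = {("bit", "agent", "dog"), ("bit", "patient", "man")}
links2 = {("bit", "agent", "man"), ("bit", "patient", "dog")}
assert links1 != links2
```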

#1: Figure out how the human mind represents meaning.
#2: Figure out how to work with meaning to produce intelligent thought.
#2 cannot proceed until #1 is effectively implemented. Roger Schank has 
provided quite a bit of inspiration to me, based on how he represents meaning 
as semantic links connecting basic concepts together. From the natural language 
perspective, it is relatively easy to see how this can be implemented. I'm not 
alone in having successfully built a parser that extracts from a sentence a 
semantic network representing that sentence's meaning with a fair degree of 
accuracy.
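
As a toy illustration of the parsing idea (this is not the actual parser; 
`parse_svo` and its three-word-sentence restriction are purely hypothetical, 
for demonstration):

```python
def parse_svo(sentence):
    """Naively split a subject-verb-object sentence into typed semantic links."""
    subject, verb, obj = sentence.lower().split()
    return {(verb, "agent", subject), (verb, "patient", obj)}

net = parse_svo("John eats apples")
# net == {("eats", "agent", "john"), ("eats", "patient", "apples")}
```

A real parser must of course handle arbitrary grammar, but the output shape -- 
a set of typed links among concepts -- is the point.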

From the perceptual perspective, it is also fairly easy to see how semantic 
networks can be used to represent information. The visual field can be broken 
into chunks or fields, each representing an object or a part of an object. The 
objects are semantically connected to each other according to the spatial or 
behavioral interactions they are participating in, and the parts of objects 
are semantically linked to the objects and other parts according to their 
arrangement. Nodes representing objects and parts generated at a particular 
time can then be interconnected across multiple time frames, resulting in a 
narrative description of the field of vision as a sequence of events unfolds. 
Other senses can be integrated directly with vision in the same manner.
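
A minimal sketch of this frame-linking idea, under simplifying assumptions: 
each frame is just a set of typed links, and cross-frame identity is decided 
by shared node names (real perception would need actual object tracking).

```python
# Two visual frames, each a set of (node, link-type, node) triples.
frame_t0 = {("cup", "on", "table"), ("hand", "near", "cup")}
frame_t1 = {("cup", "held-by", "hand"), ("hand", "above", "table")}

def link_frames(frames):
    """Time-stamp each frame's links, then add identity links across frames."""
    narrative = set()
    for t, frame in enumerate(frames):
        for (a, rel, b) in frame:
            narrative.add(((a, t), rel, (b, t)))
        if t > 0:
            # Nodes appearing in consecutive frames get "same-object" links,
            # knitting the frames into a narrative sequence.
            prev_nodes = {n for (x, _, y) in frames[t - 1] for n in (x, y)}
            cur_nodes = {n for (x, _, y) in frame for n in (x, y)}
            for n in prev_nodes & cur_nodes:
                narrative.add(((n, t - 1), "same-object", (n, t)))
    return narrative

story = link_frames([frame_t0, frame_t1])
```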

Higher levels of abstraction can be generated by looking at patterns in objects 
(just as objects are generated by looking at patterns of parts) and adding 
additional nodes which serve to group together the lower level nodes into 
patterns based on link types. Memory stores only these higher-level nodes 
(parts, objects, & upward), not the lower levels which served in their 
construction, and memory fades from the lowest levels upward, causing us to 
lose detail but not gist.
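
A sketch of the grouping-and-fading idea, with made-up levels and link types 
purely for illustration (the numeric levels and the `fade` rule are my 
simplifications, not a committed design):

```python
# Memory organized by abstraction level:
# level 0 = parts, level 1 = objects, level 2 = higher-level groupings.
memory = {
    0: {("handle", "attached-to", "blade")},
    1: {("knife", "next-to", "fork")},
    2: {("place-setting", "groups", "knife"),
        ("place-setting", "groups", "fork")},
}

def fade(memory, levels_lost):
    """Drop the lowest `levels_lost` levels: detail goes first, gist survives."""
    return {lvl: links for lvl, links in memory.items() if lvl >= levels_lost}

remembered = fade(memory, 1)
# Part-level detail is gone; object and abstraction levels remain.
```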

Language (or rather the semantic nets which represent meaning) can then be 
treated as predicates which match the upper levels of the perceptual network, 
acquiring a non-Boolean or fuzzy truth value based on how well they match 
perceptual information retrieved from memory. Thinking is implemented at this 
level, as well. Thinking processes serve to generate truthful predicates based 
both on direct observation of higher-level perceptual subnets, and indirect 
reasoning based on observed patterns in these perceptual subnets. Reasoning can 
reach as far down the hierarchy of nodes as was stored in memory, but starts 
from the top-most level and does not reach down to these lower levels except 
when higher-level abstractions indicate that additional or finer-grained detail 
is needed. (This is how we avoid the combinatorial bottleneck.) Predicates 
generated by observation or reasoning can be directly read off and converted to 
natural language using the same mechanisms as the semantic parser, but in 
reverse. (I've got much of this mechanism working, too.)
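
A sketch of the fuzzy-matching idea; the scoring rule here (fraction of the 
predicate's links present in the perceptual network) is an illustrative 
assumption, not the actual mechanism.

```python
def fuzzy_truth(predicate, perception):
    """Return a value in [0, 1]: how well the predicate's links match perception."""
    if not predicate:
        return 1.0
    return len(predicate & perception) / len(predicate)

# Perceptual network retrieved from memory.
perception = {("cup", "on", "table"),
              ("cup", "color", "red"),
              ("table", "made-of", "wood")}

p1 = {("cup", "on", "table")}                            # fully supported
p2 = {("cup", "on", "table"), ("cup", "color", "blue")}  # half supported

fuzzy_truth(p1, perception)  # 1.0
fuzzy_truth(p2, perception)  # 0.5
```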

I have yet to start work on the perceptual systems, but the semantic 
representation of meanings/predicates is rolling along nicely. Perception is 
going to take a lot more work, because there's a lot more data to process, but 
I'm watching the research as it unfolds, and I see a lot being done in the 
direction of object detection. Even if we create a perceptual system that isn't 
as detailed in representation as human perception (i.e. it represents objects 
and their interactions, but not their parts or lower level abstractions), it 
should be possible to start work on a reasoning system that handles 
higher-level abstractions and is able to communicate its thoughts verbally or 
in text. This is the key point at which artificial general intelligence gains 
traction as a technology worthy of financial investment.
