Vladimir Nesov wrote:
On Sat, Apr 26, 2008 at 12:52 AM, a <[EMAIL PROTECTED]> wrote:
My approach to visual reasoning involves some form of searching for similar
images. It associates images using spreading-activation techniques to
disambiguate vision and to speed up image matching. Connections between
nodes strengthen as they are simultaneously activated, while preserving their
context sensitivity. It is a bottom-up, emergent approach that learns the
basic visual features first so it can selectively concentrate on
higher-level features, such as letters or words, while avoiding unneeded
parts. It is linked to a motivation system that concentrates on the
important features of an image and then memorizes them implicitly.
My method constantly searches for high-level features of images in a
context-sensitive way. Because intelligence is context-sensitive, image
matching can be sped up thousands of times by searching only a subset of
the stored images, selected by the current context.
Spatial reasoning is the process of using explicit spatial-manipulation
techniques. If the nodes are simultaneously activated, it can be further
optimized through connection strengthening without any modality-specific
methods, similar to intuition.
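The "connections strengthen as nodes are simultaneously activated" rule reads like a Hebbian update. As a rough sketch, under that assumption (the rate, the pairwise rule, and all names are mine, not from the email):

```python
# Hypothetical Hebbian-style sketch: the weight between every pair of
# co-active nodes grows a little each time they fire together.
from collections import defaultdict
from itertools import combinations

def strengthen(weights, active_nodes, rate=0.1):
    """Increase the weight between every pair of co-active nodes."""
    for a, b in combinations(sorted(active_nodes), 2):
        weights[(a, b)] += rate
    return weights

weights = defaultdict(float)
strengthen(weights, {"edge_h", "edge_v", "corner"})
strengthen(weights, {"edge_h", "corner"})
# ("corner", "edge_h") co-occurred twice, so its weight is now the largest.
```

Context sensitivity would have to be layered on top, e.g. by keeping separate weights per context key, which this toy version does not do.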
Do you have explicit images, or are these processes run from just
pixels-as-features combined in features of higher and higher levels?
Some low-level features, such as shape, line detection, edge detection,
color, spatial location, and motion, are hard-coded. Everything else is
up to the neural network. Covariations are automatically "condensed"
into single image-like objects through connection strengthening.
For example, it can first learn the letters of an alphabet. These letters
are then "condensed" into discrete objects so it can learn words. The
words are then "condensed" in turn. This is the emergent approach.
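One simple way to read "condensing" is: feature sets that reliably co-occur get promoted to single higher-level nodes. A toy sketch under that assumption (the counting scheme and threshold are my own illustration):

```python
# Hypothetical sketch: promote feature sets that co-occur often enough
# into single composite nodes. Threshold and counting are assumptions.
from collections import Counter

def condense(episodes, threshold):
    """Return the feature sets seen at least `threshold` times,
    each treated as one new composite object."""
    counts = Counter(frozenset(ep) for ep in episodes)
    return {fs for fs, n in counts.items() if n >= threshold}

# Letters that repeatedly appear together as a word get condensed
# into one object, which can then itself be an element of episodes.
episodes = [("c", "a", "t"), ("c", "a", "t"), ("d", "o", "g"), ("c", "a", "t")]
condense(episodes, threshold=3)
```

Repeating the step on the condensed objects gives the letters-to-words-to-phrases hierarchy described above.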
Does this spreading activation run "instantaneously"?
Some activations run instantly, while others are delayed and primed, so
the process can behave like disambiguation (according to the connection
strengths and interconnections...)
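The instant-versus-primed distinction could be modeled as a two-threshold spreading step: strong total input fires a node now, weak input only primes it. This two-phase scheme and its thresholds are my own illustration of the behavior described, not the author's algorithm:

```python
# Hypothetical sketch: one spreading-activation step where strong input
# activates a node immediately and weak input merely primes it.
def spread(weights, active, threshold=0.5):
    """Return (fired, primed) node sets for one propagation step."""
    incoming = {}
    for (a, b), w in weights.items():
        if a in active:
            incoming[b] = incoming.get(b, 0.0) + w
        if b in active:
            incoming[a] = incoming.get(a, 0.0) + w
    fired = {n for n, w in incoming.items() if w >= threshold and n not in active}
    primed = {n for n, w in incoming.items() if 0 < w < threshold and n not in active}
    return fired, primed

# Toy usage: "q" strongly predicts "u" (fires) and weakly suggests "a" (primed).
weights = {("q", "u"): 0.9, ("q", "a"): 0.2}
spread(weights, {"q"})
```

Primed nodes would then need less additional evidence on a later step, which is one way delayed activation can act as disambiguation.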
Can this process
find spatiotemporal patterns?
It finds patterns by making implicit comparisons between implicitly
memorized images. The motivation engine directs that.
Is there a top-down influence?
It happens in the motivation engine. It is an emergent process.
Motivation is just prediction with high-level concepts...
Could you explain what context-sensitive search means in your case?
What limits what?
It is spreading activation with image matching.
Is there a difference between how you handle visual perception and
other processes in your system (that account for reasoning, action,
other modalities, etc.)?
My model isn't fully developed, but the only difference is the
low-level, vision-specific algorithms such as edge detection, image
matching...
-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/