In each case *some* customization of the generic processes will be needed in
order to make them achieve these tasks.  But I believe that the amount of
customization needed is much greater for the 1000x1000 case than for the
30x30 case.

Depends on what you're trying to do with the information.

Obviously 30x30 carries far less information than 1000x1000, which makes it easier to process exhaustively; on the other hand, it's harder to do something interesting with such an impoverished data stream. You may find that, upstream of low-level visual processing, it's actually a greater challenge to do something of interest with a very low-res information stream.

An upper-level process would prefer to have the 1000x1000 representation pared down to 30x30 (trivial), apply the same transformations you would have applied to the 30x30, and then be able to "zoom in" where deemed necessary to get more detail.
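A minimal NumPy sketch of this two-step scheme, with a random array standing in for the 1000x1000 pixel stream (the function names and the choice of block averaging are illustrative assumptions, not anything from the thread):

```python
import numpy as np

# Hypothetical 1000x1000 input frame; random values stand in for pixels.
rng = np.random.default_rng(0)
frame = rng.random((1000, 1000))

def downsample(img, out=30):
    """Pare a square image down to out x out by block averaging.
    Crops to the largest multiple of `out` first (1000 -> 990 here)."""
    k = img.shape[0] // out              # block size: 33 for 1000 -> 30
    crop = img[:k * out, :k * out]
    return crop.reshape(out, k, out, k).mean(axis=(1, 3))

coarse = downsample(frame)               # the cheap 30x30 summary
assert coarse.shape == (30, 30)

def zoom(img, row, col, out=30):
    """"Zoom in": return the full-resolution block behind coarse cell (row, col)."""
    k = img.shape[0] // out
    return img[row * k:(row + 1) * k, col * k:(col + 1) * k]

patch = zoom(frame, 10, 20)              # 33x33 pixels of detail, on demand
assert patch.shape == (33, 33)
```

The point is that the pared-down map costs one pass over the image, while the full-resolution data stays available for whichever cells turn out to matter.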

In short, you could probably do more with less computing power if you started with a 1000x1000 input stream than with a 30x30, because you have the flexibility of examining either resolution.

This is why our brains are inundated with attentional mechanisms that operate within and across every sensory modality. Simple processing where sufficient, deeper processing where necessary, gets you more bang for your buck.
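That "simple where sufficient, deep where necessary" policy can be sketched as a salience-gated zoom: score every coarse cell cheaply, then spend full-resolution processing only on the top few. Here block variance stands in for salience, and the planted bright region is a made-up example, not anything from the thread:

```python
import numpy as np

rng = np.random.default_rng(1)
frame = rng.random((1000, 1000))
frame[330:430, 660:760] += 2.0           # a hypothetical "interesting" region

K, OUT = 33, 30                          # block size, coarse grid size
crop = frame[:K * OUT, :K * OUT]
blocks = crop.reshape(OUT, K, OUT, K).swapaxes(1, 2)   # shape (30, 30, 33, 33)

# Cheap pass: one salience number per coarse cell (here, block variance,
# which lights up at the edges of the planted region).
salience = blocks.var(axis=(2, 3))

# Deep pass only where it pays: full-resolution patches for the top 5 cells.
top = np.argsort(salience, axis=None)[-5:]
rows, cols = np.unravel_index(top, salience.shape)
patches = [blocks[r, c] for r, c in zip(rows, cols)]
```

Everything else gets only the 30x30 treatment; the 5 selected cells get the full 33x33 pixels behind them.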


-Brad





I suspect that:

* in the 30x30 case we can get away with some minor specializations and
customizations and then let the general cognition tools do the work, whereas

* in the 1000x1000 case we badly need a more complex hierarchical
architecture in which the lower levels are dealt with by very specialized
stuff, and the higher levels are more like the
mildly-specialized-general-cognition algorithms that are *all* you need in
the 30x30 case...

-- Ben


If a 30x30 pixel grid is what you mean by content-richness, then I do intend for Novamente to deal with *this* kind of content-richness in the fairly near future (how near depending on the achievement of relevant funding, blah blah blah).  I believe that this level of richness doesn't require the kind of complex, specialized pre-filtering that human-eye-level richness requires...

Urgh, the adjective applies to what goes on *inside*, not what comes in
from the outside. If you just create 30x30 PixelNodes and let the generic
processes do the rest, you don't have content-richness (or if you do, you
have to learn *all* of it). It's a question of what kind of built-in
support you have for stuff like the things I mentioned (resolving
ambiguity, object completion, invariance under transformations, temporal
patterns)... And yeah, I know Novamente can recognize temporal patterns
;->...
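To illustrate why invariance doesn't come for free from raw PixelNodes, here is a small made-up example (the grid and blob are hypothetical, not part of any Novamente design): the same object shifted a few cells shares nothing with its original pixel vector, so a learner working directly on pixels must discover translation invariance itself.

```python
import numpy as np

# A tiny "object": a bright 3x3 blob in a 30x30 grid of pixel activations.
grid = np.zeros((30, 30))
grid[5:8, 5:8] = 1.0
shifted = np.roll(grid, (4, 4), axis=(0, 1))   # the same object, moved slightly

# As raw pixel vectors the two frames have zero overlap: without built-in
# support, "same object, different place" looks like two unrelated inputs.
overlap = np.dot(grid.ravel(), shifted.ravel())
print(overlap)   # 0.0
```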

Moshe


-------
To unsubscribe, change your address, or temporarily deactivate your subscription, please go to http://v2.listbox.com/member/[EMAIL PROTECTED]

