Ben,

Comments below.

On Mon, Aug 9, 2010 at 12:00 PM, Ben Goertzel <b...@goertzel.org> wrote:

>
>
>> The human visual system doesn't evolve like that on the fly. This can be
>> proven by the fact that we all see the same visual illusions. We all exhibit
>> the same visual limitations in the same way. There is much evidence that the
>> system doesn't evolve accidentally. It has a limited set of rules it uses to
>> learn from perceptual data.
>>
>
>
> That is not a proof, of course.  It could be that given a general
> architecture, and inputs with certain statistical properties, the same
> internal structures inevitably self-organize
>
>
You're right, it's not a proof. I should organize the details and evidence
showing that the human brain has many of its processing algorithms built in.

Another example of this innate ability to process inputs the right way is
the fact that many language acquisition researchers believe that children
have a built-in hypothesis space that they use when learning language (see
generativism at http://en.wikipedia.org/wiki/Language_acquisition).

It is likely not enough to just give the system all the data it needs and let
it guess until it finds a good answer. The hypothesis space is likely too large.
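Just to make the size problem concrete, here is a toy illustration (my own, not
part of any particular design): compare searching over every possible boolean
concept on n binary perceptual features with searching only over conjunctions
of those features, i.e. with a built-in bias.

    # Toy illustration only: hypothesis-space sizes for a learner over
    # n binary perceptual features.

    def unconstrained_count(n):
        # every possible boolean function of n features
        return 2 ** (2 ** n)

    def conjunctive_count(n):
        # built-in bias: each feature is required-true, required-false,
        # or ignored, so only conjunctions are expressible
        return 3 ** n

    for n in (3, 4, 5):
        print(n, unconstrained_count(n), conjunctive_count(n))
    # 3        256    27
    # 4      65536    81
    # 5 4294967296   243

Even with only five features the unconstrained space is already in the
billions; some built-in bias is what keeps guessing tractable.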




> So I'm curious
>
> -- what are the specific pattern-recognition modules that you will put into
> your system, and how will you arrange them hierarchically?
>

Well, the first pattern-recognition modules are the ones for inferring scene
and object structures and properties from visual/lidar data. I can't really
be specific yet, because the detailed algorithms are still to be worked out.

The next set of pattern-recognition modules would be for inferring
relationships such as object whole-to-part relationships and other behavioral
relationships among objects. Basically, algorithms for inferring sparse or
dense models of objects. Again, it is quite hard to be specific about the
algorithms. There is a lot of detailed analysis that I have yet to do for
each type of problem and for how the whole is broken down into these types of
relationships. But, as you can see, I think the problem can be broken down
into generic components that can be reasoned about.
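Since I can't pin down the algorithms yet, the most I can sketch is the kind
of generic interface I have in mind for these components. Everything below is
a placeholder sketch, not an actual design:

    # Placeholder sketch only; class and method names are illustrative.

    class PatternModule:
        def learn(self, observations):
            # update the module's internal model from perceptual data
            raise NotImplementedError

        def infer(self, observation):
            # return the structures/relationships recognized in one observation
            raise NotImplementedError

    class SceneStructureModule(PatternModule):
        # infers scene and object structure/properties from visual/lidar data
        pass

    class PartWholeModule(PatternModule):
        # infers whole-to-part and behavioral relationships between the
        # objects produced by SceneStructureModule, i.e. sparse or dense
        # object models
        pass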

As for hierarchical design... I haven't decided yet. It really depends on
the purpose of the hierarchy and its function. That's why in the paper I
stress function before design.





>
> -- how will you handle feedback connections (top-down) among the modules?
>


That's a very good question. I really haven't decided yet, because I haven't
fully worked out all the pieces of the design and how they must interact to
solve problems. I'd need to analyze the specific requirements and the problems
such feedback is required to solve.

I guess one example of feedback might be the interpretation of ambiguous
visual input, such as single images from a less-than-ideal camera and scene
setup. Such problems require feedback from knowledge. I see this as a visual
processing system separate from the visual learning system I mentioned in the
paper, because the system I designed is for learning from less ambiguous
input. Once it has gained sufficient knowledge that way, it would be able to
process and understand more ambiguous input with confidence.
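If it helps, here is a rough sketch of the kind of top-down loop I have in
mind for that case. The propose/score/rerank calls are hypothetical
placeholders, not parts of an existing system:

    # Illustrative only; the module APIs here are hypothetical.

    def interpret_with_feedback(image, visual_module, knowledge_module,
                                rounds=3, accept=0.9):
        # bottom-up pass: candidate (interpretation, confidence) pairs
        candidates = visual_module.propose(image)
        best = max(candidates, key=lambda c: c[1])
        for _ in range(rounds):
            if best[1] >= accept:
                break
            # top-down pass: prior knowledge re-weights the candidates
            priors = knowledge_module.score([c[0] for c in candidates])
            candidates = visual_module.rerank(candidates, priors)
            best = max(candidates, key=lambda c: c[1])
        return best

The point is just that the top-down signal exists to resolve a specific
ambiguity, which is why I want the requirements to dictate where such loops go.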

So, clearly much still has to be worked out about the design. But my working
assumption is that these things can be broken down analytically and solved.
The alternative is to just hope that a similar-to-the-brain model is going to
work. I just don't think we can reasonably hope that such a model will work,
be effective, and be efficient. I think it is too hard to guess at the right
structure without actually showing how it solves all the problems we want to
apply it to.
*I really think it is very important for the functional requirements to
create the design.* Regardless of the approach, we need to understand why
the solutions we create solve the problems we want to solve. And if we can't
show that they do solve them or how they solve them, then the odds are
against us that they will work. That's my opinion.

If one could show how deep learning models, for example, really do solve all
the problems we want to solve, then I would be willing to use them. I just
don't see it, though. It doesn't seem that the solution was generated by the
problem. It seems more that the solution was generated based on its
similarity to the brain. I just can't accept the risk that such approaches
won't work.

Since I don't think reverse-engineering the brain makes sense either, my
only alternative to those two approaches seems to be the one I'm taking.

Dave


