On 12/12/06, John Scanlon <[EMAIL PROTECTED]> wrote:

 These rebukes to my statement that generating images is unnecessary are
right on target.  I misinterpreted the quoted statement by Hinton: "To
recognize shapes, first learn to generate images."

Therefore, I strongly recommend you read the article ;-)


 ... But can a full AGI be created based on a simple (or not-so-simple),
but mathematically-formalizable neural-net algorithm?  Intelligence seems to
be beyond any level of complexity that can be described and proved
mathematically in academic papers (which somebody here referenced recently
as a criterion for inclusion in this mailing list).


Yes and no. From the AI agent perspective, the agent has (crudely) four
tasks: (a) continuous adaptation to stimuli, and to itself, to maintain
a consistent and relaxed (in Boltzmann terms) low-energy model of the
world; (b) a goal/motivational system; (c) planning systems; and (d)
optimizations like reflex systems, etc.

No one on this board would disagree when I say that (a) is highly
non-trivial, and that when task (a) generates a good model, tasks such as (b)
and (c) become much easier to achieve. Therefore the study of task (a),
learning good models, deserves more respect from this community.

And non-traditional ANNs like Geoffrey Hinton's deliver a hell of a punch on
task (a). And I have yet to see a compelling GOFAI solution to this
task.
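The networks Hinton describes in that line of work are restricted Boltzmann machines trained with contrastive divergence. A minimal sketch of that idea, on invented toy data with one-step contrastive divergence (CD-1) in plain NumPy, might look like the following; it is an illustration of the technique, not Hinton's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy binary dataset: six patterns generated by two underlying "causes".
data = np.array([[1, 1, 1, 0, 0, 0],
                 [1, 0, 1, 0, 0, 0],
                 [1, 1, 1, 0, 0, 0],
                 [0, 0, 0, 1, 1, 1],
                 [0, 0, 1, 1, 1, 0],
                 [0, 0, 0, 1, 1, 1]], dtype=float)

n_visible, n_hidden, lr = 6, 2, 0.1
W = rng.normal(0, 0.1, (n_visible, n_hidden))
b_v = np.zeros(n_visible)   # visible biases
b_h = np.zeros(n_hidden)    # hidden biases

def reconstruction_error(v):
    # One up-down pass: infer hidden causes, then regenerate the image.
    h = sigmoid(v @ W + b_h)
    v_rec = sigmoid(h @ W.T + b_v)
    return np.mean((v - v_rec) ** 2)

err_before = reconstruction_error(data)

for epoch in range(2000):
    # Positive phase: sample hidden units given the data.
    h_prob = sigmoid(data @ W + b_h)
    h_state = (rng.random(h_prob.shape) < h_prob).astype(float)
    # Negative phase (CD-1): reconstruct visibles, then re-infer hiddens.
    v_prob = sigmoid(h_state @ W.T + b_v)
    h_prob2 = sigmoid(v_prob @ W + b_h)
    # Update weights toward lowering the energy of the observed data.
    W += lr * (data.T @ h_prob - v_prob.T @ h_prob2) / len(data)
    b_v += lr * (data - v_prob).mean(axis=0)
    b_h += lr * (h_prob - h_prob2).mean(axis=0)

err_after = reconstruction_error(data)
print(f"reconstruction error: {err_before:.3f} -> {err_after:.3f}")
```

After training, the network reconstructs (i.e., generates) its inputs far more accurately: exactly the "learn to generate images" sense of a consistent, low-energy model of the data.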


More like what the hotshot 3-D video-game programmers do without any
mathematical training.  But that's just an analogy.  AI is obviously a lot
more difficult, by orders of magnitude, than video-game programming, and yet
it seems to me that the breakthroughs will come more from
fly-by-the-seat-of-your-pants imagination and creativity than from formal
mathematical analysis of ANNs.


I'm happy to be just as broad-minded as others on this list. Of course,
"fly-by-the-seat-of-your-pants" approaches often incorporate a great deal
of creativity and fresh ideas, which are necessary ingredients
for successful AGI in the near future. But there is always the reality
check.

That said, on the one hand, some of the formal mathematics and
statistics surrounding ANNs are just intellectual wrapping to get published.
But beyond those formulas lie ideas that are just as wild as the ideas on this
board.
On the other hand, there is good reason to use mathematics and statistics.
From our perspective they are just *tools*: well-studied and
well-known tools that give you better certainty about things like
performance, convergence, ways to use probability distributions,
optimization, etc. Given decades of great progress in the
understanding of tools such as mathematics and statistics, how stupid would
we be not to make use of them? This is what the (mainly non-GOFAI) AI
community discovered in the '80s.

I'm afraid many people confuse mathematics and statistics with rigidity
and a lack of creativity.

Durk

 John





Kingma, D.P. wrote:

The ability to generate meaningful images is not a goal, but a very good
indicator that your system has formed a consistent model of its world. And
that's, in a nutshell, what cognition is about: creating and updating a
consistent internal model, using observations of the world. That's what
Novamente is doing with MindAgents, and that's what Stan Franklin is doing with
codelets, IIRC. And, needless to say, there is strong evidence that the brain
uses a strong unsupervised learning 'algorithm': look at the small size of
the genome, the uniformity of an undeveloped brain, and the strong ability to
move function among parts of the cortex after brain injury. That's why I think
that every successful artificial consistent-internal-model learning method
should be studied, and treated with respect, by everyone interested in AGI
development.

Some time ago on this board, people agreed that functionality should have
priority over optimization. I agree. In the Netherlands we have a saying:
better to have one bird in the hand, than ten in the air.

12/10/06 Joel Pitt wrote:
I think creativity does have a lot of importance to any useful AGI.

If you can't visualize (whether spatially or in an abstract sense) the
result of your actions, how do you know whether you should proceed
with them?

--
-Joel

"Unless you try to do something beyond what you have mastered, you
will never grow." -C.R. Lawton


On 12/10/06, Bob Mottram wrote:

Perception isn't just a question of passively recording information.  It's
a process which requires active interpretation of often incomplete or
ambiguous data.  When looking at an image, the process of interpretation
involves a feedforward stream originating from the retina ("bottom up") and
also a feedback ("top down") stream originating from memories, concepts,
prior expectations, conscious states, etc.  It's the synchronisation of
these two streams which results in a percept.
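One way to make this two-streams picture concrete is a toy Bayesian fusion, in which a top-down prior (expectation) and a bottom-up likelihood (sensory evidence) multiply to produce the percept. The "duck/rabbit" labels and the numbers below are invented purely for illustration, not taken from any model in this thread:

```python
import numpy as np

# Two interpretations of an ambiguous image.
labels = ["duck", "rabbit"]

prior = np.array([0.7, 0.3])        # top-down: context/memory expects "duck"
likelihood = np.array([0.4, 0.6])   # bottom-up: pixels slightly favour "rabbit"

# The percept: the two streams are combined multiplicatively and normalised.
posterior = prior * likelihood
posterior /= posterior.sum()

print(dict(zip(labels, posterior.round(3))))
```

Here the top-down expectation outweighs the weak bottom-up evidence, so the percept settles on "duck"; stronger sensory data would flip it.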


On 10/12/06, John Scanlon  wrote:
>
>  Recognizing shapes by an AGI and being able to talk to the AGI about
> them is the first step -- a very necessary step.  But I don't understand why
> an AI system would have to be able to generate images.  That's not important
> at all.
>
> Kingma wrote:
>
> Some very recent papers by Geoffrey Hinton have raised my hopes about
> academic progress in neural network research. The guy has always been
> an ANN guru, but I find his latest work especially interesting.
>
> Especially
> "To recognize shapes, first learn to generate images." (Technical Report
> UTML TR 2006-004.)
> An interesting detail is that the generative model described in this
> paper beats SVMs in classification of image data. Furthermore, check out the
> network's ability to confabulate.
>
> There is also a recent paper in Science:
> "Reducing the dimensionality of data with neural networks"
>
> ------------------------------
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


