On 12/8/06, Bob Mottram [EMAIL PROTECTED] wrote:
Hinton basically seems to be using the same kind of architecture as Edelman,
in that you have both bottom-up and top-down streams of information (or I
often just call this feed-forward and feed-back to keep the terminology more
consistent with
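(Not code from Hinton's or Edelman's papers — just my own toy sketch of the two-stream idea: a tiny autoencoder where the bottom-up / feed-forward pass encodes an input into hidden features and the top-down / feed-back pass generates a reconstruction from them. All names and sizes here are made up for illustration.)

```python
import numpy as np

# Toy illustration of bottom-up (recognition) and top-down (generation)
# streams: a small one-hidden-layer autoencoder trained by gradient descent.

rng = np.random.default_rng(0)
n_vis, n_hid = 64, 16                         # e.g. flattened 8x8 patterns
W1 = rng.normal(0.0, 0.1, (n_vis, n_hid))     # bottom-up (recognition) weights
W2 = rng.normal(0.0, 0.1, (n_hid, n_vis))     # top-down (generative) weights

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Structured data: noisy copies of a few binary prototype patterns.
prototypes = (rng.random((4, n_vis)) > 0.5).astype(float)
idx = rng.integers(0, 4, 100)
flips = rng.random((100, n_vis)) < 0.05
data = np.abs(prototypes[idx] - flips)

lr, losses = 0.5, []
for _ in range(300):
    h = sigmoid(data @ W1)                    # bottom-up: recognize features
    v = sigmoid(h @ W2)                       # top-down: generate reconstruction
    losses.append(float(np.mean((v - data) ** 2)))
    d_v = (v - data) * v * (1.0 - v) / len(data)   # backprop of squared error
    d_h = (d_v @ W2.T) * h * (1.0 - h)
    W2 -= lr * h.T @ d_v
    W1 -= lr * data.T @ d_h

print(f"reconstruction loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

The point is only the architecture: the same hidden code is reached going up and used going down, so "learning to generate" and "learning to recognize" train each other.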
On 12/12/06, John Scanlon [EMAIL PROTECTED] wrote:
These rebukes to my statement that generating images is unnecessary are
right on target. I misinterpreted the quoted statement by Hinton: To
recognize shapes, first learn to generate images.
Therefore, I strongly recommend you read the article.
No GOFAI here.
John Scanlon wrote:
[snip]
And bottom-up processing combined with top-down processing is also
perfectly reasonable and necessary. But can a full AGI be created based
on a simple (or not-so-simple) but mathematically formalizable
neural-net algorithm? Intelligence seems to be beyond any
Whoops: there was an editing catastrophe in that last message. I
should have concluded with:
What I said in my paper is not that we need random, unbridled
imagination and creativity, but that we *do* need imagination and
creativity within a systematic framework. That is my approach.
Sorry, I meant that someone said that links to one's published papers should
be the criterion. Not necessarily mathematical proofs.
Richard Loosemore wrote:
John Scanlon wrote:
[snip]
And bottom-up processing combined with top-down processing is also
perfectly reasonable and necessary.
On 12/8/06, Bob Mottram [EMAIL PROTECTED] wrote:
However, as the years went by I became increasingly dissatisfied with this
kind of approach. I could get NN systems to work quite well on small toy
problems, but when trying to build larger, more practical systems (for
example robots handling
Or two in the bush, as we say in America.
To recognize an incomplete image, it is important to be able
My opinion is that this list should occasionally be used just to point to
interesting papers. If you disagree, please let me know.
Some very recent papers by Geoffrey Hinton have raised my hopes for
academic progress in neural network research. The guy has always been
an ANN guru, but I find his
This looks like the stuff I was doing 15 years ago. I started off being
very interested in neural networks, which were all the rage at the time. I
used backpropagation and other methods, both supervised and unsupervised.
Like this guy I also tried unsupervised learning of classifiers followed by
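(For anyone who never played with this: here is the kind of small backpropagation experiment being described — my own illustrative sketch, not code from the thread. One hidden layer trained by gradient descent on the classic toy problem, XOR.)

```python
import numpy as np

# Toy backpropagation experiment: a 2-4-1 sigmoid network learning XOR.

rng = np.random.default_rng(1)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
t = np.array([[0.], [1.], [1.], [0.]])

W1 = rng.normal(0.0, 1.0, (2, 4)); b1 = np.zeros(4)
W2 = rng.normal(0.0, 1.0, (4, 1)); b2 = np.zeros(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

lr, losses = 1.0, []
for _ in range(5000):
    h = sigmoid(X @ W1 + b1)          # forward pass through the hidden layer
    y = sigmoid(h @ W2 + b2)
    losses.append(float(np.mean((y - t) ** 2)))
    d_y = y - t                        # sigmoid + cross-entropy output gradient
    d_h = (d_y @ W2.T) * h * (1.0 - h)
    W2 -= lr * h.T @ d_y;  b2 -= lr * d_y.sum(axis=0)
    W1 -= lr * X.T @ d_h;  b1 -= lr * d_h.sum(axis=0)

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

This works fine at toy scale — which is exactly the point of the post: the hard part was never XOR, it was scaling this sort of thing up to real perceptual problems.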