Oops, I'm afraid I linked the wrong, more theoretical (but also interesting)
paper. Can't find the right one anywhere, but I did find a lecture/video
about exactly the same research, which is quite amusing :)

http://www.cs.nyu.edu/~yann/talks/index.html
Look under "2007-12-06: Learning a Deep Hierarchy of Sparse Invariant
Features"

Durk



> On Wed, Mar 19, 2008 at 11:41 PM, Kingma, D.P. <[EMAIL PROTECTED]> wrote:
>
> > No problem ;)
> > One other autoencoder architecture you might find interesting is Yann
> > LeCun's "deep belief network":
> > http://yann.lecun.com/exdb/publis/pdf/ranzato-nips-07.pdf
> > (his most recent publication).
> >
> > Deep belief networks are basically stacked feedforward autoencoders,
> > trained with backprop, with a sparse coding mechanism on top. Yann
> > LeCun's networks are based on traditional feed-forward neural nets and
> > are in general much faster to train than Boltzmann machines.
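> >
> > As a rough illustration of the stacking idea (my own numpy sketch, not
> > code from the paper; the layer sizes, learning rate, and tied-weight
> > choice are all assumptions):
> >
> >     import numpy as np
> >
> >     rng = np.random.default_rng(0)
> >
> >     def sigmoid(x):
> >         return 1.0 / (1.0 + np.exp(-x))
> >
> >     def train_autoencoder(X, n_hidden, lr=0.5, epochs=200):
> >         """One tied-weight autoencoder layer, trained with backprop."""
> >         n, n_vis = X.shape
> >         W = rng.normal(0.0, 0.01, size=(n_vis, n_hidden))
> >         b_h, b_v = np.zeros(n_hidden), np.zeros(n_vis)
> >         for _ in range(epochs):
> >             H = sigmoid(X @ W + b_h)        # encode
> >             R = sigmoid(H @ W.T + b_v)      # decode with tied weights
> >             dv = (R - X) * R * (1.0 - R)    # delta at visible units
> >             dh = (dv @ W) * H * (1.0 - H)   # delta at hidden units
> >             W -= lr * (dv.T @ H + X.T @ dh) / n  # both tied-weight terms
> >             b_v -= lr * dv.mean(axis=0)
> >             b_h -= lr * dh.mean(axis=0)
> >         return W, b_h
> >
> >     def stack_layers(X, layer_sizes):
> >         """Greedy layer-wise stacking: each layer encodes the codes below."""
> >         params, codes = [], X
> >         for n_hidden in layer_sizes:
> >             W, b_h = train_autoencoder(codes, n_hidden)
> >             params.append((W, b_h))
> >             codes = sigmoid(codes @ W + b_h)  # input to the next layer
> >         return params, codes
> >
> >     # Toy usage: 64-dim inputs compressed through a 32-16 hierarchy.
> >     X = rng.random((100, 64))
> >     params, top_codes = stack_layers(X, [32, 16])
> >     print(top_codes.shape)  # (100, 16)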
> >
> > I agree with you that this idea of autoencoders / deep belief networks
> > could be interesting for AGI, since they provide a natural way of
> > automatically finding compact, useful representations of otherwise very
> > obscure data such as vision or speech. The above paper reports some
> > pretty impressive results on general vision. Currently, LeCun's
> > architecture is the best (and simplest) solution for general object
> > recognition...
> >
> > Durk
> >
> >
> > On Thu, Mar 6, 2008 at 5:13 PM, Ed Porter <[EMAIL PROTECTED]> wrote:
> >
> > > Durk,
> > >
> > > I am indebted to you for bringing this very interesting Hinton
> > > lecture to the attention of this list.
> > >
> > > It is highly relevant to AGI since, if it is to be believed, it
> > > provides a general architecture for learning invariant hierarchical
> > > representations (which are currently in vogue, for good reason) from
> > > presumably any type of data. It can perform both unsupervised and
> > > supervised learning. Hinton claims this architecture scales well. He
> > > does not mention how his system would learn temporal patterns, but
> > > presumably it could be expanded to do so, such as by using temporal
> > > buffers to store sequences of inputs over time (a rough sketch
> > > follows below). If it could learn temporal patterns, it would seem
> > > able to generate behaviors as well as recognize and generate
> > > patterns.
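> > >
> > > As a hypothetical sketch of that temporal-buffer idea (my own
> > > illustration; the window length k and frame size are arbitrary
> > > assumptions): keep a sliding window of the last k input frames and
> > > concatenate them into one vector, so a static network can see
> > > short-range temporal structure.
> > >
> > >     from collections import deque
> > >     import numpy as np
> > >
> > >     class TemporalBuffer:
> > >         """Sliding window that stacks the last k frames into one vector."""
> > >         def __init__(self, k, frame_dim):
> > >             self.frames = deque([np.zeros(frame_dim)] * k, maxlen=k)
> > >
> > >         def push(self, frame):
> > >             # Newest frame in, oldest frame out; return the window.
> > >             self.frames.append(np.asarray(frame, dtype=float))
> > >             return np.concatenate(list(self.frames))  # (k * frame_dim,)
> > >
> > >     # Usage: a 3-frame window over 4-dimensional inputs.
> > >     buf = TemporalBuffer(k=3, frame_dim=4)
> > >     for t in range(5):
> > >         window = buf.push(np.full(4, t))
> > >     print(window.shape)  # (12,)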
> > >
> > > Of course, it would require considerably more to become a full AGI,
> > > such as motivation, reinforcement-learning-like mental behavior,
> > > goal selection, goal pursuit, and novel pattern formation. But it
> > > would seem to provide a system for automatically learning and
> > > generating a significant fraction of the patterns and behaviors an
> > > AGI would need.
> > >
> > > I think the AGI community should be open to adopting such a
> > > potentially powerful idea from machine learning. If it is shown to
> > > be as powerful as Hinton says, it would add credence to the
> > > possibility of AGI by making the task of building an AGI seem
> > > considerably less complex.
> > >
> > > Ed Porter
> > >
> > > -----Original Message-----
> > > From: Kingma, D.P. [mailto:[EMAIL PROTECTED]]
> > > Sent: Sunday, March 02, 2008 12:08 PM
> > > To: [email protected]
> > > Subject: [agi] interesting Google Tech Talk about Neural Nets
> > >
> > > Gentlemen,
> > >
> > > For guys interested in vision, neural nets and the like, there's a
> > > very interesting talk by Geoffrey Hinton about unsupervised learning
> > > of low-dimensional codes. It's been on YouTube since December, but
> > > somehow it escaped my attention for some months:
> > >
> > > http://www.youtube.com/watch?v=AyzOUbkUf3M
> > >
> > > BTW, the back of Peter Norvig's head makes a guest appearance
> > > throughout most of the video ;)
> > >
> > > As an academic, I'm quite excited about this technique because it
> > > has the potential of solving non-trivial parts of problems in
> > > perception in a clean, practical, understandable way.
> > >
> > > Greets from Utrecht, Netherlands,
> > > Durk
> > >
