On 30/03/2008, Kingma, D.P. <[EMAIL PROTECTED]> wrote:
> Although I sympathize with some of Hawkins' general ideas about unsupervised
> learning, his current HTM framework is unimpressive in comparison with
> state-of-the-art techniques such as Hinton's RBMs, LeCun's convolutional nets
> and the promising low-entropy coding variants.
>
> But it should be quite clear that such methods could eventually be very handy
> for AGI. For example, many of you would agree that a reliable, computationally
> affordable solution to Vision is a crucial factor for AGI: much of the world's
> information, even on the internet, is encoded in audiovisual form.
> Extracting (sub)symbolic semantics from these sources would open a world of
> learning data to symbolic systems.
>
> An audiovisual perception layer generates semantic interpretation on the
> (sub)symbolic level. How could a symbolic engine ever reason about the real
> world without access to such information?

So a deafblind person couldn't reason about the real world? Put
earmuffs and a blindfold on and see what you can figure out about the
world around you. Less, certainly, but then you could also figure out
more about the world if you had a magnetic sense like pigeons do.

Intelligence is not about the modalities of the data you get; it is
about what you do with the data you do get.

All of the data on the web is encoded in electronic form; it is only
because of our comfort with incoming photons and phonons that it is
translated to video and sound. This fascination with A/V is useful,
but it does not help us figure out the core issues that are holding us
up as we try to create AGI.

  Will Pearson
