Sergio,

> All our sensory organs receive patchworks, not patterns.


I used the term 'pattern' in a broader sense.
For me, the input to a retinal array is a (vector) pattern.

Many unsupervised learning algorithms (including neural
ones) aim at, and achieve, the compression of patterns/vectors.

Can't we say that those algorithms (or the systems implementing
them) generate 'patterns' in your sense?
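To make the compression point concrete, here is a minimal sketch of unsupervised compression of vector patterns via principal component analysis (computed with an SVD). The data, dimensions, and noise level are all illustrative assumptions, not anything from this discussion:

```python
import numpy as np

rng = np.random.default_rng(0)

# 200 hypothetical "retinal" input vectors of dimension 50 that actually
# vary along only 3 underlying directions (plus a little noise).
latent = rng.normal(size=(200, 3))
mixing = rng.normal(size=(3, 50))
patterns = latent @ mixing + 0.01 * rng.normal(size=(200, 50))

# Compress: project the centered data onto its top-3 principal components.
centered = patterns - patterns.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
codes = centered @ vt[:3].T  # 200 x 3 compressed "patterns"

# Reconstruct and measure how little information was lost.
reconstructed = codes @ vt[:3]
error = np.linalg.norm(centered - reconstructed) / np.linalg.norm(centered)
print(codes.shape)  # (200, 3)
print(error < 0.1)  # reconstruction error is small: compression succeeded
```

In this sense, the learned principal directions are the "patterns" the algorithm extracts from the raw input vectors; a neural autoencoder does the same thing nonlinearly.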

On 2012/07/21, at 23:41, Sergio Pissanetzky wrote:

> Naoya,
> 
> before a neural system learns patterns, it has to make them. All our sensory
> organs receive patchworks, not patterns. Hofstadter said it: "The central
> problem of AI is how to start from 100 million dots of light on your retina
> and end with 'Hi, mom' in 0.5 sec?" The brain generates a pattern, and keeps
> it in store for further use. The pattern is an invariant representation. But
> how does it generate that pattern in the first place? The assumption that a
> computer program will do what a physical system does is not correct. You
> have to simulate it first. And to do that, you need to know how the
> invariant representations come into existence. From what you say, you don't.
> 
> 
> I have devoted a considerable amount of work to precisely that problem:
> generating invariant representations. It is very recent, so it is not
> widely known yet.
> 
> Sergio
> 
> 
> -----Original Message-----
> From: ARAKAWA Naoya [mailto:[email protected]] 
> Sent: Friday, July 20, 2012 9:10 PM
> To: AGI
> Subject: Re: [agi] Re: How the Brain Works -- new H+ magazine article, by me
> 
> On 2012/07/21, at 4:59, Mike Tintner wrote:
> 
>> Sergio: I noticed that Jeff Hawkins in On Intelligence writes about 
>> "invariant representations," which are hierarchies, but never explains 
>> how they come into existence. I am just a little confused.
> 
>> I wonder whether you have an outstanding point there. Everyone
>> *talks* about "invariant representations". Does anyone anywhere have 
>> any AI-worthy explanation of their nature/origin whatsoever?
>> 
>> (Of course, invariant representations overlap with concepts. There are 
>> psych/phil. explanatory theories of concepts, but that's why I put in 
>> "AI-worthy". I suspect they are all v. vague).
> 
> I interpreted "invariant representations" in the writing of Hawkins as
> learned patterns.
> When a neural system learns some pattern, say that of a line segment, it
> recognizes line segments regardless of their orientation or length (hence
> "invariant").
> "Invariant representations" in a neural network would be distributed, so
> that one cannot point somewhere and say, for example, *this* is the
> representation of a line segment...
> 
> * Gibsonian invariance may be a different notion, though Gibson may
>  have made the term popular among cognitive scientists (?).
> --
> Naoya ARAKAWA
> 



-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-c97d2393