Here I go again, back into the vortex... 
If the task is recognizing chairs, why wouldn't a simple constructive induction 
algorithm work?  Run it on the first N images and test on the remainder.  For 
each test image, also present the category "Chair" to the process.
Why wouldn't this successfully classify the chairs or chair portions in the 
images you presented? 
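A minimal sketch of that protocol, assuming hypothetical hand-made feature vectors stand in for whatever attributes a constructive-induction step would produce; the nearest-neighbour classifier and all data below are illustrative placeholders, not a real induction system:

```python
# Sketch of the proposed protocol: train on the first N labelled examples,
# test on the remainder, with "Chair" among the candidate labels.
# Feature vectors here are toy stand-ins for constructed attributes.
import math

def nearest_neighbor(train, query):
    """Return the label of the training example closest to `query`."""
    best_label, best_dist = None, math.inf
    for features, label in train:
        d = math.dist(features, query)
        if d < best_dist:
            best_label, best_dist = label, d
    return best_label

# Toy data: (feature vector, label).
examples = [
    ((1.0, 0.9), "Chair"), ((1.1, 1.0), "Chair"),
    ((0.1, 0.2), "NotChair"), ((0.0, 0.1), "NotChair"),
    ((0.95, 1.05), "Chair"), ((0.05, 0.0), "NotChair"),
]

N = 4  # train on the first N images, test on the rest
train, test = examples[:N], examples[N:]
predictions = [nearest_neighbor(train, f) for f, _ in test]
accuracy = sum(p == l for p, (_, l) in zip(predictions, test)) / len(test)
```

Whether such a learner generalises to real chair images is of course exactly the point under dispute; the sketch only makes the evaluation protocol concrete.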



From: [email protected]
To: [email protected]
Subject: Re: [agi] The Fundamental Misunderstanding in AGI [was Superficiality]
Date: Sun, 4 Nov 2012 22:06:56 +0000





Well, Todor, these are pictures of chairs and *you* have little problem 
recognising them (agreed, the odd one may cause confusion, but that is 
a fundamental part of the business of object recognition).
 
And you can recognise sensory images of such chairs from a distance, with 
very limited “3d/solid” information (it sounds like you’re talking mainly about 
viewing objects as “solids” close up, in the flesh).  We detect a lot of 
“3d” info, you see, from 2d pictures.
 
If you are suggesting that such object recognition processes in humans and 
animals are based on “solid” experience of comparable objects, I would 
agree.
 
But then you will have to give some indication of how the methods you 
outline apply to robots interacting with solid objects  – I wasn’t aware 
that you were/are doing that. Are you?
 
Ultimately you will have much the same problems dealing with solid objects 
as with their pictorial images. They too are as formally diverse as the images. 
Then certainly you/a robot can interact physically with the objects and 
ascertain if you can sit on them. But there will still be problems of 
classification – for example, separating stools/walls/boxes and other sittable 
objects from chairs. And more or less each solid chair in the pictured range 
still represents an extraordinary **transformation** of the last.
 
You’re not really advancing our, i.e. the general, analysis of the problem 
by implying it can all be easily solved by geometric operations. You have 
provided no evidence that they apply to those images, and by extension to their 
objects – or that they can explain the uniformities underlying that vast range 
of transformations.
 
I contend, as you know, that it is only by comparing them with the multiform 
transformations of irregular, more or less fluid objects, ranging from blobs and 
waterdrops all the way to rocks and wood chunks, that we can understand the 
design transformations of chairs – and not by comparing them with the 
uniform transformations of geometric objects.
 
I would suggest it is far better for you to recognise the difficulties of 
the problem – which you have effectively done by pointing out the enormous 
visual ambiguities of the chair images – with which I broadly agree.
 
Then, I suggest, you have more to offer – it’s useful for example to bring 
in, as you have done, how the recognition of objects depends on how we 
physically interact with those objects – whether in this instance we can sit on 
them.  (This exchange with you & Aaron has been valuable for me in 
underlining how probably essential that dimension is for object 
recognition).
 
But unless I missed it, your proposals don’t call for simulated interaction 
of the robot’s/viewer’s body with the objects viewed – such simulation is 
certainly necessary for us to recognise those chairs, and there is massive 
evidence for it. (Is anybody in AGI or robotics attempting such 
simulation?)
 


-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-c97d2393
Powered by Listbox: http://www.listbox.com
