Because such algos, as far as I know, only work on very similar images/objects 
with limited noise.

If objects are reasonably similar in form, such algos can effectively average 
them – or perform similar operations. And you can *see* how these algos work – 
i.e. what the similarities of the presented objects are.
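For concreteness, the kind of "averaging" such algorithms can do on similar images might look like the following minimal sketch (assuming aligned, same-size grayscale images as numpy arrays; the distance threshold is an arbitrary illustrative value, not from any particular system):

```python
import numpy as np

def build_template(images):
    # Average a stack of aligned, same-size grayscale images
    # into a single template image.
    stack = np.stack([img.astype(float) for img in images])
    return stack.mean(axis=0)

def matches_template(image, template, threshold=50.0):
    # Classify by mean absolute pixel distance to the template.
    # The threshold here is an arbitrary illustrative value.
    dist = np.abs(image.astype(float) - template).mean()
    return dist < threshold
```

The point of the sketch is that you can inspect the template directly and *see* what the shared form is – which is exactly what breaks down when the objects have no shared form to average.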

But these objects/chairs are not at all similar in form, and you’re not in any 
way explaining what their similarities/commonalities are, or how the algos will 
find them – just praying that such similarities exist.

There’s a book by Edward de Bono called Practical Thinking which is relevant 
here – as to vague thinking. He presented a class with a black cylinder on a 
desk, which after a few minutes fell over. He asked them to explain what had 
happened.

People, he pointed out, came up with many degrees of explanation – from the 
very vague “something (or ‘a mechanism’) inside made it fall over” all the way 
to fully detailed diagrams of what the mechanism was and how it made the 
cylinder fall over. Your (and most AGI-ers’) thinking is basically more towards 
the vague “mechanism inside”/“algo” end (with one qualification – see below).

You have to come up with a definite idea as to how the cylinder falls over – 
in this case, what the visually similar elements are – and/or by what 
operations initially dissimilar elements can be shown to be similar. (Todor 
has made a vague stab at the latter – but hasn’t visually demonstrated it in 
any way.)

But you are basically avoiding the problem – just as all AGI-ers are basically 
avoiding any direct confrontation with the unsolved problem[s] of AGI – of 
which this is one manifestation.

Here comes the qualification – AGI-ers do go into great detail about their 
architectures, logics etc. – which to them sounds technically impressive – but 
they never show how these architectures or logics apply, or are relevant, to 
the actual problem. Similarly, some kids in the class came up with very precise 
descriptions of some mechanism or motor inside the cylinder – but didn’t 
explain how it made the cylinder fall over.

From: Piaget Modeler 
Sent: Sunday, November 04, 2012 10:36 PM
To: AGI 
Subject: RE: [agi] The Fundamental Misunderstanding in AGI [was Superficiality]

Here I go again, back into the vortex... 

If the task is recognizing chairs, why wouldn't a simple constructive induction 
algorithm work?  Run it on the first N images 
and test on the remainder?   For each test image, also present the category 
"Chair" to the process.

Why wouldn't this successfully classify the chairs or chair portions in the 
images you presented? 
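The protocol proposed here – fit on the first N labelled images, test on the remainder – could be sketched as follows. This is a minimal illustration only: a nearest-centroid classifier stands in for the (unspecified) constructive-induction step, and the images are assumed to be aligned grayscale numpy arrays:

```python
import numpy as np

def train_test_protocol(images, labels, n_train):
    # Fit on the first n_train labelled images, test on the remainder.
    # A nearest-centroid classifier stands in for the (unspecified)
    # constructive-induction step; images are aligned grayscale arrays.
    X = np.stack([img.ravel().astype(float) for img in images])
    train_X, test_X = X[:n_train], X[n_train:]
    train_y, test_y = labels[:n_train], labels[n_train:]
    classes = sorted(set(train_y))
    centroids = {
        c: train_X[np.array([y == c for y in train_y])].mean(axis=0)
        for c in classes
    }
    # Assign each held-out image to the class with the nearest centroid.
    preds = [
        min(classes, key=lambda c: np.linalg.norm(x - centroids[c]))
        for x in test_X
    ]
    accuracy = sum(p == t for p, t in zip(preds, test_y)) / max(len(test_y), 1)
    return preds, accuracy
```

Whether anything of this shape succeeds on real chair images is, of course, precisely the point in dispute.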

--------------------------------------------------------------------------------
From: [email protected]
To: [email protected]
Subject: Re: [agi] The Fundamental Misunderstanding in AGI [was Superficiality]
Date: Sun, 4 Nov 2012 22:06:56 +0000


Well, Todor, these are pictures of chairs and *you* have little problem 
recognising them – (agreed, the odd one may cause confusion – but that is a 
fundamental part of the business of object recognition).

And you can recognise sensory images of such chairs from a distance, with very 
limited “3d/solid” information (it sounds like you’re talking mainly about 
viewing objects as “solids” close up, in the flesh). We detect a lot of “3d” 
info, you see, from 2d pictures.

If you are suggesting that such object recognition processes in humans and 
animals are based on “solid” experience of comparable objects, I would agree.

But then you will have to give some indication of how the methods you outline 
apply to robots interacting with solid objects – I wasn’t aware that you 
were/are doing that. Are you?

Ultimately you will have much the same problems dealing with solid objects as 
with their pictorial images. They too are as formally diverse as the images. 
Then certainly you/a robot can interact physically with the objects and 
ascertain whether you can sit on them. But there will still be problems of 
classification – for example, separating stools/walls/boxes and other sittable 
objects from chairs. And more or less each solid chair in the pictured range 
still represents an extraordinary **transformation** of the last.

You’re not really advancing our, i.e. the general, analysis of the problem by 
implying it can all be easily solved by geometric operations. You have 
provided no evidence that they apply to those images, and by extension to 
their objects – and can explain the uniformities underlying that vast range of 
transformations.

I contend, as you know, that it is only by comparing them with the multiform 
transformations of irregular, more or less fluid objects ranging from blobs and 
waterdrops all the way to rocks and wood chunks, that we can understand the 
design transformations of chairs – and not by comparing them with the uniform 
transformations of geometric objects.

I would suggest it is far better for you to recognise the difficulties of the 
problem – which you have effectively done by pointing out the enormous visual 
ambiguities of the chair images – with which I broadly agree.

Then, I suggest, you have more to offer – it’s useful for example to bring in, 
as you have done, how the recognition of objects depends on how we physically 
interact with those objects – whether in this instance we can sit on them.  
(This exchange with you & Aaron has been valuable for me in underlining how 
probably essential that dimension is for object recognition).

But unless I missed it, your proposals don’t call for simulated interaction of 
the robot’s/viewer’s body with the objects viewed – such simulation is 
certainly necessary for us to recognise those chairs – and there is massive 
evidence for it. (Is anybody in AGI or robotics attempting such simulation?)




-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-c97d2393
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-2484a968
Powered by Listbox: http://www.listbox.com
