When I personally look at these chairs, I don't recognize them as such (even though I know what they're supposed to be) until two things have happened: (1) I imagine how the object must be shaped in 3D based on the silhouette it casts and (2) I figure out where it is I'm supposed to sit. Counting from the top left, the one on row 2, column 2, and the one on row 3, column 5, are the ones that are most difficult to recognize as chairs, because I don't see where I'm supposed to sit right off. Even on a second glance, I have to think about it for a second, though this time interval drops each time I look at them.
Here's what I think is happening, broken down even further:

* I am primed to think of chairs by the conversation that led me to look at the image.
* I look at the image and reconstruct the shape of the object(s), based on assumptions of perspective and other visual cues hardwired into my visual centers. (This probably takes some simulation, as does the recognition of a place to sit.)
* I recognize that the shape of the object is well-connected and that the object is standing off the ground, so it must be self-supporting. (I look at what supports what based on the apparent connectivity of the parts, and transfer the property of "being supported" from each part to its neighbors transitively, starting from the bottom, which must be resting on something in my default frame of reference. This may be overkill, however, since clearly the object isn't collapsing, so it must be supported somehow. Common sense wins over heavy computation.)
* I recognize a bowlish or L shape (which each of the aforementioned chairs does not obviously conform to) where I could sit nicely, provided the object is at the right scale and is strong enough to support not only itself but me too. (This is facilitated by the fact that I'm already primed to look for chairs by the discussion. Some of these silhouettes I would not have recognized as chairs had they been presented alone in a different context. For example, row 4, column 3 looks more like a stylized satellite dish than a chair, and the one below it could be mistaken for a floppy, oddly shaped Lone Ranger-style mask. Row 2, column 5 looks like a disembodied pair of lips. I could go on...)
* I recognize that the object appears to be an artifact, based on general characteristics of artifactual objects I've observed in the past, such as diverse, seemingly unrelated parts and lots of straight or smoothly curved edges, combined with the presence of a clear use case (sitting).
* I fail to identify another clear use case other than sitting.
* I put it all together and see that the object seems to be intentionally designed (since it's an artifact) for someone to sit on (since it has a place to sit and no other clear use), and therefore it conforms to my concept of a chair.
* I validate this by the fact that we are talking about chairs, and that the image appears to consist of a repeated theme.

As for the oddballs I mentioned: even though I don't recognize them as chairs right off, I can see clearly that there's a repeated theme of rows and columns of chairs, and I have other contextual information leading me to think these are all chairs. So I gaze at them, actively searching for a potential sitting place, until I recognize one and feel satisfied. This search takes less time each time I do it, because I'm exercising and strengthening my memory/recall, which speeds the process up.

On Tue, Oct 23, 2012 at 6:31 PM, Mike Tintner <[email protected]> wrote:

> Aaron: No, you can't recognize those using images. Yes, you can recognize them with other means
>
> Sounds mystical. What does this element - “the ability to support you etc” physically consist of – and in what aspect of these chairs is it evident? And with what means can you recognize it that are not sensory/imagistic?
>
> In my next post I will physically analyse parts of these chairs – why don’t you do similar?
>
> P.S. One can realistically frame a concept like you’re referring to – let’s crudely call it “bum-supportability”. But the problems of analysing that concept are much the same, only more complicated, as those of analysing chairs. You’re in danger of getting into an infinite regress.
>
> *From:* Aaron Hosford <[email protected]>
> *Sent:* Wednesday, October 24, 2012 12:03 AM
> *To:* AGI <[email protected]>
> *Subject:* Re: [agi] Re: Superficiality Produces Misunderstanding - Not Good Enough
>
> Sorry, gmail didn't like a key combination.
> As I was saying:
>
> Considering I just told you what the COMMON ELEMENTS and COMMON RELATIONSHIPS OF THOSE ELEMENTS were, namely "not the shape, but the combination of capability to support a behind and the potential inclination of a person to take advantage of that capability (or intention of the creator to provide such an artifact)", I'm going to conclude you either weren't paying attention or didn't understand what I was saying.
>
> The common elements are:
> 1. the ability to support you while sitting
> 2. a person's intention for it to be used in that way
>
> No, you can't recognize those using images. Yes, you can recognize them with other means. Once you've built something that works in these terms (how an object is used) instead of merely how an object is shaped, it's easy to apply those same terms to other classes of objects.
>
> On Tue, Oct 23, 2012 at 5:59 PM, Aaron Hosford <[email protected]> wrote:
>
>> Considering I just told you what the COMMON ELEMENTS and COMMON RELATIONSHIPS OF THOSE ELEMENTS were, namely "not the shape, but the combination of capability to support a behind and the potential inclination of a person to take advantage of that capability (or intention of the creator to provide such an artifact)", I'm going to conclude you either weren't paying attention or didn't understand what I was saying.
>>
>> The common elements are:
>> 1. the ability to support you while sitting
>>
>> On Tue, Oct 23, 2012 at 1:04 PM, Mike Tintner <[email protected]> wrote:
>>
>>> Aaron,
>>>
>>> I have just sent out a post to PM wh. applies equally to you.
>>>
>>> This is waffle.
>>> You have to identify -
>>>
>>> what are the COMMON ELEMENTS - and COMMON RELATIONSHIPS OF THOSE ELEMENTS – that will enable you or your semantic net to identify these different figures as belonging to the same class of “chair” and not “collages of wood” or “piles of assorted forms” or “computer desk” or “collections of tools”?
>>>
>>> ARE there any common elements?
>>>
>>> You haven’t identified any.
>>>
>>> You have to provide a direct clue as to how you are going to solve this problem – the problem of AGI – and not just waffle.
>>>
>>> *From:* [email protected]
>>> *Sent:* Tuesday, October 23, 2012 6:34 PM
>>> *To:* AGI <[email protected]>
>>> *Subject:* Re: [agi] Re: Superficiality Produces Misunderstanding - Not Good Enough
>>>
>>> The thing which typifies the category "chair" is not the shape, but the combination of capability to support a behind and the potential inclination of a person to take advantage of that capability (or intention of the creator to provide such an artifact). These are things that are easy to represent in semantic nets, and difficult to represent as rules about shape.
>>>
>>> If I have a representation of an object as a semantic net describing its parts and their physical relationships to each other, I can write a straightforward algorithm to analyze the transitive "supports" and "is connected to" relations in that description to determine whether the spot I intend to sit is supported. I can also determine whether or not my behind, when placed there, will itself be supported, or whether I'll slide off or topple over.
>>>
>>> The network generating algorithm can be designed to provide the information needed to perform this simulation (simulation being the reason you say images are necessary in the first place).
>>> Once the simulation has been performed the first time, the node representing the chair as a whole object can be labeled with a summary of the results, acting as a cache for relevant information so that the expensive operation of full physical simulation can be avoided the next time the information is needed. It is this caching ability that gives hierarchical semantic nets their leg up over other ways of representing the problem.
>>>
>>> -- Sent from my Palm Pre
>>>
>>> ------------------------------
>>> On Oct 23, 2012 11:30 AM, Mike Tintner <[email protected]> wrote:
>>>
>>> PM & Aaron,
>>>
>>> You do realise that whatever semantic net system you use must apply to not just one chair, but chair after chair – image after image?
>>>
>>> Bearing that in mind, explain the elements of your semantic net which you will use to analyse these fairly simple figures as *chairs*:
>>>
>>> http://image.shutterstock.com/display_pic_with_logo/95781/95781,1218564477,2/stock-vector-modern-chair-vector-16059484.jpg
>>>
>>> Let’s label these chairs 1-25 (going L to R from the top down, row after row).
>>>
>>> Start with just 1. and 2. top left and explain how your net will recognize 2 as another example of 1.
>>>
>>> How IOW do you define a “chair” in terms of simple abstract forms?
>>>
>>> Then we can apply your system, successively, to 3. 4. etc.
>>>
>>> This is the problem that has defeated all AGI-ers and all psychologists and philosophers so far.
>>>
>>> But Aaron (and PM?) has a semantic net solution to it - if you can solve jungle scenes, this should be a piece of cake.
>>>
>>> I am saying, Aaron, you do not understand this problem – the problem of visual object recognition/conceptualisation/applicability of semantic nets.
>>>
>>> You are saying you do – and it’s me who is confused. Show me.
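To make the idea concrete: the transitive "supports" analysis and the node-level caching described above might look roughly like the following toy sketch. This is purely illustrative; the class, function, and relation names are hypothetical, not taken from any actual system.

```python
# Toy sketch (illustrative only) of the transitive "supports" analysis
# and whole-object caching described above. A chair is represented as a
# tiny semantic net: parts as nodes, "supported_by" edges as relations.

class Part:
    def __init__(self, name, supported_by=()):
        self.name = name
        self.supported_by = list(supported_by)  # parts this one rests on

def is_supported(part, on_ground, _seen=None):
    """A part is supported if it rests directly on the ground, or
    transitively on some part that is itself supported."""
    _seen = _seen if _seen is not None else set()
    if part in _seen:            # guard against cycles in the net
        return False
    _seen.add(part)
    if part in on_ground:
        return True
    return any(is_supported(p, on_ground, _seen) for p in part.supported_by)

# Build a minimal chair: a seat resting on four legs, a back on the seat.
legs = [Part("leg%d" % i) for i in range(4)]
seat = Part("seat", supported_by=legs)
back = Part("back", supported_by=[seat])
on_ground = set(legs)

# Cache the result, playing the role of the summary label on the
# whole-object node, so the expensive analysis (or a full physical
# simulation) needn't be rerun the next time the question comes up.
cache = {}
def sittable(spot):
    if spot.name not in cache:
        cache[spot.name] = is_supported(spot, on_ground)
    return cache[spot.name]

print(sittable(seat))  # seat -> legs -> ground, so this prints True
```

The same traversal extends naturally to an "is connected to" relation, and the cached summary is exactly what lets a hierarchical net skip the simulation on repeat queries.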
>>> *From:* Piaget Modeler <[email protected]>
>>> *Sent:* Tuesday, October 23, 2012 4:41 PM
>>> *To:* AGI <[email protected]>
>>> *Subject:* RE: [agi] Re: Superficiality Produces Misunderstanding - Not Good Enough
>>>
>>> Mike,
>>>
>>> When you type "Chair", what should happen is the AGI's model should activate the chair concept, first at a perceptual level to form the pixels into letters, then at a linguistic level to form the letters into a word, then at a conceptual level, then at a simulation level where images of chair instances are evoked.
>>>
>>> This is just simple activation. Semantic networks tied into perception and simulation would achieve the necessary effect you seek. Transformations on these perception-simulation-semantic networks are what much of Piaget's work was about.
>>>
>>> ~PM.
>>>
>>> ------------------------------
>>> From: [email protected]
>>> To: [email protected]
>>> Subject: Re: [agi] Re: Superficiality Produces Misunderstanding - Not Good Enough
>>> Date: Tue, 23 Oct 2012 15:09:30 +0100
>>>
>>> CHAIR
>>>
>>> ...
>>>
>>> It should be able to handle any transformation of the concept, as in
>>>
>>> DRAW ME (or POINT TO/RECOGNIZE) A CHAIR IN TWO PIECES –
>>>
>>> ..SQUASHED
>>> ..IN PIECES
>>> ..HALF VISIBLE
>>> ..WITH AN ARM MISSING
>>> ..WITH NO SEAT
>>> ..IN POLKA DOTS
>>> ..WITH RED STRIPES
>>>
>>> Concepts are designed for a world of ever-changing, ever-evolving multiform objects (and actions). Semantic networks have zero creativity or adaptability – are applicable only to a uniform set of objects (basically a database) - and also, crucially, have zero ability to physically recognize or interact with the relevant objects. I’ve been into it at length recently. You’re the one not paying attention.
>>>
>>> The suggestion that networks or similar can handle concepts is completely absurd.
>>> This is yet another form of the central problem of AGI, which you clearly do not understand – and I’m not trying to be abusive – I’ve been realising this again recently – people here are culturally punchdrunk with concepts like *concept* and *creativity*, and just don’t understand them in terms of AGI.
>>>
>>> *From:* Jim Bromer <[email protected]>
>>> *Sent:* Tuesday, October 23, 2012 2:04 PM
>>> *To:* AGI <[email protected]>
>>> *Subject:* Re: [agi] Re: Superficiality Produces Misunderstanding - Not Good Enough
>>>
>>> Mike Tintner <[email protected]> wrote:
>>> AI doesn’t handle concepts.
>>>
>>> Give me one example to prove that AI doesn't handle concepts.
>>> Jim Bromer
>>>
>>> On Tue, Oct 23, 2012 at 4:24 AM, Mike Tintner <[email protected]> wrote:
>>>
>>> Jim: Mike refuses to try to understand what I am saying because he would have to give up his sense of a superior point of view in order to understand it
>>>
>>> Concepts have nothing to do with semantic networks.
>>> AI doesn’t handle concepts.
>>> That is the challenge for AGI.
>>> The form of concepts is graphics.
>>> The referents of concepts are infinite realms.
>>>
>>> What are you saying that is relevant to this, or that can challenge this – from any evidence?
-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-c97d2393
Modify Your Subscription: https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-2484a968
Powered by Listbox: http://www.listbox.com
