First of all, Mike, notice that you did not try to come up with an "example" that would "prove" that AI cannot handle concepts. The idea that you could do so was absurd, so what did you do? You avoided getting bogged down in an absurd demand. Now suppose I made a stupid remark like: SEE, I KNEW YOU COULDN'T COME UP WITH A SINGLE EFFIN EXAMPLE! BOY, YOU ARE CLUELESS. REALLY DIM, MIKE. Ok, I could say that you were unable to come up with an example, but it was an absurd challenge in the first place: no single example could prove the universal claim that no AI program can handle a concept. The problem is that you weren't able to back up the assertion any other way either.
However, what you did do was make an attempt to define how an AI program should handle a concept (that is significant, Mike - you did try to DEFINE it). That was unusually reasonable for you. However, your definition pointed to precisely the sort of goal that we want to achieve. Remember what John Kennedy said: "We choose to go to the moon in this decade and do the other things, not because they are easy, but because they are hard." It would be like someone saying in 1963 that no one could possibly reach the moon because no machine could go to the moon. You are saying that no computer program is actually capable of AGI. Well, duh... All you are doing now is agreeing with us. It is about time that you started to understand what it is that we are talking about.

As for the question of whether an AI program might model some concepts: well, they can. Quite a few AI programs would be able to recognize the parts of an object that had been broken, as long as the background was kept simple and the breakage or disassembly was kept simple. I can't provide you with a link that can act as an example, but I am pretty sure that there are programs that can do that. Or look at the different problem of self-driving cars. They can "recognize" a pedestrian crossing the street. They need special devices, but they are pretty good at the concept of a pedestrian crossing the path of the car. In fact, a lot of devices can recognize when something is in their path. This is pretty basic stuff. Those machines cannot think about a concept, but they can take that first step IF IT HAS BEEN DEFINED, JUST AS YOU DEFINED SOME OF THE PROPERTIES OF A CONCEPT.

Jim Bromer

On Tue, Oct 23, 2012 at 10:09 AM, Mike Tintner <[email protected]> wrote:

> CHAIR
>
> ...
>
> It should be able to handle any transformation of the concept, as in
>
> DRAW ME (or POINT TO/RECOGNIZE) A CHAIR IN TWO PIECES –..
> ..SQUASHED
> ..IN PIECES
> -HALF VISIBLE
> ..WITH AN ARM MISSING
> ...WITH NO SEAT
> ..IN POLKA DOTS
> ...WITH RED STRIPES
>
> Concepts are designed for a world of everchanging, everevolving multiform
> objects (and actions). Semantic networks have zero creativity or
> adaptability – are applicable only to a uniform set of objects (basically
> a database) - and also, crucially, have zero ability to physically
> recognize or interact with the relevant objects. I’ve been into it at
> length recently. You’re the one not paying attention.
>
> The suggestion that networks or similar can handle concepts is completely
> absurd.
>
> This is yet another form of the central problem of AGI, which you clearly
> do not understand – and I’m not trying to be abusive – I’ve been realising
> this again recently – people here are culturally punchdrunk with concepts
> like *concept* and *creativity*, and just don’t understand them in terms of
> AGI.
>
> *From:* Jim Bromer <[email protected]>
> *Sent:* Tuesday, October 23, 2012 2:04 PM
> *To:* AGI <[email protected]>
> *Subject:* Re: [agi] Re: Superficiality Produces Misunderstanding - Not
> Good Enough
>
> Mike Tintner <[email protected]> wrote:
> AI doesn’t handle concepts.
>
> Give me one example to prove that AI doesn't handle concepts.
> Jim Bromer
>
> On Tue, Oct 23, 2012 at 4:24 AM, Mike Tintner <[email protected]> wrote:
>
>> Jim: Mike refuses to try to understand what I am saying because he
>> would have to give up his sense of a superior point of view in order to
>> understand it
>>
>> Concepts have nothing to do with semantic networks.
>> AI doesn’t handle concepts.
>> That is the challenge for AGI.
>> The form of concepts is graphics.
>> The referents of concepts are infinite realms..
>>
>> What are you saying that is relevant to this, or that can challenge this
>> – from any evidence?
-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-c97d2393
Modify Your Subscription: https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-2484a968
Powered by Listbox: http://www.listbox.com
