Re: [agi] Groundless (AND fuzzy) reasoning - in one
On 8/9/2008 12:43 AM, Brad Paulsen wrote:
> Mike Tintner wrote:
>> That illusion is partly the price of using language, which fragments
>> into pieces what is actually a continuous common sense, integrated
>> response to the world.
>
> Excellent observation. I've said it many times before: language is
> analog human experience digitized. And every time I do, people look at
> me funny.

I dunno about that. When I walk into my dining room, I don't see a continuous experience; I see a table and chairs and plates, etc. I clump the world into objects that have discrete boundaries. Isn't that digitization in the sense you mean?

I think of language more as serializing something that's parallel internally, and saving communication bandwidth by supplying just enough information to uniquely identify an already-known concept rather than fully describing it -- part of which is the use of symbols.

As a side note: there's some evidence that dolphins communicate by making sounds that imitate what their sonar would return. It's somewhat equivalent to my being able to wave my hands and make an image appear in the air. Thus there's no need for symbols, because they can reproduce the sensory input of the original object. If it had been as easy to do the same thing in our sensory environment (vision rather than sonar), we might never have evolved symbolic language and all that it led to.

---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: https://www.listbox.com/member/?member_id=8660244id_secret=108809214-a0d121
Powered by Listbox: http://www.listbox.com
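The bandwidth-saving point above can be made concrete: a word acts as a short index into a lexicon both parties already share, rather than a full description. A toy sketch (entirely my own illustration; the lexicon entries are invented for the example):

```python
# A word is a short key into a shared lexicon, not a full description.
# The descriptions below are invented placeholders.
lexicon = {
    "table": "flat horizontal surface, four legs, waist height, rigid...",
    "chair": "raised seat with backrest, sized for one sitting person...",
}

def transmit_symbol(word):
    # Symbolic channel: only the identifier crosses the wire.
    return word

def transmit_description(word):
    # "Dolphin" channel: reproduce the full sensory-style description.
    return lexicon[word]

msg_symbol = transmit_symbol("chair")
msg_full = transmit_description("chair")
print(len(msg_symbol), len(msg_full))  # the symbol is far shorter
```

The receiver pays for this savings by needing the same lexicon in advance, which is exactly the "already known concept" condition in the message above.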
Re: [agi] Groundless (AND fuzzy) reasoning - in one
Brad Paulsen wrote:
> Mike Tintner wrote:
>> That illusion is partly the price of using language, which fragments
>> into pieces what is actually a continuous common sense, integrated
>> response to the world.
>
> Mike,
>
> Excellent observation. I've said it many times before: language is
> analog human experience digitized. And every time I do, people look at
> me funny.
>
> There is an analogy to musical synthesizers that may be instructive.
> Early synthesizers attempted to recreate analog instruments using
> mathematics. The result sounded sort of like the real thing, but any
> human could tell it was a synthesized sound fairly easily. Then, people
> started recording instruments and sampling their sounds digitally.
> Bingo. I've been a musician all my life, classically trained, and am
> both a published songwriter and a professional guitarist. With the
> latest digital synthesizers I have in my home studio, it's very
> difficult for me to answer the question, "Is it real or is it
> digitized?" Even plucked string instruments, like the guitar, really
> sound like the analog original on the newer synths.
>
> Language is how we record analog human experience in digitized format.
> We need to concentrate on discovering how that works so we can use it
> as input to produce intelligence that sounds just like the real thing
> on output. I believe Matt Mahoney has been working on developing
> insights in this area with his work in information theory and
> compression. Once we crack the code, we will be able to build
> symbolized AGIs that will, in many cases, exceed the capabilities of
> the original, because the underlying representation will be so much
> easier to observe and manipulate.
>
> Cheers,
> Brad

However, language is not standardized in the same manner that musical synthesizers are standardized.
So while what you are saying may well be true within any one mind, when these same thoughts are shared via language, the message immediately becomes much fuzzier (to be re-sharpened when received, but with slightly different centroids of meaning). As a result of this being repeated many times, language, though essentially digital, has much of the fuzziness of an analog signal. Books and other mass media tend to diminish this effect, however.
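The re-sharpening idea above can be simulated: each relay snaps the received meaning to the nearest centroid in its own slightly perturbed codebook, so a discrete word accumulates analog-style drift in its sense. A toy sketch under invented parameters (codebook spacing and perturbation size are my assumptions, not anything from the thread):

```python
import random

random.seed(0)

def resharpen(codebook, x):
    # The receiver snaps the fuzzy incoming meaning to the nearest
    # centroid in their own personal codebook.
    return min(codebook, key=lambda c: abs(c - x))

# Word senses modeled as points on a line; each speaker's centroids are
# a slightly perturbed copy of the shared codebook.
shared = [0.0, 1.0, 2.0, 3.0]
meaning = 1.0  # the sense the first speaker intends

for _ in range(20):  # the thought is relayed through 20 minds
    personal = [c + random.gauss(0, 0.15) for c in shared]
    meaning = resharpen(personal, meaning)

# The word stayed discrete, yet the sense has wandered away from 1.0.
print(meaning)
```

A shared printed reference (a book, in the message's terms) would correspond to periodically resetting `meaning` against the unperturbed `shared` codebook, which is why mass media damp the drift.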
Re: [agi] Groundless (AND fuzzy) reasoning - in one
That goes back to my previous point about the amount and type of information our brain is able to extract from a visual input. It would be truly difficult, I'd say, even using advanced types of neural nets, to give a machine a set of examples of chairs, such as the ones Mike linked to, and have it recognize any subsequent object as a chair given *only* the visual stimuli. That's why I think it's amazing what kind of information we extract from visual input -- in fact, much of it is anything but visual.

Suppose, for example, that you wanted to take pictures of the concept 'force'. When we see a force we can recognize one, and the concept 'force' is very clearly defined, i.e. something either is a force or is not; there is no fuzziness in the concept. But teaching a machine to recognize it visually... well, that's a different story.

A practical example: since I learned rock-climbing, I see not only rocks but the space around me in a different way. Now, just by looking at a room or space, I see all sorts of hooks and places to hang on that I would never have thought of before. I learned to 'read' the image differently, that is, to extract different types of information than before. In the same manner, my mother, who is a sommelier, can extract by smelling a wine all sorts of information about its provenance, the way it was made, and the kind of ingredients, that I could never think of (so now I am shifting from visual input to odor input). To me it just smells like wine :)

My point is that our brain *combines* visual or other stimuli with a bank of *non-*visual data in order to extract relevant information. That's why talking about AGI from a purely visual perspective (or a purely verbal one) takes it out of the context of the way we experience the world.
But you could prove me wrong by building a machine that, using *only* visual input, can recognize forces :)
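The combine-stimuli-with-a-non-visual-bank point can be sketched as a scoring rule: a weak visual match is rescued by experience of how the object is used. This is purely my own toy illustration (the objects, features, and weights are invented), not anyone's proposed system:

```python
# Visual evidence alone: what a vision-only module might report about
# Mike's odd yellow soft "chair" versus a table.
visual_evidence = {
    "yellow-soft-thing": {"has_flat_surface": 0.6, "seat_height": 0.9},
    "table":             {"has_flat_surface": 1.0, "seat_height": 0.4},
}

# Non-visual bank: affordances learned by watching people in the world.
affordance_bank = {
    "yellow-soft-thing": {"seen_sat_on": True},
    "table":             {"seen_sat_on": False},
}

def chair_score(obj):
    v = visual_evidence[obj]
    visual = 0.5 * v["has_flat_surface"] + 0.5 * v["seat_height"]
    # Non-visual experience can override a weak visual match.
    bonus = 0.5 if affordance_bank[obj]["seen_sat_on"] else 0.0
    return visual + bonus

print(chair_score("yellow-soft-thing") > chair_score("table"))  # True
```

With the bank removed (`bonus = 0.0` everywhere), the table outscores the odd chair, which is the vision-only failure mode Valentina describes.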
Re: [agi] Groundless (AND fuzzy) reasoning - in one
Valentina:
> My point is that our brain *combines* visual or other stimuli with a
> bank of *non-*visual data in order to extract relevant information.

This is a v. important point. There is no such thing as *single-sense cognition*. Cognition is actually *common sense*/*multisensory*. Michael Tye is v. big on this. You cannot just look at/see something. You are simultaneously hearing/smelling/kinaesthetically aware of its distance from you and relation to you, etc. You cannot separate one sense from the rest -- even though, intellectually, we have the illusion that we can. That illusion is partly the price of using language, which fragments into pieces what is actually a continuous common sense, integrated response to the world.
Re: [agi] Groundless (AND fuzzy) reasoning - in one
But all these concepts should have something in common with chairs, enough for us to determine their usage and function and be able to decide each *is* a chair. Now, if it was a broken, toy, or paper chair with a spike in the middle, it would be understandable if it was not recognizable as a chair. In the image he provided, one of the chairs was a yellow soft thing that had 20 or so arm-like things coming out of it. I, for one, would not have recognized that as a chair in a room unless I had seen someone use it.

The AGI should be able to model many chair descriptions, and it should be able to know what a chair's usage is. Given those bits, and interaction in a real environment, it could figure out what the odd-looking thing in the corner is by watching others use it, for example.

___
James Ratcliff - http://falazar.com
Looking for something...

--- On Tue, 8/5/08, YKY (Yan King Yin) [EMAIL PROTECTED] wrote:

> On 8/6/08, Abram Demski [EMAIL PROTECTED] wrote:
>> There is one common feature to all chairs: They are for the purpose
>> of sitting on. I think it is important that this is *not* a visual
>> characteristic.
>
> It is possible to recognize chairs that cannot be sat on -- for
> example, a broken chair, a miniature chair, a toy chair, a paper
> chair, a chair with a long sharp spike on the seat, etc. =)
>
> YKY
Re: [agi] Groundless (AND fuzzy) reasoning - in one
On 8/6/08, Abram Demski [EMAIL PROTECTED] wrote:
> There is one common feature to all chairs: They are for the purpose of
> sitting on. I think it is important that this is *not* a visual
> characteristic.

It is possible to recognize chairs that cannot be sat on -- for example, a broken chair, a miniature chair, a toy chair, a paper chair, a chair with a long sharp spike on the seat, etc. =)

YKY
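One way to read Abram's purpose-based definition, quoted above, is as a membership test on how the object was *made*, not on whether it can currently be sat on -- which is why YKY's broken chair still qualifies while a merely sat-upon object does not. A minimal sketch (my own illustration; the attribute names are invented):

```python
def is_chair(obj):
    # Purpose is fixed at construction; observed use is irrelevant.
    return obj.get("made_for") == "sitting"

chair        = {"made_for": "sitting", "works": True}
broken_chair = {"made_for": "sitting", "works": False}  # still a chair
bottom       = {"made_for": None, "gets_sat_on": True}  # not a chair

print([is_chair(x) for x in (chair, broken_chair, bottom)])
# [True, True, False]
```

The miniature/toy/paper cases are where this test strains: they were not made for sitting, so the criterion excludes them even though we happily call them chairs, which is the tension the following messages argue over.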
Re: [agi] Groundless (AND fuzzy) reasoning - in one
Abram:
> There is one common feature to all chairs: They are for the purpose of
> sitting on. I think it is important that this is *not* a visual
> characteristic. There are several objections that you could raise, but
> I think that all of them will follow from the fuzziness of language,
> not the fuzziness of the actual concepts.

Your bottom is for the purpose of sitting on. How will your set of verbal definitions be able to tell the difference between a bottom and a chair? How will it know that if Abram sits on a table, it isn't also a chair? (And how will it know that, actually, it *could* be a chair?) And if "John hit Jack with a chair", will your set of verbal definitions not exclude this as truthful, if it has nothing about a chair being for the purpose of hitting people?

Not only can a chair, like any other concept of an object, take an infinity of forms, but it can be used for an infinity of functions and purposes. Here's S. Kauffman on the purposes of screwdrivers [or chairs]:

"Do we think we can prestate all possible tasks in all possible environments and problem situations such that we can construct a bounded frame for screwdrivers? Do we think we could write an algorithm, an effective procedure, to generate a possibly infinite list of all possible uses ... some of which do not yet exist?"

I don't think we could get started.

Out of interest, is there one single domain, one area however small and bounded -- like, say, understanding sentences about boxes or geometrical objects -- where ungrounded, purely symbolic reasoning has ever worked/got started at general-intelligence level, i.e. been able to understand all the permutations of a limited set of words? Just one.
Re: [agi] Groundless (AND fuzzy) reasoning - in one
Mike,

On Tue, Aug 5, 2008 at 3:16 PM, Mike Tintner [EMAIL PROTECTED] wrote:
> Abram:
>> There is one common feature to all chairs: They are for the purpose
>> of sitting on. I think it is important that this is *not* a visual
>> characteristic. There are several objections that you could raise,
>> but I think that all of them will follow from the fuzziness of
>> language, not the fuzziness of the actual concepts.
>
> Your bottom is for the purpose of sitting on. How will your set of
> verbal definitions be able to tell the difference between a bottom and
> a chair?

I'm not arguing against grounded AI, just against fuzzy concepts. So, I can say that you are invoking a verbal ambiguity when you say "sitting on" in the above sentence, not a conceptual one.

> How will it know that if Abram sits on a table, it isn't also a chair?
> (And how will it know that, actually, it *could* be a chair?)

I was very careful in my wording. I said they are "for the purpose of sitting on" rather than "they are sat on". Objects that get sat on are not necessarily for that purpose, and objects that are made for the purpose are not necessarily ever used.

> And if "John hit Jack with a chair", will your set of verbal
> definitions not exclude this as truthful, if it has nothing about a
> chair being for the purpose of hitting people?

This causes no problem; a thing can have additional purposes and still be a chair, and more relevantly, something can be used in a way that does not match its purpose.

> Not only can a chair, like any other concept of an object, take an
> infinity of forms, but it can be used for an infinity of functions and
> purposes. Here's S. Kauffman on the purposes of screwdrivers [or
> chairs]: "Do we think we can prestate all possible tasks in all
> possible environments and problem situations such that we can
> construct a bounded frame for screwdrivers? Do we think we could write
> an algorithm, an effective procedure, to generate a possibly infinite
> list of all possible uses ... some of which do not yet exist?"
> I don't think we could get started.

But the definition doesn't need to do this. It just needs to set up a working criterion for something being a screwdriver. There is no need to list all possible additional purposes a screwdriver could have.

> Out of interest, is there one single domain, one area however small
> and bounded, like, say, understanding sentences about boxes or
> geometrical objects, where ungrounded, purely symbolic reasoning has
> ever worked/got started at general intelligence level -- i.e. been
> able to understand all the permutations of a limited set of words?
> Just one.

Again, I am not arguing for ungrounded concepts, I'm just arguing against fuzzy ones.

YKY,

A broken chair is still for the *purpose* of sitting on, it just doesn't work. (I was careful with my definition!) Miniature/toy/paper chairs are not real chairs; you simply use the same word (because it is a good way of getting the idea across, and besides, those things are *supposed* to look like chairs). A chair with a spike in the seat is just cruel. :)

OK, ok, so the spiked chair is actually a tough one... I could say that the chair had a purpose until someone stuck the spike in it, but you would say maybe it was constructed with the spike. I could then claim that we are calling it a chair just because we're used to calling such objects chairs, but that is a cop-out. I admit that I actually think of it as a chair. Then again, I can think of a barrel as a chair... so maybe it is better to call that one a potential chair, like a barrel; we could use it if we could get the spike out... But the fact that I am rambling on like this is totally defeating my point :). So, I concede the point, and propose the following solution: what I am actually doing is pretending that there is a real, physical property of chairness.
When I say "pretending", I mean that this variable is actually a part of my probabilistic model of the world, but if pressed I would admit that it didn't exist (though I wouldn't actually remove it from my model, even then). Yet, I still want to hold on to the idea that "chair" can be factually defined, too.

I am convinced that the fuzziness idea (as it exists in AI) is a result of the attempt to get simple representations of complex things. The fact that the concept of a chair is multifaceted and takes a while to properly explain is not just a result of trying to fit hard logic to a soft concept; it is the result of the actual complexity of the concept! Or, in cases such as chairness, simply applying fuzziness would obscure details about the way the concept is ill-defined. A concept could be fuzzy because it is physically probabilistic, or it could be fuzzy because we are uncertain of its actual properties, or it could be fuzzy because actual continuous variables are involved.

-Abram
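Abram's "pretend variable" reading can be sketched: treat chairness as a latent boolean in a probabilistic model and update belief about it from observations via Bayes' rule, even while granting that nothing physical answers to the variable. The likelihood numbers below are invented for illustration:

```python
def update(prior, likelihood_if_chair, likelihood_if_not):
    # One step of Bayes' rule on P(chair | observation):
    # posterior = P(obs|chair)P(chair) / P(obs).
    num = prior * likelihood_if_chair
    return num / (num + (1 - prior) * likelihood_if_not)

belief = 0.5                       # no opinion yet
belief = update(belief, 0.9, 0.2)  # observation: looks sittable
belief = update(belief, 0.1, 0.6)  # observation: spike in the seat

print(round(belief, 2))  # 0.43 -- the spiked chair stays genuinely unclear
```

Note this captures only the second of Abram's three fuzziness sources (our uncertainty about the object's properties); physical stochasticity and underlying continuous variables would need their own terms in the model, which is exactly his point that a single fuzzy membership degree conflates them.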