Hi Jan,

On Mon, Aug 22, 2016 at 12:18 AM, Jan Matusiewicz <[email protected]> wrote:
> A general problem with AI is that it is too superficial, like mathematics.
> Mathematics is so powerful because it can abstract away from details. We
> teach our children that if there are 10 objects in a box and you put in one
> more, then there are 11 objects, no matter what the objects are. However,
> if the 10 objects are mice and the additional object is a cat, then the
> question of what happens next is much more complicated. I guess CYC would
> simply answer that the cat eats the mice, but when I imagine this situation
> I see many other possible outcomes, depending on the aggressiveness of the
> mice, and the hunger, age, and size of the cat, etc. However, humans use
> imagination, not predicate calculus. Can AI imagine situations? Aren't
> _mouse_ and _cat_ just abstract atoms for it, like numbers in arithmetic?

Nice example, nice questions. Taking this at face value, I'll say "yes, AI can imagine situations" -- or at least, that is the goal. Imagining a situation requires having a reasonably accurate model of a world with young and old cats, and knowing that young cats behave very differently from old ones.

How can an AI "imagine a situation"? Let me answer your second question first: yes, _cat_ is just an atom -- a single ConceptNode, with the string "_cat_" in it. The common-sense world-knowledge about cats is encoded in the various EvaluationLinks and PredicateNodes that express things like "cats have claws", "cats kill small animals", "cats chase moving things", and "old fat cats are slow". This knowledge forms a big, messy graph, of which (ConceptNode "_cat_") is just one vertex. The more accurate the world model, the bigger the tangle of inter-related facts.

"Imagining a situation" is then an exploration of this tangle of inter-connected factoids. The OpenCog AI is given a graph that encodes the statement "a box contains ten mice and one cat".
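As a toy sketch of that encoding (this is NOT the real OpenCog/AtomSpace API; the triples and predicate names here are invented for illustration), each factoid can be held as a (predicate, subject, object) triple, standing in for an EvaluationLink that ties a PredicateNode to ConceptNodes, with "cat" as just one opaque vertex:

```python
# Illustrative sketch only -- not the actual OpenCog/AtomSpace API.
# Each fact is a (predicate, subject, object) triple, playing the role
# of an EvaluationLink relating a PredicateNode to two ConceptNodes.
facts = {
    ("has",      "cat",   "claws"),
    ("kills",    "cat",   "small_animal"),
    ("chases",   "cat",   "moving_thing"),
    ("is_a",     "mouse", "small_animal"),
    ("contains", "box",   "ten_mice"),
    ("contains", "box",   "one_cat"),
}

def links_touching(atom):
    """The local 'tangle': every fact in which the atom participates."""
    return [f for f in facts if atom in f[1:]]

# "cat" by itself carries no meaning; its meaning is this neighborhood:
for fact in links_touching("cat"):
    print(fact)
```

The point of the sketch is that the atom "cat" is meaningless in isolation; everything the system "knows" about cats is the neighborhood of links touching that one vertex.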
It then has to explore the network of inter-related facts and determine whether there are plausible connections between them -- e.g. "if the cat in the box is alive, and if cats kill small animals, and if mice are small animals, and if the mice in the box are alive, then maybe the cat will kill a mouse". You can see this as a chain of inference. Brain-storming, or imagining, is "just" a matter of chasing all of these threads and ideas, and seeing where they lead.

You can also see how this goes wrong: a lack of imagination is the same as failing to explore the full network of factoids. Perhaps it takes too much CPU time to explore it all; perhaps the exploration algorithm is flawed; perhaps some important facts about the world are missing.

But you also raise an interesting meta-problem: thought depends on context. If the context is "math exam", then the only right answer is "eleven", and all that thinking about cats is a distraction. If the context is "we're drinking at a bar and you just posed a clever trick question", then being able to imagine cats is important. It's still chaining, though -- "if there are 11 objects in the box, and a nuclear bomb is an object, and nuclear bombs are radioactive, and cats are biological, and biological organisms suffer from radiation poisoning, and ..." whatever -- a perhaps dizzying and useless chain of thoughts that leads nowhere, but it's still an imaginative process that has to take place.

It would be fair to say that CYC was performing such chains of inference -- and so, indeed, CYC was "imagining a situation". The problem with CYC was almost surely that it contained too many contradictory facts, and that it had no way of managing contradiction, because it didn't do probability (the slides hint at "pro and con reasoning", which is a primitive kind of probabilistic reasoning). CYC probably also had problems with inference control: there is a combinatorial explosion of possibilities to explore during chaining.
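That chaining can be sketched as a tiny forward-chainer (the facts, rules, and the crude step bound are all invented for illustration; OpenCog's real chainers work over the AtomSpace with far more sophisticated inference control):

```python
# Minimal forward-chaining sketch of "imagining": starting from
# "a box contains ten mice and one cat", chase if-then rules and
# collect every conclusion that can be reached.  Facts and rules
# are made up for illustration.

rules = [
    ({"cat_in_box", "cat_alive"},               "active_cat_present"),
    ({"mice_in_box", "mice_alive"},             "prey_present"),
    ({"mouse_is_small_animal", "prey_present"}, "small_animals_present"),
    ({"active_cat_present", "small_animals_present",
      "cats_kill_small_animals"},               "cat_may_kill_mouse"),
]

def imagine(facts, rules, max_steps=10):
    """Derive everything reachable, under a crude step bound."""
    known = set(facts)
    for _ in range(max_steps):       # stand-in for inference control
        new = {concl for premises, concl in rules
               if premises <= known and concl not in known}
        if not new:
            break
        known |= new
    return known

start = {"cat_in_box", "cat_alive", "mice_in_box", "mice_alive",
         "mouse_is_small_animal", "cats_kill_small_animals"}
print("cat_may_kill_mouse" in imagine(start, rules))  # → True
```

The `max_steps` bound is a stand-in for inference control: without some such limit, chaining over a realistic fact base explodes combinatorially.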
--linas

> On Saturday, August 20, 2016 at 10:44:00 PM UTC+4, linas wrote:
>>
>> Indeed. I just read through the slides now, and am quite surprised. He
>> never actually identifies what Cyc did wrong, and thus is unable to make
>> suggestions about what might work. All of the issues that he mentions in
>> the slides became quite apparent to me after using OpenCyc for a month or
>> so -- it was clear that it was very fragile, very inconsistent. I
>> concluded that:
>>
>> -- It is impossible for human beings to assemble a large set of
>> self-consistent statements that is bug-free. There are simply too many
>> things to know; the system has to be able to automatically
>> extract/convert the needed relationships.
>>
>> -- Essentially none of common-sense logic can be converted into crisp
>> logic. Here's a Zen koan from Firesign Theatre: "Ben Franklin was the
>> only President of the United States who was *never* a President of the
>> United States." Any system that takes a shallow, superficial mapping of
>> the words in that sentence will fail to understand the humor. Cyc seemed
>> to always try to stay as superficial, as close to the surface, as
>> possible, and never attempted to encode deep knowledge. Without the
>> ability to use probability and/or fuzzy reasoning, one can't get the joke.
>>
>> Two-thirds of the way through the presentation, Lenat does start talking
>> about pro-and-con reasoning, but somehow never quite takes the plunge
>> into probability: it's as if he thought that simply taking a democratic
>> vote of pro and con statements is sufficient to determine truth -- but
>> it's not.
>>
>> -- I do like a variation of the general concept of "micro-theories" that
>> Cyc uses, in the sense that there is a domain or context which is active,
>> and in which all current thinking/speaking/deduction should happen.
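As an aside on that vote-counting point, a minimal sketch (with invented evidence strengths) shows why a democratic tally is not probability: if each statement carries a likelihood ratio, three weak "pro" statements can be outweighed by one strong "con", which a raw vote gets exactly backwards.

```python
import math

# Sketch: counting pro vs. con statements vs. combining them
# probabilistically.  The evidence strengths (likelihood ratios) are
# invented for illustration.

def combine(evidence, prior=0.5):
    """Naive-Bayes combination of likelihood ratios into a posterior."""
    log_odds = math.log(prior / (1 - prior))
    log_odds += sum(math.log(lr) for lr in evidence)
    return 1 / (1 + math.exp(-log_odds))

evidence = [1.2, 1.2, 1.2, 0.1]    # three weak pros, one strong con
votes_pro = sum(lr > 1 for lr in evidence)
votes_con = sum(lr < 1 for lr in evidence)

print(votes_pro > votes_con)       # → True: the democratic vote says "pro"
print(combine(evidence) > 0.5)     # → False: the posterior says "con"
```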
>> I also like a related idea: the idea of "parallel universes" or
>> "interpretations" or "models(??)" of reality: during the course of a
>> conversation (or during the course of reasoning), one develops differing
>> possible interpretations of what is going on. These different
>> interpretations will typically contradict each other, but will otherwise
>> be reasonably self-consistent. As additional evidence rolls in, some
>> interpretations become untenable, and must be discarded. Other
>> interpretations may simply become uninteresting, because the
>> conversation, the topic, has moved on, and the given interpretation,
>> although "true" and "self-consistent", does not offer any insight into
>> the current topic. Attention allocation must shift away from such useless
>> interpretations.
>>
>> There is this sense of "contexts" in both Markov logic and Kripke
>> semantics: the machinery of Markov logic, although imperfect, does
>> provide a much more sophisticated way of combining pro vs. con evidence
>> to determine a "most likely" interpretation. I keep mentioning these two
>> things, rather than PLN, not because I believe that they have better
>> probability formulas, but because they provide mechanisms to concurrently
>> maintain multiple contradictory interpretations at once, and eventually
>> eliminate most of them, leaving behind a few that are the "most likely".
>>
>> I do believe that the above hints at how to avoid the mistakes of Cyc --
>> some form of probabilistic reasoning and evidence is needed, and some way
>> to automate the learning and discovery of novelty is needed.
>>
>> Whatever. Gotta run...
>>
>> --linas
>>
>> On Fri, Aug 19, 2016 at 7:47 AM, Andi <[email protected]> wrote:
>>
>>> ty, linas, for this resource!
>>>
>>> To me it looks like they do not have, and never had, an idea about how
>>> to do it right...
>>>
>>> It seems they hoped that something intelligent could emerge from a
>>> knowledge base just by making it big enough.
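The "parallel interpretations" machinery described above can be sketched in miniature (interpretation names, observation likelihoods, and the pruning threshold are all invented; Markov logic and PLN do this far more carefully): each contradictory reading is scored concurrently, scores are updated as evidence rolls in, and untenable readings are discarded.

```python
# Sketch: several mutually contradictory interpretations of the
# cat-and-mice situation, maintained concurrently and pruned as
# evidence arrives.  All names and numbers are invented.

interpretations = {
    "cat_eats_mice":  1.0,
    "cat_is_too_old": 1.0,
    "mice_gang_up":   1.0,
}

# Each observation maps an interpretation to the likelihood of that
# observation under it.
observations = [
    {"cat_eats_mice": 0.9, "cat_is_too_old": 0.2, "mice_gang_up": 0.1},
    {"cat_eats_mice": 0.8, "cat_is_too_old": 0.1, "mice_gang_up": 0.3},
]

THRESHOLD = 0.05
for obs in observations:
    for name in list(interpretations):
        interpretations[name] *= obs[name]
    # discard interpretations the evidence has made untenable
    interpretations = {n: s for n, s in interpretations.items()
                       if s > THRESHOLD}

print(max(interpretations, key=interpretations.get))  # → cat_eats_mice
```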
>>> IMHO this is completely wrong. The possibility of emergence exists only
>>> if the space in which this emergence occurs contains the capacity for
>>> auto-organisation (not self-organisation, because at this stage there is
>>> no self that could do the organising).
>>>
>>> On Tuesday, August 16, 2016 at 23:12:32 UTC+2, linas wrote:
>>>>
>>>> So, ... the final analysis of what it did wrong is something else that
>>>> it did wrong? Sigh.
>>>>
>>>> --linas
>>>>
>>>> On Tue, Aug 16, 2016 at 3:38 PM, Ben Goertzel <[email protected]>
>>>> wrote:
>>>>
>>>>> He's focusing on micro-level things they did wrong, but not
>>>>> confronting the possibility that making a huge hand-coded KB is just
>>>>> the wrong thing to be doing...
>>>>>
>>>>> For instance, he notes they have had to add 75 kinds of "in" to handle
>>>>> different sorts of "in" relationship ... but doesn't question whether
>>>>> it might be smarter to have the system instead learn various shades of
>>>>> "in", which could allow it to learn 1000s of context-specific senses,
>>>>> not just 75 ...
>>>>>
>>>>> ben
>>>>>
>>>>> On Tue, Aug 16, 2016 at 1:30 PM, Linas Vepstas <[email protected]>
>>>>> wrote:
>>>>> > The below is an old presentation, from 2009, but it's the first I've
>>>>> > seen of it. It's long; I have not read it yet. However, I suspect
>>>>> > that it probably says good things (I hope; else that would be
>>>>> > something else that CYC did wrong...)
>>>>> >
>>>>> > http://c4i.gmu.edu/oic09/papers/Mistakes%20Were%20Made%20OIC%202009%20keynote.pdf
>>>>> >
>>>>> > Everyone working on OpenCog theory should probably read it, memorize
>>>>> > it, and apply those lessons to the things we do.
>>>>> >
>>>>> > Thanks to Lukasz Stafiniak for pointing this out.
>>>>> >
>>>>> > --linas
>>>>> >
>>>>> > --
>>>>> > You received this message because you are subscribed to the Google
>>>>> > Groups "opencog" group.
>>>>> > To unsubscribe from this group and stop receiving emails from it,
>>>>> > send an email to [email protected].
>>>>> > To post to this group, send email to [email protected].
>>>>> > Visit this group at https://groups.google.com/group/opencog.
>>>>> > To view this discussion on the web visit
>>>>> > https://groups.google.com/d/msgid/opencog/CAHrUA369vqLVG7xEx5vVS%2BASqtaKMNSVBc7S3rdxdU8gGgEFOQ%40mail.gmail.com.
>>>>> > For more options, visit https://groups.google.com/d/optout.
>>>>>
>>>>> --
>>>>> Ben Goertzel, PhD
>>>>> http://goertzel.org
>>>>>
>>>>> Super-benevolent super-intelligence is the thought the Global Brain is
>>>>> currently struggling to form...
