A general problem with AI is that it is too superficial, like mathematics. 
Mathematics is so powerful because it can abstract away from details. We teach 
our children that if there are 10 objects in a box and you put in one more, 
then there are 11 objects, no matter what the objects are. However, if the 10 
objects are mice and the added object is a cat, then the question of what 
happens next is much more complicated. I guess CYC would simply answer that 
the cat eats the mice, but when I imagine this situation I see many other 
possible outcomes, depending on the aggressiveness of the mice, and the 
hunger, age, and size of the cat, etc. However, a human uses imagination, not 
predicate calculus. Can AI imagine situations? Aren't _mouse_ and _cat_ just 
abstract atoms for it, like numbers in arithmetic?
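To make the contrast concrete, here is a minimal toy sketch (entirely invented; the function names and the outcome probabilities are illustrative assumptions, not any real CYC behavior): arithmetic abstracts from what the objects are, while the mice-and-cat case calls for a distribution over outcomes that depends on situational details.

```python
# Toy sketch: atom-level abstraction vs. context-sensitive outcomes.
# All names and numbers here are invented for illustration.

# Crisp, atom-level view: objects are interchangeable tokens.
def count_after_adding(n_objects: int, n_added: int) -> int:
    """Arithmetic abstracts from what the objects are."""
    return n_objects + n_added

# Context-sensitive view: the same "add one object" event has many
# possible outcomes whose weights depend on details of the situation.
def mice_and_cat_outcomes(cat_hunger: float, cat_size: float) -> dict:
    """Return a rough, made-up distribution over what happens next."""
    p_eat = min(0.9, cat_hunger * cat_size)   # hungry, big cat: likely eats
    p_ignore = (1.0 - p_eat) * 0.6
    p_flee = 1.0 - p_eat - p_ignore
    return {"cat eats mice": p_eat,
            "cat ignores mice": p_ignore,
            "mice flee": p_flee}

print(count_after_adding(10, 1))          # 11, whatever the objects are
print(mice_and_cat_outcomes(0.9, 1.0))
```

The point of the sketch is only that the second function cannot be written without knowing what a mouse and a cat *are*, whereas the first works for any atoms at all.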

On Saturday, August 20, 2016 at 10:44:00 PM UTC+4, linas wrote:
>
> Indeed. I just read through the slides now, and am quite surprised. He 
> never actually identifies what Cyc did wrong, and thus is unable to make 
> suggestions about what might work.  All of the issues that he mentions in 
> the slides became quite apparent to me after using OpenCyc for a month or 
> so -- it was clear that it was very fragile and very inconsistent.  I 
> concluded that:
>
> -- It's impossible for human beings to assemble a large set of 
> self-consistent statements that is bug-free.  There are simply too many 
> things to know; the system has to be able to automatically extract/convert 
> the needed relationships.
>
> -- Essentially all common-sense logic cannot be converted into crisp 
> logic.  Here's a Zen koan from Firesign Theatre: "Ben Franklin was the 
> only President of the United States who was *never* a President of the 
> United States."  Any system that takes a shallow, superficial mapping of 
> the words in that sentence will fail to understand the humor.  Cyc seemed 
> to always try to be as superficial, as close to the surface as possible, 
> and never attempted to encode deep knowledge.  Without the ability to use 
> probability and/or fuzzy reasoning, one can't get the joke.
>
> Two-thirds of the way through the presentation, Lenat does start talking 
> about pro-and-con reasoning, but somehow never quite takes the plunge into 
> probability: it's as if he thought that simply taking a democratic vote of 
> pro and con statements is sufficient to determine truth -- but it's not. 
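A tiny illustration of the gap being pointed at here (the evidence items and likelihood ratios below are invented, not taken from the slides): a democratic vote weights every pro and con statement equally, while a probabilistic combination weights each piece of evidence by its strength, so one strong counter-example can outweigh several weak supporting statements.

```python
import math

# Each piece of evidence is (supports_claim, likelihood_ratio), where the
# likelihood ratio says how much more probable this evidence is if the
# claim is true.  All values are invented for illustration.
evidence = [(True, 1.2), (True, 1.1), (True, 1.3), (False, 50.0)]

# Democratic vote: 3 pro vs. 1 con, so the claim "wins" the vote.
votes_pro = sum(1 for pro, _ in evidence if pro)
votes_con = len(evidence) - votes_pro
print("vote says claim is true:", votes_pro > votes_con)        # True

# Log-odds combination: the single strong counter-example outweighs
# the three weak supporting statements, flipping the conclusion.
log_odds = sum(math.log(r) if pro else -math.log(r)
               for pro, r in evidence)
print("probability says claim is true:", log_odds > 0)          # False
```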
>
> -- I do like a variation of the general concept of "micro-theories" that 
> Cyc uses, in the sense that there is a domain or context which is active, 
> in which all current thinking/speaking/deduction should happen.  I also 
> like a related idea: the idea of "parallel universes" or "interpretations" 
> or "models(??)" of reality: during the course of a conversation (or during 
> the course of reasoning), one develops differing possible interpretations 
> of what is going on.  These different interpretations will typically 
> contradict each other, but will otherwise be reasonably self-consistent.  
> As additional evidence rolls in, some interpretations become untenable and 
> must be discarded.  Other interpretations may simply become uninteresting, 
> simply because the conversation, the topic, has moved on, and the given 
> interpretation, although "true" and "self-consistent", does not offer any 
> insight into the current topic.  Attention-allocation must shift away from 
> such useless interpretations.
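The parallel-interpretations idea above can be sketched as keeping several mutually contradictory hypotheses alive at once, reweighting them as observations arrive, and discarding the ones that become untenable. This is only a toy sketch; the interpretation names, likelihood tables, and pruning threshold are all made up.

```python
# Toy sketch: maintain contradictory interpretations in parallel,
# reweight on evidence, prune the untenable ones.  All numbers invented.

interpretations = {"cat eats mice": 1.0,
                   "cat is a toy": 1.0,
                   "mice attack cat": 1.0}

# Each observation assigns a likelihood to each interpretation.
observations = [
    {"cat eats mice": 0.8, "cat is a toy": 0.5,  "mice attack cat": 0.1},
    {"cat eats mice": 0.9, "cat is a toy": 0.05, "mice attack cat": 0.2},
]

PRUNE_BELOW = 0.05  # interpretations this unlikely become untenable

for obs in observations:
    for name in interpretations:
        interpretations[name] *= obs[name]
    total = sum(interpretations.values())
    # Renormalize, then discard interpretations that fall below threshold.
    interpretations = {n: w / total
                       for n, w in interpretations.items()
                       if w / total >= PRUNE_BELOW}

# After two observations, only the most likely interpretation survives.
print(interpretations)
```

After the first observation all three interpretations survive; the second observation makes "cat is a toy" and "mice attack cat" untenable, leaving a single dominant interpretation, much as the evidence-pruning process described above.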
>
> There is this sense of "contexts" both in Markov logic and in Kripke 
> semantics: the machinery of Markov logic, although imperfect, does provide 
> a much more sophisticated way of combining pro vs. con evidence to 
> determine a "most-likely" interpretation.  I keep mentioning these two 
> things, rather than PLN, not because I believe that they have better 
> probability formulas, but rather because they provide mechanisms to 
> concurrently maintain multiple contradictory interpretations at once, and 
> eventually eliminate most of them, leaving behind a few that are the "most 
> likely".
>
> I do believe that the above hints at how to avoid the mistakes of Cyc -- 
> some form of probabilistic reasoning and evidence is needed, and some way 
> to automate learning and discovery of novelty is needed. 
>
> Whatever. Gotta run...
>
> --linas
>
> On Fri, Aug 19, 2016 at 7:47 AM, Andi <[email protected]> 
> wrote:
>
>> ty, linas for this resource!
>>
>> To me it looks like they do not have, and never had, an idea about how to 
>> do it right...
>>
>> It seems they hoped that something intelligent could emerge from a 
>> knowledge base just by making it big enough. IMHO this is completely 
>> wrong. The possibility of emergence exists only if the space in which the 
>> emergence occurs contains the capacity for auto-organisation (not 
>> self-organisation, because at this stage there is no self that could do 
>> the organising). 
>>
>> On Tuesday, August 16, 2016 at 11:12:32 PM UTC+2, linas wrote:
>>>
>>> So,
>>> ... the final analysis of what it did wrong is something else that it 
>>> did wrong?  Sigh.  
>>>
>>> --linas
>>>
>>>
>>>
>>> On Tue, Aug 16, 2016 at 3:38 PM, Ben Goertzel <[email protected]> wrote:
>>>
>>>> He's focusing on micro-level things they did wrong, but not
>>>> confronting the possibility that making a huge handcoded KB is just
>>>> the wrong thing to be doing...
>>>>
>>>> For instance he notes they have had to add 75 kinds of "in" to handle
>>>> different sorts of "in" relationship ... but doesn't question whether
>>>> it might be smarter to have the system instead learn various shades of
>>>> "in", which could allow it to learn 1000s of context-specific senses
>>>> not just 75 ...
>>>>
>>>> ben
>>>>
>>>>
>>>>
>>>> On Tue, Aug 16, 2016 at 1:30 PM, Linas Vepstas <[email protected]> 
>>>> wrote:
>>>> > The below is an old presentation, from 2009, but it's the first I've 
>>>> > seen of it.  It's long; I have not read it yet.  However, I suspect 
>>>> > that it probably says good things (I hope; else that would be 
>>>> > something else that CYC did wrong...)
>>>> >
>>>> > http://c4i.gmu.edu/oic09/papers/Mistakes%20Were%20Made%20OIC%202009%20keynote.pdf
>>>> >
>>>> > Everyone working on opencog theory should probably read it, memorize 
>>>> > it, and apply those lessons to the things we do.
>>>> >
>>>> > Thanks to Lukasz Stafiniak for pointing this out.
>>>> >
>>>> > --linas
>>>> >
>>>> >
>>>>
>>>>
>>>>
>>>> --
>>>> Ben Goertzel, PhD
>>>> http://goertzel.org
>>>>
>>>> Super-benevolent super-intelligence is the thought the Global Brain is
>>>> currently struggling to form...
>>>>
>>>>
>>>
>>>
>

-- 
You received this message because you are subscribed to the Google Groups 
"opencog" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To post to this group, send email to [email protected].
Visit this group at https://groups.google.com/group/opencog.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/opencog/552d8de7-7fc3-4db9-9755-60e745414add%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
