[agi] NLP? Nope. NLU? Yep!

2008-09-20 Thread Brad Paulsen
I believe the company mentioned in this article was referenced in an active thread here recently. They claim to have semantically enabled Wikipedia. Their stuff is supposed to have a vocabulary 10x that of the typical U.S. college graduate. Currently being licensed to software developers ...

Re: [agi] NLP? Nope. NLU? Yep!

2008-09-20 Thread Trent Waddington
On Sat, Sep 20, 2008 at 4:37 PM, Brad Paulsen [EMAIL PROTECTED] wrote: "Oh, OK, so I added the stuff in the parentheses. Sue me." Hehe, indeed. Although I'm sure Powerset has some nice little relationship links between words, I'm a little skeptical about the claim to meaning. I don't mean that ...

Re: [agi] Free AI Courses at Stanford

2008-09-20 Thread Valentina Poletti
The lectures are pretty good in quality compared with other major universities' on-line lectures (such as MIT's and so forth). I followed a couple of them and definitely recommend them. You learn almost as much as in a real course. On Thu, Sep 18, 2008 at 2:19 AM, Kingma, D.P. [EMAIL PROTECTED] wrote: "Hi ..."

Re: The brain does not implement formal logic (was Re: [agi] Where the Future of AGI Lies)

2008-09-20 Thread Jiri Jelinek
Matt, "So, what formal language model can solve this problem?" An FL that clearly separates basic semantic concepts like objects, attributes, time, space, actions, roles, relationships, etc., plus core subjective concepts, e.g. want, need, feel, aware, believe, expect, unreal/fantasy. Humans have senses ...

Re: [agi] Where the Future of AGI Lies

2008-09-20 Thread BillK
On Fri, Sep 19, 2008 at 10:05 PM, Matt Mahoney wrote: From http://en.wikipedia.org/wiki/Yeltsin: "Boris Yeltsin studied at Pushkin High School in Berezniki in Perm Krai. He was fond of sports (in particular skiing, gymnastics, volleyball, track and field, boxing and wrestling) despite losing ..."

Re: [agi] Free AI Courses at Stanford

2008-09-20 Thread Bob Mottram
2008/9/20 Valentina Poletti [EMAIL PROTECTED]: "The lectures are pretty good in quality compared with other major universities' on-line lectures (such as MIT's and so forth). I followed a couple of them and definitely recommend them. You learn almost as much as in a real course." The introduction to ...

Re: The brain does not implement formal logic (was Re: [agi] Where the Future of AGI Lies)

2008-09-20 Thread Russell Wallace
On Fri, Sep 19, 2008 at 11:46 PM, Matt Mahoney [EMAIL PROTECTED] wrote: "So perhaps someone can explain why we need formal knowledge representations to reason in AI." Because the biggest open subproblem right now is dealing with procedural, as opposed to merely declarative or reflexive, ...

[agi] NARS probability

2008-09-20 Thread Abram Demski
It has been mentioned several times on this list that NARS has no proper probabilistic interpretation. But I think I have found one that works OK. Not perfectly. There are some differences, but the similarity is striking (at least to me). I imagine that what I have come up with is not too ...
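
For readers new to NARS: the discussion below leans on NARS's truth values, which summarize the evidence for a statement as a pair (w+, w) of positive and total evidence counts. A minimal sketch of that bookkeeping (function names are mine, not from any NARS implementation; k is NARS's evidential-horizon constant, 1 by default):

    def truth_value(w_plus, w, k=1.0):
        """Map NARS evidence counts to a (frequency, confidence) pair."""
        f = w_plus / w     # frequency: fraction of the evidence that is positive
        c = w / (w + k)    # confidence: how much total evidence has accumulated
        return f, c

    # e.g. 3 positive observations out of 4: truth_value(3, 4) -> (0.75, 0.8)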

Re: Repair Theory (was Re: Two goals of AGI (was Re: [agi] Re: [OpenCog] Re: Proprietary_Open_Source))

2008-09-20 Thread Steve Richfield
Mike, On 9/19/08, Mike Tintner [EMAIL PROTECTED] wrote: "Steve: Thanks for wringing my thoughts out. Can you twist a little tighter?! A v. loose practical analogy is mindmaps - it was obviously better for Buzan to develop a sub-discipline/technique 1st, and a program later." MAJOR ...

Re: [agi] NLP? Nope. NLU? Yep!

2008-09-20 Thread Bryan Bishop
On Saturday 20 September 2008, Trent Waddington wrote: "Hehe, indeed. Although I'm sure Powerset has some nice little relationship links between words, I'm a little skeptical about the claim to meaning. I don't mean that in a philosophical 'not grounded' sense..." I'm of the belief that you ...

Re: Repair Theory (was Re: Two goals of AGI (was Re: [agi] Re: [OpenCog] Re: Proprietary_Open_Source))

2008-09-20 Thread Mike Tintner
Steve: "If I were selling a technique like Buzan's then I would agree. However, someone selling a tool to merge ALL techniques is in a different situation, with a knowledge engine to sell." The difference AFAICT is that Buzan had an *idea* - don't organize your thoughts about a subject in random ...

Re: [agi] NARS probability

2008-09-20 Thread Ben Goertzel
Abram, I think the best place to start, in exploring the relation between NARS and probability theory, is with Definition 3.7 in the paper From Inheritance Relation to Non-Axiomatic Logic (http://www.cogsci.indiana.edu/pub/wang.inheritance_nal.ps) [International Journal of Approximate ...
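
For context, the definition Ben points at grounds truth values in evidence sets: for an inheritance statement S --> P, positive evidence comes from the shared extension and intension, and total evidence from S's extension plus P's intension. A rough sketch from memory of Wang's NAL papers (the exact formulation should be checked against the linked paper; the function name is mine):

    def evidence_counts(ext_s, ext_p, int_s, int_p):
        """Evidence for S --> P from extension/intension sets of term names."""
        w_plus = len(ext_s & ext_p) + len(int_p & int_s)  # positive evidence
        w = len(ext_s) + len(int_p)                       # total evidence
        return w_plus, w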

[agi] Convergence08 future technology conference...

2008-09-20 Thread Ben Goertzel
Convergence08 (http://www.convergence08.org). Join a historic convergence of leading long-term organizations and thought leaders. Two days with people at the forefront of world-changing technologies that may reshape our careers, bodies and minds – that challenge our perception of what can and should ...

Re: The brain does not implement formal logic (was Re: [agi] Where the Future of AGI Lies)

2008-09-20 Thread Matt Mahoney
--- On Fri, 9/19/08, Jan Klauck [EMAIL PROTECTED] wrote: Formal logic doesn't scale up very well in humans. That's why this kind of reasoning is so unpopular. Our capacities are that small and we connect to other human entities for a kind of distributed problem solving. Logic is just a tool ...

Re: The brain does not implement formal logic (was Re: [agi] Where the Future of AGI Lies)

2008-09-20 Thread Ben Goertzel
On Sat, Sep 20, 2008 at 4:44 PM, Matt Mahoney [EMAIL PROTECTED] wrote: --- On Fri, 9/19/08, Jan Klauck [EMAIL PROTECTED] wrote: Formal logic doesn't scale up very well in humans. That's why this kind of reasoning is so unpopular. Our capacities are that small and we connect to other human ...

Re: [agi] NARS probability

2008-09-20 Thread Abram Demski
Ben, Thanks for the references. I do not have any particularly good reason for trying to do this, but it is a fun exercise and I find myself making the attempt every so often :). I haven't read the PLN book yet (though I downloaded a copy, thanks!), but at present I don't see why term ...

Re: [agi] NARS probability

2008-09-20 Thread Ben Goertzel
"I haven't read the PLN book yet (though I downloaded a copy, thanks!), but at present I don't see why term probabilities are needed... unless inheritance relations A inh B are interpreted as conditional probabilities A given B." I am not interpreting them that way -- I am just treating ...

Re: [agi] NARS probability

2008-09-20 Thread Ben Goertzel
And Definition 3.7 that you mentioned *does* match up, perfectly, when the {w+, w} truth-value is interpreted as a way of representing the likelihood density function of the prob_inh. Easy! The challenge is section 4.4 in the paper you reference: syllogisms. The way evidence is spread ...
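
One way to unpack the likelihood reading mentioned here (my gloss, not necessarily Abram's exact construction): treat the statement's unknown probability p as carrying the binomial likelihood of w+ successes in w trials, a Beta-shaped curve whose peak sits exactly at the NARS frequency w+/w:

    def likelihood(p, w_plus, w):
        """Unnormalized binomial likelihood of success-probability p
        given w_plus positive outcomes out of w observations."""
        return p ** w_plus * (1.0 - p) ** (w - w_plus)

    # The maximizer of likelihood(p, w_plus, w) is p = w_plus / w,
    # i.e. the NARS frequency, which is what makes the match-up plausible.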

Re: [agi] Free AI Courses at Stanford

2008-09-20 Thread Daniel Allen
BTW, the University of Washington has free grad computer science course videos, including a couple of AI courses: http://www.cs.washington.edu/education/dl/course_index.html My personal favorite is the data mining course by Pedro Domingos: http://www.cs.washington.edu/education/courses/csep573/01sp/

Re: The brain does not implement formal logic (was Re: [agi] Where the Future of AGI Lies)

2008-09-20 Thread Ben Goertzel
On Sat, Sep 20, 2008 at 6:24 PM, Matt Mahoney [EMAIL PROTECTED] wrote: --- On Sat, 9/20/08, Ben Goertzel [EMAIL PROTECTED] wrote: "If formal reasoning were a solved problem in AI, then we would have theorem-provers that could prove deep, complex theorems unassisted. We don't." This indicates ...

Re: [agi] NARS probability

2008-09-20 Thread Abram Demski
"Well, one question is whether you want to be able to do inference like A --> B <tv1> |- B --> A <tv2>. Doing that without term probabilities is pretty hard..." Not the way I set it up. A --> B is not the conditional probability P(B|A), but it *is* a conditional probability, so the normal Bayesian ...
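
The issue behind Ben's question, spelled out: if A --> B were read as the conditional probability P(B|A), then reversing the statement is just Bayes' rule, and Bayes' rule needs the term marginals P(A) and P(B); that is where term probabilities earn their keep. Abram's point is precisely that his setup avoids this reading. A sketch of the textbook inversion (names are mine):

    def invert(p_b_given_a, p_a, p_b):
        """Bayes' rule: P(A|B) from P(B|A) plus the term probabilities."""
        return p_b_given_a * p_a / p_b

    # invert(0.9, 0.1, 0.3) -> 0.3; drop p_a and p_b and the reversal
    # is underdetermined, hence the push for term probabilities.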

Re: [agi] NARS probability

2008-09-20 Thread Pei Wang
On Sat, Sep 20, 2008 at 2:22 PM, Abram Demski [EMAIL PROTECTED] wrote: It has been mentioned several times on this list that NARS has no proper probabilistic interpretation. But, I think I have found one that works OK. Not perfectly. There are some differences, but the similarity is striking

Re: The brain does not implement formal logic (was Re: [agi] Where the Future of AGI Lies)

2008-09-20 Thread Matt Mahoney
--- On Sat, 9/20/08, Ben Goertzel [EMAIL PROTECTED] wrote: "It seems a big stretch to me to call theorem-proving guidance a language modeling problem ... one may be able to make sense of this statement, but only by treating the concept of language VERY ..." -- Matt Mahoney, [EMAIL PROTECTED]

Re: [agi] NARS probability

2008-09-20 Thread Ben Goertzel
Besides the problem you mentioned, there are other issues. Let me start with the basic ones: (1) In probability theory, an event E has a constant probability P(E) (which can be unknown). Given the assumption of insufficient knowledge and resources, in NARS P(A --> B) would change over time, when ...

Re: The brain does not implement formal logic (was Re: [agi] Where the Future of AGI Lies)

2008-09-20 Thread Pei Wang
Matt, I really hope NARS can be simplified, but until you give me the details, such as how to calculate the truth value in your converse rule, I cannot see how you can do the same things with a simpler design. NARS has this conversion rule, which, with the deduction rule, can replace ...
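
For concreteness, the conversion rule Pei mentions derives <P --> S> from <S --> P; f, c>. As I recall the NAL truth function (a hedged sketch; the details should be checked against Wang's book), the derived judgment carries w+ = w = f*c units of purely positive evidence:

    def conversion(f, c, k=1.0):
        """Sketch of NAL conversion: <S --> P; f, c> |- <P --> S; f', c'>."""
        w = f * c                  # evidence transferred to the converted statement
        return 1.0, w / (w + k)    # frequency 1, confidence from that evidence

    # conversion(0.9, 0.5) -> (1.0, ~0.31): a weak, purely positive judgment.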

Re: [agi] NARS probability

2008-09-20 Thread Abram Demski
Thanks for the critique. Replies follow... On Sat, Sep 20, 2008 at 8:20 PM, Pei Wang [EMAIL PROTECTED] wrote: On Sat, Sep 20, 2008 at 2:22 PM, Abram Demski [EMAIL PROTECTED] wrote: [...] The key, therefore, is whether NARS can be FULLY treated as an application of probability theory, by ...

Re: The brain does not implement formal logic (was Re: [agi] Where the Future of AGI Lies)

2008-09-20 Thread Ben Goertzel
To pursue an overused metaphor, to me that's sort of like trying to understand flight by carefully studying the most effective high-jumpers. OK, you might learn something, but you're not getting at the crux of the problem... A more appropriate metaphor is that text compression is the ...

Re: [agi] NARS probability

2008-09-20 Thread Ben Goertzel
"(2) For the same reason, in NARS a statement might get different probabilities attached when derived from different evidence. Probability theory does not have a general rule to handle inconsistency within a probability distribution." The same statement holds for PLN, right? PLN handles ...

Re: [agi] NARS probability

2008-09-20 Thread Pei Wang
On Sat, Sep 20, 2008 at 9:09 PM, Abram Demski [EMAIL PROTECTED] wrote: (1) In probability theory, an event E has a constant probability P(E) (which can be unknown). Given the assumption of insufficient knowledge and resources, in NARS P(A --> B) would change over time, when more and more ...

Re: [agi] NARS probability

2008-09-20 Thread Ben Goertzel
Think about a concrete example: if from one source the system gets P(A --> B) = 0.9, and P(P(A --> B) = 0.9) = 0.5, while from another source P(A --> B) = 0.2, and P(P(A --> B) = 0.2) = 0.7, then what will be the conclusion when the two sources are considered together? There are many approaches to ...
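
NARS's own answer to "what happens when the two sources are combined" is the revision rule: map each (frequency, confidence) report back to evidence counts, pool the evidence by addition, and map back. A sketch, assuming the two reports arrive as ordinary NARS truth values (Pei's example is phrased as second-order probabilities, which do not translate onto this directly; function names are mine):

    def to_evidence(f, c, k=1.0):
        w = k * c / (1.0 - c)    # invert c = w / (w + k)
        return f * w, w          # (positive, total) evidence

    def revise(f1, c1, f2, c2, k=1.0):
        """NARS revision: pool evidence from two independent sources."""
        wp1, w1 = to_evidence(f1, c1, k)
        wp2, w2 = to_evidence(f2, c2, k)
        wp, w = wp1 + wp2, w1 + w2
        return wp / w, w / (w + k)

    # revise(0.9, 0.5, 0.2, 0.7) -> (~0.41, ~0.77): the higher-confidence
    # source pulls the pooled frequency toward itself, and confidence grows.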

Re: The brain does not implement formal logic (was Re: [agi] Where the Future of AGI Lies)

2008-09-20 Thread Mike Tintner
Pei: "In a broad sense, formal logic is nothing but domain-independent and justifiable data manipulation schemes. I haven't seen any argument for why AI cannot be achieved by implementing that." Have you provided a single argument as to how logic *can* achieve AI - or, to be more precise, ...

Re: [agi] NARS probability

2008-09-20 Thread Pei Wang
I didn't know this paper, but I do know approaches based on the principle of maximum/optimum entropy. They usually require much more information (or assumptions) than what is given in the following example. I'd be interested to know what solution they would suggest for such a situation. Pei

Re: [agi] NARS probability

2008-09-20 Thread Ben Goertzel
The approach in that paper doesn't require any special assumptions, and could be applied to your example, but I don't have time to write up an explanation of how to do the calculations ... you'll have to read the paper yourself if you're curious ;-) That approach is not implemented in PLN right ...

Re: [agi] NARS probability

2008-09-20 Thread Pei Wang
I found the paper. As I guessed, their update operator is defined on the whole probability distribution function, rather than on a single probability value of an event. I don't think it is practical for AGI --- we cannot afford the time to re-evaluate every belief on each piece of new evidence.

Re: [agi] NARS probability

2008-09-20 Thread Abram Demski
You are right in what you say about (1). The truth is, my analysis is meant to apply to NARS operating with unrestricted time and memory resources (which of course is not the point of NARS!). So, the question is whether NARS approaches a probability calculation as it is given more time to use all ...
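
A toy illustration of the limit being discussed (my illustration, not Abram's analysis): with a fixed evidence source and unbounded observations, the NARS frequency converges to the source's true probability by the law of large numbers, while confidence tends to 1:

    import random

    def simulate(p_true=0.3, n=100000, k=1.0, seed=0):
        """Stream n observations from a source that is positive with
        probability p_true, then report the NARS truth value."""
        random.seed(seed)
        w_plus = sum(random.random() < p_true for _ in range(n))
        f, c = w_plus / n, n / (n + k)
        return f, c   # f approaches p_true; c approaches 1

    # simulate() -> roughly (0.30, 0.99999)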

Re: The brain does not implement formal logic (was Re: [agi] Where the Future of AGI Lies)

2008-09-20 Thread Mike Tintner
Ben: Mike: "(And can you provide an example of a single surprising metaphor or analogy that has ever been derived logically? Jiri said he could - but didn't.)" It's a bad question -- one could derive surprising metaphors or analogies by random search, and that wouldn't prove anything ...

Re: [agi] NARS probability

2008-09-20 Thread Pei Wang
On Sat, Sep 20, 2008 at 11:02 PM, Abram Demski [EMAIL PROTECTED] wrote: You are right in what you say about (1). The truth is, my analysis is meant to apply to NARS operating with unrestricted time and memory resources (which of course is not the point of NARS!). So, the question is whether ...

Re: [agi] NARS probability

2008-09-20 Thread Ben Goertzel
On Sat, Sep 20, 2008 at 10:32 PM, Pei Wang [EMAIL PROTECTED] wrote: I found the paper. As I guessed, their update operator is defined on the whole probability distribution function, rather than on a single probability value of an event. I don't think it is practical for AGI --- we cannot ...

Re: The brain does not implement formal logic (was Re: [agi] Where the Future of AGI Lies)

2008-09-20 Thread Ben Goertzel
Mike, I understand that my task is to create an AGI system, and I'm working on it ... The fact that my in-development, partial AGI system has not yet demonstrated advanced intelligence does not imply that it will not do so once completed. No, my AGI system has not yet discovered surprising ...

Re: The brain does not implement formal logic (was Re: [agi] Where the Future of AGI Lies)

2008-09-20 Thread Ben Goertzel
and not to forget... SATAN GUIDES US TELEPATHICLY THROUGH RECTAL THERMOMETERS. WHY DO YOU THINK ABOUT META-REASONING? On Sat, Sep 20, 2008 at 11:38 PM, Ben Goertzel [EMAIL PROTECTED] wrote: Mike, I understand that my task is to create an AGI system, and I'm working on it ... The fact ...

Re: The brain does not implement formal logic (was Re: [agi] Where the Future of AGI Lies)

2008-09-20 Thread Mike Tintner
Ben, Not one metaphor below works. You have in effect accepted the task of providing a philosophy and explanation of your AGI and your logic - you have produced a great deal of such stuff (quite correctly). But none of it includes the slightest explanation of how logic can produce AGI - or, ...

Re: The brain does not implement formal logic (was Re: [agi] Where the Future of AGI Lies)

2008-09-20 Thread Ben Goertzel
Mike, If you want an explanation of why I think my AGI system will work, please see http://opencog.org/wiki/OpenCogPrime:WikiBook The argument is complex and technical and it would not be a good use of my time to recapitulate it via email!! Personally, I do think the metaphor COWS FLY LIKE ...

Re: The brain does not implement formal logic (was Re: [agi] Where the Future of AGI Lies)

2008-09-20 Thread Mike Tintner
Ben, Just to be clear, when I said "no argument re how logic will produce AGI" I meant, of course, as per the previous posts, "how logic will [surprisingly] cross domains" etc. That, for me, is the defining characteristic of AGI. All the rest is narrow AI.