Re: Repair Theory (was Re: Two goals of AGI (was Re: [agi] Re: [OpenCog] Re: Proprietary_Open_Source))

2008-09-21 Thread Steve Richfield
Mike, On 9/20/08, Mike Tintner [EMAIL PROTECTED] wrote: Steve: If I were selling a technique like Buzan then I would agree. However, someone selling a tool to merge ALL techniques is in a different situation, with a knowledge engine to sell. The difference AFAICT is that Buzan had an

Re: The brain does not implement formal logic (was Re: [agi] Where the Future of AGI Lies)

2008-09-21 Thread Eric Burton
Hmm. My bot mostly repeats what it hears. bot Monie: haha. r u a bot ? bot cyberbrain: not to mention that in a theory complex enough with a large enough number of parameters. one can interpret anything. even things that are completely physically inconsistent with each other. i suggest actually

Re: The brain does not implement formal logic (was Re: [agi] Where the Future of AGI Lies)

2008-09-21 Thread Eric Burton
Ok, most of its replies here seem to be based on the first word of what it's replying to. But it's really capable of more lateral connections. wijnand yeah i use it to add shortcuts for some menu functions i use a lot bot wijnand: TOMACCO!!! On 9/21/08, Eric Burton [EMAIL PROTECTED] wrote:

[agi] Waiting to gain information before acting

2008-09-21 Thread William Pearson
I've started to wander away from my normal sub-cognitive level of AI, and have been thinking about reasoning systems. One scenario I have come up with is the "foresight of extra knowledge" scenario. Suppose Alice and Bob have decided to bet $10 on the weather in Alaska in 10 days' time, whether

Re: [agi] Waiting to gain information before acting

2008-09-21 Thread Pei Wang
There are several issues involved in this example, though the basic one is: (1) There is a decision to be made before a deadline (after 10 days), let's call it goal A, written as A? (2) At the current moment, the available information is not enough to support a confident conclusion, that is, the
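A minimal decision-theoretic sketch of the trade-off discussed in this thread, with purely illustrative numbers (the $10 stake is from the post; the prior, the forecast reliability, and the timing are assumptions): compare the expected payoff of committing now with the expected payoff of waiting for better information before the deadline.

# Sketch of the "wait for more information" trade-off in the Alice/Bob
# weather bet.  All probabilities here are hypothetical illustrations,
# not taken from the original post.

P_RAIN_NOW = 0.5          # current belief that it will rain on the deadline day
P_FORECAST_CORRECT = 0.8  # assumed reliability of a forecast available later
STAKE = 10.0              # $10 bet: win +10 if right, lose -10 if wrong

def expected_value(p_right, stake=STAKE):
    """Expected payoff of backing a side that turns out right with prob p_right."""
    return p_right * stake - (1 - p_right) * stake

# Option 1: decide now, betting on the currently more likely outcome.
ev_now = expected_value(max(P_RAIN_NOW, 1 - P_RAIN_NOW))

# Option 2: wait for the forecast, then bet on whatever it predicts.
# With a symmetric prior, the chance of being right equals the
# forecast's reliability.
ev_wait = expected_value(P_FORECAST_CORRECT)

print(f"EV(decide now)        = {ev_now:+.2f}")
print(f"EV(wait for forecast) = {ev_wait:+.2f}")
print(f"value of waiting      = {ev_wait - ev_now:+.2f}")

Under these assumed numbers waiting is worth $6 in expectation; the point of Pei's framing is that goal A must carry its deadline explicitly, so that "wait" remains an admissible choice only while the extra evidence can still arrive in time.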

Re: [agi] NARS probability

2008-09-21 Thread Abram Demski
Hmm... I didn't mean infinite evidence, only infinite time and space with which to compute the consequences of evidence. But that is interesting too. The higher-order probabilities I'm talking about introducing do not reflect inaccuracy at all. :) This may seem odd, but it seems to me to follow

Re: [agi] NARS probability

2008-09-21 Thread Pei Wang
When working on your new proposal, remember that in NARS all measurements must be based on what the system has --- limited evidence and resources. I don't allow any objective probability that only exists in a Platonic world or the infinite future. Pei On Sun, Sep 21, 2008 at 1:53 PM, Abram

Re: The brain does not implement formal logic (was Re: [agi] Where the Future of AGI Lies)

2008-09-21 Thread Matt Mahoney
--- On Sat, 9/20/08, Mike Tintner [EMAIL PROTECTED] wrote: Matt: A more appropriate metaphor is that text compression is the altimeter by which we measure progress. (1) Matt, Now that sentence is a good example of general intelligence - forming a new connection between domains -

Re: The brain does not implement formal logic (was Re: [agi] Where the Future of AGI Lies)

2008-09-21 Thread Matt Mahoney
--- On Sat, 9/20/08, Ben Goertzel [EMAIL PROTECTED] wrote: A more appropriate metaphor is that text compression is the altimeter by which we measure progress. An extremely major problem with this idea is that, according to this altimeter, gzip is vastly more intelligent than a chimpanzee or a

Re: The brain does not implement formal logic (was Re: [agi] Where the Future of AGI Lies)

2008-09-21 Thread Ben Goertzel
Now if you want to compare gzip, a chimpanzee, and a 2 year old child using language prediction as your IQ test, then I would say that gzip falls in the middle. A chimpanzee has no language model, so it is lowest. A 2 year old child can identify word boundaries in continuous speech, can

Re: [agi] NARS probability

2008-09-21 Thread Matt Mahoney
--- On Sat, 9/20/08, Pei Wang [EMAIL PROTECTED] wrote: Think about a concrete example: if from one source the system gets P(A-->B) = 0.9, and P(P(A-->B) = 0.9) = 0.5, while from another source P(A-->B) = 0.2, and P(P(A-->B) = 0.2) = 0.7, then what will be the conclusion when the two sources are
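For comparison, a sketch of the standard first-order NARS revision rule, which pools the evidence behind two (frequency, confidence) judgments about the same statement. It does not address the higher-order probabilities Matt proposes; the frequencies below echo Pei's example, while reading the second numbers loosely as NARS confidences is an assumption, as is the usual evidential horizon k = 1.

# Sketch of the NARS revision rule for combining two judgments about the
# same statement, e.g. A --> B.  Each judgment is a (frequency, confidence)
# pair; revision adds up the underlying evidence.  Evidential horizon k = 1.

K = 1.0

def to_evidence(f, c, k=K):
    """Convert (frequency, confidence) into (positive evidence, total evidence)."""
    w = k * c / (1 - c)   # total amount of evidence behind the judgment
    return f * w, w

def revise(j1, j2, k=K):
    """Combine two independent judgments (f, c) about the same statement."""
    wp1, w1 = to_evidence(*j1)
    wp2, w2 = to_evidence(*j2)
    wp, w = wp1 + wp2, w1 + w2
    return wp / w, w / (w + k)   # new (frequency, confidence)

# Two sources report different truth values for A --> B.
source1 = (0.9, 0.5)   # frequencies from Pei's example; confidences illustrative
source2 = (0.2, 0.7)
print(revise(source1, source2))   # ~(0.41, 0.77): pooled frequency, higher confidence

The revised confidence exceeds either input confidence, reflecting that revision accumulates evidence rather than averaging opinions.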

Re: The brain does not implement formal logic (was Re: [agi] Where the Future of AGI Lies)

2008-09-21 Thread Matt Mahoney
--- On Sat, 9/20/08, Pei Wang [EMAIL PROTECTED] wrote: Matt, I really hope NARS can be simplified, but until you give me the details, such as how to calculate the truth value in your converse rule, I cannot see how you can do the same things with a simpler design. You're right. Given

Re: The brain does not implement formal logic (was Re: [agi] Where the Future of AGI Lies)

2008-09-21 Thread Matt Mahoney
--- On Sun, 9/21/08, Ben Goertzel [EMAIL PROTECTED] wrote: Hmmm I am pretty strongly skeptical of intelligence tests that do not measure the actual functionality of an AI system, but rather measure the theoretical capability of the structures or processes or data inside the system... The

[agi] Re: AGI for a quadrillion

2008-09-21 Thread Vladimir Nesov
On Mon, Sep 22, 2008 at 2:14 AM, Matt Mahoney [EMAIL PROTECTED] wrote: I'm not building AGI. (That is a $1 quadrillion problem). How do you estimate your confidence in this assertion that developing AGI (singularity capable) requires this insane effort (odds of the bet you'd take for it)? This

Re: [agi] Re: AGI for a quadrillion

2008-09-21 Thread Trent Waddington
On Mon, Sep 22, 2008 at 8:29 AM, Vladimir Nesov [EMAIL PROTECTED] wrote: How do you estimate your confidence in this assertion that developing AGI (singularity capable) requires this insane effort (odds of the bet you'd take for it)? This is an easily falsifiable statement, if a small group

Re: [agi] Re: AGI for a quadrillion

2008-09-21 Thread Matt Mahoney
--- On Sun, 9/21/08, Vladimir Nesov [EMAIL PROTECTED] wrote: I'm not building AGI. (That is a $1 quadrillion problem). How do you estimate your confidence in this assertion that developing AGI (singularity capable) requires this insane effort (odds of the bet you'd take for it)? This

Re: [agi] Re: AGI for a quadrillion

2008-09-21 Thread Vladimir Nesov
On Mon, Sep 22, 2008 at 3:37 AM, Matt Mahoney [EMAIL PROTECTED] wrote: Another possibility is that we will discover some low cost shortcut to AGI. Recursive self improvement is one example, but I showed that this won't work. (See http://www.mattmahoney.net/rsi.pdf ). So far no small group (or

Re: The brain does not implement formal logic (was Re: [agi] Where the Future of AGI Lies)

2008-09-21 Thread Ben Goertzel
I'm not building AGI. (That is a $1 quadrillion problem). I'm studying algorithms for learning language. Text compression is a useful tool for measuring progress (although not for vision). OK, but the focus of this list is supposed to be AGI, right ... so I suppose I should be forgiven for

Re: [agi] Re: AGI for a quadrillion

2008-09-21 Thread Matt Mahoney
--- On Sun, 9/21/08, Vladimir Nesov [EMAIL PROTECTED] wrote: Hence the question: you are making a very strong assertion by effectively saying that there is no shortcut, period (in the short-term perspective, anyway). How sure are you in this assertion? I can't prove it, but the fact that

Re: [agi] Re: AGI for a quadrillion

2008-09-21 Thread Ben Goertzel
That seems a pretty sketchy anti-AGI argument, given the coordinated advances in computer hardware, computer science and cognitive science during the last couple decades, which put AGI designers in a quite different position from where we were in the 80's ... ben On Sun, Sep 21, 2008 at 7:56 PM,

Re: The brain does not implement formal logic (was Re: [agi] Where the Future of AGI Lies)

2008-09-21 Thread Matt Mahoney
--- On Sun, 9/21/08, Ben Goertzel [EMAIL PROTECTED] wrote: Text compression is IMHO a terrible way of measuring incremental progress toward AGI. Of course it may be very valuable for other purposes... It is a way to measure progress in language modeling, which is an important component of AGI
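A minimal sketch of how compression serves as such a yardstick, in the spirit of the Hutter Prize: report bits per character on a fixed corpus, where a lower figure means the compressor's implicit language model predicts the text better. The file name corpus.txt is an assumption; gzip and bzip2 are only weak baselines.

# Sketch of text compression as a progress metric for language modeling:
# report bits per character (bpc) on a fixed benchmark text.  A better
# statistical model of the text should yield fewer bits per character.

import bz2
import gzip

def bits_per_char(data: bytes, compress) -> float:
    return 8.0 * len(compress(data)) / len(data)

with open("corpus.txt", "rb") as f:   # any fixed benchmark text, e.g. enwik8
    text = f.read()

print(f"gzip : {bits_per_char(text, gzip.compress):.3f} bpc")
print(f"bzip2: {bits_per_char(text, bz2.compress):.3f} bpc")
# Lower bpc means the compressor predicts the text better, which is the
# sense in which compression is used here as an "altimeter" for progress.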

Re: [agi] Re: AGI for a quadrillion

2008-09-21 Thread Matt Mahoney
--- On Sun, 9/21/08, Ben Goertzel [EMAIL PROTECTED] wrote: That seems a pretty sketchy anti-AGI argument, given the coordinated advances in computer hardware, computer science and cognitive science during the last couple decades, which put AGI designers in a quite different position from where

Re: [agi] Waiting to gain information before acting

2008-09-21 Thread Pei Wang
On Sun, Sep 21, 2008 at 4:15 PM, William Pearson [EMAIL PROTECTED] wrote: 2008/9/21 Pei Wang [EMAIL PROTECTED]: There are several issues involved in this example, though the basic is: (1) There is a decision to be made before a deadline (after 10 days), let's call it goal A, written as A?

Re: [agi] Re: AGI for a quadrillion

2008-09-21 Thread Ben Goertzel
yes, but your cost estimate is based on some very odd and specialized assumptions regarding AGI architecture!!! On Sun, Sep 21, 2008 at 8:12 PM, Matt Mahoney [EMAIL PROTECTED] wrote: --- On Sun, 9/21/08, Ben Goertzel [EMAIL PROTECTED] wrote: That seems a pretty sketchy anti-AGI argument,

Re: The brain does not implement formal logic (was Re: [agi] Where the Future of AGI Lies)

2008-09-21 Thread Ben Goertzel
On Sun, Sep 21, 2008 at 8:08 PM, Matt Mahoney [EMAIL PROTECTED] wrote: --- On Sun, 9/21/08, Ben Goertzel [EMAIL PROTECTED] wrote: Text compression is IMHO a terrible way of measuring incremental progress toward AGI. Of course it may be very valuable for other purposes... It is a way to

Re: [agi] Re: AGI for a quadrillion

2008-09-21 Thread Vladimir Nesov
On Mon, Sep 22, 2008 at 3:56 AM, Matt Mahoney [EMAIL PROTECTED] wrote: --- On Sun, 9/21/08, Vladimir Nesov [EMAIL PROTECTED] wrote: Hence the question: you are making a very strong assertion by effectively saying that there is no shortcut, period (in the short-term perspective, anyway). How

Re: [agi] Re: AGI for a quadrillion

2008-09-21 Thread Matt Mahoney
--- On Sun, 9/21/08, Vladimir Nesov [EMAIL PROTECTED] wrote: So, do you think that there is at least, say, 99% probability that AGI won't be developed by a reasonably small group in the next 30 years? Yes, but in the way that the internet was not developed by a small group. A small number

Re: [agi] Re: AGI for a quadrillion

2008-09-21 Thread Matt Mahoney
--- On Sun, 9/21/08, Ben Goertzel [EMAIL PROTECTED] wrote: yes, but your cost estimate is based on some very odd and specialized assumptions regarding AGI architecture!!! As I explained, my cost estimate is based on the value of the global economy and the assumption that AGI would automate it

Re: [agi] Re: AGI for a quadrillion

2008-09-21 Thread Vladimir Nesov
On Mon, Sep 22, 2008 at 5:40 AM, Matt Mahoney [EMAIL PROTECTED] wrote: --- On Sun, 9/21/08, Vladimir Nesov [EMAIL PROTECTED] wrote: So, do you think that there is at least, say, 99% probability that AGI won't be developed by a reasonably small group in the next 30 years? Yes, but in the

[agi] re: NARS probability

2008-09-21 Thread Abram Demski
Attached is my attempt at a probabilistic justification for NARS. --Abram

Re: The brain does not implement formal logic (was Re: [agi] Where the Future of AGI Lies)

2008-09-21 Thread Matt Mahoney
--- On Sun, 9/21/08, Ben Goertzel [EMAIL PROTECTED] wrote: On Sun, Sep 21, 2008 at 8:08 PM, Matt Mahoney [EMAIL PROTECTED] wrote: --- On Sun, 9/21/08, Ben Goertzel [EMAIL PROTECTED] wrote: Text compression is IMHO a terrible way of measuring incremental progress toward AGI. Of course it may

Re: [agi] Re: AGI for a quadrillion

2008-09-21 Thread Vladimir Nesov
On Mon, Sep 22, 2008 at 5:54 AM, Matt Mahoney [EMAIL PROTECTED] wrote: As I explained, my cost estimate is based on the value of the global economy and the assumption that AGI would automate it by replacing human labor. The cost of automating agriculture (in the modern sense) isn't equal to

Re: [agi] re: NARS probability

2008-09-21 Thread Ben Goertzel
I don't see how you get the NARS induction and abduction truth value formulas out of this, though... ben g On Sun, Sep 21, 2008 at 10:10 PM, Abram Demski [EMAIL PROTECTED]wrote: Attached is my attempt at a probabilistic justification for NARS. --Abram

Re: [agi] Re: AGI for a quadrillion

2008-09-21 Thread Matt Mahoney
--- On Sun, 9/21/08, Vladimir Nesov [EMAIL PROTECTED] wrote: On Mon, Sep 22, 2008 at 5:54 AM, Matt Mahoney [EMAIL PROTECTED] wrote: As I explained, my cost estimate is based on the value of the global economy and the assumption that AGI would automate it by replacing human labor.

Re: [agi] Re: AGI for a quadrillion

2008-09-21 Thread Ben Goertzel
Here's the thing ... it might wind up costing trillions of dollars to ultimately replace all aspects of the human economy with AGI-based labor ... but this cost doesn't need to occur **before** the first human-level AGI is created ... We'll create the human-level AGI first, without such a high

Re: [agi] Re: AGI for a quadrillion

2008-09-21 Thread Matt Mahoney
--- On Sun, 9/21/08, Vladimir Nesov [EMAIL PROTECTED] wrote: On Mon, Sep 22, 2008 at 5:40 AM, Matt Mahoney [EMAIL PROTECTED] wrote: --- On Sun, 9/21/08, Vladimir Nesov [EMAIL PROTECTED] wrote: So, do you think that there is at least, say, 99% probability that AGI won't be developed

Re: [agi] re: NARS probability

2008-09-21 Thread Abram Demski
The calculation in which I sum up a bunch of pairs is equivalent to doing NARS induction + abduction with a final big revision at the end to combine all the accumulated evidence. But, like I said, I need to provide a more explicit justification of that calculation... --Abram On Sun, Sep 21, 2008

Re: [agi] re: NARS probability

2008-09-21 Thread Ben Goertzel
On Sun, Sep 21, 2008 at 10:43 PM, Abram Demski [EMAIL PROTECTED]wrote: The calculation in which I sum up a bunch of pairs is equivalent to doing NARS induction + abduction with a final big revision at the end to combine all the accumulated evidence. But, like I said, I need to provide a more

Re: [agi] Re: AGI for a quadrillion

2008-09-21 Thread Matt Mahoney
--- On Sun, 9/21/08, Ben Goertzel [EMAIL PROTECTED] wrote: Here's the thing ... it might wind up costing trillions of dollars to ultimately replace all aspects of the human economy with AGI-based labor ... but this cost doesn't need to occur **before** the

Re: The brain does not implement formal logic (was Re: [agi] Where the Future of AGI Lies)

2008-09-21 Thread David Hart
On Mon, Sep 22, 2008 at 10:08 AM, Matt Mahoney [EMAIL PROTECTED] wrote: Training will be the overwhelming cost of AGI. Any language model improvement will help reduce this cost. How do you figure that training will cost more than designing, building and operating AGIs? Unlike training a