Mike,
On 9/20/08, Mike Tintner [EMAIL PROTECTED] wrote:
Steve: If I were selling a technique like Buzan then I would agree.
However, someone selling a tool to merge ALL techniques is in a different
situation, with a knowledge engine to sell.
The difference AFAICT is that Buzan had an
Hmm. My bot mostly repeats what it hears.
bot Monie: haha. r u a bot ?
bot cyberbrain: not to mention that in a theory complex enough with
a large enough number of parameters. one can interpret anything.
even things that are completely physically inconsistent with each
other. i suggest actually
Ok, most of its replies here seem to be based on the first word of
what it's replying to. But it's really capable of more lateral
connections.
wijnand yeah i use it to add shortcuts for some menu functions i use a lot
bot wijnand: TOMACCO!!!
On 9/21/08, Eric Burton [EMAIL PROTECTED] wrote:
I've started to wander away from my normal sub-cognitive level of AI,
and have been thinking about reasoning systems. One scenario I have
come up with is the "foresight of extra knowledge" scenario.
Suppose Alice and Bob have decided to bet $10 on the weather in Alaska
in 10 days' time, whether
There are several issues involved in this example, though the basic one is:
(1) There is a decision to be made before a deadline (after 10 days),
let's call it goal A, written as A?
(2) At the current moment, the available information is not enough to
support a confident conclusion, that is, the
Hmm... I didn't mean infinite evidence, only infinite time and space
with which to compute the consequences of evidence. But that is
interesting too.
The higher-order probabilities I'm talking about introducing do not
reflect inaccuracy at all. :)
This may seem odd, but it seems to me to follow
When working on your new proposal, remember that in NARS all
measurements must be based on what the system has --- limited evidence
and resources. I don't allow any objective probability that only
exists in a Platonic world or the infinite future.
Pei
On Sun, Sep 21, 2008 at 1:53 PM, Abram
--- On Sat, 9/20/08, Mike Tintner [EMAIL PROTECTED] wrote:
Matt: A more appropriate metaphor is that text compression is the
altimeter by which we measure progress. (1)
Matt,
Now that sentence is a good example of general intelligence - forming a
new connection between domains -
--- On Sat, 9/20/08, Ben Goertzel [EMAIL PROTECTED] wrote:
A more appropriate metaphor is that text compression is the altimeter by
which we measure progress.
An extremely major problem with this idea is that, according to this
altimeter, gzip is vastly more intelligent than a chimpanzee or a
Now if you want to compare gzip, a chimpanzee, and a 2-year-old child using
language prediction as your IQ test, then I would say that gzip falls in the
middle. A chimpanzee has no language model, so it is lowest. A 2-year-old
child can identify word boundaries in continuous speech, can
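[Editorial sketch: the compression-as-yardstick idea above can be tried directly. A minimal illustration using Python's gzip module as a crude stand-in for a language model, scoring text by compression ratio (lower ratio = more regularity captured). The sample strings are invented for the example.]

```python
import gzip
import os

def compression_ratio(data: bytes) -> float:
    """Compressed size over original size; lower = more predictable data."""
    return len(gzip.compress(data)) / len(data)

# Repetitive English-like text compresses far better than random noise,
# which gzip cannot model at all (the ratio even exceeds 1.0 from overhead).
english = b"the cat sat on the mat. " * 40
noise = os.urandom(len(english))
print(compression_ratio(english) < compression_ratio(noise))  # True
```

This is of course exactly the point of contention in the thread: the score ranks gzip above any model that does not emit text at all, regardless of other capabilities.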
--- On Sat, 9/20/08, Pei Wang [EMAIL PROTECTED] wrote:
Think about a concrete example: if from one source the system gets
P(A--B) = 0.9, and P(P(A--B) = 0.9) = 0.5, while from another source
P(A--B) = 0.2, and P(P(A--B) = 0.2) = 0.7, then what will be the
conclusion when the two sources are
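[Editorial sketch: Pei's two-source example can be worked through numerically. This assumes the higher-order probabilities are read as NARS-style confidence values and combined by the standard NARS revision rule, where confidence c corresponds to evidence weight w = k*c/(1-c) with personality constant k = 1; that reading is an assumption, not something settled in the thread.]

```python
def revise(f1: float, c1: float, f2: float, c2: float, k: float = 1.0):
    """Combine two <frequency, confidence> judgments by NARS revision."""
    # Confidence -> amount of evidence (NARS defines c = w / (w + k)).
    w1 = k * c1 / (1.0 - c1)
    w2 = k * c2 / (1.0 - c2)
    w = w1 + w2
    f = (w1 * f1 + w2 * f2) / w   # evidence-weighted frequency
    c = w / (w + k)               # pooled evidence back to confidence
    return f, c

# Source 1: P(A--B)=0.9 held at weight 0.5; source 2: 0.2 at weight 0.7.
print(revise(0.9, 0.5, 0.2, 0.7))  # f ~ 0.41, c ~ 0.77
```

The pooled frequency lands closer to the more confidently held 0.2, and the combined confidence exceeds either input, which is the behavior revision is supposed to have.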
--- On Sat, 9/20/08, Pei Wang [EMAIL PROTECTED] wrote:
Matt,
I really hope NARS can be simplified, but until you give me the
details, such as how to calculate the truth value in your converse
rule, I cannot see how you can do the same things with a simpler
design.
You're right. Given
--- On Sun, 9/21/08, Ben Goertzel [EMAIL PROTECTED] wrote:
Hmmm I am pretty strongly skeptical of intelligence tests that do not
measure the actual functionality of an AI system, but rather measure the
theoretical capability of the structures or processes or data inside the
system...
The
On Mon, Sep 22, 2008 at 2:14 AM, Matt Mahoney [EMAIL PROTECTED] wrote:
I'm not building AGI. (That is a $1 quadrillion problem).
How do you estimate your confidence in this assertion that developing
AGI (singularity capable) requires this insane effort (odds of the bet
you'd take for it)? This
On Mon, Sep 22, 2008 at 8:29 AM, Vladimir Nesov [EMAIL PROTECTED] wrote:
How do you estimate your confidence in this assertion that developing
AGI (singularity capable) requires this insane effort (odds of the bet
you'd take for it)? This is an easily falsifiable statement, if a
small group
--- On Sun, 9/21/08, Vladimir Nesov [EMAIL PROTECTED] wrote:
I'm not building AGI. (That is a $1 quadrillion problem).
How do you estimate your confidence in this assertion that developing
AGI (singularity capable) requires this insane effort (odds of the bet
you'd take for it)? This
On Mon, Sep 22, 2008 at 3:37 AM, Matt Mahoney [EMAIL PROTECTED] wrote:
Another possibility is that we will discover some low cost shortcut to AGI.
Recursive self improvement is one example, but I showed that this won't work.
(See http://www.mattmahoney.net/rsi.pdf ). So far no small group (or
I'm not building AGI. (That is a $1 quadrillion problem). I'm studying
algorithms for learning language. Text compression is a useful tool for
measuring progress (although not for vision).
OK, but the focus of this list is supposed to be AGI, right ... so I suppose
I should be forgiven for
--- On Sun, 9/21/08, Vladimir Nesov [EMAIL PROTECTED] wrote:
Hence the question: you are making a very strong assertion by
effectively saying that there is no shortcut, period (in the
short-term perspective, anyway). How sure are you in this
assertion?
I can't prove it, but the fact that
That seems a pretty sketchy anti-AGI argument, given the coordinated
advances in computer hardware, computer science and cognitive science during
the last couple decades, which put AGI designers in a quite different
position from where we were in the 80's ...
ben
On Sun, Sep 21, 2008 at 7:56 PM,
--- On Sun, 9/21/08, Ben Goertzel [EMAIL PROTECTED] wrote:
Text compression is IMHO a terrible way of measuring incremental progress
toward AGI. Of course it may be very valuable for other purposes...
It is a way to measure progress in language modeling, which is an important
component of AGI
--- On Sun, 9/21/08, Ben Goertzel [EMAIL PROTECTED] wrote:
That seems a pretty sketchy anti-AGI argument, given the coordinated advances
in computer hardware, computer science and cognitive science during the last
couple decades, which put AGI designers in a quite different position from
where
On Sun, Sep 21, 2008 at 4:15 PM, William Pearson [EMAIL PROTECTED] wrote:
2008/9/21 Pei Wang [EMAIL PROTECTED]:
There are several issues involved in this example, though the basic one is:
(1) There is a decision to be made before a deadline (after 10 days),
let's call it goal A, written as A?
yes, but your cost estimate is based on some very odd and specialized
assumptions regarding AGI architecture!!!
On Sun, Sep 21, 2008 at 8:12 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
--- On Sun, 9/21/08, Ben Goertzel [EMAIL PROTECTED] wrote:
That seems a pretty sketchy anti-AGI argument,
On Sun, Sep 21, 2008 at 8:08 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
--- On Sun, 9/21/08, Ben Goertzel [EMAIL PROTECTED] wrote:
Text compression is IMHO a terrible way of measuring incremental progress
toward AGI. Of course it may be very valuable for other purposes...
It is a way to
On Mon, Sep 22, 2008 at 3:56 AM, Matt Mahoney [EMAIL PROTECTED] wrote:
--- On Sun, 9/21/08, Vladimir Nesov [EMAIL PROTECTED] wrote:
Hence the question: you are making a very strong assertion by
effectively saying that there is no shortcut, period (in the
short-term perspective, anyway). How
--- On Sun, 9/21/08, Vladimir Nesov [EMAIL PROTECTED] wrote:
So, do you think that there is at least, say, 99% probability that AGI
won't be developed by a reasonably small group in the next 30 years?
Yes, but in the way that the internet was not developed by a small group. A
small number
--- On Sun, 9/21/08, Ben Goertzel [EMAIL PROTECTED] wrote:
yes, but your cost estimate is based on some very odd and specialized
assumptions regarding AGI architecture!!!
As I explained, my cost estimate is based on the value of the global economy
and the assumption that AGI would automate it
On Mon, Sep 22, 2008 at 5:40 AM, Matt Mahoney [EMAIL PROTECTED] wrote:
--- On Sun, 9/21/08, Vladimir Nesov [EMAIL PROTECTED] wrote:
So, do you think that there is at least, say, 99% probability that AGI
won't be developed by a reasonably small group in the next 30 years?
Yes, but in the
Attached is my attempt at a probabilistic justification for NARS.
--Abram
---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
--- On Sun, 9/21/08, Ben Goertzel [EMAIL PROTECTED] wrote:
On Sun, Sep 21, 2008 at 8:08 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
--- On Sun, 9/21/08, Ben Goertzel [EMAIL PROTECTED] wrote:
Text compression is IMHO a terrible way of measuring incremental progress
toward AGI. Of course it may
On Mon, Sep 22, 2008 at 5:54 AM, Matt Mahoney [EMAIL PROTECTED] wrote:
As I explained, my cost estimate is based on the value of the
global economy and the assumption that AGI would automate it
by replacing human labor.
The cost of automating agriculture (in the modern sense) isn't equal to
I don't see how you get the NARS induction and abduction truth value
formulas out of this, though...
ben g
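[Editorial sketch: for context on Ben's question, the NARS induction and abduction truth functions can be written down as follows. These are reproduced from memory of Pei Wang's published NARS definitions (with personality parameter k = 1); treat the exact formulas as an assumption to be checked against the NARS literature, not as authoritative.]

```python
def nars_induction(f1, c1, f2, c2, k=1.0):
    """Premises M->P <f1,c1> and M->S <f2,c2> yield conclusion S->P."""
    w = f2 * c1 * c2             # total evidence for the conclusion
    w_plus = f1 * f2 * c1 * c2   # positive evidence
    f = w_plus / w if w > 0 else 0.5
    return f, w / (w + k)

def nars_abduction(f1, c1, f2, c2, k=1.0):
    """Premises P->M <f1,c1> and S->M <f2,c2> yield conclusion S->P.
    Structurally the mirror image of induction with premise roles swapped."""
    return nars_induction(f2, c2, f1, c1, k)
```

The point at issue in the thread is whether these low-confidence conclusions, pooled by revision, fall out of Abram's probabilistic construction.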
On Sun, Sep 21, 2008 at 10:10 PM, Abram Demski [EMAIL PROTECTED]wrote:
Attached is my attempt at a probabilistic justification for NARS.
--Abram
--- On Sun, 9/21/08, Vladimir Nesov [EMAIL PROTECTED] wrote:
On Mon, Sep 22, 2008 at 5:54 AM, Matt Mahoney
[EMAIL PROTECTED] wrote:
As I explained, my cost estimate is based on the value of the
global economy and the assumption that AGI would automate it
by replacing human labor.
Here's the thing ... it might wind up costing trillions of dollars to
ultimately replace all aspects of the human economy with AGI-based labor ...
but this cost doesn't need to occur **before** the first human-level AGI is
created ...
We'll create the human-level AGI first, without such a high
--- On Sun, 9/21/08, Vladimir Nesov [EMAIL PROTECTED] wrote:
On Mon, Sep 22, 2008 at 5:40 AM, Matt Mahoney
[EMAIL PROTECTED] wrote:
--- On Sun, 9/21/08, Vladimir Nesov
[EMAIL PROTECTED] wrote:
So, do you think that there is at least, say, 99% probability that AGI
won't be developed
The calculation in which I sum up a bunch of pairs is equivalent to
doing NARS induction + abduction with a final big revision at the end
to combine all the accumulated evidence. But, like I said, I need to
provide a more explicit justification of that calculation...
--Abram
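[Editorial sketch: Abram's "sum up a bunch of pairs" step can be made concrete. This assumes each pair is NARS-style (positive evidence, total evidence) and that the "final big revision" amounts to summing the pairs before converting back to a frequency and confidence with c = w/(w+k), k = 1; the example numbers are invented.]

```python
def pool(pairs, k=1.0):
    """One big revision over (w_plus, w) evidence pairs: evidence just adds."""
    w_plus = sum(p for p, _ in pairs)
    w = sum(t for _, t in pairs)
    return w_plus / w, w / (w + k)   # (frequency, confidence)

# e.g. three evidence pairs accumulated from separate induction/abduction steps
print(pool([(3, 4), (1, 2), (2, 2)]))  # -> (0.75, 0.888...)
```

Because evidence weights are additive, summing all pairs first and revising once gives the same result as revising pairwise in any order, which is presumably why the two calculations come out equivalent.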
On Sun, Sep 21, 2008
On Sun, Sep 21, 2008 at 10:43 PM, Abram Demski [EMAIL PROTECTED]wrote:
The calculation in which I sum up a bunch of pairs is equivalent to
doing NARS induction + abduction with a final big revision at the end
to combine all the accumulated evidence. But, like I said, I need to
provide a more
-- Matt Mahoney, [EMAIL PROTECTED]
--- On Sun, 9/21/08, Ben Goertzel [EMAIL PROTECTED] wrote:
Here's the thing ... it might wind up costing trillions of dollars to
ultimately replace all aspects of the human economy with AGI-based labor ...
but this cost doesn't need to occur **before** the
On Mon, Sep 22, 2008 at 10:08 AM, Matt Mahoney [EMAIL PROTECTED] wrote:
Training will be the overwhelming cost of AGI. Any language model
improvement will help reduce this cost.
How do you figure that training will cost more than designing, building and
operating AGIs? Unlike training a