To All,
I have posted plenty about statements of ignorance, our probable inability
to comprehend what an advanced intelligence might be thinking, heisenbugs,
etc. I am now wrestling with a concept that is new to me, and hopefully
others here can shed some light on it.
People often say things that
Steve: I have posted plenty about statements of ignorance, our probable
inability to comprehend what an advanced intelligence might be thinking,
What will be the SIMPLEST thing that will mark the first sign of AGI? Given
that there are zero, but zero, examples of AGI.
Don't you think it would
Jim: So, did Solomonoff's original idea involve randomizing whether the
next bit would be a 1 or a 0 in the program?
Abram: Yep.
I meant: did Solomonoff's original idea involve randomizing whether the next
bit in the programs that are originally used to produce the *prior
probabilities*
Mike,
Your reply flies in the face of two obvious facts:
1. I have little interest in what is called AGI here. My interests lie
elsewhere, e.g. uploading, Dr. Eliza, etc. I posted this piece for several
reasons, as it is directly applicable to Dr. Eliza, and because it casts a
shadow on future
Mike Tintner wrote:
What will be the SIMPLEST thing that will mark the first sign of AGI?
Given that there are zero, but zero, examples of AGI.
Machines have already surpassed human intelligence. If you don't think so, try
this IQ test. http://mattmahoney.net/iq/
Or do you prefer to
statements of stupidity - some of these are examples of cramming
sophisticated thoughts into simplistic compressed text. Language is both
intelligence-enhancing and limiting. Human language is a protocol between
agents, so there is minimalist data transfer; "I had no choice but to ..."
is a
Maybe you could give me one example from the history of technology where
machines ran before they could walk? Where they started complex rather than
simple? Or indeed from evolution of any kind? Or indeed from human
development? Where children started doing complex mental operations like
I think that some quite important philosophical questions are raised by
Steve's posting. I don't know, BTW, how you got it. I monitor all
correspondence to the group, and I did not see it.
The Turing test is not, in fact, a test of intelligence; it is a test of
similarity to humans. Hence for a
I meant:
Did Solomonoff's original idea use randomization to determine the bits of
the programs that are used to produce the *prior probabilities*? I think
that the answer to that is obviously no. The randomization of the next bit
would be used in the test of the prior probabilities as done using a
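A toy Monte Carlo sketch of the idea being debated here (the tiny "machine" and all names below are invented for illustration; this is not Solomonoff's actual construction): generate program bits by fair coin flips, run them on a prefix-free machine, and tally the outputs. Shorter programs are hit more often, so they dominate the resulting prior.

```python
import random

def toy_machine(bits):
    """A stand-in 'universal' machine (invented for this sketch):
    reads bits until it sees a 0, then halts and outputs the count
    of leading 1s. The 1...10 encoding makes the program set prefix-free."""
    n = 0
    for b in bits:
        if b == 1:
            n += 1
        else:
            return n  # halted: output is the number of 1s seen
    return None  # ran off the end without halting

def sample_output(max_len=32):
    """Feed fair-coin bits to the machine, Solomonoff-style."""
    bits = [random.randint(0, 1) for _ in range(max_len)]
    return toy_machine(bits)

def estimate_prior(trials=100_000):
    """Estimate the prior probability of each output by sampling."""
    counts = {}
    for _ in range(trials):
        x = sample_output()
        if x is not None:
            counts[x] = counts.get(x, 0) + 1
    total = sum(counts.values())
    return {x: c / total for x, c in sorted(counts.items())}

if __name__ == "__main__":
    # Shorter programs dominate: output 0 (program '0') gets weight
    # near 1/2, output 1 (program '10') near 1/4, and so on.
    print(estimate_prior())
```

The coin flips here generate the programs that define the prior, which is the distinction the thread is circling: randomness enters in producing candidate programs, not in the prior probabilities themselves, which are fixed by program length.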
Interesting article:
http://www.newscientist.com/article/mg20727723.700-artificial-life-forms-evolve-basic-intelligence.html?page=1
On Sun, Aug 1, 2010 at 3:13 PM, Jan Klauck jkla...@uni-osnabrueck.de wrote:
Ian Parker wrote
I would like your
opinion on *proofs* which involve an unproven
John,
Congratulations, as your response was the only one that was on topic!!!
On Fri, Aug 6, 2010 at 10:09 AM, John G. Rose johnr...@polyplexic.com wrote:
statements of stupidity - some of these are examples of cramming
sophisticated thoughts into simplistic compressed text.
Definitely, as
Jim, see http://www.scholarpedia.org/article/Algorithmic_probability
I think this answers your questions.
-- Matt Mahoney, matmaho...@yahoo.com
From: Jim Bromer jimbro...@gmail.com
To: agi agi@v2.listbox.com
Sent: Fri, August 6, 2010 2:18:09 PM
Subject: Re:
This is much more interesting in the context of evolution than it is for
the creation of AGI. The point is that all the things that have been done
would have been done (much more simply, in fact) by straightforward narrow
programs. However, it demonstrates the early multicellular organisms of the
Pre
This is on the surface interesting. But I'm kinda dubious about it.
I'd like to know exactly what's going on - who or what (what kind of organism)
is solving what kind of problem about what? The exact nature of the problem and
the solution, not just a general blurb description.
If you follow
Jim,
From the article Matt linked to, specifically see the line:
As p is itself a binary string, we can define the discrete universal
a priori probability, m(x), to be the probability that the output of a
universal prefix Turing machine U is x when provided
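In cleaner notation, the quantity that line defines is (my reconstruction of the standard definition of the discrete universal distribution, as in the linked Scholarpedia article):

```latex
m(x) \;=\; \sum_{p \,:\, U(p) = x} 2^{-\ell(p)}
```

where the sum runs over all programs p on which the universal prefix machine U halts with output x, and ℓ(p) is the length of p in bits; a particular program of length ℓ(p) is produced by ℓ(p) fair coin flips with probability 2^{-ℓ(p)}.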
On Wed, Aug 4, 2010 at 9:27 AM, David Jones davidher...@gmail.com wrote:
*So, why computer vision? Why can't we just enter knowledge manually?*
a) The knowledge we require for AI to do what we want is vast and complex
and we can prove that it is completely ineffective to enter the knowledge we
On Fri, Aug 6, 2010 at 7:37 PM, Jim Bromer jimbro...@gmail.com wrote:
On Wed, Aug 4, 2010 at 9:27 AM, David Jones davidher...@gmail.com wrote:
*So, why computer vision? Why can't we just enter knowledge manually?*
a) The knowledge we require for AI to do what we want is vast and complex
1) You don't define the difference between narrow AI and AGI - or make clear
why your approach is one and not the other
2) Learning about the world won't cut it - vast numbers of programs claim
they can learn about the world - what's the difference between narrow-AI
learning and AGI learning?
3) Breaking
David,
Seems like a reasonable argument to me. I agree with the emphasis on
acquiring knowledge. I agree that tackling language first is not the easiest
path. I agree with the comments on the compositionality of knowledge and the
regularity of the vast majority of the environment.
Vision seems like a
On Fri, Aug 6, 2010 at 8:22 PM, Abram Demski abramdem...@gmail.com wrote:
(Without this sort of generality, your approach seems restricted to
gathering knowledge about whatever events unfold in front of a limited
quantity of high-quality camera systems which you set up. To be honest, the
-Original Message-
From: Ian Parker [mailto:ianpark...@gmail.com]
The Turing test is not, in fact, a test of intelligence; it is a test of
similarity to humans. Hence for a machine to be truly Turing it would
have to make mistakes. Now any useful system will be made as