Keith,
Shane, you might not believe this, but I'm on your side.
You might be on my side, but are you on humanity's side?
What I mean is: sure, if I avoid debates about issues that
I think are going to be very important, then that might save
my skin in the future if somebody wants to take my
On 5/27/07, Richard Loosemore [EMAIL PROTECTED] wrote:
What possible reason do we have for assuming that the badness of
killing a creature is a linear, or even a monotonic, function of the
intelligence/complexity/consciousness of that creature?
You produced two data points on the graph, and
Tom,
I'm sure any computer scientist worth their salt could
use a computer to write up random ten-billion-byte-long
algorithms that would do exactly nothing. Defining intelligence
that way because it's mathematically neat is just cheating.
Let's assume that you can make a very long program
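Shane's point here, that sheer program length says nothing about behaviour, is easy to demonstrate. A minimal sketch (the function name and the 10 KB size are mine for illustration; the original argument imagines ten-billion-byte programs, which work the same way):

```python
# Sketch: program length is trivial to inflate without adding any
# behaviour at all, so a purely length-based notion of
# intelligence/complexity is easy to game.

def make_noop_program(target_bytes: int) -> str:
    """Return a syntactically valid Python program of at least
    target_bytes bytes that does exactly nothing when run."""
    line = "pass  # padding\n"
    reps = target_bytes // len(line) + 1
    return line * reps

prog = make_noop_program(10_000)  # 10 KB here; 10**10 bytes scales identically
assert len(prog) >= 10_000
exec(compile(prog, "<noop>", "exec"))  # compiles and runs, with no effect
```

The same construction goes through in any language: padding a program with no-ops grows its length without bound while its behaviour stays empty.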
Kaj,
As your essay talks a bit about brain imaging, I thought I might mention
to you one of the cool relatively new techniques being used in this area.
What they do is to label neurons with a calcium sensitive indicator in vivo
and then watch with two-photon microscopy for fluorescence as
Ben,
So you really think AIXI is totally useless? I haven't been reading
Richard's comments; indeed, I gave up reading them some time
before he got himself banned from SL4. However, it seems that you
in principle support what he's saying. I just checked his posts and
can see why they
On 3/8/07, Peter Voss [EMAIL PROTECTED] wrote:
It's about time that someone else said that the AIXI emperor has no clothes.
Infinite computing power arguments prove **nothing**.
That depends on exactly what you mean by "prove nothing."
For example, you can use the AIXI model to prove that no
On 3/8/07, Peter Voss [EMAIL PROTECTED] wrote:
AIXI certainly doesn't prove that AGI is possible.
I agree.
The human brain is what makes me think that it's possible.
Shane
-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
Ben started a new thread about AIXI so I'll switch to there to keep
this discussion in the same place and in sync with the subject line...
Shane
On 3/7/07, Mitchell Porter [EMAIL PROTECTED] wrote:
From: Shane Legg [EMAIL PROTECTED]
For sure. Indeed my recent paper on whether there exists
In reply to a few previous comments:
AIXI and Solomonoff induction both use infinite computer power
and thus clearly are not practical in any sense. AIXI(tl) is finite
but not really much better as the computation time is still out
of this universe. This much I think we all agree on.
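For readers following the thread, the source of the incomputability is the universal weighting over programs. A standard statement of the AIXI action rule, in Hutter's notation (U is a universal prefix Turing machine, ℓ(q) the length of program q, m the horizon; this is background, not something either poster wrote):

```latex
a_k \;:=\; \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
\,(r_k + \cdots + r_m)
\sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```

The inner sum ranges over all programs consistent with the interaction history, which is what puts AIXI (and Solomonoff induction, which uses the same 2^{-ℓ(q)} weighting) beyond any physical computer. AIXItl caps program length at l and per-program time at t, but the per-cycle cost still grows like t·2^l, hence "out of this universe."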
Calling
Ben,
Would such an AIXI system have feelings or awareness?
I have no idea; indeed, I don't even know how to define such
things outside of my own subjective experience of them...
Or to put it another way, if defining intelligence is hard, then
defining some of these other things seems to be even
I saw a talk about a year or two ago where one of the Google founders was
asked if they had projects to build general purpose artificial intelligence.
He answered that they did not have such a project at the company level;
however, they did have many AI people in the company, some of whom were
On 9/14/06, Nick Hay [EMAIL PROTECTED] wrote:
How is this weight defined, or is it informal?
In the paper by Chaitin that I quoted, it is informal. To save people finding
it, the quote reads: ...if one has ten pounds of axioms and a twenty-pound
theorem, then that theorem cannot be derived from
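Chaitin's weight metaphor does have a precise counterpart, which may be worth noting here: his information-theoretic incompleteness theorem bounds the Kolmogorov-complexity lower bounds a formal system F can prove by (roughly) the complexity of F's own axioms. In symbols (K denotes Kolmogorov complexity; the constant c depends only on F):

```latex
F \vdash \text{``}K(x) > n\text{''} \;\Longrightarrow\; n \,\le\, K(F) + c
```

That is, twenty pounds of theorem really cannot come out of ten pounds of axioms, up to an additive constant.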