Shane Legg wrote:
:-)
No offence taken, I was just curious to know what your position was.
I can certainly understand people with a practical interest not having
time for things like AIXI. Indeed as I've said before, my PhD is in AIXI
and related stuff, and yet my own AGI project is based on
I am familiar with the 'simulation argument', various modes of
philosophical/epistemological thinking about the nature of reality and
simulation, and the previous replies to this mailing list. So, am I prepared
to share some brief words about the subject? X-P
Do you ever get the sense that you are
Ben,
So you really think AIXI is totally useless? I haven't been reading
Richard's comments; indeed, I gave up reading his comments some
time before he got himself banned from SL4. However, it seems that you
in principle support what he's saying. I just checked his posts and
can see why they
Ben started a new thread about AIXI so I'll switch to there to keep
this discussion in the same place and in sync with the subject line...
Shane
On 3/7/07, Mitchell Porter [EMAIL PROTECTED] wrote:
From: Shane Legg [EMAIL PROTECTED]
For sure. Indeed my recent paper on whether there exists
On 3/7/07, Eugen Leitl [EMAIL PROTECTED] wrote:
I realize that this is sarcasm, but if the task is detecting the mere
presence of a species (never mind their critical acclaim) from a
trajectory, then rather give me the infinite simians, and I will
personally look for Shakespeare sonnets in them.
And I've
Shane Legg recently (3-5-07) wrote: ...if you're not careful you may well
define intelligence
in such a way that humans don't have it either.
I think it would be a serious mistake to degrade the definition of intelligence
to the point that it included humans.
Mike Deering,
General Editor,
On 3/5/07, John Ku [EMAIL PROTECTED] wrote:
On 3/4/07, Ben Goertzel [EMAIL PROTECTED] wrote:
Richard, I long ago proposed a working definition of intelligence as
"achieving complex goals in complex environments". I then went through
a bunch of trouble to precisely define all the component
On 3/6/07, John Ku [EMAIL PROTECTED] wrote:
On 3/5/07, Stathis Papaioannou [EMAIL PROTECTED] wrote:
You seem to be equating intelligence with consciousness. Ned Block also
seems to do this in his original paper. I would prefer to reserve
intelligence for third-person observable behaviour,
Ben,
Would such an AIXI system have feelings or awareness?
I have no idea, indeed I don't even know how to define such
things outside of my own subjective experience of them...
Or to put it another way, if defining intelligence is hard, then
defining some of these other things seems to be even
Matt Mahoney wrote:
--- Richard Loosemore [EMAIL PROTECTED] wrote:
What I wanted was a set of non-circular definitions of such terms as
intelligence and learning, so that you could somehow *demonstrate*
that your
--- Richard Loosemore [EMAIL PROTECTED] wrote:
Matt Mahoney wrote:
--- Richard Loosemore [EMAIL PROTECTED] wrote:
What I wanted was a set of non-circular definitions of such terms as
intelligence and learning, so
On 3/5/07, Shane Legg [EMAIL PROTECTED] wrote:
Would such an AIXI system have feelings or awareness?
I have no idea, indeed I don't even know how to define such
things outside of my own subjective experience of them...
I don't know how to define them either, but I can answer your question.
Russell Wallace writes: What programs of the "please run this on an infinite
computer" type (AIXI, Blockhead, a bunch of others with acronyms and cutesy
names that I don't remember) actually amount to is "suppose I am Jehovah, then
I will create all possible universes [or all universes of a
Managing Director
Instant Innovation, LLC
Indianapolis, IN
[EMAIL PROTECTED]
http://www.hyperadvance.com
Original Message Follows
From: deering [EMAIL PROTECTED]
Subject: Re: [singularity] Scenarios for a simulated universe
From: deering [EMAIL PROTECTED]
It should be a fairly obvious implementation of a nested quantum computer
to run any of these infinite-processing programs. We will soon have
oracle-type computers that can answer any question, with the reservation
that the top level of the nest will have to
On 3/6/07, Mitchell Porter [EMAIL PROTECTED] wrote:
You radically overstate the expected capabilities of quantum computers.
They can't even do NP-complete problems in polynomial time.
http://scottaaronson.com/blog/?p=208
What about a computer (classical will do) granted an infinity of
--- Stathis Papaioannou [EMAIL PROTECTED] wrote:
On 3/6/07, Mitchell Porter [EMAIL PROTECTED] wrote:
You radically overstate the expected capabilities of quantum computers.
They can't even do NP-complete problems in polynomial time.
http://scottaaronson.com/blog/?p=208
What
Matt Mahoney wrote:
--- Richard Loosemore [EMAIL PROTECTED] wrote:
What I wanted was a set of non-circular definitions of such terms as
intelligence and learning, so that you could somehow *demonstrate*
that your mathematical idealization of these terms corresponds with the
real thing, ... so
Ben Goertzel wrote:
Richard Loosemore wrote:
Matt Mahoney wrote:
--- Richard Loosemore [EMAIL PROTECTED] wrote:
What I wanted was a set of non-circular definitions of such terms as
intelligence and learning, so that you could somehow
*demonstrate* that your mathematical idealization of
Richard, I long ago proposed a working definition of intelligence as
"achieving complex goals in complex environments". I then went
through a bunch of trouble to precisely define all the component
terms of that definition; you can consult the Appendix to my 2006
book The Hidden Pattern
Subject: Re: [singularity] Scenarios for a simulated universe
Date: Sun, 04 Mar 2007 14:26:33 -0500
Richard, I long ago proposed a working definition of intelligence as
"achieving complex goals in complex environments". I then went through a
bunch of trouble to precisely define all
--- Richard Loosemore [EMAIL PROTECTED] wrote:
Matt Mahoney wrote:
--- Richard Loosemore [EMAIL PROTECTED] wrote:
What I wanted was a set of non-circular definitions of such terms as
intelligence and learning, so that you could somehow *demonstrate*
that your mathematical
On 3/4/07, Matt Mahoney wrote:
What does the definition of intelligence have to do with AIXI? AIXI is an
optimization problem. The problem is to maximize an accumulated signal in an
unknown environment. AIXI says the solution is to guess the simplest
explanation for past observations (Occam's
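For concreteness, here is a toy sketch in Python of the Occam-style guessing
Matt describes. It is not AIXI itself (the full Solomonoff mixture is
incomputable); the hypothesis class of repeating bit patterns and the
2**(-length) prior are illustrative assumptions of mine, not part of Hutter's
construction.

from itertools import product

def hypotheses(max_len=8):
    # Every repeating bit pattern up to max_len, weighted 2**(-length):
    # shorter descriptions get exponentially more prior mass.
    for n in range(1, max_len + 1):
        for bits in product("01", repeat=n):
            yield "".join(bits), 2.0 ** -n

def predict_next(history, max_len=8):
    # Mix all hypotheses that reproduce the observed history exactly,
    # and return the mixture's probability that the next bit is '1'.
    weight_1 = weight_total = 0.0
    for pattern, prior in hypotheses(max_len):
        extended = pattern * (len(history) // len(pattern) + 2)
        if extended.startswith(history):        # consistent with past data
            weight_total += prior
            if extended[len(history)] == "1":
                weight_1 += prior
    return weight_1 / weight_total if weight_total else 0.5

print(predict_next("010101"))  # near 0: the shortest fit, "01", predicts '0'

The shortest consistent pattern dominates the mixture, which is the whole
content of the Occam step.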
--- Richard Loosemore [EMAIL PROTECTED] wrote:
What I wanted was a set of non-circular definitions of such terms as
intelligence and learning, so that you could somehow *demonstrate*
that your mathematical idealization of these terms corresponds with the
real thing, ... so that we could
--- Ben Goertzel [EMAIL PROTECTED] wrote:
Matt, I really don't see why you think Hutter's work shows that Occam's
Razor holds in any
context except AIs with unrealistically massive amounts of computing
power (like AIXI and AIXItl).
In fact I think that it **does** hold in other contexts
Matt,
When you said (in the text below):
In every practical case of machine learning, whether it is with
decision trees, neural networks, genetic algorithms, linear
regression, clustering, or whatever, the problem is that you are given
training pairs (x,y) and you have to choose a hypothesis h
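A minimal sketch of that (x, y) -> h setup, using an MDL-flavoured score as a
stand-in for "simplest adequate hypothesis". The polynomial hypothesis class
and the 32-bits-per-parameter cost are my assumptions for illustration, not
anything from Matt's text.

import numpy as np

def mdl_cost(x, y, degree, bits_per_param=32):
    # Total description length = bits to describe h + bits to describe errors.
    coeffs = np.polyfit(x, y, degree)
    residuals = y - np.polyval(coeffs, x)
    model_bits = bits_per_param * (degree + 1)   # cost of the hypothesis h
    data_bits = 0.5 * len(x) * np.log2(max(float(np.mean(residuals ** 2)), 1e-12))
    return model_bits + data_bits                # Gaussian code, up to a constant

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 3 * x + 2 + rng.normal(0, 0.1, 50)           # noisy line: the (x, y) pairs
best = min(range(6), key=lambda d: mdl_cost(x, y, d))
print("chosen degree:", best)                    # picks 1: the simple h wins

Extra parameters must buy a big enough drop in residual error to pay for their
own description; that trade is the practical form of Occam's Razor being
argued about here.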
On 3/2/07, Matt Mahoney [EMAIL PROTECTED] wrote:
Second, I used the same reasoning to guess about the nature of the universe
(assuming it is simulated), and the only thing we know is that shorter
simulation programs are more likely than longer ones. My conclusion was that
bizarre behavior or a
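To make the "shorter simulation programs are more likely" step explicit: this
is the Solomonoff-style length prior. Assuming a prefix-free programming
language, a program p of length \ell(p) bits gets weight

\[
P(p) \;=\; \frac{2^{-\ell(p)}}{\sum_{q} 2^{-\ell(q)}},
\]

so each extra bit of length halves a program's prior mass, and among the
simulation programs consistent with what we observe, the shortest ones carry
almost all of the probability.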
On 3/1/07, Matt Mahoney [EMAIL PROTECTED] wrote:
As you probably know, Hutter proved that the optimal behavior of a goal
seeking agent in an unknown environment (modeled as a pair of interacting
Turing machines, with the environment sending an additional reward signal to
the agent that the agent
Matt Mahoney wrote:
As you probably know, Hutter proved that the optimal behavior of a
goal seeking agent in an unknown environment (modeled as a pair of
interacting Turing machines, with the environment sending an
additional reward signal to the agent that the agent seeks to
maximize) is for the
--- Richard Loosemore [EMAIL PROTECTED] wrote:
Matt Mahoney wrote:
As you probably know, Hutter proved that the optimal behavior of a
goal seeking agent in an unknown environment (modeled as a pair of
interacting Turing machines, with the environment sending an
additional reward signal
On 3/1/07, Matt Mahoney [EMAIL PROTECTED] wrote:
What I argue is this: the fact that Occam's Razor holds suggests that the
universe is a computation.
Matt -
Would you please clarify how/why you think B follows from A in your
preceding statement?
- Jef
On 3/1/07, Matt Mahoney [EMAIL PROTECTED] wrote:
--- Jef Allbright [EMAIL PROTECTED] wrote:
On 3/1/07, Matt Mahoney [EMAIL PROTECTED] wrote:
What I argue is this: the fact that Occam's Razor holds suggests that the
universe is a computation.
Matt -
Would you please clarify how/why
--- Jef Allbright [EMAIL PROTECTED] wrote:
On 3/1/07, Matt Mahoney [EMAIL PROTECTED] wrote:
--- Jef Allbright [EMAIL PROTECTED] wrote:
On 3/1/07, Matt Mahoney [EMAIL PROTECTED] wrote:
What I argue is this: the fact that Occam's Razor holds suggests that
the universe is a
On 3/1/07, Matt Mahoney [EMAIL PROTECTED] wrote:
--- Jef Allbright [EMAIL PROTECTED] wrote:
Matt -
I think this answers my question to you; at least I think I see where
you're coming from.
I would say that you have justification for saying that interaction
with the universe
As you probably know, Hutter proved that the optimal behavior of a goal seeking
agent in an unknown environment (modeled as a pair of interacting Turing
machines, with the environment sending an additional reward signal to the agent
that the agent seeks to maximize) is for the agent to guess at
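A hedged sketch of that agent/environment protocol in Python: two processes
exchanging symbols in lockstep, the environment also emitting a reward the
agent tries to accumulate. The two-armed bandit environment and the
epsilon-greedy agent are illustrative stand-ins of mine; Hutter's result
concerns the optimal (and incomputable) agent in this loop, not this one.

import random

class Environment:
    """Two-armed bandit; arm 1 pays off more often. Hidden from the agent."""
    def step(self, action):
        reward = 1.0 if random.random() < (0.8 if action == 1 else 0.2) else 0.0
        observation = None          # this toy environment reveals nothing else
        return observation, reward

class Agent:
    """Tracks average reward per action; mostly exploits, sometimes explores."""
    def __init__(self, n_actions=2, epsilon=0.1):
        self.counts = [0] * n_actions
        self.totals = [0.0] * n_actions
        self.epsilon = epsilon
    def act(self):
        if random.random() < self.epsilon or 0 in self.counts:
            return random.randrange(len(self.counts))   # explore
        return max(range(len(self.counts)),
                   key=lambda a: self.totals[a] / self.counts[a])
    def learn(self, action, reward):
        self.counts[action] += 1
        self.totals[action] += reward

env, agent, total = Environment(), Agent(), 0.0
for t in range(1000):                # the interaction loop Hutter formalizes
    action = agent.act()
    _, reward = env.step(action)     # environment answers with a reward signal
    agent.learn(action, reward)
    total += reward
print("accumulated reward:", total)  # climbs toward ~0.8 per step as it learns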