Re: [singularity] Scenarios for a simulated universe (second thought!)

2007-03-09 Thread Charles D Hixson
Shane Legg wrote: :-) No offence taken, I was just curious to know what your position was. I can certainly understand people with a practical interest not having time for things like AIXI. Indeed as I've said before, my PhD is in AIXI and related stuff, and yet my own AGI project is based on

[singularity] Scenarios for a simulated universe

2007-03-09 Thread Keta Meme
i am familiar with 'simulation argument', various modes of philosophical/epistemological thinking about the nature of reality and simulation, and the previous replies to this mailing list. so am i prepared to share some brief words about the subject??? X-P do you ever get the sense that you are

Re: [singularity] Scenarios for a simulated universe (second thought!)

2007-03-08 Thread Shane Legg
Ben, So you really think AIXI is totally useless? I haven't been reading Richard's comments, indeed I gave up reading his comments some time before he got himself banned from sl4, however it seems that you in principle support what he's saying. I just checked his posts and can see why they

Re: [singularity] Scenarios for a simulated universe

2007-03-07 Thread Shane Legg
Ben started a new thread about AIXI so I'll switch to there to keep this discussion in the same place and in sync with the subject line... Shane On 3/7/07, Mitchell Porter [EMAIL PROTECTED] wrote: From: Shane Legg [EMAIL PROTECTED] For sure. Indeed my recent paper on whether there exists

Re: [singularity] Scenarios for a simulated universe

2007-03-07 Thread Russell Wallace
On 3/7/07, Eugen Leitl [EMAIL PROTECTED] wrote: I realize that this is sarcasm, but detecting the mere presence of a species (never mind their critical acclaim) from a trajectory, then rather give me the infinite simians, and I will personally look for Shakespeare sonnets in them. And I've

Re: [singularity] Scenarios for a simulated universe

2007-03-07 Thread deering
Shane Legg recently (3-5-07) wrote: ...if you're not careful you may well define intelligence in such a way that humans don't have it either. I think it would be a serious mistake to degrade the definition of intelligence to the point that it included humans. Mike Deering, General Editor,

Re: [singularity] Scenarios for a simulated universe

2007-03-05 Thread Stathis Papaioannou
On 3/5/07, John Ku [EMAIL PROTECTED] wrote: On 3/4/07, Ben Goertzel [EMAIL PROTECTED] wrote: Richard, I long ago proposed a working definition of intelligence as Achieving complex goals in complex environments. I then went through a bunch of trouble to precisely define all the component

Re: [singularity] Scenarios for a simulated universe

2007-03-05 Thread Stathis Papaioannou
On 3/6/07, John Ku [EMAIL PROTECTED] wrote: On 3/5/07, Stathis Papaioannou [EMAIL PROTECTED] wrote: You seem to be equating intelligence with consciousness. Ned Block also seems to do this in his original paper. I would prefer to reserve intelligence for third person observable behaviour,

Re: [singularity] Scenarios for a simulated universe

2007-03-05 Thread Shane Legg
Ben, Would such an AIXI system have feelings or awareness? I have no idea, indeed I don't even know how to define such things outside of my own subjective experience of them... Or to put it another way, if defining intelligence is hard, then defining some of these other things seems to be even

Re: [singularity] Scenarios for a simulated universe

2007-03-05 Thread Richard Loosemore
Matt Mahoney wrote: --- Richard Loosemore [EMAIL PROTECTED] wrote: Matt Mahoney wrote: --- Richard Loosemore [EMAIL PROTECTED] wrote: What I wanted was a set of non-circular definitions of such terms as intelligence and learning, so that you could somehow *demonstrate* that your

Re: [singularity] Scenarios for a simulated universe

2007-03-05 Thread Matt Mahoney
--- Richard Loosemore [EMAIL PROTECTED] wrote: Matt Mahoney wrote: --- Richard Loosemore [EMAIL PROTECTED] wrote: Matt Mahoney wrote: --- Richard Loosemore [EMAIL PROTECTED] wrote: What I wanted was a set of non-circular definitions of such terms as intelligence and learning, so

Re: [singularity] Scenarios for a simulated universe

2007-03-05 Thread Russell Wallace
On 3/5/07, Shane Legg [EMAIL PROTECTED] wrote: Would such an AIXI system have feelings or awareness? I have no idea, indeed I don't even know how to define such things outside of my own subjective experience of them... I don't know how to define them either, but I can answer your question.

Re: [singularity] Scenarios for a simulated universe

2007-03-05 Thread deering
Russell Wallace writes: What programs of the please run this on an infinite computer type (AIXI, Blockhead, a bunch of others with acronyms and cutesy names that I don't remember) actually amount to is suppose I am Jehovah, then I will create all possible universes [or all universes of a

Re: [singularity] Scenarios for a simulated universe

2007-03-05 Thread Bruce LaDuke
Managing Director Instant Innovation, LLC Indianapolis, IN [EMAIL PROTECTED] http://www.hyperadvance.com Original Message Follows From: deering [EMAIL PROTECTED] Reply-To: singularity@v2.listbox.com To: singularity@v2.listbox.com Subject: Re: [singularity] Scenarios for a simulated

Re: [singularity] Scenarios for a simulated universe

2007-03-05 Thread Mitchell Porter
From: deering [EMAIL PROTECTED] It should be a fairly obvious implementation of a nested quantum computer to run any of these infinite processing programs. We will soon have oracle type computers that can answer any question with the reservation that the top level of the nest will have to

Re: [singularity] Scenarios for a simulated universe

2007-03-05 Thread Stathis Papaioannou
On 3/6/07, Mitchell Porter [EMAIL PROTECTED] wrote: You radically overstate the expected capabilities of quantum computers. They can't even do NP-complete problems in polynomial time. http://scottaaronson.com/blog/?p=208 What about a computer (classical will do) granted an infinity of

Re: [singularity] Scenarios for a simulated universe

2007-03-05 Thread Matt Mahoney
--- Stathis Papaioannou [EMAIL PROTECTED] wrote: On 3/6/07, Mitchell Porter [EMAIL PROTECTED] wrote: You radically overstate the expected capabilities of quantum computers. They can't even do NP-complete problems in polynomial time. http://scottaaronson.com/blog/?p=208 What

Re: [singularity] Scenarios for a simulated universe

2007-03-04 Thread Richard Loosemore
Matt Mahoney wrote: --- Richard Loosemore [EMAIL PROTECTED] wrote: What I wanted was a set of non-circular definitions of such terms as intelligence and learning, so that you could somehow *demonstrate* that your mathematical idealization of these terms correspond with the real thing, ... so

Re: [singularity] Scenarios for a simulated universe

2007-03-04 Thread Richard Loosemore
Ben Goertzel wrote: Richard Loosemore wrote: Matt Mahoney wrote: --- Richard Loosemore [EMAIL PROTECTED] wrote: What I wanted was a set of non-circular definitions of such terms as intelligence and learning, so that you could somehow *demonstrate* that your mathematical idealization of

Re: [singularity] Scenarios for a simulated universe

2007-03-04 Thread Ben Goertzel
Richard, I long ago proposed a working definition of intelligence as Achieving complex goals in complex environments. I then went through a bunch of trouble to precisely define all the component terms of that definition; you can consult the Appendix to my 2006 book The Hidden Pattern

Re: [singularity] Scenarios for a simulated universe

2007-03-04 Thread Bruce LaDuke
.listbox.com Subject: Re: [singularity] Scenarios for a simulated universe Date: Sun, 04 Mar 2007 14:26:33 -0500 Richard, I long ago proposed a working definition of intelligence as Achieving complex goals in complex environments. I then went through a bunch of trouble to precisely define all

Re: [singularity] Scenarios for a simulated universe

2007-03-04 Thread Matt Mahoney
--- Richard Loosemore [EMAIL PROTECTED] wrote: Matt Mahoney wrote: --- Richard Loosemore [EMAIL PROTECTED] wrote: What I wanted was a set of non-circular definitions of such terms as intelligence and learning, so that you could somehow *demonstrate* that your mathematical

Re: [singularity] Scenarios for a simulated universe

2007-03-04 Thread Jef Allbright
On 3/4/07, Matt Mahoney wrote: What does the definition of intelligence have to do with AIXI? AIXI is an optimization problem. The problem is to maximize an accumulated signal in an unknown environment. AIXI says the solution is to guess the simplest explanation for past observation (Occam's
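The "guess the simplest explanation" step described above can be illustrated with a toy version of the Solomonoff-style prior that AIXI builds on: weight each candidate hypothesis by 2 to the power of minus its description length, keep only the hypotheses consistent with the history, and predict by mixing their weights. Everything here is an illustrative sketch (the "programs" are just bit strings that predict their own repetition), not the actual AIXI construction.

```python
from itertools import product

def consistent(program, history):
    """Toy hypothesis: the program is a bit string predicted to repeat
    forever. It is consistent if the observed history is a prefix of
    that infinite repetition."""
    repeated = (program * (len(history) // len(program) + 1))[:len(history)]
    return repeated == tuple(history)

def predict_next(history, max_len=8):
    """Return P(next bit = 1) under a 2^-length prior over toy programs."""
    w1 = w_total = 0.0
    for n in range(1, max_len + 1):
        for program in product((0, 1), repeat=n):
            if consistent(program, history):
                w = 2.0 ** -n          # Occam's Razor: shorter = more likely
                w_total += w
                if program[len(history) % n] == 1:
                    w1 += w
    return w1 / w_total if w_total else 0.5

# The short program (0, 1) dominates the mixture for an alternating
# history, so the next bit is predicted to be 0 with high probability.
print(predict_next([0, 1, 0, 1]))
```

The key property this demonstrates is that the prediction is driven by the shortest consistent hypothesis, even though longer consistent hypotheses are also counted: their weights shrink geometrically.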

Re: [singularity] Scenarios for a simulated universe

2007-03-03 Thread Matt Mahoney
--- Richard Loosemore [EMAIL PROTECTED] wrote: What I wanted was a set of non-circular definitions of such terms as intelligence and learning, so that you could somehow *demonstrate* that your mathematical idealization of these terms correspond with the real thing, ... so that we could

Re: [singularity] Scenarios for a simulated universe

2007-03-02 Thread Matt Mahoney
--- Ben Goertzel [EMAIL PROTECTED] wrote: Matt, I really don't see why you think Hutter's work shows that Occam's Razor holds in any context except AI's with unrealistically massive amounts of computing power (like AIXI and AIXItl) In fact I think that it **does** hold in other contexts

Re: [singularity] Scenarios for a simulated universe

2007-03-02 Thread Richard Loosemore
Matt, When you said (in the text below): In every practical case of machine learning, whether it is with decision trees, neural networks, genetic algorithms, linear regression, clustering, or whatever, the problem is you are given training pairs (x,y) and you have to choose a hypothesis h
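The machine-learning setup quoted above, given training pairs (x, y), choose a hypothesis h, can be sketched with an Occam-biased selection rule: among hypotheses that fit the data, prefer the one with the shortest description. The hypothesis class and the use of the description string as a complexity proxy are illustrative assumptions, not from the original post.

```python
def select(pairs, hypotheses):
    """Pick the shortest-description hypothesis consistent with all pairs.

    hypotheses is a list of (description, function) tuples; the length of
    the description stands in for program length."""
    candidates = [(desc, f) for desc, f in hypotheses
                  if all(f(x) == y for x, y in pairs)]
    if not candidates:
        return None
    return min(candidates, key=lambda c: len(c[0]))

# Illustrative hypothesis class: two hypotheses fit the data below, and
# the simpler one wins the tie.
hypotheses = [
    ("x*2",        lambda x: x * 2),
    ("x*2+0*x**3", lambda x: x * 2 + 0 * x ** 3),
    ("x+2",        lambda x: x + 2),
]

pairs = [(1, 2), (3, 6)]
print(select(pairs, hypotheses)[0])  # → x*2
```

Both "x*2" and "x*2+0*x**3" are consistent with the training pairs; the length-based tiebreak is exactly the Occam bias under discussion in this thread.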

Re: [singularity] Scenarios for a simulated universe

2007-03-02 Thread Jef Allbright
On 3/2/07, Matt Mahoney [EMAIL PROTECTED] wrote: Second, I used the same reasoning to guess about the nature of the universe (assuming it is simulated), and the only thing we know is that shorter simulation programs are more likely than longer ones. My conclusion was that bizarre behavior or a

Re: [singularity] Scenarios for a simulated universe

2007-03-01 Thread Stathis Papaioannou
On 3/1/07, Matt Mahoney [EMAIL PROTECTED] wrote: As you probably know, Hutter proved that the optimal behavior of a goal seeking agent in an unknown environment (modeled as a pair of interacting Turing machines, with the environment sending an additional reward signal to the agent that the agent

Re: [singularity] Scenarios for a simulated universe

2007-03-01 Thread Richard Loosemore
Matt Mahoney wrote: As you probably know, Hutter proved that the optimal behavior of a goal seeking agent in an unknown environment (modeled as a pair of interacting Turing machines, with the environment sending an additional reward signal to the agent that the agent seeks to maximize) is for the

Re: [singularity] Scenarios for a simulated universe

2007-03-01 Thread Matt Mahoney
--- Richard Loosemore [EMAIL PROTECTED] wrote: Matt Mahoney wrote: As you probably know, Hutter proved that the optimal behavior of a goal seeking agent in an unknown environment (modeled as a pair of interacting Turing machines, with the environment sending an additional reward signal

Re: [singularity] Scenarios for a simulated universe

2007-03-01 Thread Jef Allbright
On 3/1/07, Matt Mahoney [EMAIL PROTECTED] wrote: What I argue is this: the fact that Occam's Razor holds suggests that the universe is a computation. Matt - Would you please clarify how/why you think B follows from A in your preceding statement? - Jef - This list is sponsored by AGIRI:

Re: [singularity] Scenarios for a simulated universe

2007-03-01 Thread Jef Allbright
On 3/1/07, Matt Mahoney [EMAIL PROTECTED] wrote: --- Jef Allbright [EMAIL PROTECTED] wrote: On 3/1/07, Matt Mahoney [EMAIL PROTECTED] wrote: What I argue is this: the fact that Occam's Razor holds suggests that the universe is a computation. Matt - Would you please clarify how/why

Re: [singularity] Scenarios for a simulated universe

2007-03-01 Thread Matt Mahoney
--- Jef Allbright [EMAIL PROTECTED] wrote: On 3/1/07, Matt Mahoney [EMAIL PROTECTED] wrote: --- Jef Allbright [EMAIL PROTECTED] wrote: On 3/1/07, Matt Mahoney [EMAIL PROTECTED] wrote: What I argue is this: the fact that Occam's Razor holds suggests that the universe is a

Re: [singularity] Scenarios for a simulated universe

2007-03-01 Thread Jef Allbright
On 3/1/07, Matt Mahoney [EMAIL PROTECTED] wrote: --- Jef Allbright [EMAIL PROTECTED] wrote: Matt - I think this answers my question to you, at least I think I see where you're coming from. I would say that you have justification for saying that interaction with the universe

[singularity] Scenarios for a simulated universe

2007-02-28 Thread Matt Mahoney
As you probably know, Hutter proved that the optimal behavior of a goal seeking agent in an unknown environment (modeled as a pair of interacting Turing machines, with the environment sending an additional reward signal to the agent that the agent seeks to maximize) is for the agent to guess at
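The agent/environment protocol described in this opening post can be sketched as a simple interaction loop: at each step the agent emits an action, the environment replies with an observation plus a reward, and the agent tries to maximize accumulated reward. The concrete agent and environment below are trivial stand-ins (a reflex policy and a reward-the-action-1 environment), chosen only to make the loop runnable; AIXI itself would mix over all computable environments rather than follow a fixed policy.

```python
def run_episode(agent, environment, steps):
    """Drive one agent/environment interaction; return total reward."""
    total, observation, reward = 0.0, None, 0.0
    for _ in range(steps):
        action = agent(observation, reward)          # agent acts
        observation, reward = environment(action)    # environment responds
        total += reward                              # accumulate reward
    return total

# Trivial illustrative environment: echoes the action, rewards action 1.
def env(action):
    return action, 1.0 if action == 1 else 0.0

# Trivial illustrative agent: a fixed reflex policy.
def agent(observation, reward):
    return 1

print(run_episode(agent, env, 10))  # → 10.0
```

Hutter's result concerns what the optimal agent in this loop looks like when the environment is unknown: it behaves as if it maintained a simplicity-weighted mixture over all environments consistent with the interaction so far.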