Re: [singularity] The humans are dead...

2007-05-29 Thread Shane Legg
Keith, Shane, you might not believe this, but I'm on your side. You might be on my side, but are you on humanity's side? What I mean is: Sure, if I avoid debates about issues that I think are going to be very important then that might save my skin in the future if somebody wants to take my

Re: [singularity] The humans are dead...

2007-05-28 Thread Shane Legg
On 5/27/07, Richard Loosemore [EMAIL PROTECTED] wrote: What possible reason do we have for assuming that the badness of killing a creature is a linear, or even a monotonic, function of the intelligence/complexity/consciousness of that creature? You produced two data points on the graph, and

Re: Machine Motivation Gets Distorted Again [WAS Re: [singularity] Help get the 400k SIAI matching challenge on DIGG's front page]

2007-05-15 Thread Shane Legg
Tom, I'm sure any computer scientist worth their salt could use a computer to write up random ten-billion-byte-long algorithms that would do exactly nothing. Defining intelligence that way because it's mathematically neat is just cheating. Let's assume that you can make a very long program
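
(A bit of context for this exchange: the measure being debated is, presumably, the Legg-Hutter universal intelligence, which scores an agent pi by its expected value V_mu^pi in each computable environment mu, weighted by the environment's Kolmogorov complexity K(mu):

    \Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi

where E is the set of computable, reward-summable environments. Under this weighting an agent that does nothing simply earns a low V_mu^pi almost everywhere, and a ten-billion-byte incompressible construction carries a weight on the order of 2^{-10^{10}}, so such junk programs contribute essentially nothing to the measure.)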

Re: [singularity] ESSAY: Artificial intelligence within our lifetime?

2007-03-23 Thread Shane Legg
Kaj, As your essay talks a bit about brain imaging, I thought I might mention to you one of the cool, relatively new techniques being used in this area. What they do is to label neurons with a calcium-sensitive indicator in vivo and then watch with two-photon microscopy for fluorescence as

Re: [singularity] Scenarios for a simulated universe (second thought!)

2007-03-08 Thread Shane Legg
Ben, So you really think AIXI is totally useless? I haven't been reading Richard's comments; indeed, I gave up reading them some time before he got himself banned from SL4. However, it seems that you in principle support what he's saying. I just checked his posts and can see why they

Re: [singularity] Apology to the list, and a more serious commentary on AIXI

2007-03-08 Thread Shane Legg
On 3/8/07, Peter Voss [EMAIL PROTECTED] wrote: It's about time that someone else said that the AIXI emperor has no clothes. Infinite computing power arguments prove **nothing**. That depends on exactly what you mean by "prove nothing". For example, you can use the AIXI model to prove that no

Re: [singularity] Apology to the list, and a more serious commentary on AIXI

2007-03-08 Thread Shane Legg
On 3/8/07, Peter Voss [EMAIL PROTECTED] wrote: AIXI certainly doesn't prove that AGI is possible. I agree. The human brain is what makes me think that it's possible. Shane

Re: [singularity] Scenarios for a simulated universe

2007-03-07 Thread Shane Legg
Ben started a new thread about AIXI so I'll switch to there to keep this discussion in the same place and in sync with the subject line... Shane On 3/7/07, Mitchell Porter [EMAIL PROTECTED] wrote: From: Shane Legg [EMAIL PROTECTED] For sure. Indeed my recent paper on whether there exists

Re: [singularity] Uselessness of AIXI

2007-03-07 Thread Shane Legg
In reply to a few previous comments: AIXI and Solomonoff induction both use infinite computing power and thus clearly are not practical in any sense. AIXI(tl) is finite but not really much better, as the computation time is still out of this universe. This much I think we all agree on. Calling
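
(For readers who want the formal object behind this exchange: the AIXI agent picks its next action by an expectimax over all programs consistent with the interaction history so far, each weighted by its length. In Hutter's notation, roughly,

    a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} (r_k + \cdots + r_m) \sum_{q \,:\, U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}

where a, o and r are actions, observations and rewards, m is the horizon, and the innermost sum runs over every program q of length \ell(q) that makes the universal machine U reproduce the history. That unbounded sum over programs is where the infinite computing power goes; the time-bounded variant AIXItl restricts attention to programs of length at most l and per-cycle runtime at most t, but its own runtime still grows on the order of t * 2^l, which is the "out of this universe" computation time mentioned above.)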

Re: [singularity] Scenarios for a simulated universe

2007-03-05 Thread Shane Legg
Ben, Would such an AIXI system have feelings or awareness? I have no idea, indeed I don't even know how to define such things outside of my own subjective experience of them... Or to put it another way, if defining intelligence is hard, then defining some of these other things seems to be even

Re: [singularity] Vinge Goertzel = Uplift Academy's Good Ancestor Principle Workshop 2007

2007-02-19 Thread Shane Legg
I saw a talk about a year or two ago where one of the Google founders was asked if they had projects to build general-purpose artificial intelligence. He answered that they did not have such a project at the company level, however they did have many AI people in the company, some of whom were

Re: Re: [singularity] Is Friendly AI Bunk?

2006-09-14 Thread Shane Legg
On 9/14/06, Nick Hay [EMAIL PROTECTED] wrote: How is this weight defined, or is it informal? In the paper by Chaitin that I quoted, it is informal. To save people finding it, the quote reads, ...if one has ten pounds of axioms and a twenty-pound theorem, then that theorem cannot be derived from
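
(The precise result behind Chaitin's informal "pounds" metaphor is his incompleteness theorem: for any consistent, recursively axiomatizable formal system F there is a constant c_F, on the order of the complexity of F's axioms, such that F cannot prove any statement of the form

    K(x) > c_F

for any specific string x, even though all but finitely many strings satisfy it. Here K is Kolmogorov (program-size) complexity; c_F plays the role of the ten pounds of axioms and K(x) the role of the twenty-pound theorem.)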