Re: [agi] META: A possible re-focusing of this list

2008-10-15 Thread j.k.
On 10/15/2008 08:01 AM, Ben Goertzel wrote: ... It seems to me there are two types of conversations here: 1) Discussions of how to design or engineer AGI systems, using current computers, according to designs that can feasibly be implemented by moderately-sized groups of people 2)

Re: [agi] What is Friendly AI?

2008-09-03 Thread j.k.
On 09/03/2008 05:52 PM, Terren Suydam wrote: I'm talking about a situation where humans must interact with the FAI without knowledge in advance about whether it is Friendly or not. Is there a test we can devise to make certain that it is? This seems extremely unlikely. Consider that

Re: RSI (was Re: Goedel machines (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment)))

2008-08-29 Thread j.k.
On 08/29/2008 10:09 AM, Abram Demski wrote: I like that argument. Also, it is clear that humans can invent better algorithms to do specialized things. Even if an AGI couldn't think up better versions of itself, it would be able to do the equivalent of equipping itself with fancy calculators.

Re: RSI (was Re: Goedel machines (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment)))

2008-08-29 Thread j.k.
On 08/29/2008 01:29 PM, William Pearson wrote: 2008/8/29 j.k. [EMAIL PROTECTED]: An AGI with an intelligence the equivalent of a 99.-percentile human might be creatable, recognizable and testable by a human (or group of humans) of comparable intelligence. That same AGI at some later

Re: RSI (was Re: Goedel machines (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment)))

2008-08-29 Thread j.k.
On 08/29/2008 03:14 PM, William Pearson wrote: 2008/8/29 j.k. [EMAIL PROTECTED]: ... The human-level AGI running a million times faster could simultaneously interact with tens of thousands of scientists at their pace, so there is no reason to believe it need be starved for interaction
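
A rough back-of-the-envelope sketch of the arithmetic behind that preview (the million-fold speedup and the "tens of thousands" of scientists are figures from the message; everything else is an assumption for illustration): dividing the speedup across all conversations still leaves roughly a 100x subjective-time advantage per conversation.

    # Rough arithmetic only, using the figures quoted in the message above.
    speedup = 1_000_000          # assumed subjective seconds of AGI thought per wall-clock second
    conversations = 10_000       # concurrent human-paced scientists ("tens of thousands")
    per_conversation = speedup / conversations
    print(per_conversation)      # 100.0 -> roughly 100x human pace still available to each scientist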

Re: RSI (was Re: Goedel machines (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment)))

2008-08-28 Thread j.k.
On 08/28/2008 04:47 PM, Matt Mahoney wrote: The premise is that if humans can create agents with above human intelligence, then so can they. What I am questioning is whether agents at any intelligence level can do this. I don't believe that agents at any level can recognize higher

[agi] Paper rec: Complex Systems: Network Thinking

2008-06-29 Thread j.k.
While searching for information about the Mitchell book to be published in 2009 http://www.amazon.com/Core-Ideas-Sciences-Complexity/dp/0195124413/, which was mentioned in passing by somebody in the last few days, I found a paper by the same author that I enjoyed reading and that will probably

Re: [agi] Did this message get completely lost?

2008-06-02 Thread j.k.
On 06/01/2008 09:29 PM, John G. Rose wrote: From: j.k. [mailto:[EMAIL PROTECTED] On 06/01/2008 03:42 PM, John G. Rose wrote: A rock is conscious. Okay, I'll bite. How are rocks conscious under Josh's definition or any other non-LSD-tripping-or-batshit-crazy definition

Re: [agi] Did this message get completely lost?

2008-06-01 Thread j.k.
On 06/01/2008 03:42 PM, John G. Rose wrote: A rock is conscious. Okay, I'll bite. How are rocks conscious under Josh's definition or any other non-LSD-tripping-or-batshit-crazy definition?

Re: [agi] Recap/Summary/Thesis Statement

2008-03-09 Thread j.k.
On 03/09/2008 10:20 AM, Mark Waser wrote: My claim is that my view is something better/closer to the true CEV of humanity. Why do you believe it likely that Eliezer's CEV of humanity would not recognize your approach is better and replace CEV1 with your improved CEV2, if it is actually

Re: [agi] Recap/Summary/Thesis Statement

2008-03-09 Thread j.k.
On 03/09/2008 02:43 PM, Mark Waser wrote: Why do you believe it likely that Eliezer's CEV of humanity would not recognize your approach is better and replace CEV1 with your improved CEV2, if it is actually better? If it immediately found my approach, I would like to think that it would do so

Re: [agi] Recap/Summary/Thesis Statement

2008-03-08 Thread j.k.
On 03/07/2008 05:28 AM, Mark Waser wrote: Attractor Theory of Friendliness: There exists a describable, reachable, stable attractor in state space that is sufficiently Friendly to reduce the risks of AGI to acceptable levels. I've just carefully reread Eliezer's CEV
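
For readers unfamiliar with the dynamical-systems vocabulary used in this thread, a minimal sketch of what a "stable attractor in state space" means (purely illustrative; the update rule and numbers are made up and say nothing about Friendliness itself): nearby states all converge to the same fixed point under repeated updates.

    # Illustrative sketch of a stable attractor: states in the basin of
    # attraction all converge to the same fixed point. Not a model of
    # Friendliness; the update rule and values are arbitrary.
    def step(state, attractor=0.0, rate=0.1):
        """Pull the state a small fraction of the way toward the attractor."""
        return state + rate * (attractor - state)

    for start in (-5.0, 3.0, 42.0):      # a few hypothetical starting states
        s = start
        for _ in range(200):
            s = step(s)
        print(start, "->", round(s, 6))  # all end up numerically at ~0.0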

Re: [agi] What should we do to be prepared?

2008-03-07 Thread j.k.
On 03/07/2008 08:09 AM, Mark Waser wrote: There is one unique attractor in state space. No. I am not claiming that there is one unique attractor. I am merely saying that there is one describable, reachable, stable attractor that has the characteristics that we want. There are *clearly*

Re: [agi] What should we do to be prepared?

2008-03-07 Thread j.k.
On 03/07/2008 03:20 PM, Mark Waser wrote: For there to be another attractor F', it would of necessity have to be an attractor that is not desirable to us, since you said there is only one stable attractor for us that has the desired characteristics. Uh, no. I am not claiming that there is

Re: [agi] What should we do to be prepared?

2008-03-06 Thread j.k.
On 03/06/2008 08:32 AM, Matt Mahoney wrote: --- Mark Waser [EMAIL PROTECTED] wrote: And thus, we get back to a specific answer to jk's second question. *US* should be assumed to apply to any sufficiently intelligent goal-driven intelligence. We don't need to define *us* because I DECLARE

Re: [agi] What should we do to be prepared?

2008-03-06 Thread j.k.
On 03/05/2008 05:04 PM, Mark Waser wrote: And thus, we get back to a specific answer to jk's second question. *US* should be assumed to apply to any sufficiently intelligent goal-driven intelligence. We don't need to define *us* because I DECLARE that it should be assumed to include current

Re: [agi] What should we do to be prepared?

2008-03-06 Thread j.k.
At the risk of oversimplifying or misinterpreting your position, here are some thoughts that I think follow from what I understand of your position so far. But I may be wildly mistaken. Please correct my mistakes. There is one unique attractor in state space. Any individual of a species that

Re: [agi] What should we do to be prepared?

2008-03-06 Thread j.k.
On 03/06/2008 02:18 PM, Mark Waser wrote: I wonder if this is a substantive difference with Eliezer's position though, since one might argue that 'humanity' means 'the [sufficiently intelligent and sufficiently ...] thinking being' rather than 'homo sapiens sapiens', and the former would of

Re: [agi] What should we do to be prepared?

2008-03-05 Thread j.k.
On 03/05/2008 12:36 PM, Mark Waser wrote: snip... The obvious initial starting point is to explicitly recognize that the point of Friendliness is that we wish to prevent the extinction of the *human race* and/or to prevent many other horrible nasty things that would make *us* unhappy.

Re: [agi] Applicable to Cyc, NARS, ATM others?

2008-02-14 Thread j.k.
On 02/14/2008 06:32 AM, Mike Tintner wrote: The Semantic Web, Syllogism, and Worldview First published November 7, 2003 on the Networks, Economics, and Culture mailing list. Clay Shirky For an alternate perspective and critique of Shirky's rant, see Paul Ford's A Response to Clay

Re: [agi] AGI and Deity

2007-12-20 Thread j.k.
everything I've said. -j.k.

Re: [agi] AGI and Deity

2007-12-20 Thread j.k.
Hi Stan, On 12/20/2007 07:44 PM, Stan Nilsen wrote: I understand that it's all uphill to defy the obvious. For the record, today I do believe that intelligence way beyond human intelligence is not possible. I understand that this is your belief. I was trying to challenge you to make a strong

Re: How an AGI would be [WAS Re: [agi] AGI and Deity]

2007-12-20 Thread j.k.
On 12/20/2007 07:56 PM, Richard Loosemore wrote: I think these are some of the most sensible comments I have heard on this list for a while. You are not saying anything revolutionary, but it sure is nice to hear someone holding out for common sense for a change! Basically your point is that