Re: [singularity] New list announcement: fai-logistics

2008-04-26 Thread Samantha Atkins
On Sat, Apr 26, 2008 at 7:48 AM, Thomas McCabe [EMAIL PROTECTED] wrote: On Thu, Apr 24, 2008 at 3:16 AM, Samantha Atkins [EMAIL PROTECTED] wrote: Thomas McCabe wrote: Popularity is irrelevant. Popularity, of course, is not the ultimate judge of accuracy. But are you seriously claiming

Re: [singularity] Vista/AGI

2008-04-24 Thread Samantha Atkins
J. Andrew Rogers wrote: On Apr 6, 2008, at 9:38 AM, Ben Goertzel wrote: That's surely part of it ... but investors have put big $$ into much LESS mature projects in areas such as nanotech and quantum computing. This is because nanotech and quantum computing can be readily and easily

Re: [singularity] Vista/AGI

2008-04-13 Thread Samantha Atkins
Ben Goertzel wrote: Much of this discussion is very abstract, which is I guess how you think about these issues when you don't have a specific AGI design in mind. My view is a little different. If the Novamente design is basically correct, there's no way it can possibly take thousands or

Re: [singularity] Vista/AGI

2008-04-13 Thread Samantha Atkins
Mike Tintner wrote: Samantha:From what you said above $50M will do the entire job. If that is all that is standing between us and AGI then surely we can get on with it in all haste. Oh for gawdsake, this is such a tedious discussion. I would suggest the following is a reasonable *framework*

Re: [singularity] Re: Promoting an A.S.P.C,A.G.I.

2008-04-09 Thread Samantha Atkins
On Apr 9, 2008, at 12:33 PM, Derek Zahn wrote: Matt Mahoney writes: Just what do you want out of AGI? Something that thinks like a person or something that does what you ask it to? The or is interesting. If it really thinks like a person and at at least human level then I doubt very

Re: [singularity] Vista/AGI

2008-04-06 Thread Samantha Atkins
Arguably many of the problems of Vista including its legendary slippages were the direct result of having thousands of merely human programmers involved. That complex monkey interaction is enough to kill almost anything interesting. shudder - samantha Panu Horsmalahti wrote: Just because

Re: [singularity] Definitions

2008-02-19 Thread Samantha Atkins
Richard Loosemore wrote: John K Clark wrote: And I will define consciousness just as soon as you define define. Ah, but that is exactly my approach. Thus, the subtitle I gave to my 2006 conference paper was Explaining Consciousness by Explaining That You Cannot Explain it, Because Your

Re: [singularity] Wrong focus?

2008-02-02 Thread Samantha Atkins
On Jan 28, 2008, at 6:43 AM, Mike Tintner wrote: Stathis: Are you simply arguing that an embodied AI that can interact with the real world will find it easier to learn and develop, or are you arguing that there is a fundamental reason why an AI can't develop in a purely virtual

Re: [singularity] Multi-Multi-....-Multiverse

2008-02-02 Thread Samantha Atkins
WTF does this have to do with AGI or Singularity? I hope the AGI gets here soon. We Stupid Monkeys get damn tiresome. - samantha On Jan 29, 2008, at 7:06 AM, gifting wrote: On 29 Jan 2008, at 14:13, Vladimir Nesov wrote: On Jan 29, 2008 11:49 AM, Ben Goertzel [EMAIL PROTECTED] wrote:

Re: [singularity] Wrong focus?

2008-01-31 Thread Samantha Atkins
On Jan 27, 2008, at 6:18 AM, Mike Tintner wrote: Samantha: MT: You've been fooled by the puppet. It doesn't work without the puppeteer. Samantha:What's that, elan vitale, a soul, a consciousness that is independent of the puppet? It's significant that you make quite the wrong

Re: [singularity] The Extropian Creed by Ben

2008-01-26 Thread Samantha Atkins
On Jan 26, 2008, at 2:36 PM, Mike Tintner wrote: Gudrun: I am an artist who is interested in science, in utopia and seemingly impossible projects. I also came across a lot of artists with OC traits. ... The OCAP, actually the obsessive compulsive 'arctificial' project .. These new OCA

Re: [singularity] Wrong focus?

2008-01-26 Thread Samantha Atkins
On Jan 26, 2008, at 3:59 PM, Mike Tintner wrote: Ben, Thanks for reply. I think though that Samantha may be more representative - i.e. most here simply aren't interested in non- computer alternatives. Which is fine. The Singularity Institute exists for one purpose. That I point that

Re: [singularity] Wrong focus?

2008-01-26 Thread Samantha Atkins
On Jan 26, 2008, at 6:07 PM, Mike Tintner wrote: Tom:A computer is not disembodied any more than you are. Silicon, as a substrate, is fully equivalent to biological neurons in terms of theoretical problem-solving ability. You've been fooled by the puppet. It doesn't work without the

Re: [singularity] The Extropian Creed by Ben

2008-01-25 Thread Samantha Atkins
On Jan 25, 2008, at 10:14 AM, Natasha Vita-More wrote: The idea of useless technology is developed in wearables more than in bioart. Steve's perspective is more political than artistic in regards to uselessness, don't you think? My paper which includes an interview with him is

Re: [singularity] ESSAY: Why care about artificial intelligence?

2007-07-14 Thread Samantha Atkins
Alan Grimes wrote: If it's a problem may I suggest you use a more user friendly terminal such as gnome-terminal or konsole. They have profiles that can be edited through the GUI. Not a bad suggestion, lemme see if my distro will let me kill Xterm... crap, it's depended on by xinit, which

Re: [singularity] critiques of Eliezer's views on AI (was: Re: Personal attacks)

2007-07-02 Thread Samantha Atkins
Tom McCabe wrote: --- Samantha Atkins [EMAIL PROTECTED] wrote: Out of the bazillions of possible ways to configure matter only a ridiculously tiny fraction are more intelligent than a cockroach. Yet it did not take any grand design effort upfront to arrive at a world overrun when

Re: [singularity] critiques of Eliezer's views on AI

2007-07-02 Thread Samantha Atkins
Colin Tate-Majcher wrote: When you talk about uploading are you referring to creating a copy of your consciousness? If that's the case then what do you do after uploading, continue on with a mediocre existence while your cyber-duplicate shoots past you? Sure, it would have all of those

Re: [singularity] AI concerns

2007-07-02 Thread Samantha Atkins
Alan Grimes wrote: Samantha Atkins wrote: Alan Grimes wrote: Available computing power doesn't yet match that of the human brain, but I see your point, What makes you so sure of that? It has been computed countless times here and elsewhere that I am sure you

Re: [singularity] critiques of Eliezer's views on AI (was: Re: Personal attacks)

2007-07-02 Thread Samantha Atkins
Tom McCabe wrote: --- Samantha Atkins [EMAIL PROTECTED] wrote: Tom McCabe wrote: --- Samantha Atkins [EMAIL PROTECTED] wrote: Out of the bazillions of possible ways to configure matter only a ridiculously tiny fraction are more intelligent than

Re: [singularity] AI concerns

2007-06-30 Thread Samantha Atkins
Sergey A. Novitsky wrote: Dear all, Perhaps, the questions below were already touched numerous times in the past. Could someone kindly point to discussion threads and/or articles where these concerns were addressed or discussed? Kind regards, Serge

Re: [singularity] AI concerns

2007-06-30 Thread Samantha Atkins
Charles D Hixson wrote: Stathis Papaioannou wrote: Available computing power doesn't yet match that of the human brain, but I see your point, software (in general) isn't getting better nearly as quickly as hardware is getting better. Well, not at the personally accessible level. I understand

Re: [singularity] AI concerns

2007-06-30 Thread Samantha Atkins
Alan Grimes wrote: Available computing power doesn't yet match that of the human brain, but I see your point, What makes you so sure of that? It has been computed countless times here and elsewhere that I am sure you are aware of so why do you ask? - This list is sponsored by

Re: [singularity] critiques of Eliezer's views on AI (was: Re: Personal attacks)

2007-06-23 Thread Samantha Atkins
On Jun 21, 2007, at 8:14 AM, Tom McCabe wrote: We can't know it in the sense of a mathematical proof, but it is a trivial observation that out of the bazillions of possible ways to configure matter, only a ridiculously tiny fraction are Friendly, and so it is highly unlikely that a selected

Re: [singularity] What form will superAGI take?

2007-06-18 Thread Samantha Atkins
Mike Tintner wrote: Perhaps you've been through this - but I'd like to know people's ideas about what exact physical form a Singulitarian or near-Singul. AGI will take. And I'd like to know people's automatic associations even if they don't have thought-through ideas - just what does a

Re: [singularity] Re: Personal attacks

2007-05-29 Thread Samantha Atkins
While I have my own doubts about Eliezer's approach and likelihood of success and about the extent of his biases and limitations, I don't consider it fruitful to continue to bash Eliezer on various lists once you feel seriously slighted by him or convinced that he is hopelessly mired or

Re: [singularity] The humans are dead...

2007-05-29 Thread Samantha Atkins
On May 29, 2007, at 11:36 AM, Jonathan H. Hinck wrote: Indeed, displacement of the human labor force began since the beginning of the industrial revolution (if not before). This is the definition of technology. And, indeed, the jump from a labor-based to an automation-based economy

Re: [singularity] The humans are dead...

2007-05-29 Thread Samantha Atkins
On May 29, 2007, at 4:22 PM, Jonathan H. Hinck wrote: But does there need to be consensus among the experts for a public issue to be raised? Regarding other topics that have been on the public discussion palate for awhile, how often has this been the case? Perhaps with regard to issues

Re: [singularity] The humans are dead...

2007-05-28 Thread Samantha Atkins
Shane Legg wrote: http://www.youtube.com/watch?v=WGoi1MSGu64 Which got me thinking. It seems reasonable to think that killing a human is worse than killing a mouse because a human is more intelligent/complex/conscious/...etc...(use what ever measure you prefer) than a mouse. So, would

Re: [singularity] The humans are dead...

2007-05-28 Thread Samantha Atkins
Keith Elis wrote: Shane Legg wrote: If a machine was more intelligent/complex/conscious/...etc... than all of humanity combined, would killing it be worse than killing all of humanity? You're asking a rhetorical question but let's just get the

Re: [singularity] The humans are dead...

2007-05-28 Thread Samantha Atkins
On May 28, 2007, at 3:32 PM, Russell Wallace wrote: On 5/28/07, Shane Legg [EMAIL PROTECTED] wrote: If one accepts that there is, then the question becomes: Where should we put a super human intelligent machine on the list? If it's not at the top, then where is it and why? I don't claim to

Re: [singularity] The humans are dead...

2007-05-28 Thread Samantha Atkins
On May 28, 2007, at 4:29 PM, Joel Pitt wrote: On 5/29/07, Keith Elis [EMAIL PROTECTED] wrote: In the end, my advice is pragmatic: Anytime you post publicly on topics such as these, where the stakes are very, very high, ask yourself, Can I be taken out of context here? Is this position,

Re: [singularity] Friendly question...

2007-05-26 Thread Samantha Atkins
On May 26, 2007, at 4:16 AM, John Ku wrote: I think maximization of negative entropy is a poor goal to have. Although life perhaps has some intrinsic value, I think the primary thing we care about is not life, per se, but beings with consciousness and capable of well-being. Under your

Re: [singularity] Defining the Singularity

2006-10-23 Thread Samantha Atkins
On Oct 23, 2006, at 7:39 AM, Ben Goertzel wrote: Michael, I think your summary of the situation is in many respects accurate; but, an interesting aspect you don't mention has to do with the disclosure of technical details... In the case of Novamente, we have sufficient academic

Re: [singularity] Defining the Singularity

2006-10-21 Thread Samantha Atkins
On Oct 20, 2006, at 2:14 AM, Michael Anissimov wrote: Sometimes, Samantha, it seems like you have little faith in any possible form of intelligence, and that the only way for one to be safe/happy is to be isolated from everything. I sometimes get this impression from libertarians (not to say that I'm

Re: [singularity] Defining the Singularity

2006-10-18 Thread Samantha Atkins
On Oct 17, 2006, at 2:45 PM, Michael Anissimov wrote: Mike, On 10/10/06, deering [EMAIL PROTECTED] wrote: Going beyond the definition of Singularity we can make some educated guesses about the most likely conditions under which the Singularity will occur. Due to technological synergy,