Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-31 Thread Mark Waser
Mark, my point is that while in the past evolution did the choosing, now it's *we* who decide. But the *we* who is deciding was formed by evolution. Why do you do *anything*? I've heard that there are four basic goals that drive every decision: safety, feeling good, looking good, and being

Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-30 Thread Vladimir Nesov
On Jan 29, 2008 10:28 PM, Mark Waser [EMAIL PROTECTED] wrote: Ethics only becomes snarled when one is unwilling to decide/declare what the goal of life is. Extrapolated Volition comes down to a homunculus depending upon the definition of wiser or saner. Evolution has decided what the goal

Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-30 Thread Mark Waser
- Original Message - From: Vladimir Nesov [EMAIL PROTECTED] To: agi@v2.listbox.com Sent: Wednesday, January 30, 2008 2:14 PM Subject: Re: [agi] OpenMind, MindPixel founders both commit suicide On Jan 29, 2008 10:28 PM, Mark Waser [EMAIL PROTECTED] wrote: Ethics only becomes snarled when one

Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-29 Thread aiguy
Richard Hollerith said: If I am found dead with a bag over my head attached to helium or natural gas, please investigate the possibility that it was a murder made to look like a suicide. -- Richard Hollerith http://dl4.jottit.com Same here Richard. Nitrous Oxide would

[agi] OpenMind, MindPixel founders both commit suicide

2008-01-29 Thread Richard Hollerith
If I am found dead with a bag over my head attached to helium or natural gas, please investigate the possibility that it was a murder made to look like a suicide. -- Richard Hollerith http://dl4.jottit.com

Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-29 Thread Joshua Fox
When transhumanists talk about indefinite life extension, they often take care to say it's optional to forestall one common objection. Yet I feel that most suicides we see should have been prevented -- that the person should have been taken into custody and treated if possible, even against their

Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-29 Thread Mark Waser
[agi] OpenMind, MindPixel founders both commit suicide When transhumanists talk about indefinite life extension, they often take care to say it's optional to forestall one common objection. Yet I feel that most suicides we see should have been

Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-28 Thread Bob Mottram
On 27/01/2008, Vladimir Nesov [EMAIL PROTECTED] wrote: Consider the following subset of possible requirements: the program is correct if and only if it halts. It's a perfectly valid requirement, and I can write all sorts of software that satisfies it. I can't take a piece of software

Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-28 Thread Bob Mottram
On 28/01/2008, Ben Goertzel [EMAIL PROTECTED] wrote: When your computer can write and debug software faster and more accurately than you can, then you should worry. A tool that could generate computer code from formal specifications would be a wonderful thing, but not an autonomous

Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-28 Thread Vladimir Nesov
On Jan 28, 2008 11:22 AM, Bob Mottram [EMAIL PROTECTED] wrote: On 27/01/2008, Vladimir Nesov [EMAIL PROTECTED] wrote: Consider the following subset of possible requirements: the program is correct if and only if it halts. It's a perfectly valid requirement, and I can write all sorts

Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-28 Thread Vladimir Nesov
On Jan 28, 2008 2:08 PM, Bob Mottram [EMAIL PROTECTED] wrote: On 28/01/2008, Vladimir Nesov [EMAIL PROTECTED] wrote: You can try checking out for example this paper (link from LtU discussion), which presents a rather powerful language for describing terminating programs:
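
Languages for describing terminating programs typically permit recursion only on structurally smaller arguments, so every well-formed program halts by construction. A minimal illustrative sketch in Python, which cannot enforce the restriction statically but shows the shape such definitions take (the function is ours, not from the paper):

    def length(xs):
        # Structural recursion: each call receives a strictly shorter
        # list, so termination is guaranteed for every finite input.
        # A total language accepts exactly this shape of definition and
        # rejects recursion whose argument does not visibly shrink.
        if not xs:
            return 0
        return 1 + length(xs[1:])

    print(length([3, 1, 4, 1, 5]))  # -> 5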

Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-28 Thread Bob Mottram
On 28/01/2008, Vladimir Nesov [EMAIL PROTECTED] wrote: I can take even an arbitrary external program (say, a Turing machine that I can't check in the general case), place it on a dedicated tape in a UTM, and add control for termination, so that if it doesn't terminate in 10^6 steps, it will be
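
Nesov's watchdog construction is easy to make concrete. A minimal Python sketch, modeling the monitored program as a generator that yields once per simulated step; run_bounded and the toy programs are illustrative names, not anything from the thread:

    def run_bounded(program, max_steps=10**6):
        # Advance the program one step at a time and cut it off once the
        # budget is spent. The wrapper always terminates, even if the
        # wrapped program, left to itself, never would.
        it = iter(program)
        result = None
        for _ in range(max_steps):
            try:
                result = next(it)
            except StopIteration:
                return ('done', result)
        return ('timeout', None)

    def looper():
        while True:          # would never halt on its own
            yield None

    def counter():
        for i in range(10):  # halts after ten steps
            yield i

    print(run_bounded(looper(), max_steps=1000))   # ('timeout', None)
    print(run_bounded(counter(), max_steps=1000))  # ('done', 9)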

Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-28 Thread Matt Mahoney
--- Vladimir Nesov [EMAIL PROTECTED] wrote: On Jan 28, 2008 4:53 AM, Matt Mahoney [EMAIL PROTECTED] wrote: Consider the following subset of possible requirements: the program is correct if and only if it halts. It's a perfectly valid requirement, and I can write all sorts

Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-28 Thread Matt Mahoney
--- Vladimir Nesov [EMAIL PROTECTED] wrote: On Jan 28, 2008 6:33 PM, Matt Mahoney [EMAIL PROTECTED] wrote: It is undecidable whether a program satisfies the requirements of a formal specification, which is the same as saying that it is undecidable whether two programs are equivalent.

Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-28 Thread Kaj Sotala
On 1/24/08, Richard Loosemore [EMAIL PROTECTED] wrote: Theoretically yes, but behind my comment was a deeper analysis (which I have posted before, I think) according to which it will actually be very difficult for a negative-outcome singularity to occur. I was really trying to make the point

Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-28 Thread Vladimir Nesov
On Jan 28, 2008 7:41 PM, Matt Mahoney [EMAIL PROTECTED] wrote: It is easy to construct programs that you can prove halt or don't halt. There is no procedure to verify that a program is equivalent to a formal specification (another program). Suppose there was. Then I can take any program P
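
The reduction Mahoney begins can be completed in a few lines. A sketch, assuming for contradiction a decider equivalent(p, q) for program equivalence; equivalent, make_q, and halts are hypothetical names, since no such total procedure exists:

    def make_q(p, x):
        # Wrap P and its input: q ignores its own argument, runs P on x,
        # and returns 0 if and only if P halts on x.
        def q(_unused):
            p(x)
            return 0
        return q

    def zero(_unused):
        return 0

    def halts(p, x):
        # P halts on x  <=>  make_q(p, x) computes the same constant-0
        # function as zero. A working equivalence decider would thus
        # decide halting, contradicting Turing's theorem.
        return equivalent(make_q(p, x), zero)  # hypothetical decider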

Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-28 Thread Vladimir Nesov
On Jan 28, 2008 6:33 PM, Matt Mahoney [EMAIL PROTECTED] wrote: It is undecidable whether a program satisfies the requirements of a formal specification, which is the same as saying that it is undecidable whether two programs are equivalent. The halting problem reduces to it. Yes it is, if

Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-28 Thread Matt Mahoney
--- Vladimir Nesov [EMAIL PROTECTED] wrote: Exactly. That's why it can't hack provably correct programs. Which is useless because you can't write provably correct programs that aren't extremely simple. *All* nontrivial properties of programs are undecidable.
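
The result Mahoney appeals to is Rice's theorem; in standard form:

    \textbf{Theorem (Rice).} Let $S$ be a set of partial computable
    functions with $S \neq \emptyset$ and $S$ not containing all
    partial computable functions. Then the index set
    $\{\, e \mid \varphi_e \in S \,\}$ is undecidable, where
    $\varphi_e$ denotes the function computed by program $e$.

Note that the theorem quantifies over arbitrary programs; it says nothing against restricted languages in which every program terminates by construction, which is the point the other side of the thread presses.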

Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-28 Thread Lukasz Stafiniak
On Jan 29, 2008 12:35 AM, Matt Mahoney [EMAIL PROTECTED] wrote: --- Vladimir Nesov [EMAIL PROTECTED] wrote: Exactly. That's why it can't hack provably correct programs. Which is useless because you can't write provably correct programs that aren't extremely simple. *All* nontrivial

Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-27 Thread Vladimir Nesov
On Jan 27, 2008 5:32 AM, Matt Mahoney [EMAIL PROTECTED] wrote: Software correctness is undecidable -- the halting problem reduces to it. Computer security isn't going to be magically solved by AGI. The problem will actually get worse, because complex systems are harder to get right.

Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-27 Thread Matt Mahoney
--- Vladimir Nesov [EMAIL PROTECTED] wrote: On Jan 27, 2008 5:32 AM, Matt Mahoney [EMAIL PROTECTED] wrote: Software correctness is undecidable -- the halting problem reduces to it. Computer security isn't going to be magically solved by AGI. The problem will actually get worse, because

Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-27 Thread Vladimir Nesov
On Jan 28, 2008 1:15 AM, Matt Mahoney [EMAIL PROTECTED] wrote: --- Vladimir Nesov [EMAIL PROTECTED] wrote: On Jan 27, 2008 5:32 AM, Matt Mahoney [EMAIL PROTECTED] wrote: Software correctness is undecidable -- the halting problem reduces to it. Computer security isn't going to be

Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-27 Thread William Pearson
On 27/01/2008, Matt Mahoney [EMAIL PROTECTED] wrote: --- Vladimir Nesov [EMAIL PROTECTED] wrote: On Jan 27, 2008 5:32 AM, Matt Mahoney [EMAIL PROTECTED] wrote: Software correctness is undecidable -- the halting problem reduces to it. Computer security isn't going to be magically

Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-27 Thread Ben Goertzel
Google already knows more than any human, This is only true, of course, for specific interpretations of the word know ... and NOT for the standard ones... and can retrieve the information faster, but it can't launch a singularity. Because, among other reasons, it is not an intelligence, but

Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-26 Thread Richard Loosemore
Matt Mahoney wrote: --- Richard Loosemore [EMAIL PROTECTED] wrote: No computer is going to start writing and debugging software faster and more accurately than we can UNLESS we design it to do so, and during the design process we will have ample opportunity to ensure that the machine will

Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-26 Thread Matt Mahoney
--- Richard Loosemore [EMAIL PROTECTED] wrote: Matt Mahoney wrote: Maybe you can program it with a moral code, so it won't write malicious code. But the two sides of the security problem require almost identical skills. Suppose you ask the AGI to examine some operating system or

Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-25 Thread Richard Loosemore
Matt Mahoney wrote: --- Richard Loosemore [EMAIL PROTECTED] wrote: This whole scenario is filled with unjustified, unexamined assumptions. For example, you suddenly say I foresee a problem when the collective computing power of the network exceeds the collective computing power of the humans

Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-25 Thread Matt Mahoney
--- Richard Loosemore [EMAIL PROTECTED] wrote: This whole scenario is filled with unjustified, unexamined assumptions. For example, you suddenly say I foresee a problem when the collective computing power of the network exceeds the collective computing power of the humans that administer

Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-25 Thread Richard Loosemore
Matt Mahoney wrote: --- Richard Loosemore [EMAIL PROTECTED] wrote: You must demonstrate some reason why the collective net of dumb computers will be intelligent: it is not enough to simply assert that they will, or might, become intelligent. The intelligence comes from an infrastructure

Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-25 Thread Matt Mahoney
--- Richard Loosemore [EMAIL PROTECTED] wrote: You suggest that a collection of *sub-intelligent* (this is crucial) computer programs can add up to full intelligence just in virtue of their existence. This is not the same as a collection of *already-intelligent* humans appearing more

Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-25 Thread Richard Loosemore
Matt Mahoney wrote: --- Richard Loosemore [EMAIL PROTECTED] wrote: You suggest that a collection of *sub-intelligent* (this is crucial) computer programs can add up to full intelligence just in virtue of their existence. This is not the same as a collection of *already-intelligent* humans

Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-24 Thread Richard Loosemore
Matt Mahoney wrote: --- Richard Loosemore [EMAIL PROTECTED] wrote: The problem with the scenarios that people imagine (many of which are Nightmare Scenarios) is that the vast majority of them involve completely untenable assumptions. One example is the idea that there will be a situation in

Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-24 Thread Randall Randall
On Jan 24, 2008, at 10:25 AM, Richard Loosemore wrote: Matt Mahoney wrote: --- Richard Loosemore [EMAIL PROTECTED] wrote: The problem with the scenarios that people imagine (many of which are Nightmare Scenarios) is that the vast majority of them involve completely untenable assumptions.

Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-24 Thread Matt Mahoney
--- Richard Loosemore [EMAIL PROTECTED] wrote: Matt Mahoney wrote: --- Richard Loosemore [EMAIL PROTECTED] wrote: The problem with the scenarios that people imagine (many of which are Nightmare Scenarios) is that the vast majority of them involve completely untenable assumptions. One

Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-24 Thread Richard Loosemore
Randall Randall wrote: On Jan 24, 2008, at 10:25 AM, Richard Loosemore wrote: Matt Mahoney wrote: --- Richard Loosemore [EMAIL PROTECTED] wrote: The problem with the scenarios that people imagine (many of which are Nightmare Scenarios) is that the vast majority of them involve completely

Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-24 Thread Richard Loosemore
Matt Mahoney wrote: --- Richard Loosemore [EMAIL PROTECTED] wrote: Matt Mahoney wrote: --- Richard Loosemore [EMAIL PROTECTED] wrote: The problem with the scenarios that people imagine (many of which are Nightmare Scenarios) is that the vast majority of them involve completely untenable

Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-24 Thread Matt Mahoney
--- Richard Loosemore [EMAIL PROTECTED] wrote: Matt Mahoney wrote: Because recursive self improvement is a competitive evolutionary process even if all agents have a common ancestor. As explained in parallel post: this is a non sequitur. OK, consider a network of agents, such as my

Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-24 Thread Richard Loosemore
Matt Mahoney wrote: --- Richard Loosemore [EMAIL PROTECTED] wrote: Matt Mahoney wrote: Because recursive self improvement is a competitive evolutionary process even if all agents have a common ancestor. As explained in parallel post: this is a non sequitur. OK, consider a network of

Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-23 Thread Charles D Hixson
Richard Loosemore wrote: Matt Mahoney wrote: ... Matt, ... As for your larger point, I continue to vehemently disagree with your assertion that a singularity will end the human race. As far as I can see, the most likely outcome of a singularity would be exactly the opposite. Rather than

Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-23 Thread Richard Loosemore
Charles D Hixson wrote: Richard Loosemore wrote: Matt Mahoney wrote: ... Matt, ... As for your larger point, I continue to vehemently disagree with your assertion that a singularity will end the human race. As far as I can see, the most likely outcome of a singularity would be exactly

Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-23 Thread Matt Mahoney
--- Richard Loosemore [EMAIL PROTECTED] wrote: The problem with the scenarios that people imagine (many of which are Nightmare Scenarios) is that the vast majority of them involve completely untenable assumptions. One example is the idea that there will be a situation in the world in which

Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-21 Thread Matt Mahoney
--- Samantha Atkins [EMAIL PROTECTED] wrote: In http://www.mattmahoney.net/singularity.html I discuss how a singularity will end the human race, but without judgment whether this is good or bad. Any such judgment is based on emotion. Really? I can think of arguments why this

Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-21 Thread Mark Waser
For example, hunger is an emotion, but the desire for money to buy food is not Hunger is a sensation, not an emotion. The sensation is unpleasant and you have a hard-coded goal to get rid of it. Further, desires tread pretty close to the line of emotions if not actually crossing over . . . .

Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-21 Thread Richard Loosemore
Matt Mahoney wrote: --- Samantha Atkins [EMAIL PROTECTED] wrote: In http://www.mattmahoney.net/singularity.html I discuss how a singularity will end the human race, but without judgment whether this is good or bad. Any such judgment is based on emotion. Really? I can think of arguments why

Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-21 Thread Matt Mahoney
--- Richard Loosemore [EMAIL PROTECTED] wrote: Matt, This usage of emotion is idiosyncratic and causes endless confusion. You're right. I didn't mean for the discussion to devolve into a disagreement over definitions. As for your larger point, I continue to vehemently disagree with your

Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-20 Thread Eliezer S. Yudkowsky
Joshua Fox wrote: Turing also committed suicide. And Chislenko. Each of these people had different circumstances, and suicide strikes everywhere, but I wonder if there is a common thread. Ramanujan, like many other great mathematicians and achievers, died young. There are on the other hand

Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-20 Thread Daniel Allen
Regarding the suicide rates of geniuses or those with high intelligence, I wouldn't be concerned: Berman says that the intelligence study is less useful than those that point to *risk factors like divorce or unemployment*. "It's not as if I'm going to get more worried about my less

Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-20 Thread Matt Mahoney
--- Mike Dougherty [EMAIL PROTECTED] wrote: On Jan 19, 2008 8:24 PM, Matt Mahoney [EMAIL PROTECTED] wrote: --- Eliezer S. Yudkowsky [EMAIL PROTECTED] wrote: http://www.wired.com/techbiz/people/magazine/16-02/ff_aimystery?currentPage=all Turing also committed suicide. That's a

Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-20 Thread Tyson
I believe that humans have the emotions that we do because of the environment we evolved in. The more selfish/fearful/emotional you are, the more likely you are to survive and reproduce. For humans, I think logic is a sort of tool used to help us achieve happiness. Happiness is the top-priority

Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-20 Thread Daniel Allen
Regarding AGI research as potentially psychologically disturbing, there are so many other ways to be psychologically disturbed in a postmodern world that it may not matter :) It's already hard for a lot of people to have a healthy level of self-esteem or self-identity, and nihilism is not in

Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-20 Thread Samantha Atkins
On Jan 19, 2008, at 5:24 PM, Matt Mahoney wrote: --- Eliezer S. Yudkowsky [EMAIL PROTECTED] wrote: http://www.wired.com/techbiz/people/magazine/16-02/ff_aimystery?currentPage=all Turing also committed suicide. In his case I understand that the British government saw fit to sentence

[agi] OpenMind, MindPixel founders both commit suicide

2008-01-19 Thread Eliezer S. Yudkowsky
http://www.wired.com/techbiz/people/magazine/16-02/ff_aimystery?currentPage=all I guess the moral here is Stay away from attempts to hand-program a database of common-sense assertions. -- Eliezer S. Yudkowsky http://singinst.org/ Research Fellow, Singularity Institute

Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-19 Thread Ben Goertzel
Well, Lenat survives... But he paid people to build his database (Cyc) What's depressing is trying to get folks to build a commonsense KB for free ... then you get confronted with the absolute stupidity of what they enter, and the poverty and repetitiveness of their senses of humor... ;-p ben

Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-19 Thread Bob Mottram
Some thoughts of mine on the article. http://streebgreebling.blogspot.com/2008/01/singh-and-mckinstry.html On 19/01/2008, Eliezer S. Yudkowsky [EMAIL PROTECTED] wrote: http://www.wired.com/techbiz/people/magazine/16-02/ff_aimystery?currentPage=all I guess the moral here is Stay away from

Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-19 Thread Bob Mottram
Quality is an issue, but it's really all about volume. Provided that you have enough volume the signal stands out from the noise. The solution is probably to make the knowledge capture into a game or something that people will do as entertainment. Possibly the Second Life approach will provide

Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-19 Thread Lukasz Kaiser
This thread has nothing to do with artificial general intelligence - please close this thread. Thanks Sorry, but I have to say that I strongly disagree. There are many aspects of AGI that are non-technical and organizing one's own life while doing AI is certainly one of them. That's why I think

Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-19 Thread Ben Goertzel
On Jan 19, 2008 5:53 PM, a [EMAIL PROTECTED] wrote: This thread has nothing to do with artificial general intelligence - please close this thread. Thanks IMO, this thread is close enough to AGI to be list-worthy. It is certainly true that knowledge-entry is not my preferred approach to AGI ...

Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-19 Thread a
This thread has nothing to do with artificial general intelligence - please close this thread. Thanks Bob Mottram wrote: Quality is an issue, but it's really all about volume. Provided that you have enough volume the signal stands out from the noise. The solution is probably to make the

Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-19 Thread J Storrs Hall, PhD
Breeds There a Man...? by Isaac Asimov On Saturday 19 January 2008 04:42:30 pm, Eliezer S. Yudkowsky wrote: http://www.wired.com/techbiz/people/magazine/16-02/ff_aimystery?currentPage=all I guess the moral here is Stay away from attempts to hand-program a database of common-sense

Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-19 Thread Stephen Reed
- Original Message - From: Ben Goertzel [EMAIL PROTECTED] To: agi@v2.listbox.com Sent: Saturday, January 19, 2008 3:49:55 PM Subject: Re: [agi] OpenMind, MindPixel founders both commit suicide Well, Lenat survives... But he paid people to build his database (Cyc

Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-19 Thread Matt Mahoney
--- Eliezer S. Yudkowsky [EMAIL PROTECTED] wrote: http://www.wired.com/techbiz/people/magazine/16-02/ff_aimystery?currentPage=all Turing also committed suicide. Building a copy of your mind raises deeply troubling issues. Logically, there is no need for it to be conscious; it only needs to

Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-19 Thread Mike Dougherty
On Jan 19, 2008 8:24 PM, Matt Mahoney [EMAIL PROTECTED] wrote: --- Eliezer S. Yudkowsky [EMAIL PROTECTED] wrote: http://www.wired.com/techbiz/people/magazine/16-02/ff_aimystery?currentPage=all Turing also committed suicide. That's a personal solution to the Halting problem I do not plan to

Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-19 Thread Joshua Fox
Turing also committed suicide. And Chislenko. Each of these people had different circumstances, and suicide strikes everywhere, but I wonder if there is a common thread. Joshua