Charles D Hixson wrote:
Richard Loosemore wrote:
Matt Mahoney wrote:
...

Matt,

...
As for your larger point, I continue to vehemently disagree with your assertion that "a singularity will end the human race".

As far as I can see, the most likely outcome of a singularity would be exactly the opposite: not the end of the human race, but changes to the human race that most people would be deliriously happy about.


Richard Loosemore

*Some* forms of the singularity would definitely end the human race. Others definitely would not, though many of them would dramatically change it. Which one will appear is not certain. Even among those forms of the singularity that are caused by an AGI, this remains true.

Theoretically yes, but behind my comment was a deeper analysis (which I have posted before, I think) according to which it will actually be very difficult for a negative-outcome singularity to occur.

I was really trying to make the point that a statement like "The singularity WILL end the human race" is completely ridiculous. There is no WILL about it.

The problem with the scenarios that people imagine (many of which are Nightmare Scenarios) is that the vast majority of them involve completely untenable assumptions. One example is the idea that there will be many superintelligent AGIs in the world, all competing with each other for power in a souped-up version of today's arms race(s). This is extraordinarily unlikely: the speed of development would be such that one AGI would have an extremely large time advantage (head start) over the others, and during that time it would merge the others with itself to ensure there was no destructive competition. Whichever way you try to think about the situation, the same conclusion seems to emerge.

This argument needs more detail, but the important point is that there *is* an argument.



Richard Loosemore.

It's also true that just which forms fall into which category depends partially on what you are willing to acknowledge as human, but even taking the most conservative normal meaning of the term the above statements remain true.

OTOH, there are many events that we would not consider a singularity, such as a strike by a giant meteor, that would also end the human race. So that risk is not peculiar to either the technological singularity or to AGI.

To me it appears that the best hope for the future is to work towards a positive singularity outcome. There are certain to be many people working on projects that may result in a singularity without seriously considering whether it will or will not be positive, and others working towards a destructive singularity while planning to control it. I may not have much chance of success, but I can at least be *trying* to bring about a positive outcome. (Objectively, I rate my chances of success as minimal. I'm hoping to come up with an "intelligent" assistant that will have a mode of operation similar to Eliza [but with *much* deeper understanding, that's not asking for much] in the sense of being a conversationalist...someone one can talk things over with. Totally loyal to the employer...but with a moral code. So far I haven't done very well, but if I am successful, perhaps I can decrease the percentage of sociopaths.)
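For readers unfamiliar with the reference: the classic Eliza "mode of operation" is nothing more than pattern matching plus pronoun reflection and canned reassembly templates. Below is a minimal illustrative sketch in Python of that technique only; the rule set and names are hypothetical placeholders, not part of anyone's actual project on this list.

    import re
    import random

    # Pronoun reflection so "I am X" can be echoed back as "you are X".
    REFLECTIONS = {
        "i": "you", "me": "you", "my": "your", "am": "are",
        "you": "I", "your": "my", "yours": "mine",
    }

    # (pattern, response templates) pairs -- placeholder rules for illustration.
    RULES = [
        (re.compile(r"i need (.*)", re.I),
         ["Why do you need {0}?", "Would it really help you to get {0}?"]),
        (re.compile(r"i am (.*)", re.I),
         ["How long have you been {0}?", "Why do you think you are {0}?"]),
        (re.compile(r"(.*)", re.I),
         ["Please tell me more.", "How does that make you feel?"]),
    ]

    def reflect(fragment):
        """Swap first/second-person words in a captured fragment."""
        return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

    def respond(utterance):
        """Return the first matching rule's template, filled with reflected text."""
        for pattern, templates in RULES:
            match = pattern.match(utterance)
            if match:
                return random.choice(templates).format(*(reflect(g) for g in match.groups()))
        return "Please go on."

    if __name__ == "__main__":
        print(respond("I need a more positive outcome"))

The point being made above, of course, is that the hard part is replacing this shallow template matching with genuinely deeper understanding.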
