On 9/30/07, Richard Loosemore <[EMAIL PROTECTED]> wrote:
> You know, I'm struggling here to find a good reason to disagree with
> you, Russell.  Strange position to be in, but it had to happen
> eventually ;-).

"And when Richard Loosemore and Russell Wallace agreed with each
other, it was also a sign..." to snarf inspiration, if not an actual
quote, from one of my favorite authors ^.^

[snipped and agreed with...]

> What I think *would* be valid here are well-grounded discussions of the
> consequences of AGI...  but what "well-grounded" means is that the
> discussions have to be based on solid assumptions about what an AGI
> would actually be like, or how it would behave, and not on wild flights
> of fancy.

I agree with that too; I just think we're a long way from having real
data on which to base such discussions, which means that any held at
the moment will inevitably be based on wild flights of fancy.

If we get to the point of having something that bears a reasonable
resemblance to a self-willed, human-equivalent AGI, even a baby one - I
don't think this is going to happen anytime soon, but I'd be happy to
be proven wrong - then we'd have some real data, and there might be a
realistic prospect of well-grounded discussion of the consequences.
