When presenting reasons for developing AGI to the general public, one should
refer to a list of problems that are generally insoluble with current
computational technology.

Global weather modelling, and the technology to predict the very long-term
effects of energy expended to modify climate, so that a least-energy model
can be bench-tested.
Integrating sociopolitical factors into a global evolution predictive model
is something the best economists, scientists, and military strategists will
have to get right, or risk global social anarchy.
Human-directed terraforming might also require the establishment of stable,
self-sustaining colonies on the Moon and Mars, and perhaps on a Jovian moon
as a backup measure...  just in case we miscalculate and accidentally
self-destruct the home world.


Replacing aging with self-directed personal evolution plans, implemented
over time frames of hundreds to thousands of years, will most definitely
find AGI an essential supporting technology.

Pure AGI discussions may not welcome the distraction of these off-topic
themes, but any successful AGI will have to be designed to be capable and
willing to operate within the real world.

These are areas where singularity-driven technology is not just useful but
essential.

Morris





On 9/30/07, Russell Wallace <[EMAIL PROTECTED]> wrote:
>
> On 9/30/07, Richard Loosemore <[EMAIL PROTECTED]> wrote:
> > You know, I'm struggling here to find a good reason to disagree with
> > you, Russell.  Strange position to be in, but it had to happen
> > eventually ;-).
>
> "And when Richard Loosemore and Russell Wallace agreed with each
> other, it was also a sign..." to snarf inspiration, if not an actual
> quote, from one of my favorite authors ^.^
>
> [snipped and agreed with...]
>
> > What I think *would* be valid here are well-grounded discussions of the
> > consequences of AGI...  but what "well-grounded" means is that the
> > discussions have to be based on solid assumptions about what an AGI
> > would actually be like, or how it would behave, and not on wild flights
> > of fancy.
>
> I agree with that too, I just think we're a long way from having real
> data to base such discussions on, which means if held at the moment
> they'll inevitably be based on wild flights of fancy.
>
> If we get to the point of having something that shows a reasonable
> resemblance to a self-willed human-equivalent AGI, even a baby one - I
> don't think this is going to happen anytime in the near future, but
> I'd be happy to be proven wrong - then we'd have some sort of real
> data, and there might be a realistic prospect of well-grounded
> discussion of the consequences.
>
> -----
> This list is sponsored by AGIRI: http://www.agiri.org/email
> To unsubscribe or change your options, please go to:
> http://v2.listbox.com/member/?&;
>

