> From a promotional perspective these ideas seem quite weak.

That was in addition to other complex and relatively near-term issues,
e.g. the longevity- and demographics-related problems Minsky mentioned
in his "emergency" presentation.
What are your suggestions?

> AI saving the world .. sounds crackpot

That's because it's associated with so many crap-filled AI stories, but
there is IMO nothing unrealistic about the general idea of an AGI
eventually saving mankind from threats we could not deal with
effectively on our own. Just as you cannot outrun a car, you will not
be a better problem solver than a well-designed AGI. Many lives could
be saved even today. People are dying every day because leaders don't
make decisions as good as the data they have (or could get) would
allow. We just don't see through our data very well, and the right
tools can make a huge difference.

Regards,
Jiri Jelinek

On Oct 31, 2007 4:19 AM, Bob Mottram <[EMAIL PROTECTED]> wrote:
> From a promotional perspective these ideas seem quite weak.  To most
> people AI saving the world or destroying it just sounds crackpot (a
> cartoon caricature of technology), whereas "helping us to accomplish
> our goals" is too vague.
>
> On 31/10/2007, Jiri Jelinek <[EMAIL PROTECTED]> wrote:
> > > Because AI will save the world or destroy it?
> >
> > Because it can significantly help us accomplish our goals -
> > whatever they are at the moment. Destroying the Earth might be in
> > our best interest at some point in the future, but not now, I guess
> > :). Of course it depends on who will control the AGI, but powerful
> > tools that could be used to destroy our planet have existed for some
> > time now and we are still here, so hopefully things will go well.
> > And don't think that those who are clever enough to develop a
> > powerful AGI are stupid enough not to implement equally powerful
> > safety features to keep the actions the AGI could take on its own
> > compatible with the goal system of those in charge. Hopefully,
> > "those in charge" will read the manual in case the system is not
> > intuitive enough in this respect ;-).. Sure, accidents happen, but
> > generally it's IMO better to have powerful tools than not to have
> > them. If we are too stupid to live, then we don't deserve to live..
> > IMO fair enough.. Let's give it a shot :-)
> >
> > Regards,
> > Jiri Jelinek
> >
> > On Oct 30, 2007 6:09 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> > > --- Jiri Jelinek <[EMAIL PROTECTED]> wrote:
> > >
> > > > I'll probably include a reference to the: Risks to civilization,
> > > > humans and planet Earth
> > > >
> > > http://en.wikipedia.org/wiki/Risks_to_civilization%2C_humans_and_planet_Earth
> > >
> > > Because AI will save the world or destroy it?
> > >
> > >
> > > -- Matt Mahoney, [EMAIL PROTECTED]
> > >
> > >
> >
> >
>
>

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=59529676-ce9bd4
