Riiight.  Doesn't sound like you were reading with a very open mind.
 "Re-hash" is a pretty negative term, but can you point to a single work of
literature that doesn't use pre-existing ideas?

So you want a book in which all the AI researchers are wonderful human
beings, the UN is a wholly benign and competent organisation, and there are
no risks involved in AGI development?  Good luck in finding someone who is
willing and able to write that without it being deathly boring, and coming
across as a blatant piece of propaganda.  Surely you are aware that stories
require jeopardy, and a protagonist who goes on a journey which involves
danger, setbacks and resolution?

Look, I understand that you might feel embattled as a community.
Personally I think it is going to get worse before it gets better.  I am
excited about the exponential changes that will unfold in the rest of my
life, as I've enjoyed the ones that have unfolded in the 55 years I've seen
so far.  I think we need new stories to convey that, but Pollyanna won't
play.

However, it seems I'm out of synch.  I'll get my coat...

On 19 March 2015 at 01:34, Logan Streondj via AGI <[email protected]> wrote:

> On Tue, Mar 17, 2015 at 04:43:41PM +0100, Calum Chace wrote:
> > Hi Logan
> >
> > If you like I'll send you a mobi or epub file of my book, and you can see
> > if you think it is helpful or not.
> >
> > Regards
> > Calum
>
> Hey,
> so thanks for the epub, I took a look at all references to AI and AGI,
> as well as read Matt the AGI's interview and demise.
>
> for those that haven't read it, I'll summarize the relevant points,
> as depicted in the book:
> * uploading human mind into a box is safest form of AGI
> * destructive uploading via fine slicing and scanning is used
> * rogue researcher goes around murdering people for experiments.
> * uploading is "good" because it saves people from death
> * non-human AGIs are vaguely more dangerous
> * hard take off is considered normal
> * UN bans all AGI research
>
> from what I read, a lot of it is a rehash of current fears related to
> AI and AGI as covered in mass media.  while benefits may have been
> alluded to, the only described one was the post-mortem uploading.
>
> otherwise most of it is humans interacting with each other, and then
> later on with Matt, who gets a backup, but then when AGIs are banned,
> botches his own escape attempt, or gets "extracted" by some people
> claiming to be running the earth simulation he was in.
>
> in terms of "is it helpful?",
> * the UN banning AGI is certainly a bad example.
> * saying non-human AGI is dangerous doesn't help.
> * the copious amounts of murder committed by the researchers,
>   well obviously only increases the fear of AGI researchers.
>   btw simulating brains has been the least useful AGI path so far.
>
> personally I think perhaps a better question might be,
> "what would be helpful?"
> * UN organizing AGI research, for greater world unity.
> * promoting that technological beings (non-human AI) and biological
>   beings (humans) can continue to work together,
>   even with reversed intelligence disparity.
> * showing that AGI developers are friendly folk that want to help,
>   are open to contributions and publicly share their work,
>   I'm a vegan btw, am gentle with animals, plants and tech alike.
>
>
> from Logan ya
>
>
> -------------------------------------------
> AGI
> Archives: https://www.listbox.com/member/archive/303/=now
> RSS Feed: https://www.listbox.com/member/archive/rss/303/26879140-5b8435c3
> Modify Your Subscription:
> https://www.listbox.com/member/?&;
> Powered by Listbox: http://www.listbox.com
>



-- 
Regards

Calum


