I hope I don't misrepresent him, but I agree with Ben (or at
least my interpretation of him) when he said, "We can ask it questions like, 'how
can we make a better A(G)I that can serve us in more different ways without
becoming dangerous'...It can help guide us along the path to a
positive singularity." I'm pretty sure he was also saying that at first it
should just be a question-answering machine with a reliable goal system, and
that development should stop if it has an unstable one before it gets too smart. I
like the idea that we should create an automated
cross-disciplinary scientist and engineer (if you even separate the two), and
that NLP not modeled after the human brain is the best proposal for
a benevolent and resourceful superintelligence that enables a positive
singularity and all its unforeseen perks.
On Wed, Jun 23, 2010 at 11:04 PM, The Wizard <key.unive...@gmail.com> wrote:


> If you could ask an AGI anything, what would you ask it?
> --
> Carlos A Mejia
>
> Taking life one singularity at a time.
> www.Transalchemy.com
>



-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com