On Thu, Sep 4, 2008 at 1:34 AM, Terren Suydam <[EMAIL PROTECTED]> wrote:
> I'm asserting that if you had an FAI in the sense you've described, it
> wouldn't be possible in principle to distinguish it with 100% confidence
> from a rogue AI. There's no "Turing Test for Friendliness".
You design it to be Friendly; you don't generate an arbitrary AI and then test it. The latter, if not outright fatal, might indeed prove impossible, as you suggest, which is why there is little to be gained from AI-boxes.

--
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/