I think you are just stringing words together. If I'm mistaken, perhaps you can express yourself without using obscure terminology from various domains, and metaphorically at that. Why not just say what you mean? The objective of AGI is hard enough as it is. We don't need to make it harder by talking in a way where the only person who understands what we are saying is ourselves; that sort of defeats the purpose of talking.
On Fri, Mar 6, 2015 at 1:18 PM, Nanograte Knowledge Technologies via AGI <[email protected]> wrote:

> In my view....
>
> But the Turing Test already has such a test inherent in it, as antithesis.
> That is the point. Logically, if a human proves that it is a machine, then
> one proves one's own humanity. If one proves that it isn't, then one proves
> the same thing. The test always returns a value of 1. It's all relatively
> reasonable and perfectly irrelevant at the same time. Isn't that what
> Turing was trying to make the world aware of? The test is a form of a
> quantum-entangled superposition, is it not? It only tests for quantum
> humanness. The machine is but a stimulus for the test to be conducted.
>
> ------------------------------
> Date: Fri, 6 Mar 2015 12:57:29 -0500
> Subject: Re: [agi] Reverse Turing Test
> From: [email protected]
> To: [email protected]
>
> Could you explain the rules?
>
> On Fri, Mar 6, 2015 at 12:54 PM, Steve Richfield via AGI <[email protected]> wrote:
>
> It seems obvious (to) me that any envisioned super duper AGI of the future
> would be easily able to win a reverse Turing competition - demonstrating
> with advanced logical solutions to difficult problems that it is a machine
> and NOT merely human.
>
> To see how such an AGI might function, and how its responses might be
> perceived by mere humans, it seems (to me) VERY interesting to see what
> might come from such a competition, even though (for now) it only includes
> teams of mere humans.
>
> I suspect that heisenbugs (correct functionality that is seen to be
> erroneous) and incorrectly perceived sinister intent would make it nearly
> impossible for mere humans to accurately judge such a competition. If so,
> this would seem to doom the future utility of AGIs.
>
> As with Winograd schemas, the test is in the doing. Every year or so I
> post looking for others interested in operating and/or participating in a
> reverse Turing competition.
>
> Any interest?
>
> Steve
>
> *AGI* | Archives <https://www.listbox.com/member/archive/303/=now> | Modify Your Subscription <https://www.listbox.com/member/?&>
>
> --
> Matt Mahoney, [email protected]
