If we've learned anything from science, it is that opposing sides often turn 
out to be two faces of the same coin. Heisenberg and Einstein spring to mind. 
So do classical science and quantum mechanics, eventually joining hands in 
M-theory and, in doing so, allowing Newton to come sit among the stars. 
Whatever the factors for decoherence might be, we should remember that we only 
observe what seems apparent to us. Thus, paradigms do shift, and convergence 
remains a fact of technology. How about a quantum leap, then?

In such a spirit of inclusivity, then, let those who have to perform the 
"classical" Turing Test do so for its progressive purposes. But at the same 
time, why not design an appropriate AGI test that assumes from the outset that 
the two sides belong to a single coin?

Sorry, but even if I understand the point of it all, I see no real AGI future 
in letting machines battle it out, with a human being as the arbitrator who 
decides the suitability of the questions. It is reminiscent of robot wars.

>From my understanding, that is not how the visual test is supposed to be 
>conducted. The machine is supposed to identify rich meaning as defined by 
>humans, i.e., from a photographic context presented to it, e.g., two people 
>walking in a busy city street, holding hands and apparently talking. Only if 
>their hands were obscured in the photograph by their bodies, and the question 
>was asked, "What are they carrying?", would the arbitrator rule "Foul!" in 
>favour of the machine. I would probably vote the same in favour of a human 
>participant too. But still, would the machine fail the imitation test if it 
>answered simply: "I don't know."?

I'm concerned that at the rate the "classical" test is progressing, we'll miss 
even Turing's imagined deadline. Society has already achieved another of 
Turing's predictions: that one would be able to talk about machine 
intelligence in public without encountering general denial of such a 
possibility. In other words, society probably is ready now for pure AGI.

My contention is that the world needs AGI machines that could make a 
constructive difference at large scale, and sooner rather than later. For 
example, how about independently changing saline water into drinking water 
(with added test parameters for intelligence, of course)? A useful test. Or 
perhaps using adaptive electromagnetic frequencies to successfully treat a 
basic set of machine-diagnosed illnesses? Another useful test. Less 
subjective too.

Such tests, if passed, would establish the machine as a new species of 
machine, as pure AGI, not an attempt at replicating human functionality alone.

Let's apply Toffler's principle of technology feeding upon itself. Thus, let 
the Turing-Test bar for AGI be set in such a manner as to allow quantum-based 
technology a seven-course meal.

Perhaps, as a suggestion and a start, why not just get on with establishing an 
adapted philosophical basis for a futuristic Turing Test? Why not begin by 
extending the notion of Turing's "imitate"?

Suppose we agreed Turing's non-classical version of "imitate" to mean: AGI 
imitation at a quantum level, in order to solve problems by learning from 
environment, matter, biology and humans, and to be applied to environment, 
matter, biology and humans in an adaptive manner, as an independent, 
non-human-cell entity, as a purely intelligent machine, to the objective 
satisfaction of humans?

Beyond designing and bringing forth such a machine, remove all human control, 
and thus contest, from the actual test.

The fundamental difference from the "classical" test would be the inherent 
assumptions that:
1) Machines could be AGI-enabled;
2) AGI machines are not in a contest with what makes humans human.

Set the bar so high that the following generations of scientists, hackers, 
and informed laymen and women may dream.

Rob      

Date: Fri, 13 Mar 2015 10:14:50 -0700
Subject: Re: [agi] visual turing test
From: a...@listbox.com
To: a...@listbox.com

The point of this is just that it ups the bar for machine-learning contests, 
because the last challenges have been met.  They are taking the previous 
narrow-AI challenge and getting a little bolder, broadening it a bit to 
something closer to AGI but still potentially achievable by narrow-AI 
methods.
On Fri, Mar 13, 2015 at 7:30 AM, Benjamin Kapp via AGI <a...@listbox.com> wrote:
This isn't a Turing test because you aren't playing the imitation game.  The 
computer isn't trying to convince a human it is a human.  The test has a 
computer asking the questions of another computer.  The human's role is 
simply to classify questions as "unanswerable" or not.
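The protocol as described could be sketched roughly as follows. This is a hypothetical illustration only: the class and function names (`ScriptedMachine`, `run_visual_turing_round`, `human_is_answerable`) are invented for the sketch and do not come from the paper.

```python
# Hypothetical sketch of the visual-test protocol described above: one machine
# generates questions about an image, another machine answers them, and the
# human arbitrator's only role is to rule questions answerable or not
# (e.g., ruling out "What are they carrying?" when the hands are obscured).

def run_visual_turing_round(questioner, answerer, image, human_is_answerable):
    """Return (question, answer) pairs for the questions the human rules in."""
    results = []
    for question in questioner.generate_questions(image):
        # The human never judges answer quality, only question suitability.
        if not human_is_answerable(image, question):
            continue  # unanswerable from the photograph, so it is skipped
        results.append((question, answerer.answer(image, question)))
    return results


class ScriptedMachine:
    """Toy stand-in for a real vision system, for demonstration only."""
    def __init__(self, questions=None, answers=None):
        self.questions = questions or []
        self.answers = answers or {}

    def generate_questions(self, image):
        return list(self.questions)

    def answer(self, image, question):
        # An honest fallback, per the "I don't know." discussion above.
        return self.answers.get(question, "I don't know.")


if __name__ == "__main__":
    q = ScriptedMachine(questions=["Are they holding hands?",
                                   "What are they carrying?"])
    a = ScriptedMachine(answers={"Are they holding hands?": "Yes"})
    # Hands are visible in this scene, so the "carrying" question is ruled out.
    answerable = lambda image, question: "carrying" not in question
    print(run_visual_turing_round(q, a, "street_scene.jpg", answerable))
```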
The paper is just a piece which seeks to exploit the prestige of the 
researchers' universities to raise the bar for image recognition, NOT AGI.
An AGI test would NOT say "make a better domain-specific algorithm."  An AGI 
test would ask for a cross-domain algorithm, e.g., create a program that can 
both beat someone at chess and write a poem using the same general-purpose 
algorithm(s).
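The cross-domain idea might be made concrete with a toy sketch like the one below: the *same* general-purpose routine is applied to two unrelated domains, which is what would distinguish a cross-domain test from a narrow one. Everything here is invented for illustration; the "evaluations" are deliberately trivial.

```python
# Toy illustration of a cross-domain criterion (not a real AGI test): one
# general-purpose selection routine reused across two unrelated domains.

def best_candidate(candidates, score):
    """One general-purpose algorithm: pick the highest-scoring option."""
    return max(candidates, key=score)

# Domain 1: choose a chess move via a (toy) material-gain evaluation.
moves = {"Qxf7": 9, "Nc3": 0, "e4": 1}
chess_move = best_candidate(moves, score=lambda m: moves[m])

# Domain 2: choose a poem line via a (toy) rhyme check against "stars".
lines = ["the machine counts cars",
         "the test ends here",
         "we sit among the stars"]
poem_line = best_candidate(lines, score=lambda l: l.endswith("stars"))

print(chess_move, "|", poem_line)  # → Qxf7 | we sit among the stars
```

The point of the sketch is only that `best_candidate` carries no chess- or poetry-specific logic; the narrow-AI alternative would be two separate, hand-tuned programs.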
Does this make sense?
On Fri, Mar 13, 2015 at 2:34 AM, Nanograte Knowledge Technologies via AGI 
<a...@listbox.com> wrote:



65 years on, and we're still trying to prove the unprovable A<->B. Turing never 
suggested a visual test. This proposal is merely an interpretation of the 
original Turing Test. It promotes the development of human intelligence, not 
machine intelligence. Therefore, my contention would be that such a test be 
viewed as valid in its intent, but not reliable in its AGI philosophy. 

It seems we're still missing the point of the original test, which is also 
referred to as a philosophical underpinning of AI. Is there any other AGI 
philosophy? Any philosophical extensions to Turing? If not, we'll probably 
remain stuck there.

<I think this is what Ben's been on about for ages now.> 

For AGI purposes, let's then rather revisit the philosophy of AI, as offered by 
Turing, and extend it into the now-emerging future. 

What would such a test look like in quantum-mechanical terms? 
I think, that would be appropriate for raising the AGI bar.

Rob

> Date: Thu, 12 Mar 2015 11:35:25 -0700
> Subject: Re: [agi] visual turing test
> From: a...@listbox.com
> To: a...@listbox.com
> 
> I skimmed over the article.  It sounds like it pretty much IS a Turing
> test.  They are just asking more detailed questions about what is in a
> picture to check to see if the machine "understands."   Their
> motivation is apparently that the visual testing is inadequate.
> (Maybe I missed something)
> 
> On 3/12/15, Nanograte Knowledge Technologies via AGI <a...@listbox.com> wrote:
> > It does not matter how sophisticated the test is. Until we turn Turing on
> > its head, the test would still return a value of 1. The notion that a
> > machine = human is outdated. Why try and prove it? Therefore, the "machine"
> > has become but a catalyst for human development. I still contend that Turing
> > had a different message for the world and that we may be missing it.
> >
> > From: a...@listbox.com
> > To: a...@listbox.com
> > Subject: [agi] visual turing test
> > Date: Wed, 11 Mar 2015 19:34:34 +0100
> >
> > http://machineslikeus.com/news/researchers-develop-visual-turing-test
> >
> >
> >
> >
> > -------------------------------------------
> > AGI
> > Archives: https://www.listbox.com/member/archive/303/=now
> > RSS Feed: https://www.listbox.com/member/archive/rss/303/11943661-d9279dae
> > Modify Your Subscription:
> > https://www.listbox.com/member/?&;
> > Powered by Listbox: http://www.listbox.com
> >
> 
> 
                                          


  
    
      