> On 19 Feb 2018, at 04:32, agrayson2...@gmail.com wrote:
> 
> 
> 
> On Sunday, February 18, 2018 at 8:24:40 PM UTC-7, Brent wrote:
> 
> 
> On 2/18/2018 9:58 AM, agrays...@gmail.com wrote:
>> 
>> 
>> On Sunday, February 18, 2018 at 10:54:58 AM UTC-7, agrays...@gmail.com 
>> wrote:
>> 
>> 
>> On Sunday, February 18, 2018 at 7:11:38 AM UTC-7, Lawrence Crowell wrote:
>> On Sunday, February 18, 2018 at 4:25:07 AM UTC-6, Russell Standish wrote:
>> On Sat, Feb 17, 2018 at 05:19:22PM -0800, Brent Meeker wrote: 
>> > 
>> > 
>> > On 2/17/2018 4:58 PM, agrays...@gmail.com wrote: 
>> > > But what is the criterion when AI exceeds human intelligence? AG 
>> > > 
>> > > https://www.zerohedge.com/news/2018-02-16/father-artificial-intelligence-singularity-less-30-years-away
>> > >  
>> > 
>> > So we need to sharpen the question.  Exactly *what* is 30yrs away? 
>> > 
>> > Brent 
>> > 
>> 
>> According to the title (I haven't RTFA), it's the 
>> singularity. Starting from a point where a machine designs 
>> and manufactures improved copies of itself, technology will supposedly 
>> veer from its exponential path (Moore's law) to a hyperbolic one. Being 
>> hyperbolic, it reaches infinity within a finite period of time, 
>> expected to be a matter of months perhaps. 
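
A minimal numerical sketch of that exponential-versus-hyperbolic distinction, 
assuming the simplest possible growth laws as stand-ins: dx/dt = k*x for the 
Moore's-law-style exponential case and dx/dt = x^2 for the hyperbolic one. 
The constants below are purely illustrative.

# Exponential growth stays finite at every time t; hyperbolic growth
# dx/dt = x**2 diverges at the finite time t* = 1/x0.
x0, k, dt = 1.0, 0.7, 1e-4        # illustrative initial value, rate, and step

x_exp, x_hyp, t = x0, x0, 0.0
while t < 0.99 / x0:              # stop just short of the blow-up time 1/x0
    x_exp += k * x_exp * dt       # Euler step for x(t) = x0 * exp(k*t)
    x_hyp += x_hyp**2 * dt        # Euler step for x(t) = x0 / (1 - x0*t)
    t += dt

print(f"t = {t:.2f}:  exponential ~ {x_exp:.1f},  hyperbolic ~ {x_hyp:.1f}")
# The hyperbolic value is already far larger here, and it grows without
# bound as t approaches 1/x0; the exponential value stays modest.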
>> 
>> Given that we really don't understand creative processes (not even 
>> good old-fashioned biological evolution is really well understood), 
>> I'm sceptical about the 30-year prognostication. It is mostly based on 
>> extrapolating Moore's law, which is the easy part of technological change. 
>> 
>> This won't be a problem for my children - my grandchildren perhaps, if 
>> I ever end up having any. 
>> 
>> Cheers 
>> 
>> One thing a computer cannot do is ask a question. I can ask a question and 
>> program a computer to help solve the problem; in fact I am doing just that, 
>> writing a program to model aspects of gravitational memory. What the 
>> computer will not do, at least not the computers we currently employ, is 
>> ask the question and then work to solve it. A computer can find a numerical 
>> solution or render something numerically, but it does not spontaneously ask 
>> the question or propose something creative and then solve or render the 
>> solution.
>> 
>> LC 
>> 
>> You've hit the proverbial nail on the head. If a computer can't ask a 
>> question, it can't, by itself, add to our knowledge. It can't propose a new 
>> theory. It can only be a tool for humans to test our theories. Thus, it is 
>> a complete misnomer to refer to it as "intelligent".  AG
>>  
>> It has no imagination. It doesn't wonder about anything. It's not conscious 
>> and therefore should not be considered as having consciousness or 
>> intelligence. AG 
> 
> Are you aware that AlphaGo Zero won one of its games by making a move that 
> centuries of Go players had considered wrong, and yet it was key to AlphaGo 
> Zero's victory?  So one has to ask: how do you know so much about its inner 
> thoughts that you can assert it can't ask a question, can't propose a new 
> theory, doesn't wonder, and is not conscious?
> 
> Brent
> 
> If you give it a task, just about any task within its universe of discourse, 
> it will perform it hugely better than humans. But where is the evidence it 
> can initiate any task without being instructed? AG 

In the mathematics of computer science, especially the theory of 
self-reference. All of the G* minus G theory, given by the machine, can be 
considered as the machine's natural questions, which impose themselves on the 
machine when it looks inward. 
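
A minimal illustration, assuming the standard provability-logic reading of 
those two systems (G is Löb's logic GL of what a sound machine can prove 
about its own provability predicate; G* is Solovay's logic of what is true 
about it): the machine's consistency statement is a theorem of G* but not of 
G, so it is true of the consistent machine yet unprovable by the machine 
itself, by Gödel's second incompleteness theorem. In symbols:

\[
  G^{\ast} \vdash \neg\Box\bot , \qquad  G \not\vdash \neg\Box\bot .
\]

Such G* \ G sentences are candidates for the "natural questions" the machine 
can formulate about itself but cannot settle on its own.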

But this requires doing a bit of computer science, if it was not already 
obvious once we assume computationalism.

Bruno


