> On 12 Jul 2019, at 10:28, Quentin Anciaux <[email protected]> wrote:
> 
> Hi,
> 
> Is that not how evolution works? Through iteration and random modification, 
> new and better organisms come into existence?
> 
> Why could an AI not use iterated evolution to make better and better AIs?
> 
> Also, if *we build* a real AGI, isn't it the same thing? Wouldn't we have 
> built a better, smarter version of ourselves? The AI would surely be able to 
> build another one and, by iterating, a better one.
> 
> What's wrong with this?

That has been studied at length by logicians. We can make a machine that 
reflects on itself, syntactically and/or dynamically, and it will climb the 
transfinite ordinals, either limited to the constructive ones or not (making 
it impossible to prove the consistency of the machine above some ordinal).
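
For concreteness, here is the Turing–Feferman style progression of theories 
behind that climb (a sketch; I state it for PA, but any sound, recursively 
axiomatizable extension of arithmetic works):

$T_0 = \mathrm{PA}$
$T_{\alpha+1} = T_\alpha + \mathrm{Con}(T_\alpha)$
$T_\lambda = \bigcup_{\alpha<\lambda} T_\alpha$   for limit ordinals $\lambda$

Each successor stage adds the consistency statement that the previous stage 
cannot prove (Gödel's second theorem), and the progression stays effective 
only along constructive (recursive) ordinal notations.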

An excellent book on this is Torkel Franzén's “Inexhaustibility”.

You can see my paper for the construction of digital machines or programs that 
duplicate themselves (like an amoeba), or that can regenerate from any of 
their subprograms or parts (like Planaria), and you can make programs that 
dream themselves integrally on different inputs, even dovetailing on them.
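
As a toy illustration of the two tricks involved (my own sketch in Python, not 
the construction from the paper): self-reproduction rests on Kleene's second 
recursion theorem, of which a quine is the simplest instance, and dovetailing 
is a fair interleaving of infinitely many computations:

# A quine: a program whose output is exactly its own source code,
# the minimal instance of Kleene-style self-reference.
src = 'src = %r\nprint(src %% src)'
print(src % src)

# Dovetailing: run countably many "programs" (here, generators) by
# interleaving; stage n starts program n, then advances every program
# started so far by one step.
import itertools

def dovetail(progs):
    started = []
    for prog in progs:
        started.append(prog)
        for p in list(started):
            try:
                yield next(p)
            except StopIteration:
                started.remove(p)

def counter(i):
    # the i-th "program": an endless computation emitting (i, step)
    for step in itertools.count():
        yield (i, step)

# Interleave infinitely many counters and print the first ten outputs:
# (0, 0) (0, 1) (1, 0) (0, 2) (1, 1) (2, 0) ...
for out in itertools.islice(dovetail(counter(i) for i in itertools.count()), 10):
    print(out)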

What cannot be done:

- a halting program that gives its entire trace/history at its own running 
(substitution) level;
- a program capable of proving that it has a model (equivalently, by Gödel's 
second incompleteness theorem: capable of proving its own consistency, at its 
“chatty” level = substitution level + classical logic); see the sketch after 
this list;
- a program capable of defining its own semantics (Tarski's undefinability 
theorem), although it can define a sequence of better and better 
approximations, including jumps.
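
In the provability logic GL, which captures exactly what such machines can 
prove about their own provability, the consistency point is a two-line 
derivation (a sketch; $\Box p$ reads “the machine proves $p$”):

$\Box(\Box p \to p) \to \Box p$   (Löb's axiom)
with $p = \bot$:   $\Box(\neg\Box\bot) \to \Box\bot$
contrapositive:   $\neg\Box\bot \to \neg\Box\neg\Box\bot$

That is, if the machine is consistent, it cannot prove its own consistency.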

There are many papers in the mathematical logic and theoretical computer 
science literature on this. 

Intelligence is there at the start. A machine can develop various forms of 
competence, but those have a negative feedback on Intelligence.

I admit that my definition of Intelligence is very broad (anything not stupid, 
where something is stupid if it believes in its own intelligence, or in its 
own stupidity). Consistency becomes a model of Intelligence, showing its … 
consistency.
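
In the same modal notation as above, that definition can be written (my own 
formalization of the informal wording):

$\mathrm{Stupid} \;\equiv\; \Box\,\mathrm{Intelligent} \;\lor\; \Box\,\mathrm{Stupid}$

Reading Intelligent as consistency, $\neg\Box\bot$, the GL derivation above 
shows that a consistent machine never proves the first disjunct about itself: 
the machine that asserts its own intelligence has already lost it.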

All Protagorean virtues (which can be taught only by examples and metaphorical 
narratives) obey that theory, and it can be useful to avoid the “theological 
trap”.

The more neurons we have, the more prone we are to lies.

Bruno



> 
> Quentin
> 
> On Fri, 12 Jul 2019 at 06:28, Terren Suydam <[email protected]> 
> wrote:
> Sure, but that's not the "FOOM" scenario, in which an AI modifies its own 
> source code, gets smarter, and with the increase in intelligence, is able to 
> make yet more modifications to its own source code, and so on, until its 
> intelligence far outstrips its previous capabilities before the recursive 
> self-improvement began. It's hypothesized that such a process could take an 
> astonishingly short amount of time, thus "FOOM". See 
> https://wiki.lesswrong.com/wiki/AI_takeoff#Hard_takeoff for more.
> 
> My point was that the inherent limitation of a mind to understand itself 
> completely makes the FOOM scenario less likely. An AI would be forced to 
> model its own cognitive apparatus in a necessarily incomplete way. It might 
> still be possible to improve itself using these incomplete models, but there 
> would always be some uncertainty.
> 
> Another more minor objection is that the FOOM scenario also selects for AIs 
> that become massively competent at self-improvement, but it's not clear 
> whether this selected-for intelligence is merely a narrow competence, or 
> translates generally to other domains of interest.
> 
> 
> On Thu, Jul 11, 2019 at 2:56 PM 'Brent Meeker' via Everything List 
> <[email protected]> wrote:
> Advances in intelligence can just be gaining more factual knowledge, 
> knowing more mathematics, using faster algorithms, etc.  None of that is 
> barred by not being able to model oneself.
> 
> Brent
> 
> On 7/11/2019 11:41 AM, Terren Suydam wrote:
> > Similarly, one can never completely understand one's own mind, for it 
> > would take a bigger mind than one has to do so. This, I believe, is 
> > the best argument against the runaway-intelligence scenarios in which 
> > sufficiently advanced AIs recursively improve their own code to 
> > achieve ever increasing advances in intelligence.
> >
> > Terren
> 
> 
> 
> 
> -- 
> All those moments will be lost in time, like tears in rain. (Roy Batty/Rutger 
> Hauer)
> 
