> On 12 Jul 2019, at 06:28, Terren Suydam <terren.suy...@gmail.com> wrote:
> 
> Sure, but that's not the "FOOM" scenario, in which an AI modifies its own 
> source code, gets smarter, and with the increase in intelligence, is able to 
> make yet more modifications to its own source code, and so on, until its 
> intelligence far outstrips its previous capabilities before the recursive 
> self-improvement began. It's hypothesized that such a process could take an 
> astonishingly short amount of time, thus "FOOM". See 
> https://wiki.lesswrong.com/wiki/AI_takeoff#Hard_takeoff for more.
> 
> My point was that the inherent limitation of a mind to understand itself 
> completely, makes the FOOM scenario less likely. 


I would say that the appearance of sex, which entailed the existence of brains, 
language, computers, learners, etc., was itself a FOOM scenario. We are living in 
it right now. In geological time, life looks like an explosion, albeit a creative 
one.




> An AI would be forced to model its own cognitive apparatus in a necessarily 
> incomplete way.

That is the eternal motor. Each time the terrestrial G grasps some G* truth, it 
transforms itself, and is back to the same limitation, but with (infinitely) 
more power.

The Löbian universal machine is never completely satisfied, and never will be … 
The computationalist Löbian universal machine knows why.



> It might still be possible to improve itself using these incomplete models, 
> but there would always be some uncertainty. 

That is right, but it is always so: confidence can grow on the simple, while 
modesty and caution grow on the complex. Eventually the wise stay mute.




>  
> 
> Another more minor objection is that the FOOM scenario also selects for AIs 
> that become massively competent at self-improvement, but it's not clear 
> whether this selected-for intelligence is merely a narrow competence, or 
> translates generally to other domains of interest.

Universality allows all competences to grow, but it is a partial order with many 
incomparable degrees. Some paths can preserve the starting (infinite) intelligence 
of the machines, but many, if not most, paths lead to growing stupidity and to an 
attachment to lies, especially lies to oneself. But that regulates itself in 
practice. We can reduce the harm, and learn modesty and caution; that is the price 
of keeping liberty/universality without completely abandoning security.

The oscillation between Security (total, thus computable and controllable) and 
Liberty (universal, thus only partially computable and only partially 
controllable) accompanies life, but not necessarily consciousness (an open 
problem).
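
To illustrate the trade-off, here is a minimal sketch in Python (the names are 
mine, purely illustrative): if a language is universal, no total, always-halting 
procedure can decide what its programs do; the classic diagonal argument shows 
why.

    # Hypothetical total decider for a universal language: it would have to
    # answer, for every program f and input x, whether f(x) halts.
    def halts(f, x):
        # No such total procedure can exist; this stub only marks the assumption.
        raise NotImplementedError("assumed total halting decider")

    # The diagonal program turns any such decider against itself.
    def diagonal(f):
        if halts(f, f):      # if f(f) would halt ...
            while True:      # ... then loop forever,
                pass
        return               # ... otherwise halt immediately.

    # diagonal(diagonal) halts exactly when the assumed halts(diagonal, diagonal)
    # says it does not, so no total halting decider exists for a universal
    # language.

A total (always-halting) language can be fully controlled, but it is no longer 
universal; keeping universality means accepting only partial control.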

Bruno



> 
> 
> On Thu, Jul 11, 2019 at 2:56 PM 'Brent Meeker' via Everything List 
> <everything-list@googlegroups.com> wrote:
> Advances in intelligence can just be gaining more factual knowledge, 
> knowing more mathematics, using faster algorithms, etc.  None of that is 
> barred by not being able to model oneself.
> 
> Brent
> 
> On 7/11/2019 11:41 AM, Terren Suydam wrote:
> > Similarly, one can never completely understand one's own mind, for it 
> > would take a bigger mind than one has to do so. This, I believe, is 
> > the best argument against the runaway-intelligence scenarios in which 
> > sufficiently advanced AIs recursively improve their own code to 
> > achieve ever increasing advances in intelligence.
> >
> > Terren
> 
> 