On 7/12/2019 1:28 AM, Quentin Anciaux wrote:
Hi,

Isn't that how evolution works? By iteration and random modification, new and better organisms come into existence.

Why couldn't an AI use iterative evolution to make better and better AIs?

Also, if *we build* a real AGI, isn't it the same thing? Wouldn't we have built a better, smarter version of ourselves? That AI would surely be able to build another one, and by iterating, a better one.

What's wrong with this?

It's not wrong, but in natural evolution "better" just means more surviving progeny. So what's "better" is essentially defined by the environment, i.e. natural selection. If an AI uses iterative evolution, what is the environment that will define "better"? It may not be what we think is better.
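
As a toy illustration of that point (the bit-string genome and the fitness function below are arbitrary stand-ins, not a model of any real system), the loop only ever optimizes whatever "fitness" we happen to supply; change the fitness function and you change what counts as "better":

import random

def mutate(genome, rate):
    # Random modification: flip each bit with probability `rate`.
    return [1 - b if random.random() < rate else b for b in genome]

def evolve(fitness, population, generations=100, mutation_rate=0.1):
    # Iterated mutation plus selection. "Better" is defined entirely
    # by the fitness function, i.e. by the environment we supply.
    for _ in range(generations):
        scored = sorted(population, key=fitness, reverse=True)
        survivors = scored[:len(scored) // 2]                 # selection
        offspring = [mutate(random.choice(survivors), mutation_rate)
                     for _ in range(len(population) - len(survivors))]
        population = survivors + offspring
    return max(population, key=fitness)

# With fitness=sum, "better" just means more 1-bits; a different
# environment would select for something else entirely.
pop = [[random.randint(0, 1) for _ in range(16)] for _ in range(20)]
best = evolve(fitness=sum, population=pop)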

Brent


Quentin

On Fri, Jul 12, 2019 at 06:28, Terren Suydam <terren.suy...@gmail.com> wrote:

    Sure, but that's not the "FOOM" scenario, in which an AI modifies
    its own source code, gets smarter, and with that increase in
    intelligence is able to make yet more modifications to its own
    source code, and so on, until its intelligence far outstrips what
    it had before the recursive self-improvement began. It's
    hypothesized that such a process could take an astonishingly short
    amount of time, hence "FOOM". See
    https://wiki.lesswrong.com/wiki/AI_takeoff#Hard_takeoff for more.
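
    A minimal sketch of that loop, purely for illustration (the Agent
    class, its improve() method, and the proportional-gain assumption
    are all hypothetical, not anyone's actual design):

    from dataclasses import dataclass
    import random

    @dataclass
    class Agent:
        intelligence: float

        def improve(self):
            # Hypothetical premise behind "FOOM": a smarter agent finds
            # proportionally larger improvements to its own design.
            gain = random.uniform(0.0, 0.1) * self.intelligence
            return Agent(self.intelligence + gain)

    def foom(agent, threshold=1e-6, max_steps=100):
        # Recursive self-improvement: each successor rewrites the next,
        # until no worthwhile gain is found (or we cap the iterations).
        for _ in range(max_steps):
            successor = agent.improve()
            if successor.intelligence - agent.intelligence <= threshold:
                return agent
            agent = successor
        return agent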

    My point was that a mind's inherent inability to understand itself
    completely makes the FOOM scenario less likely. An AI would be
    forced to model its own cognitive apparatus in a necessarily
    incomplete way. It might still be possible to improve itself using
    these incomplete models, but there would always be some
    uncertainty.
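
    To picture that uncertainty with a toy model (the numbers and the
    Gaussian noise are invented for illustration): the agent sees only
    a noisy estimate of each modification's true effect, so it will
    accept some changes that are actually regressions, even if the
    overall trend is upward.

    import random

    def improve_with_incomplete_self_model(skill=1.0, steps=200,
                                           model_error=0.5):
        # The true effect of each self-modification is hidden; the
        # agent's incomplete self-model yields only a noisy estimate.
        for _ in range(steps):
            true_effect = random.gauss(0.02, 0.1)
            estimate = true_effect + random.gauss(0.0, model_error)
            if estimate > 0:          # accept what the model says helps,
                skill += true_effect  # including some changes that hurt
        return skill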

    A more minor objection is that the FOOM scenario also selects for
    AIs that become massively competent at self-improvement, but it's
    not clear whether this selected-for intelligence is merely a narrow
    competence or translates generally to other domains of interest.


    On Thu, Jul 11, 2019 at 2:56 PM 'Brent Meeker' via Everything List
    <everything-list@googlegroups.com> wrote:

        Advances in intelligence can just be gaining more factual
        knowledge, knowing more mathematics, using faster algorithms,
        etc. None of that is barred by not being able to model oneself.

        Brent

        On 7/11/2019 11:41 AM, Terren Suydam wrote:
        > Similarly, one can never completely understand one's own mind,
        > for it would take a bigger mind than one has to do so. This, I
        > believe, is the best argument against the runaway-intelligence
        > scenarios in which sufficiently advanced AIs recursively improve
        > their own code to achieve ever increasing advances in
        > intelligence.
        >
        > Terren






--
All those moments will be lost in time, like tears in rain. (Roy Batty/Rutger Hauer)

