It's just a question of time. The FOOM scenario is motivated by safety
concerns - that an AI's intelligence could surpass our ability to deal with
it, leading to the Singularity. So it's not about whether those other paths
are possible, it's about how long they would take and, in each case,
whether the AIs involved would be safe.

It's hard to know how long these different paths would take. In general,
though, it's much easier to see FOOM happening via an AI analyzing its own
cognitive apparatus and updating it directly, according to some theory or
model of intelligence it has developed, than via either evolution or
starting from scratch. In the case of evolution, the AI would have to run a
great many iterations, each of which would take some amount of time. How
many iterations, and how much time per iteration? I'm out of my depth on
that. My hunch is that it would be a slow process, even with a lot of
computational resources. It also bears pointing out that the evolution path
is much less safe, in the sense that it's much harder to reason about
whether the AIs it produces would value human life and flourishing.
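
To make the iteration-cost point concrete, here's a rough, purely
illustrative sketch of the evolution path as a loop (in Python). Everything
in it is a made-up placeholder - the mutate and evaluate_candidate
functions, the population size, the number of generations - it's only meant
to show where the wall-clock time goes:

# Purely illustrative sketch of the "evolution" path. All names and numbers
# are hypothetical stand-ins, not estimates of real AI-evolution costs.
import random

def mutate(genome):
    """Randomly perturb one parameter of a candidate design (toy stand-in)."""
    g = list(genome)
    i = random.randrange(len(g))
    g[i] += random.gauss(0, 0.1)
    return g

def evaluate_candidate(genome):
    """Stand-in for the expensive step: building and testing a candidate AI.
    In reality this is where nearly all the wall-clock time would go."""
    return -sum(x * x for x in genome)  # toy fitness: closer to zero is better

def evolve(pop_size=50, generations=1000, genome_len=8):
    population = [[random.gauss(0, 1) for _ in range(genome_len)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(population, key=evaluate_candidate, reverse=True)
        parents = scored[: pop_size // 5]             # keep the top 20%
        population = [mutate(random.choice(parents))  # refill by mutation
                      for _ in range(pop_size)]
    return max(population, key=evaluate_candidate)

best = evolve()
# Total wall-clock time is roughly generations * pop_size * (time to
# evaluate one candidate). If evaluating a single candidate AI takes hours
# or days, even modest settings like these imply tens of thousands of runs.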

In the case of an AI building its own new AI, that's essentially the same
scenario as an AI modifying its own source code: in both cases it's
instantiating a design based on its own theory of intelligence. Starting
from scratch is slower, because with recursive self-improvement the AI has
a huge head start - it begins from a model that is more or less proven to
be intelligent already. But it's not hard to imagine a recursively
improving AI eventually arriving at a point where it realizes that the only
way to continue increasing its intelligence is to create a new design, one
that would be impossible for human-level intelligence to ever grasp. From a
safety point of view, both of these paths at least offer the possibility of
reasoning about whether the resulting AIs preserve a goal system in which
human life and flourishing are valued.
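
For contrast, here's the same kind of toy sketch for the self-improvement
path. Again, every name and number is a hypothetical stand-in (an "AI" is
just a list of parameters, "intelligence" a toy score); the point is only
the shape of the loop: one directed candidate per step, starting from a
design that already works, rather than whole populations of blind variants
per generation.

# Purely illustrative sketch of recursive self-improvement. Not a real
# design: the "AI" is a parameter vector and "intelligence" is a toy score.
import random

def intelligence(params):
    """Toy stand-in for the measured capability of a design."""
    return -sum(x * x for x in params)

def propose_modification(params):
    """Stand-in for the AI analyzing its own design via its (necessarily
    incomplete) self-model and proposing a change it expects to help."""
    candidate = list(params)
    i = random.randrange(len(candidate))
    candidate[i] *= 0.9  # a directed change, not blind mutation
    return candidate

def recursively_improve(seed_design, steps=100):
    current = seed_design
    for _ in range(steps):
        candidate = propose_modification(current)
        # One candidate per step, kept only if it actually helps - versus a
        # whole evaluated population per generation under evolution.
        if intelligence(candidate) > intelligence(current):
            current = candidate
    return current

# The "head start": the loop begins from an already-working design rather
# than from random candidates, so far fewer expensive evaluations are needed.
proven_design = [0.5, -1.2, 0.8, 2.0]
improved = recursively_improve(proven_design)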

On Fri, Jul 12, 2019, 4:28 AM Quentin Anciaux <[email protected]> wrote:

> Hi,
>
> Isn't that how evolution works? By iteration and random modification,
> new, better organisms come into existence.
>
> Why couldn't an AI use iterated evolution to make better and better AIs?
>
> Also, if *we build* a real AGI, isn't it the same thing? Wouldn't we have
> built a better, smarter version of ourselves? That AI would surely be able
> to build another one, and by iterating, a better one.
>
> What's wrong with this?
>
> Quentin
>
> On Fri, Jul 12, 2019 at 6:28 AM, Terren Suydam <[email protected]>
> wrote:
>
>> Sure, but that's not the "FOOM" scenario, in which an AI modifies its own
>> source code, gets smarter, and with the increase in intelligence is able
>> to make yet more modifications to its own source code, and so on, until its
>> intelligence far outstrips what it was before the recursive
>> self-improvement began. It's hypothesized that such a process could take an
>> astonishingly short amount of time, hence "FOOM". See
>> https://wiki.lesswrong.com/wiki/AI_takeoff#Hard_takeoff for more.
>>
>> My point was that a mind's inherent inability to understand itself
>> completely makes the FOOM scenario less likely. An AI would be forced to
>> model its own cognitive apparatus in a necessarily incomplete way. It might
>> still be possible for it to improve itself using these incomplete models,
>> but there would always be some uncertainty.
>>
>> Another, more minor objection is that the FOOM scenario also selects for
>> AIs that become massively competent at self-improvement, but it's not clear
>> whether this selected-for intelligence is merely a narrow competence or
>> translates generally to other domains of interest.
>>
>>
>> On Thu, Jul 11, 2019 at 2:56 PM 'Brent Meeker' via Everything List <
>> [email protected]> wrote:
>>
>>> Advances in intelligence can just be gaining more factual knowledge,
>>> knowing more mathematics, using faster algorithms, etc.  None of that is
>>> barred by not being able to model oneself.
>>>
>>> Brent
>>>
>>> On 7/11/2019 11:41 AM, Terren Suydam wrote:
>>> > Similarly, one can never completely understand one's own mind, for it
>>> > would take a bigger mind than one has to do so. This, I believe, is
>>> > the best argument against the runaway-intelligence scenarios in which
>>> > sufficiently advanced AIs recursively improve their own code to
>>> > achieve ever increasing advances in intelligence.
>>> >
>>> > Terren
>>>
>>>
>
>
> --
> All those moments will be lost in time, like tears in rain. (Roy
> Batty/Rutger Hauer)
>
