On Fri, Jul 12, 2019 at 11:53 AM Philip Thrift <[email protected]> wrote:

>
>
> AI researchers have been using *genetic algorithms* and *artificial life*
> to "evolve" AI programs since the 1970s.
>
> @philipthrift
>
>
I know; that's why I'm asking Terren about his position...
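For concreteness, the loop Philip refers to is just the classic genetic-algorithm recipe: random variation plus selection, iterated. Here is a minimal sketch (the bitstring genome, the toy "one-max" fitness, and the mutation rate are illustrative placeholders, not any particular researcher's setup):

```python
import random

def evolve(fitness, genome_len=20, pop_size=30, generations=100, mut_rate=0.05):
    # Start from a population of random bitstring genomes.
    pop = [[random.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half. Survivors are carried over
        # unchanged, so the best genome found so far never gets worse.
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        # Variation: each survivor yields a child with random bit-flips --
        # the "random modification" step.
        children = [[1 - g if random.random() < mut_rate else g
                     for g in parent]
                    for parent in survivors]
        pop = survivors + children
    return max(pop, key=fitness)

# Toy fitness: count the 1s ("one-max") -- a stand-in for "better organism".
best = evolve(fitness=sum)
print(sum(best))
```

Run it long enough and fitness climbs toward the maximum. Swap in a fitness function that scores an AI program instead of counting bits and the same loop "evolves" AI, which is essentially what the work Philip mentions amounts to.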


>
>
> On Friday, July 12, 2019 at 3:28:59 AM UTC-5, Quentin Anciaux wrote:
>>
>> Hi,
>>
>> Isn't that how evolution works? Through iteration and random
>> modification, new and better organisms come into existence.
>>
>> Why couldn't an AI use iterative evolution to make better and better AIs?
>>
>> Also, if *we build* a real AGI, isn't it the same thing? Wouldn't we have
>> built a better, smarter version of ourselves? That AI would surely be able
>> to build another one and, by iterating, a better one.
>>
>> What's wrong with this?
>>
>> Quentin
>>
>> On Fri, Jul 12, 2019 at 6:28 AM Terren Suydam <[email protected]> wrote:
>>
>>> Sure, but that's not the "FOOM" scenario, in which an AI modifies its
>>> own source code, gets smarter, and with that increase in intelligence is
>>> able to make yet more modifications to its own source code, and so on,
>>> until its intelligence far outstrips what it was before the recursive
>>> self-improvement began. It's hypothesized that such a process could take
>>> an astonishingly short amount of time, hence "FOOM". See
>>> https://wiki.lesswrong.com/wiki/AI_takeoff#Hard_takeoff for more.
>>>
>>> My point was that a mind's inherent inability to understand itself
>>> completely makes the FOOM scenario less likely. An AI would be forced to
>>> model its own cognitive apparatus in a necessarily incomplete way. It
>>> might still be possible for it to improve itself using these incomplete
>>> models, but there would always be some uncertainty.
>>>
>>> Another, more minor objection is that the FOOM scenario selects for AIs
>>> that become massively competent at self-improvement, but it's not clear
>>> whether this selected-for intelligence is merely a narrow competence or
>>> translates generally to other domains of interest.
>>>
>>>
>>> On Thu, Jul 11, 2019 at 2:56 PM 'Brent Meeker' via Everything List <
>>> [email protected]> wrote:
>>>
>>>> Advances in intelligence can be as simple as gaining more factual
>>>> knowledge, knowing more mathematics, using faster algorithms, etc. None
>>>> of that is barred by being unable to model oneself.
>>>>
>>>> Brent
>>>>
>>>> On 7/11/2019 11:41 AM, Terren Suydam wrote:
>>>> > Similarly, one can never completely understand one's own mind, for it
>>>> > would take a bigger mind than one has to do so. This, I believe, is
>>>> > the best argument against the runaway-intelligence scenarios in which
>>>> > sufficiently advanced AIs recursively improve their own code to
>>>> > achieve ever increasing advances in intelligence.
>>>> >
>>>> > Terren
>>>>
>>>>
>>>> --
>>>>
>>> --
> You received this message because you are subscribed to the Google Groups
> "Everything List" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to [email protected].
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/everything-list/58fcc534-b708-4ded-a8da-75c3e9d923ff%40googlegroups.com.
>
>


-- 
All those moments will be lost in time, like tears in rain. (Roy
Batty/Rutger Hauer)

