> On 12 Jul 2019, at 20:19, 'Brent Meeker' via Everything List
> <[email protected]> wrote:
>
> On 7/12/2019 1:28 AM, Quentin Anciaux wrote:
>> Hi,
>>
>> Isn't that how evolution works? By iteration and random modification,
>> new, better organisms come into existence?
>>
>> Why couldn't an AI use iterative evolution to make better and better AI?
>>
>> Also, if *we build* a real AGI, isn't it the same thing? Wouldn't we have
>> built a better, smarter version of us? The AI would surely be able to
>> build another one and, by iterating, a better one.
>>
>> What's wrong with this?
>
> It's not wrong, but in natural evolution "better" just means more
> surviving progeny.
I would say "better surviving", or just "surviving". Human progeny is
ridiculously low in number compared to bacteria.

> So what's "better" is essentially defined by the environment, i.e. natural
> selection. If an AI uses iterative evolution, what is the environment that
> will define "better"? It may not be what we think is better.

It is only "better" in the sense of "surviving" instead of "disappearing".
Small creatures can survive thanks to large progeny numbers, even though
most die quickly (but then feed others), or in terms of more efficacious
care of the progeny, or something else.

With the histories in arithmetic, there is some sense in which the relative
"progeny measure" plays an a posteriori role, which is needed to stabilise
consciousness and avoid too many white rabbits, though.

Bruno

> Brent
>
>> Quentin
>>
>> Le ven. 12 juil. 2019 à 06:28, Terren Suydam <[email protected]>
>> a écrit :
>>
>> Sure, but that's not the "FOOM" scenario, in which an AI modifies its own
>> source code, gets smarter, and with the increase in intelligence is able
>> to make yet more modifications to its own source code, and so on, until
>> its intelligence far outstrips its capabilities before the recursive
>> self-improvement began. It's hypothesized that such a process could take
>> an astonishingly short amount of time, thus "FOOM". See
>> https://wiki.lesswrong.com/wiki/AI_takeoff#Hard_takeoff for more.
>>
>> My point was that the inherent limitation of a mind's ability to
>> understand itself completely makes the FOOM scenario less likely. An AI
>> would be forced to model its own cognitive apparatus in a necessarily
>> incomplete way. It might still be possible to improve itself using these
>> incomplete models, but there would always be some uncertainty.
>>
>> Another, more minor objection is that the FOOM scenario also selects for
>> AIs that become massively competent at self-improvement, but it's not
>> clear whether this selected-for intelligence is merely a narrow
>> competence, or translates generally to other domains of interest.
>>
>> On Thu, Jul 11, 2019 at 2:56 PM 'Brent Meeker' via Everything List
>> <[email protected]> wrote:
>>
>> Advances in intelligence can just be gaining more factual knowledge,
>> knowing more mathematics, using faster algorithms, etc. None of that is
>> barred by not being able to model oneself.
>>
>> Brent
>>
>> On 7/11/2019 11:41 AM, Terren Suydam wrote:
>> > Similarly, one can never completely understand one's own mind, for it
>> > would take a bigger mind than one has to do so. This, I believe, is
>> > the best argument against the runaway-intelligence scenarios in which
>> > sufficiently advanced AIs recursively improve their own code to
>> > achieve ever increasing advances in intelligence.
>> >
>> > Terren
>>
>> --
>> You received this message because you are subscribed to the Google Groups
>> "Everything List" group.
>> To unsubscribe from this group and stop receiving emails from it, send an
>> email to [email protected].
>> To view this discussion on the web visit
>> https://groups.google.com/d/msgid/everything-list/304332c1-13a6-7006-651b-494e468eefc4%40verizon.net.
>>
>> --
>> All those moments will be lost in time, like tears in rain. (Roy
>> Batty/Rutger Hauer)
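As an aside, Quentin's "iteration and random modification" and Brent's point that "better" is whatever the environment selects for can be made concrete with a toy evolutionary loop. This is purely an illustrative sketch (the genome, fitness function, and parameters are all made up for the example, not anyone's actual proposal); the key observation is that "better" means nothing here except what the `fitness` function rewards:

```python
import random

random.seed(0)

# Toy "environment": fitness is just the number of 1-bits in a genome.
# "Better" is defined entirely by this function -- change it, and the
# population evolves toward something else.
def fitness(genome):
    return sum(genome)

def mutate(genome, rate=0.05):
    # Random modification: flip each bit with a small probability.
    return [1 - g if random.random() < rate else g for g in genome]

def evolve(genome_len=32, pop_size=20, generations=100):
    population = [[random.randint(0, 1) for _ in range(genome_len)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: the fitter half survives, as judged by the environment.
        population.sort(key=fitness, reverse=True)
        survivors = population[:pop_size // 2]
        # Iteration: survivors persist and reproduce with random modification.
        population = survivors + [mutate(g) for g in survivors]
    return max(population, key=fitness)

best = evolve()
print(fitness(best))  # typically approaches 32 (the all-ones genome)
```

Because the survivors are carried over unmutated, the best fitness never decreases; the loop reliably climbs toward whatever the environment happens to reward, which is exactly Brent's worry about what would play the role of `fitness` for a self-evolving AI.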

