I don't believe there will be an intelligence explosion or singularity either, but not because of the no free lunch (NFL) theorem. We know that a universal learning algorithm like AIXI is impossible because Kolmogorov complexity is not computable, and we know that good learners are necessarily complex: if you have a simple, universal bit predictor, then I have a simple sequence you can't predict. My program runs a copy of your predictor and outputs the opposite bit at every step.
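Here is a toy sketch of that argument in Python (the predictor interface is just for illustration; any computable predictor that maps the bits seen so far to a guess for the next bit will do):

# Diagonalization sketch: for any computable bit predictor, there is a simple
# program that defeats it by running a copy and emitting the opposite bit.

def adversarial_sequence(predictor, length):
    # Build a sequence the given predictor gets wrong at every step.
    history = []
    for _ in range(length):
        guess = predictor(history)   # run a copy of the predictor
        history.append(1 - guess)    # output the opposite bit
    return history

# A toy "universal" predictor: guess the majority bit seen so far.
def majority_predictor(history):
    return 1 if 2 * sum(history) >= len(history) else 0

print(adversarial_sequence(majority_predictor, 10))
# The predictor scores 0% on this sequence, and the adversary is barely
# longer than the predictor itself, so a simple universal predictor
# cannot exist.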
But that's not the reason. It's because intelligence depends on knowledge and computing power, and a self-improving program gains neither. An AI that can acquire atoms and energy for computation and run experiments can self-improve, but it is not just a matter of exceeding human intelligence. Intelligence is not a point on a line, so we can't say when that happens. Computers are already a billion times better than humans at math and memory tests. The point we should worry about is when global computing capacity exceeds that of the biosphere, some time in the next century. Meanwhile, the future belongs to those who are still having children.

On Fri, Sep 8, 2023, 1:35 PM Danko Nikolic <[email protected]> wrote:

> Hi Matt,
>
> I am not sure that your interpretation of the NFL-theorem is the best one.
> What I wanted to say is that most likely no explosion in intelligence is
> possible. The NFL-theorem tells us why.
>
> If this is correct, then we should not worry about AI taking over. That is
> all I wanted to say.
>
> Danko
>
> Dr. Danko Nikolić
> www.danko-nikolic.com
> https://www.linkedin.com/in/danko-nikolic/
> -- I wonder, how is the brain able to generate insight? --
>
> On Fri, Sep 8, 2023 at 7:17 PM Matt Mahoney <[email protected]> wrote:
>
>> Pardon my last empty response.
>>
>> On Fri, Sep 8, 2023, 12:16 AM Danko Nikolic <[email protected]> wrote:
>>
>>> Hi Matt,
>>>
>>> If the no-free-lunch theorem applies to AGI, then we are good.
>>>
>>> Danko
>>
>> The no free lunch theorem can be interpreted to say that learning is
>> impossible because all theories are equally likely. That is wrong. What we
>> use in practice (because it works) is Occam's Razor: theories with shorter
>> description lengths are more likely. No other probability distribution over
>> a countably infinite set is possible.
>>
>>> Dr. Danko Nikolić
>>> www.danko-nikolic.com
>>> https://www.linkedin.com/in/danko-nikolic/
>>
>> Neural networks are adaptive, right?
>>
>> -- I wonder, how is the brain able to generate insight? --
>>
>> Insight is what a search algorithm feels like. It is part of our survival
>> instinct. Positive reinforcement of computation, input, and output gives us
>> the sensations of consciousness, qualia, and free will. Without them, life
>> wouldn't be worth living and you would have fewer offspring.
>>
>> We know they are illusions because we can't objectively define them. The
>> big tech companies that are actually making progress in AI know that what
>> the brain is doing is computation, not magic.
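To make the Occam's Razor point from my earlier reply concrete, here is a toy calculation (the 2^-length weighting is just one standard choice, used here for illustration):

# A uniform prior over countably many theories is impossible: any constant
# weight sums to 0 or to infinity. A prior that shrinks with description
# length can be normalized, which is why shorter theories must get more mass.

total = sum(2.0 ** -length for length in range(1, 60))
print(total)  # about 1.0: the series 1/2 + 1/4 + 1/8 + ... converges
# (In the Solomonoff setting the sum runs over prefix-free programs, and
# Kraft's inequality keeps it at or below 1.)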
