I had seen this a few years ago, but it was a fun read this time as I ate my
hotdog. Until the end I had mistakenly thought it meant Learning/AI was useless
and all you need is more compute/brute force; forget about AI, take up a job
at Intel or AMD instead. It did say early programs became unneeded and were
wasted time once compute was suddenly, say, 10x more powerful, and that Deep
Learning is even more powerful than simpler learning methods. But it seems it
meant hardcoded knowledge built into the AI's mind, not the learning methods
themselves.

I'll still go along with the claim that, yes, extra compute can eliminate
smaller programs, e.g. in chess. Even so, I think it would take a ton of
compute/memory to need no intelligence at all, so most likely we need
intelligent programs to sift through the huge search space, unless a
galaxy-sized computer is available for eons of use. And yes, the knowledge
engineers aren't really doing much, so that it can discover on its own: you
hardcode a handful of goals and feed it some words of advice along the way,
but it really does need massive domain data, and even data from outside the
domain, to stitch new solutions together, the way DALL-E completes the rest of
an unseen new image you hand it.

BTW, I don't think Deep Learning is more general purpose; I think the things I
have on my project page and OpenAI's AI are fully general purpose and simply
need more such mechanisms/implementation techniques. At least as long as you
make it fully adaptive, e.g. let it decide whether it even needs a 1-letter
context (I don't have this implemented yet), or Byte Pair Encoding if you
will, which is really similar. Of course you'll want to make full use of all
available computation for your AGI; the less you use, the worse off you are.
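For anyone unfamiliar with the Byte Pair Encoding idea mentioned above, here
is a minimal toy sketch of it (the function name and the tiny example corpus
are my own illustration, not from the essay or my project): start from single
characters and repeatedly merge the most frequent adjacent pair into one new
symbol, so the vocabulary adapts to whatever repeats in the data.

```python
from collections import Counter

def bpe_merges(text, num_merges):
    """Toy Byte Pair Encoding: repeatedly merge the most frequent
    adjacent symbol pair into a single new symbol."""
    symbols = list(text)  # start from individual characters
    merges = []
    for _ in range(num_merges):
        pairs = Counter(zip(symbols, symbols[1:]))
        if not pairs:
            break
        (a, b), count = pairs.most_common(1)[0]
        if count < 2:
            break  # nothing repeats any more; stop merging
        merges.append(a + b)
        # Rewrite the symbol sequence with the new merged symbol.
        merged, i = [], 0
        while i < len(symbols):
            if i + 1 < len(symbols) and symbols[i] == a and symbols[i + 1] == b:
                merged.append(a + b)
                i += 2
            else:
                merged.append(symbols[i])
                i += 1
        symbols = merged
    return symbols, merges

tokens, merges = bpe_merges("low lower lowest", 3)
```

On this corpus the first two merges learned are "lo" and then "low", i.e. the
repeated substring gets promoted to a single token, which is the "decide its
own context size" behavior I was pointing at.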
------------------------------------------
Artificial General Intelligence List: AGI
Permalink:
https://agi.topicbox.com/groups/agi/Tba0a379254fb930c-M192fe6bff6be5154217cbef9
Delivery options: https://agi.topicbox.com/groups/agi/subscription