Hi,

> Ben, do you feel that these changes to OpenCog will address the current
> obstacles to AGI? What do you believe were the reasons why it did not meet
> the goals of the 2011 timeline
> ( https://www.nextbigfuture.com/2011/03/opencog-artificial-general-intelligence.html )
> which forecast "full-on AGI" in 2019-21 and recursive self-improvement in
> 2021-23?
Making an OpenCog-based AGI is a large-scale software engineering project as well as a collection of coupled research projects. We never had the resources behind the project needed to pull it off from a pure engineering perspective, even if all the theory is correct. I'm not saying this is/was the only weakness of the OpenCog design/project of that era, just noting that, any other potential shortcomings aside, lack of resources alone would have been a sufficient reason for not meeting those milestones. Those milestones were never proposed as being achievable independently of the availability of resources to fund development. DeepMind has eaten $2B+ of Google's budget, GPT-3 cost substantially more just in processor time than the entire amount of $$ spent on OpenCog during its history, etc.

There are weaknesses in the legacy OpenCog software architecture that would have stopped us from getting to human-level AGI using it, without recourse to a lot of awkward coding/design gymnastics ... but with more ample resources we would have been able to push to refactor / rebuild and remedy those weaknesses quite some time ago.

> Obviously "rebuilding a lot of OpenCog from scratch" doesn't bode well.

I am perplexed as to why you think this is obvious. To me it bodes quite well. We are aiming to do something large, complex and unprecedented here; it is hardly surprising or bad that midway through the quest we would want to take what we've learned along the journey so far and use it to radically improve the system. As a Mac user, I thought the transition from OS 9 to OS X was a good one. A lot was rebuilt from scratch there, based on everything that had been learned before, on the affordances allowed by modern hardware in the OS X era, etc.
> If I recall, in 2011 OpenCog consisted of an evolutionary learner (MOSES), a
> neural vision model (DeSTIN), a rule-based language model (RelEx, NatGen),
> and Atomspace, which was supposed to integrate it all together but never did,
> except for some of the language part. Distributed Atomspace also ran into
> severe scaling problems.

You have left out probably the most critical AGI component: the PLN (Probabilistic Logic Networks) reasoner. As for use of the Atomspace for integrating different AI modalities, for the last few years it's been way more advanced in the biomedical inference/learning domain than in NLP.

> I assume the design changes address these problems, but what about other
> obstacles? MOSES and DeSTIN never advanced beyond toy problems because of
> computational limits, but perhaps they could be distributed. After all, real
> human vision is around 10^15 synapse operations per second [1], and real
> evolution is 10^29 DNA copy OPS [2]. Do the design changes help with scaling
> to parallel computing?

Yeah, there are two main aspects to the redesign:

-- a new Atomese2 programming language, which is what the paper I just posted is working towards

-- a new Atomspace implementation which better leverages concurrent and distributed processing, and better interfaces in real time with NN learning frameworks (see e.g. Alexey Potapov's earlier papers on Cognitive Module Networks)

A rough high-level overview is in Section 6 of https://arxiv.org/abs/2004.05267 ; see also many documents at https://wiki.opencog.org/w/Hyperon

> I never did understand why OpenCog went with rule-based language modeling
> after its long history of failure. Problems like ambiguity, brittleness, and
> most importantly, the lack of a learning algorithm, have only been solved in
> practice with enormous neural/statistical models.

The SingularityNET team is doing a lot with transformer NNs in practical applications, and the weaknesses of the tech are also very well known, see e.g.
https://multiverseaccordingtoben.blogspot.com/2020/07/gpt3-super-cool-but-not-path-to-agi.html
https://www.technologyreview.com/2020/08/22/1007539/gpt3-openai-language-generator-artificial-intelligence-ai-opinion/

... but you already know that stuff, I suppose. The idea that logic-based systems intrinsically lack a learning algorithm is false, and the dichotomy between reasoning and learning is also false. Some prototype work using symbolic and neural methods together for NLP is described here:

https://arxiv.org/abs/2005.12533

but that direction is paused for the moment, to be re-initiated once Hyperon is ready.

> So I'm wondering if you have a new timeline, or have you adjusted your goals
> and how you plan to achieve them?

Goals are about the same as always. Giving a specific timeline seems not terribly worthwhile, mostly because of the dependence of timelines on resources. We have more resources than we did in 2011 for sure, but as you note the biggest practical leaps in AI are now being made by trillion-dollar companies, and we are not yet near that level in OpenCog / SingularityNET. SingularityNET as a blockchain project w/ its own cryptocurrency and business model is an attempt to pull adequate resources into AGI development and deployment w/o selling out to megacorporations, but that's obviously its own story and I won't elaborate in depth here now.

If resources hold up OK, then getting Hyperon developed to the stage of an "advanced alpha" -- where we can use it for AGI development while adding more features and tools -- is probably 18-24 months of work from here. From that point to human-level AGI could be, if the underlying theory is indeed correct, some small integer number N of years, where N depends on the resources available for the project and also a lot of other factors. The basic development plan from an AGI / cog-sci perspective remains about the same as laid out in Engineering General Intelligence vol. 2.
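As an aside, the compute figures raised earlier in the thread (~10^15 synapse operations/sec for human vision, ~10^29 DNA copy operations/sec for evolution) can be put in rough perspective with back-of-envelope arithmetic. The per-GPU throughput figure below is an illustrative assumption (and equating FLOPS with synapse or DNA operations is itself a large simplification), not a claim from this thread:

```python
# Back-of-envelope comparison of biological compute scales (figures from
# the thread) against commodity hardware.  The GPU throughput number is
# an assumed round figure for illustration, not a benchmark.

HUMAN_VISION_OPS = 1e15   # synapse operations/sec (thread's estimate [1])
EVOLUTION_OPS = 1e29      # DNA copy operations/sec (thread's estimate [2])
GPU_OPS = 1e13            # ~10 TFLOPS per GPU (assumed round figure)

def gpus_needed(target_ops, per_gpu_ops=GPU_OPS):
    """GPUs needed to match a target op rate, ignoring communication
    overhead and the FLOPS-vs-biological-operation mismatch."""
    return target_ops / per_gpu_ops

print(f"Vision-scale parallelism:    ~{gpus_needed(HUMAN_VISION_OPS):.0e} GPUs")
print(f"Evolution-scale parallelism: ~{gpus_needed(EVOLUTION_OPS):.0e} GPUs")
```

Under these assumptions, vision-scale compute is on the order of a modest GPU cluster, while evolution-scale compute is many orders of magnitude beyond any existing infrastructure -- which is one way of framing why distribution helps with the former and not the latter.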
Collaboratively building stuff w/ blocks in a toy world was posited there as a good environment for experimenting w/ integration of all the OpenCog cognitive methods, and in fact we are now playing around w/ a Hyperon prototype in a Minecraft environment -- totally in that same spirit. Putting together domain-focused intelligence achieved via narrower OpenCog applications (like bio-AI applications and the Grace humanoid eldercare bot) w/ more general-purpose commonsense knowledge obtained from more integrative-cognition-oriented domains like Minecraft also remains core to our plan.

I'm not really trying to fully answer your questions in this email, due to my own time constraints (as well as endless work, Ruiting and I just had a new baby 4 days ago, which is taking some time from my days!) ... but more trying to give relevant links for you or anyone on the list who's curious.

-- Ben

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: https://agi.topicbox.com/groups/agi/Ta5ed5d0d0e4de96d-Ma60e527818e16d312fdc8f0e
Delivery options: https://agi.topicbox.com/groups/agi/subscription
