Hi Jose,

On Tue, Apr 27, 2021 at 12:07 AM Jose Ignacio Rodriguez-Labra <[email protected]> wrote:
> Why is there so little support for AGI, anyways? I had always imagined
> governments racing to create a digital mind. Is it just so hard that
> organizations don't believe it is possible at the moment?

There is very little agreement as to what AGI is, or how to build it, or whether it is even safe to build, or whether there is something almost as good; add to that questions of profitability, and geopolitical factors.

Let me start with a slide from IBM:

[image: SIRE.png]

The above is a portion of the old IBM Watson that famously competed against Ken Jennings in the Jeopardy! match. I call this the "popsicle stick" architecture. Many, many people have envisioned and built something similar. I don't quite want to speak for Ben, but I think he was envisioning something like this in the 2004 timeframe, placing PLN below the bottom-most arrow (PLN would do "probabilistic logic" reasoning; in a similar timeframe, Pei Wang's NARS was a "non-axiomatic reasoning system").

I joined OpenCog circa 2008, and had built something like the above by 2010 or 2012 or so. One incarnation used a collection of "common-sense facts" from OpenCyc. What I built was nowhere near as good as what IBM built: I worked alone; IBM threw 100+ programmers at coding, polishing, debugging and testing. Nor was I the only one working on such things: Ben had a team creating a talking dog via Unity 3D graphics. A half-dozen universities created similar question-answering systems. Go to the Internet Archive circa 2010 and look at what MIT CSAIL was publishing at the time: you'll find similar charts. I believe that most universities with AI departments built something like the above, at some point in time.

It was all very exciting! It was exciting because it seemed like the right path to AGI. It was only a matter of engineering, or additional resources: just hook up all of these modules, and provide the correct reasoning system (PLN, NARS, maybe Doug Lenat's microtheories from Cyc -- that started in the 1990's, for chrissake!)
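As a rough illustration (not IBM's actual code -- every module name, knowledge-base entry, and bit of logic here is my own invention), a "popsicle stick" architecture is just a fixed chain of hand-built modules, each feeding the next:

```python
# Toy sketch of a "popsicle stick" question-answering pipeline.
# All names, facts and logic here are hypothetical illustrations,
# not IBM Watson's actual components.

def parse(question: str) -> list[str]:
    """Tokenize the question (stand-in for a full syntactic parser)."""
    return question.lower().strip("?").split()

def extract_entities(tokens: list[str]) -> list[str]:
    """Look up tokens in a hand-curated knowledge base."""
    kb = {"paris": "city", "france": "country"}  # curated by grad students...
    return [t for t in tokens if t in kb]

def reason(entities: list[str]) -> str:
    """Stand-in for the reasoning module (PLN, NARS, ...)."""
    facts = {("paris", "france"): "Paris is the capital of France."}
    return facts.get(tuple(entities), "I don't know.")

def answer(question: str) -> str:
    # The whole architecture: modules glued end-to-end.
    return reason(extract_entities(parse(question)))

print(answer("Is Paris the capital of France?"))
# -> Paris is the capital of France.
print(answer("Is the City of Light the capital of France?"))
# -> I don't know.  (Any phrasing outside the curated KB falls flat.)
```

The second query hints at the failure mode discussed below: the pipeline is only as good as the hand-curated knowledge wired into each stage.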
Ben called it the Manhattan Project of AI, envisioning similar levels of funding and activity!

That this won't work became clear to me circa 2012 or 2014. Some people figured it out much earlier. Others still think the above approach is a viable path to AGI.

I believe the reason the above won't work is that it depends on a brittle collection of knowledge bases curated by grad students. It is incredibly difficult to make such knowledge bases complete and bug-free. Early lessons were taught by "expert systems", as early as the 1980's. In the early days of Prolog, people built "medical experts" and "legal experts" and, hey, "petroleum bore-hole and geo-exploration experts" that captured expert knowledge, from hundreds of petroleum experts, about what it means when the temperature is 180 degrees Fahrenheit some 3000 meters down a borehole! Those systems worked OK-ish, but were mostly buggy and incomplete. Even though petroleum exploration is different from co-reference resolution, or semantic labelling, or upper ontologies, the failure reasons are the same in each case. That's the lesson.

There's some geopolitics: circa 2012(??) some of the Manhattan-Project-of-AI believers convinced Brussels bureaucrats to invest billions of euros into a university-industrial complex to build such a thing! Jobs for everyone! No European city left out! Money falls from the sky! I forget the name of the consortium; it was classic pork-barrel funding. As a project, it failed. As a money-spout to certain institutions, it was wildly successful.

Can fragile hand-built assemblages of expert systems and ontologies ever work? Well, if you try hard enough, sort of. The Ken Jennings Jeopardy! match proved that, as did the early DARPA Grand Challenge competitions for self-driving cars. It should be noted that the best FSD is now from Tesla. Although it's top secret, it appears to be made from neural nets (i.e. it is *not* a fragile, hand-built assemblage of expert systems). That this works well is not an accident.

So is that the road to AGI? Well, back in the day (circa 2012, again) there was an idea that we could build a supercomputer to simulate a mouse brain, neuron by neuron, and we would have a thinking mouse; make the supercomputer a little larger, and ta-dah, human-level intelligence. Yeah. Uhhh. Well, that did not work out. For starters, the wiring diagram of the mouse brain was not known. And now that we almost know it, we still don't know what it does. Every supercomputer simulation produces garbage until it is compared to lab experiment, and the lab experiments are a bottleneck on this path to AGI. Nothing special here: the same remarks apply to using supercomputers for weather simulation.

People like Bengio still think that better neural nets are all we need. Hinton disagrees, and suggests unifying with symbolic AI. (So do I.) The devil is in the details, and no one seems to agree on the details. What I'm building is almost surely not what Hinton envisions. Ben does not agree with what I'm building; he's building something rather different. Ask 50 other AGI researchers, and none agree with each other as to the correct approach. It doesn't help that AGI is strongly infested with cranks and crank writing. In physics and astronomy, with some formal education and some intellectual effort, you can mostly tell the cranks apart from the smart ones. It's a lot harder in AGI. How do you secure funding in this environment? How do you build consensus?

Then there is China. The slaughterbots: https://www.youtube.com/watch?v=9CO6M2HsoIA

There's algorithmic propaganda and Facebook ... I mean, we can use social media algorithms to encourage cult-like thinking: things like QAnon, flat Earth, etc. Cult thinking is hostile to humanity: Jim Jones and Jonestown in Guyana.

I talked about AI question-answering systems, above, and about self-driving cars.
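To make the expert-system brittleness mentioned earlier concrete, here is a toy rule base in the style of those 1980's systems (the rules, thresholds, and conclusions are invented for illustration; real geo-exploration systems had thousands of such hand-entered rules):

```python
# Toy forward-chaining "expert system" in the style of 1980's rule bases.
# All rules, thresholds and conclusions are hypothetical illustrations.

RULES = [
    # (condition over findings, conclusion)
    (lambda f: f["depth_m"] > 2500 and f["temp_F"] > 170,
     "high geothermal gradient: possible overpressured zone"),
    (lambda f: f["temp_F"] < 100,
     "normal gradient: continue drilling"),
]

def infer(findings: dict) -> list[str]:
    """Fire every rule whose condition matches the findings."""
    return [concl for cond, concl in RULES if cond(findings)]

# Works on the cases the experts anticipated...
print(infer({"depth_m": 3000, "temp_F": 180}))
# -> ['high geothermal gradient: possible overpressured zone']

# ...but any case outside the hand-entered rules yields nothing at all:
print(infer({"depth_m": 1200, "temp_F": 140}))
# -> []
```

The silent gap in the second call is the whole problem: completeness depends entirely on humans having anticipated and hand-entered every case, and the same gap appears whether the domain is boreholes or co-reference resolution.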
There are vast amounts of money flowing into "delusional AI" -- proto-AGI systems designed to convince consumers to buy products, or to vote for political parties. Sounds benign, but realize that the Great Firewall of China falls into this class. The systems that Facebook and YouTube run must be viewed as a kind of proto-AGI, running partly in silicon and partly in human brains, and they are every bit as liberating and dangerous as Jim Jones was. (Let's be clear: Jones was a hero, was a good guy ... he did the right things for the right reasons ... until he didn't. And then people died.)

This is the problem with AGI. It is a kind of nuclear bomb for the brain. Are you prepared to explode a bunch of human brains? Is this really the path you want to walk down?

--linas

-- 
You received this message because you are subscribed to the Google Groups "opencog" group.
To unsubscribe from this group and stop receiving emails from it, send an email to [email protected].
To view this discussion on the web visit https://groups.google.com/d/msgid/opencog/CAHrUA37rDmmkGShLiLrRsbLCbwnXoxyc6OwfRfbxv764VkyfNA%40mail.gmail.com.
