Fair enough question. I'm directly involved in designing pseudocode (systems models, policies, and logic in a computational format). Second, I was a 4GL developer and worked professionally in systems development and systems engineering for 22 years (my own R&D excluded). I currently employ a small development team to set up the dev environment for my practical case. I'm more hands-on than my time permits, but that means I'm learning a lot about Google and specialized plugins and how it all works together. As such, I've identified the need to develop a custom encryption system to protect the data. This is possible via a semantic application of my NLU. In this sense, semantic means something else.
The next step would be to start coding the actual NLU - already deployed for many years - as well as other, mature frameworks, which would form the layered reasoning/unreasoning backbone of the eventual system. All these frameworks are expressed as systems models in the NLU format. The first level covers collecting, translating, and normalizing to knowledge-maturity level 5 (my own hierarchies). The second level applies to evolutionary systems. For the first- and second-level work, I would employ the best people I (and hopefully my co-funders) could afford to start implementing the series of designs and algorithms, which already exist in design format. I might even team up with a university and its postgrad programs. Except for IP issues not yet having been resolved, this has been explored over a number of years and it seems highly feasible.

Robert Benjamin

________________________________
From: Stefan Reich via AGI <[email protected]>
Sent: Monday, 04 February 2019 6:48 PM
To: AGI
Subject: Re: [agi] The future of AGI

Thanks for your input, it's interesting. Are you involved in any code production? (Sorry if I should know already...)

Stefan

On Mon, 4 Feb 2019 at 16:58, Nanograte Knowledge Technologies <[email protected]> wrote:

Hi Stefan

I meant that there seems to be a popular view emerging, which nudges in the direction of rethinking the prevailing architectural approach towards enabling AGI. It further means I'm recognizing how the pattern might be shifting, and that I'm in support of such a view. In my opinion - and with respect to the incredible effort that has gone into such ventures - attempting to duplicate the human brain was never a sound-enough approach. Such a fallible organ. Modern-day, real-time language translators offer sufficient advancement in NLU, do they not? I like your suggestion about converging around image/audio recognition and learning logic as a single unit of cognition (perhaps).
The latest AI can accurately read lips at a distance. Furthermore, apps now perform facial recognition from among crowds and track those faces. Some AI apps monitor and analyze biometric signals (electromagnetic fields) around the body and other visible human characteristics as tell-tale indicators of inner intent and emotional states. This helps to identify potential criminals and deceivers. In addition, many computer games have shown a reactive-learning capability based on cause-effect scenarios.

And then you go and casually plonk in the mother lode - evolutionary algorithms. This is the exact point at which I restate the likely need of a radically new approach. If we cannot express computational evolution in terms of recombination and diversification, we may not yet have managed to cross our own intellectual abyss. As some suggested here (in my own words), we are inherently restricted by our own human-reasoning universe. Is constructive reasoning about an unreasoning universe the required level of super-positional madness designers should attain, or should we rather entice the machine to indulge itself accordingly? Maybe, then, a bit of both.

I think, first, we should ourselves evolve via recombination, not adaptation. Morphing, not mimicking. If researchers and designers voluntarily became AGI, perhaps we would understand it a little better. Sure, the world would probably reject us and call us nuts (as was done with Tesla), but it would still appropriate our output. Such a radical approach: to do our damnedest not to try and make any sense of it at all, relying purely on our collective ken and instinct. Some would call it an ancient-astronautical mindset, merely following in the footprints that were already laid down for those who would follow after and read the signs. Only time will tell. I'm enjoying the journey. The destination is not my concern. There is no more right or wrong.
Only to be correct in every instance of a moment presented to our manifestation (in the sense of a physical artifact with identity). In my lifetime, though, I'd love to synergize with fellow pilgrims. I envision a think tank of the quality Alexander Graham Bell founded, to which scientists, intellectuals, inventors, and passionate others flocked. I think this is how humankind might get closer to manifesting AGI.

Robert Benjamin

________________________________
From: Stefan Reich via AGI <[email protected]>
Sent: Monday, 04 February 2019 2:01 PM
To: AGI
Subject: Re: [agi] The future of AGI

> Many commentators here agreed (over time) how AGI development requires a
> radically different approach to all other computational endeavors to date.

Not sure what that means. A really good NLU will go a very long way, and then we'll have to find a new "magic learner" module that replaces neural networks, both for image/audio recognition and learning logic. I suggest evolutionary algorithms.

On Mon, 4 Feb 2019 at 05:45, Nanograte Knowledge Technologies <[email protected]> wrote:

Perhaps it's because, for its exponential complexity, AGI defies theoretical science. If no executable framework of computational intelligence exists, what's the use of being able to run at the speed of light? Many commentators here agreed (over time) that AGI development requires a radically different approach to all other computational endeavors to date. As evidenced, developing a feasible approach (in the sense of a platform) would require at least 10 years of R&D. In my opinion, that is correct. In my case it took more than 22 years - part-time. Towards an AGI prototype then, with 10 years' concentrated effort, perhaps another 5-7 years? Perhaps we should start pooling our research and resources with those who offer the best 10-year result to date? I'm beginning to think this would be the best way forward.
Imagine a safe, inclusive, collaborative environment where R&D parties could post real problems they needed solving, and tangible credit was given to the authors of such solutions. We're talking about sharing in the pot of gold at the end of the rainbow, of course. Except for those sticky-fingered big boys who do not play well with others at all. I'm quite certain they monitor this list, trying to farm it, yet never contributing one bit of usefulness to others. Those we should weed out from any "collaborative" setup at every opportunity. They are only in it for themselves, not for the industry or the benefit of the world. Yes, you know who you are! This is the extent of my professional opinion.

Robert Benjamin

________________________________
From: Linas Vepstas <[email protected]>
Sent: Monday, 04 February 2019 6:16 AM
To: AGI
Subject: Re: [agi] The future of AGI

I have no clue what Peter is actually thinking because he's coy and secretive. But I'm not pessimistic. I'm just perplexed why no one ever seems to try the obvious things. Or why I can never seem to explain obvious things to anyone and have them understand it. I am quite certain that one can do better than neural nets, and more easily too, and have explained exactly how more times than I can count, but my words are not connecting with anyone who understands them. So, whatever. Day at a time.

--linas

On Sun, Feb 3, 2019 at 5:28 PM <[email protected]> wrote:

I'm not that pessimistic at all. Our own AGI project has made steady progress over the past 17 years in spite of only spending about $10 million - about 150 man-years of focused effort. We've managed to successfully commercialize an early version of our proto-AGI engine in a company that now employs about 100 people: www.smartaction.com.
For the last 5 years, my full-time team of about 10 people has been working on the next-generation engine: www.AGIinnovations.com / www.Aigo.ai. We are now ready to commercialize this more advanced platform. Our focus has been limited to natural language comprehension/learning, question answering/inference, and conversation management.

I think that $100 million could go a long way towards a functional, demonstrable proto-AGI. It seems to me that DeepMind hasn't made good use of the $200 or $300 million spent so far - they lack a proper theory of intelligence. I don't know why Vicarious, the other well-funded AGI company, hasn't made better progress in perception/action - my guess, for the same reason... I think all of the theoretical calculations of processing power are wildly off the mark - we're not trying to reverse-engineer a bird - we just need to build a flying machine. My articles are here: https://medium.com/@petervoss/my-ai-articles-f154c5adfd37

Peter Voss

________________________________
From: Linas Vepstas <[email protected]>
Sent: Friday, February 1, 2019 10:26 PM
To: AGI <[email protected]>
Subject: Re: [agi] The future of AGI

Thanks Matt, very nice post! We're on the same wavelength, it seems.

-- Linas

On Thu, Jan 31, 2019 at 3:17 PM Matt Mahoney <[email protected]> wrote:

When I asked Linas Vepstas, one of the original developers of OpenCog led by Ben Goertzel, about its future, he responded with a blog post. He compared research in AGI to astronomy. Anyone can do amateur astronomy with a pair of binoculars. But to make important discoveries, you need expensive equipment like the Hubble telescope. https://blog.opencog.org/2019/01/27/the-status-of-agi-and-opencog/

OpenCog began 10 years ago in 2009 with high hopes of solving AGI, building on the lessons learned from the prior 12 years of experience with WebMind and Novamente.
At the time, its major components were DeStin, a neural vision system that could recognize handwritten digits; MOSES, an evolutionary learner that output simple programs to fit its training data; RelEx, a rule-based language model; and AtomSpace, a hypergraph-based knowledge representation for both structured knowledge and neural networks, intended to tie together the other components.

Initial progress was rapid. There were chatbots, virtual environments for training AI agents, and dabbling in robotics. The timeline in 2011 had OpenCog progressing through a series of developmental stages leading up to "full-on human level AGI" in 2019-2021, and consulting with the Singularity Institute for AI (now MIRI) on the safety and ethics of recursive self-improvement.

Of course, this did not happen. DeStin and MOSES never ran on hardware powerful enough to solve anything beyond toy problems. RelEx had all the usual problems of rule-based systems, like brittleness, parse ambiguity, and the lack of an effective learning mechanism from unstructured text. AtomSpace scaled poorly across distributed systems and was never integrated. There is no knowledge base. Investors and developers lost interest...

--
cassette tapes - analog TV - film cameras - you

--
Stefan Reich
BotCompany.de // Java-based operating systems

Artificial General Intelligence List: AGI
Permalink: https://agi.topicbox.com/groups/agi/Ta6fce6a7b640886a-M37bb5ec401e3504b1050e67c
Delivery options: https://agi.topicbox.com/groups/agi/subscription
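[Editor's note] Evolutionary algorithms recur throughout this thread - Stefan's proposed "magic learner", Robert's "recombination and diversification", and MOSES's program evolution. For readers unfamiliar with the idea, here is a minimal, hypothetical sketch of the core loop (selection, crossover as recombination, mutation as diversification) evolving a bit string toward a target. It is a toy illustration only, not MOSES or any system described above; all names and parameters are invented for the example.

```python
import random

def evolve(target, pop_size=100, generations=200, mutation_rate=0.01):
    """Toy genetic algorithm: evolve a bit string to match `target`.

    Fitness = number of matching bits. Each generation keeps the
    fitter half of the population (selection), then refills it by
    recombining two surviving parents at a random cut point
    (crossover) and occasionally flipping bits (mutation).
    """
    n = len(target)
    fitness = lambda ind: sum(a == b for a, b in zip(ind, target))
    pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]

    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == n:            # perfect match found: stop early
            break
        parents = pop[: pop_size // 2]      # selection: fitter half survives
        children = []
        while len(parents) + len(children) < pop_size:
            mom, dad = random.sample(parents, 2)
            cut = random.randrange(1, n)    # single-point crossover
            child = mom[:cut] + dad[cut:]
            child = [b ^ 1 if random.random() < mutation_rate else b
                     for b in child]        # mutation maintains diversity
            children.append(child)
        pop = parents + children

    return max(pop, key=fitness)

random.seed(0)
target = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1]
best = evolve(target)
print(sum(a == b for a, b in zip(best, target)), "of", len(target), "bits match")
```

The interesting design question, which the thread touches on, is what the "genome" encodes: here it is a flat bit string, whereas program-evolution systems like MOSES evolve executable program trees, which makes recombination far harder to do meaningfully.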
