Jim,

> But does your faith include the possibility that your specific ideas might
> be wrong?

> Yes, I accept the possibility that some or all of my ideas might be wrong.

> What we need is some way to evaluate the ideas that are being floated and
> tried. The only evaluation that we have is to look at the promotions and
> predictions that researchers are making and compare them with the results
> that they are able to produce. We can also look at how people's AGI models
> have improved over the years.
>
> Can you make a better AGI program than your Second Life dog? If you
> can't, even after all these years, then your prognostications are wrong
> (and maybe you should try another approach).

I'm more interested in expending effort improving the basic infrastructure and algorithms of OpenCog, in ways that I think support fundamental progress toward AGI, than in making shiny demos to try to convince skeptics that the approach is viable....

The "Second Life dog" demo you refer to showcased only one OpenCog learning algorithm (MOSES, used in a certain way) and wasn't intended as some sort of summation of the capability of the underlying AI approach...

I think OpenCog can work for AGI. I don't claim it's the only workable approach, and I would switch to another approach if I saw something that looked more promising to me. Once we can make a compelling demonstration that genuinely showcases synergetic interaction of various cognitive processes in OpenCog, we will do so; and that will indeed be quite satisfying. The fact that you, or other skeptics, think this is taking longer than it should isn't particularly important to me.... You understand neither the underlying concepts nor the practical obstacles...

Deep learning is getting a lot of attention lately, but I'm not yet convinced that current deep learning algorithms/architectures have significant potential beyond the domain of machine perception.
(The general concept of "deep learning" is surely universal; but there's no shortage of general concepts with universal applicability to cognition...)

-- Ben G

-------------------------------------------
AGI Archives: https://www.listbox.com/member/archive/303/=now
