> Mike said: I think the whole issue of where to start in AGI is hopelessly
> contentious, because there is no working AGI... A lot of people say
> "such and such approach won't work" but there is only arguments to
> support it -- no proof.

There may never be proofs that some method won't work. The claim that a method won't work is not interesting in itself, but the reasons behind the claim may be. The problem is that it can be very difficult to determine which reasons are good and which aren't. The suggestion that we should avoid the contentiousness is one thing; the suggestion that we should not try to discover good reasons why we think an approach is weak is another.

For example, I think my conceptual-typed conceptual structure model makes a lot of sense, and I would appreciate insightful comments on it. But if someone else does not understand what I am getting at, then his criticisms will probably not be very useful. I was reading a paper the other day in which the writer mentioned conceptual integration, but the only method of integration he mentioned was conceptual blending. So while he may have thought about things like conceptual-typed conceptual structures, the fact that he did not realize that they could be used as a model for conceptual integration points to a potential weakness in his overall theories about AGI.

I see conceptual typing as a fundamental factor in conceptual integration, and by detailing it explicitly (instead of leaving it as an unstated, implied application principle) we may be able to make some further advances in discrete reasoning. Advances in discrete reasoning would lead to advances in weighted reasoning as well.

Jim Bromer
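Jim's post does not spell out a mechanism, so purely as a hypothetical illustration: one reading of "conceptual typing as a fundamental factor of conceptual integration" is that each concept carries an explicit type label, and integration (blending) of two concepts is gated by a discrete compatibility test on those types, rather than being left implicit. All names and the compatibility relation below are invented for the sketch, not taken from the post.

```python
from dataclasses import dataclass

# Hypothetical sketch, not Jim Bromer's actual model: concepts carry an
# explicit conceptual type, and integration is only permitted when the
# two types are declared compatible.

@dataclass(frozen=True)
class Concept:
    name: str
    ctype: str            # explicit conceptual type, e.g. "agent", "role"
    features: frozenset   # feature labels attached to the concept

# Toy, symmetric compatibility relation between conceptual types
# (illustrative only).
COMPATIBLE = {
    ("agent", "role"),
    ("agent", "agent"),
    ("event", "event"),
}

def can_integrate(a: Concept, b: Concept) -> bool:
    """Discrete (yes/no) test: integration requires type compatibility."""
    return (a.ctype, b.ctype) in COMPATIBLE or (b.ctype, a.ctype) in COMPATIBLE

def integrate(a: Concept, b: Concept) -> Concept:
    """Blend two concepts only if their conceptual types permit it."""
    if not can_integrate(a, b):
        raise ValueError(f"types {a.ctype!r} and {b.ctype!r} do not integrate")
    return Concept(
        name=f"{a.name}+{b.name}",
        ctype=a.ctype,
        features=a.features | b.features,
    )

butcher = Concept("butcher", "agent", frozenset({"cuts", "sells"}))
surgeon = Concept("surgeon", "role", frozenset({"cuts", "heals"}))

blend = integrate(butcher, surgeon)
print(blend.name, sorted(blend.features))
```

The point of the sketch is only that making the typing explicit turns "can these two concepts be integrated?" into a checkable discrete question, which is the kind of explicit detailing the paragraph above argues for; a weighted version could replace the set membership test with a graded compatibility score.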
> Date: Thu, 25 Jul 2013 13:08:36 -0700
> Subject: Re: [agi] A Very Simple AGI Project
> From: [email protected]
> To: [email protected]
>
> I think the whole issue of where to start in AGI is hopelessly
> contentious, because there is no working AGI other than a brain. So,
> really, the neuroscience approach under that assumption is the most
> promising. But, even though approaches cut corners, it seems, by
> introducing a computable analog to the brain. A lot of people say
> "such and such approach won't work" but there is only arguments to
> support it -- no proof.
>
> On 7/25/13, Matt Mahoney <[email protected]> wrote:
> > 1. AIXI won't solve AGI because AIXI is not computable, and good
> > approximations are intractable beyond toy problems.
> >
> > 2. AIXI won't solve AGI because language, vision, robotics, and art
> > are not goal directed optimization processes. Reinforcement learning
> > is responsible for a very tiny fraction of what you know because it is
> > a low bandwidth signal.
> >
> > 3. AIXI is not a good model of reinforcement learning in humans and
> > other animals. A positive (negative) reinforcement signal makes you
> > more (less) likely to repeat behavior that preceded it. That is not
> > the same as rationally changing your behavior in a way that increases
> > expected reinforcement. If they were the same thing, then your desire
> > to use a drug would not depend on whether you have already tried it.
> >
> > 4. Not all reinforcement signals are the same. Nausea will make you
> > less likely to eat something you ate an hour earlier. Electric shock
> > would have a different effect.
> >
> > 5. AIXI apparently can't even play Pac Man very well.
> >
> > --
> > -- Matt Mahoney, [email protected]

-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424
Modify Your Subscription: https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-58d57657
Powered by Listbox: http://www.listbox.com
