"-- Capability or method X, if you could do it incredibly (i.e. unrealistically) well, would enable arbitrarily great general intelligence -- Simple versions of X, seem to lead to interesting "narrow AI" behaviors THEREFORE... -- By pursuing more and more complex versions of X, we can get high levels (e.g. human-level) of real-world general intelligence Unfortunately, things just don't work that way ;/ ..." ------------------ Because it is an overgeneralization, I mostly agree with that statement. However, I strongly disagree that this is a fallacy for all cases because there are always some situations where technological advances will touch off other advances.
Perhaps human- or animal-like intelligence will prove to be impossible, but computers may become true repositories of ideas that can be accessed through thoughtful, language-like exchanges. There may be an aspect of intelligence that, while it would not generate true higher intelligence, could significantly reduce the complexity of finding specific kinds of information. The web has been an example of that, but it has not yet gotten to the point of understanding basic language. A major advance in logic would make it much easier to use language to find knowledge on the web, and I think it would advance AGI as well. It is just a matter of time now.

Jim Bromer

On Thu, Jun 21, 2012 at 10:07 AM, Jim Bromer <[email protected]> wrote:
> On Tue, Jun 19, 2012 at 9:31 PM, Ben Goertzel <[email protected]> wrote:
>>
>> There's a general fallacy that misleads many AGI people, of the following
>> form ...
>>
>> "
>> -- Capability or method X, if you could do it incredibly (i.e.
>> unrealistically) well, would enable arbitrarily great general intelligence
>> -- Simple versions of X seem to lead to interesting "narrow AI" behaviors
>> THEREFORE...
>> -- By pursuing more and more complex versions of X, we can get high
>> levels (e.g. human-level) of real-world general intelligence
>> "
>>
>> In the case we're discussing here X = Prediction ..
>>
>> In other cases, X = logical reasoning, or pattern recognition, or
>> automated program learning, or simulation, etc. etc.
>>
>> Unfortunately, things just don't work that way ;/ ...
>>
>
> Now that I have thought about this a little more, I have to say that I
> disagree with it - except that it is an over-generalization.
> We all know that there will be improvements in computer technology, and
> some of them will have an impact on AGI. No one in this group has
> completely ruled out the possibility, for example, that there may be a
> breakthrough in parallelism.
> We all see that the history of parallelism was extremely disappointing,
> and most of us are more than cautious about making predictions based on a
> breakthrough in parallelism at this time.
>
> I have come to the conclusion that a general breakthrough is going to have
> to be something that has broad consequences for computation. As a result I
> have a stronger belief in the potential of a breakthrough in Logical
> Satisfiability than I have ever had before.
>
> While it is true that things don't usually work that way, they do work
> that way every once in a while. We just cannot see how they are going to
> take shape before they do.
>
> Jim
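P.S. For anyone on the list unfamiliar with the term: Logical Satisfiability (SAT) asks whether some true/false assignment to variables makes a propositional formula true. Below is a minimal sketch of the classic DPLL backtracking procedure in Python -- my own illustration, not anything from this thread, and deliberately naive (real solvers add unit propagation, clause learning, watched literals, and branching heuristics):

```python
# Minimal DPLL-style SAT solver sketch (illustrative only).
# A formula is a list of clauses; a clause is a list of nonzero ints,
# where a positive int is a variable and a negative int its negation.

def dpll(clauses, assignment=None):
    if assignment is None:
        assignment = {}
    # Simplify under the current assignment:
    # drop satisfied clauses, strip assigned (falsified) literals.
    simplified = []
    for clause in clauses:
        if any(assignment.get(abs(l)) == (l > 0) for l in clause):
            continue  # clause already satisfied
        remaining = [l for l in clause if abs(l) not in assignment]
        if not remaining:
            return None  # empty clause: conflict, backtrack
        simplified.append(remaining)
    if not simplified:
        return assignment  # every clause satisfied
    # Branch on the first unassigned variable, trying True then False.
    var = abs(simplified[0][0])
    for value in (True, False):
        result = dpll(simplified, {**assignment, var: value})
        if result is not None:
            return result
    return None

# (x1 OR x2) AND (NOT x1 OR x2) AND (NOT x2 OR x3)
model = dpll([[1, 2], [-1, 2], [-2, 3]])
```

Deciding satisfiability is NP-complete, which is exactly why a general SAT breakthrough would be "something that has broad consequences for computation."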
-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-c97d2393
Modify Your Subscription: https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-2484a968
Powered by Listbox: http://www.listbox.com
