> The problems start in strong AI, however, when you try to reconcile
> things like "beginning, cause, one vs. many,
> sameness/difference/likeness, complete vs. incomplete, possible,
> potential..." and so on. Considering any one of these in isolation is
> fine; one can usually make sense of it. The problem is that all of
> these concepts are concurrently taken up in something in the world.
> How do you even begin to work all of that together? If the approach
> is emergence, nobody does; they just place hope that a clever learning
> scheme can determine those things. It might work -- I'm not knocking
> evolutionary learning algorithms. It might not, though, and then it's
> back to head-scratching on these long-standing philosophy issues, like
> the potential vs. the actual, appearance in relation to existence...
> on and on like that.
This is another example of the relativistic nature of concepts. (Concepts
are not just relative to other concepts; they are relativistic.) The only
way they can be reasonably resolved is by using them relative to some
basis. The problem with this explanation is that, typically, you are going
to be aware of more than one basis. But the bases are themselves
relativistic, so a method that makes sense for a kind of case may not be
reasonable for some particular examples of that category.

I believe this is why AGI has been so slow to develop. (I reject the idea
that there has never been an AGI program; I just think that the AGI in AI
programs has been too weak to get much traction.) It seems obvious that if
the bases for all higher thought are themselves relativistic, then the
foundations of thought will be permeated with flaws and won't be able to
withstand much building.

I think it is a misinterpretation to say that thought requires a ground of
sensory input. The evidence for this is that the grounding issue has been
widely known since the 1980s, and yet progress in AGI has (in my opinion)
been slower than Moore's Law since then. The structure of thought has to
be fluid, but it has to be a slow fluid. My conjecture is that almost all
reasoning, even something that has been habituated, requires some
imagination.

Jim Bromer

On Thu, Aug 8, 2013 at 12:08 PM, Mike Archbold <[email protected]> wrote:
> On 8/8/13, John Rose <[email protected]> wrote:
>>> -----Original Message-----
>>> From: Matt Mahoney [mailto:[email protected]]
>>>
>>> But he was banned from the list. If you saw his rambling posts about
>>> alien brain implants and other paranoia, you would understand.
>>>
>>
>> That was Eric who was prevented from posting -- not really banned,
>> merely because he couldn’t control his spontaneous outbursts.
>> Part of the idea with not banning people comes from the fact that
>> sometimes it is difficult to distinguish between genius and insanity.
>> They overlap, or one appears as the other, and they mix together
>> sometimes. Or insanity arises from the social destruction of genius.
>>
>> There are some people on this list who say the same things over and
>> over. So? Possibly they just need to be inspired to say something
>> different, and demeaning them doesn’t do the job. I could say every
>> day in different permutations that 0=0, but then when the day comes
>> that I finally realize that 0=∞, or nothing equals everything, then it
>> all starts making sense. Making that leap turns your world upside
>> down, thus right-siding you up.
>>
>> BTW, I really am exploring that world of 0=∞ :) The relationship
>> between big and small. Something has to tie it all together, you know,
>> the micro-universe and the macro-universe. Even if it is just a
>> situational world-modeling self-adjustment. Approximations can be
>> discrete. And we build on those approximations, using them as
>> “patterns” for other approximations even if those approximations are
>> inappropriate. The mind does that: attempting to mimic and re-apply
>> correlational patterns that exist in the logical physical universe
>> from an approximational standpoint. There is a man in the moon, so I
>> talk to him sometimes… and he answers, in a different voice. People
>> say it’s irrational… but am I acting thus, or are they exhibiting a
>> misperception of audaciousness from a viewpoint of restricted
>> intellectual liberty?
>>
>> John
>
> Part of the problem with metaphysics is that people regard it as just
> so much bullshit. It's unfortunate that guys like Hegel and Heidegger
> really did write a lot of that.
> Ridiculous statements like "So in its relation to Essence, Being has
> lost its being, has become Illusory" can be found on the Science of
> Logic Wikipedia page (at one point I tried to make it understandable,
> then gave it up to the "real" Hegelians; for some entertainment, see
> the "Talk" page under Science of Logic).
>>
>> -------------------------------------------
>> AGI
>> Archives: https://www.listbox.com/member/archive/303/=now
>> RSS Feed: https://www.listbox.com/member/archive/rss/303/11943661-d9279dae
>> Modify Your Subscription: https://www.listbox.com/member/?&
>> Powered by Listbox: http://www.listbox.com
