on 12/3/02 3:35 pm, Dan Minette at [EMAIL PROTECTED] wrote:

> ----- Original Message -----
> From: "Joshua Bell" <[EMAIL PROTECTED]>
> To: "Brin-L" <[EMAIL PROTECTED]>
> Sent: Tuesday, March 12, 2002 1:37 AM
> Subject: Re: SCOUTED: Science Meets Spirituality, and Wireless Nanotech VR
>
>> "Dan Minette" <[EMAIL PROTECTED]> wrote:
>>>
>>> Third, humans appear to be able to do things that have been proven to
>>> not be capable of being reduced to algorithms.
>>
>> Can you give an example that can't be handled by the following algorithm:
>>
>> - Make a handful of well-educated guesses a second
>> - Filter using a massively parallel pattern matcher with billions of
>>   computational elements trained with the task for 18+ years
>> - Repeat a few times. Say "Hmmmm...." while doing this.
>> - After some stability threshold is reached, say "Aha!"
>> - Try out the result. If it's wrong, say "Ooops!" and color your
>>   cheeks red.
>
> The self-reference stuff I had talked about has been proven to be
> impossible to do algorithmically.
It hasn't. Every undergraduate in computer science learns (I assume) about
Turing Machines, the Halting Problem, Gödel, Skolem, predicate calculus,
and so forth. Anyone who does an advanced degree in AI knows all this
stuff backwards.

> Dennett suggested a workaround for this. He suggests that our mind goes
> through millions and billions of possible theories in order to find a
> theory that handles the self-reference case (this is a 3-year-old
> memory, so my exact understanding of his argument may be a bit fuzzy.)
>
> If he is right, we don't make a handful of guesses per second, we make
> at least thousands. And we evaluate them in a very peculiar manner. We
> do a very good job of getting to the results, but have absolutely no
> ability to access the information in the intermediate steps.
>
> Further, as someone who has evaluated new ideas as part of his job, I
> find the idea that I'm actually going through thousands per second,
> when it seems to take me hours or days to properly evaluate a small
> subset of those ideas, rather implausible. I process algorithms
> subconsciously at a kHz rate, and then have to painstakingly rework the
> steps that I've already done.

Armchair psychology has been proven totally inadequate. Cognitive
psychology experiments have shown that what actually happens in our minds
is nothing like what we think happens in our minds.

>> I think you're underestimating how often humans get things wrong.
>
> Actually, that's not really the problem. Self-reference is literally
> impossible to handle algorithmically. Algorithms are limited because
> they are an expression of a formal system, and any formal system is
> limited. Human thought does not show this limitation. In order to
> reduce human thought to algorithms, one needs to show a manner in which
> such a limited system can appear to transcend those limits.
> It is clear that any attempt to do so requires the assumption of a lot
> of hidden horsepower that operates in an extremely peculiar manner.
>
>> That just sounds silly. A billion-element parallel pattern matcher is
>> great at extracting the wheat from the chaff. It's even great at
>> finding wheat where there isn't any (e.g. the Face on Mars).
>
> I'm clearly not communicating my thoughts. Let's go back to chess. It's
> as though we had a tree-search algorithm that utilized analysis at ply
> 4 to find a branch to go down to 8 plies, without considering the fact
> that at ply 5 there is a catastrophic result that should have
> terminated the search. Even the simplest computer algorithms don't make
> this mistake, and even the best grandmasters do. This is strong
> evidence, IMHO, that they go about things in different manners.

Yes, chess programs and human players do go about things differently. Did
anyone claim otherwise? What is your point?

> In addition, when asked, chess players do not talk about doing this
> kind of search. Instead there is a general weighing of positions and a
> painstakingly slow analysis of selected decision trees. I know how long
> it takes me to walk through a fairly simple decision tree compared to
> the speed of a computer. I can beat a computer because of an ability
> that I have which is not available to the computer.

I can beat a chess program that plays less well than me, but I can't beat
the best chess program. Probably only two or three people in the world
could have managed that last year. On the other hand, no human being can
beat the best backgammon program, and the best Scrabble program is
generally reckoned to be of World Champion quality if it were allowed to
compete. [1] And on the other other hand, the best computer bridge
program is still not very good... but it could still beat me.

> It's not that a mistake was made. Algorithm-based systems, as well as
> humans, certainly come up with the wrong results.
> I've written plenty of code that has come up with erroneous results, as
> well as having made plenty of mistakes myself. What is critical is the
> pattern of the mistakes. Human thought has a pattern of insight and
> error that is at odds with what would be expected from an
> algorithm-based system.
>
> Dan M.

[1] From memory.

--
William T Goodall
[EMAIL PROTECTED]
http://www.wtgab.demon.co.uk
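
The impossibility result the two posters are circling is Turing's Halting
Problem, and its proof is a short self-reference construction. A minimal
sketch in Python (the `halts` oracle is the thing the theorem shows cannot
exist, so here it only raises; `diagonal` is the hypothetical program that
defeats any candidate oracle — both names are illustrative):

```python
# Sketch of Turing's diagonal argument. `halts` is a hypothetical total
# decider for termination; the theorem says no such algorithm exists,
# so this stand-in simply raises instead of answering.

def halts(program, argument):
    """Assumed oracle: True iff program(argument) eventually terminates."""
    raise NotImplementedError("no total halting decider can exist")

def diagonal(program):
    # Do the opposite of whatever the oracle predicts for a program
    # applied to its own source.
    if halts(program, program):
        while True:   # oracle said "halts", so loop forever
            pass
    return            # oracle said "loops forever", so halt

# Running diagonal on itself forces a contradiction either way:
#   halts(diagonal, diagonal) == True  -> diagonal(diagonal) never halts
#   halts(diagonal, diagonal) == False -> diagonal(diagonal) halts
```

Note that the theorem limits any formal system that tries to decide
termination; whether human reasoning somehow escapes that limit is
precisely the point under dispute in the thread.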

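Dan's chess hypothetical — a search that commits to an 8-ply line while
overlooking a catastrophe at ply 5 — can be made concrete. Even the most
naive depth-limited minimax scores every intermediate ply on the way down,
so a losing reply propagates up and eliminates its branch automatically. A
toy sketch (two plies rather than eight, with a tree shape and leaf scores
invented purely for illustration):

```python
# Minimal depth-limited minimax over a toy game tree. The tree, scores,
# and depth are hypothetical, not taken from any real chess engine.

TREE = {
    "root": ["a", "b"],
    "a": ["a1", "a2"],
    "b": ["b1", "b2"],
}
# Leaf evaluations from the root player's point of view (invented):
# a2 is the "catastrophic result" lurking one ply deeper in branch "a".
SCORES = {"a1": 3, "a2": -1000, "b1": 1, "b2": 2}

def minimax(node, maximizing):
    children = TREE.get(node)
    if not children:  # leaf: static evaluation
        return SCORES[node]
    # Every intermediate node is scored before the search goes deeper,
    # so the -1000 reply in branch "a" is never overlooked.
    results = [minimax(c, not maximizing) for c in children]
    return max(results) if maximizing else min(results)

def best_move(node):
    # The opponent moves next at each child, so they minimize.
    return max(TREE[node], key=lambda c: minimax(c, maximizing=False))

print(best_move("root"))  # -> "b": branch "a" allows the -1000 reply
```

The mechanism generalizes: a deeper search only ever extends lines whose
intermediate plies have already been evaluated, which is why the failure
mode Dan describes is one computers specifically do not exhibit.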