On Mar 19, 2009, at 1:47 AM, Ellis Wilson wrote:


Peter St. John wrote:
This article at Wired is about Go playing computers:
http://blog.wired.com/wiredscience/2009/03/gobrain.html
Includes a pic of a 24 node cluster at Santa Cruz, and a YouTube video of a
famous game set to music :-)

My beef, which started with Ken Thompson saying he was disappointed by how little we learned about human cognition from chess computers, is about
statements like this:

"People hoped that if we had a strong Go program, it would teach us how our
minds work. But that's not the case," said Bob Hearn <http://www.dartmouth.edu/%7Erah/>,
a Dartmouth College artificial intelligence programmer. "We just threw brute
force at a program we thought required intellect."

And yet the article points out:

[our brain is an]...efficiently configured biological processor — sporting
10^15 neural connections, capable of 10^16 calculations per second

Our brains do brute-force massively distributed computing. We just aren't
conscious of most of it.

Peter

Peter,

I would agree with Ken that it is a disappointing and ultimately
fruitless process to attempt to learn about human cognition by building a program to emulate some very specific activity of human beings. This line of thought, in its purest sense, is reductionism. While I do find artificial intelligence very interesting, I believe at some point or another we will have to recognize that the brain (and our subsequent
existence) is something more than the result of the perceivable atoms
therein. No viewpoint is completely objective as long as we are finite
human beings and occupy a place in the world we perceive.


Well, to avoid having too much of a discussion on artificial intelligence
on a beowulf mailing list, as subjects in this range seem to pop up
a lot, and being one of the world's leading experts here, let me drop my 2 cents.

First of all, it is my impression that too much gets written about Artificial Intelligence by people who base themselves purely upon the literature and, especially, their wildest dreams.

A big problem is that 99.99% of all publications, being just the wet dreams of professors, researchers and PhD theses, mean the field hardly progresses in the public
sector.

This is what you typically see: if someone writes about how to approach
the human brain in software without EVER having written a single line of code,
it is of course wishful thinking that the field will EVER progress.

Seriously, 99.99% of all research, sometimes funded bigtime, is like that.

Some of them, usually students, already go a step further and write software.

Yet again, they are usually stubborn as hell and just keep redoing each other's
experiments in a zillion different incarnations.

With respect to 'learning' in chess, for roughly 99.99% of all attempts you can
prove by deduction that they basically optimize a simple piece-square
table. Really, it is that bad. There are 100k+ research efforts, and nearly all of them are in fact a very INEFFICIENT form of parameter optimization.
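
To make concrete what such a piece-square table is, here is a minimal sketch (the bonus values are invented for illustration, not taken from any real engine):

    /* One static bonus per square for one piece type (here a knight),
     * in centipawns; a1 is index 0, h8 is index 63. The "learning" in most
     * of those papers boils down to adjusting these numbers from game
     * results, i.e. plain parameter optimization. */
    #include <stdio.h>

    static const int knight_psq[64] = {
        -50,-40,-30,-30,-30,-30,-40,-50,
        -40,-20,  0,  5,  5,  0,-20,-40,
        -30,  5, 10, 15, 15, 10,  5,-30,
        -30,  0, 15, 20, 20, 15,  0,-30,
        -30,  5, 15, 20, 20, 15,  5,-30,
        -30,  0, 10, 15, 15, 10,  0,-30,
        -40,-20,  0,  0,  0,  0,-20,-40,
        -50,-40,-30,-30,-30,-30,-40,-50
    };

    int main(void) {
        int e4 = 3 * 8 + 4;   /* rank 4, file e */
        printf("knight on e4 scores %d\n", knight_psq[e4]);
        return 0;
    }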

This is why the field gets dominated by low-level coders and,
especially in the past few years, mathematically creative types.

They manage to write something using KISS principles, but most of them have commercial and achievement goals in mind; very few have purely scientific goals. The reason is simple: just like for physicists, it takes some 10 years to build expertise in the field, even for the best, and by the time they understand what happens there, they already have a job or work for some
sort of organisation, and publishing papers is not their intention.

Then what is there?

Well, basically the research that has been done, the vast majority of it with respect to how the brain works,
has been done in big secrecy.

They do progress from the biological viewpoint: "how does the brain work".
Though the majority doesn't say a word there, of course, in the few "public words" I exchange with researchers who examine brains (quite literally, using MRI-type scanners and
such) they definitely claim a lot of progress.

What gets said there makes sense.

A much larger field is of course the military-type experiments. Now I want to say right away that I find these experiments disgusting, revolting and in the vast majority of
cases totally unnecessary.

The experiments involve monkeys. They all die. Experiments on brains are the most
disgusting form of animal experiments.

I was in Ramat-Gan a few years ago, at the Bar-Ilan research center.

Greenpeace had some demonstrations there for a while, but after some time they left.
They should not have...

As for what type of experiments the many military organisations do, probably even on humans,
well, you can safely assume they get done.

Sometimes you see a Discovery Channel episode from the 50s. If they did it back then, they surely
do it now. Brain-manipulation attempts, I'd call them.

That's quite different from what I and many others try to do in software, which is APPROACHING the human
brain.

Now, from the 70s and 80s there is of course already some primitive software around which just gets expanded for the medical world: giving a diagnosis based upon guidelines programmed into a database.

Yet that is just a collection of small knowledge rules.
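
Just to show the flavour of those rule systems, a toy sketch (the rules are invented for illustration, not real medical guidelines):

    /* 70s/80s style "expert system": many small, independent if-then rules
     * checked against the input, nothing like one big interconnected
     * evaluation function. */
    #include <stdio.h>

    struct symptoms { int fever; int cough; int rash; };

    static const char *diagnose(const struct symptoms *s) {
        if (s->fever && s->cough) return "rule 1 fired: suspect flu";
        if (s->fever && s->rash)  return "rule 2 fired: suspect measles";
        return "no rule fired: refer to a physician";
    }

    int main(void) {
        struct symptoms s = { 1, 1, 0 };
        printf("%s\n", diagnose(&s));
        return 0;
    }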

The big difference with a chess evaluation function (not to be confused with the search part, which generates moves in an algorithmic and nowadays massively parallel manner) is that the evaluation function is one big complex whole.

In my case I really wrote the world's largest and most complex evaluation function for the game of chess. As the years progress you get more insight into what is important there. For specific things you then reach quite interesting conclusions about the phenomena involved.

Such as that certain forms of knowledge need to get corrected by quite complex functions. In complex manners an indexation takes place to extract information patterns from a position. Using these patterns, with n-th order logic you then
create new "overview" functions, just like a human does.

That is highly effective.
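
To illustrate the idea with a tiny hypothetical sketch (the pattern names and weights are made up for illustration, not from my actual evaluation):

    /* First-order detectors fill a small struct of patterns around the king;
     * the second-order "overview" term then combines them the way a human
     * would summarize the situation, including interaction between patterns. */
    #include <stdio.h>

    struct king_patterns { int attackers_near_king; int shelter_holes; int open_file_at_king; };

    static int king_danger(const struct king_patterns *p) {
        int danger = 10 * p->attackers_near_king + 15 * p->shelter_holes;
        if (p->open_file_at_king && p->attackers_near_king >= 2)
            danger += 40;   /* patterns reinforce each other */
        return danger;
    }

    int main(void) {
        struct king_patterns p = { 3, 2, 1 };
        printf("king danger = %d\n", king_danger(&p));
        return 0;
    }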

Yet there is another aspect where mankind seems to be really superior. It is exactly what Jaap van den Herik has been claiming for years already: the relative weight you give to patterns.

Being a titled chess player, of course I first "estimate" myself what the weight probably should be for my program. You toy around some and draw
a conclusion.

These parameter optimizations are not linear programming optimizations. They are quite complex. In fact no good algorithms exist yet to get it done, as the number of parameters is too large (far over 10000). You can't independently pick a value for a parameter, pin it down, and then try to tune the other parameters. It does get done that way, but it has never
resulted in really objective, great parameter tuning.
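
For clarity, a rough sketch of that one-parameter-at-a-time manner (play_match() here is a hypothetical stand-in for playing enough games to measure strength; the toy "optimum" is invented purely so the example runs):

    /* Naive coordinate-wise tuning: change one parameter, keep the change
     * if the measured match score improves, otherwise revert. Interactions
     * between parameters are ignored, which is exactly the weakness
     * described above. */
    #include <stdio.h>

    #define NPARAM 4   /* real engines: far over 10000 */

    /* hypothetical stand-in: in reality this plays many games vs a reference */
    static double play_match(const int params[NPARAM]) {
        double score = 0.0;
        for (int i = 0; i < NPARAM; i++)
            score -= (params[i] - 10) * (params[i] - 10);   /* toy optimum at 10 */
        return score;
    }

    static void tune_one_at_a_time(int params[NPARAM], int step) {
        double best = play_match(params);
        for (int i = 0; i < NPARAM; i++) {
            for (int sign = -1; sign <= 1; sign += 2) {
                params[i] += sign * step;
                double s = play_match(params);
                if (s > best) best = s;                  /* keep the change */
                else          params[i] -= sign * step;  /* revert it */
            }
        }
    }

    int main(void) {
        int params[NPARAM] = { 7, 12, 9, 11 };
        for (int pass = 0; pass < 10; pass++)
            tune_one_at_a_time(params, 1);
        for (int i = 0; i < NPARAM; i++)
            printf("param %d = %d\n", i, params[i]);
        return 0;
    }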

A lot of progress gets booked here. Though of course the pure tuning in itself is less relevant than the bug fixing that results from it, scientifically a lot of experiments are possible there. I'm busy carrying out a plan there which will probably take years. The actual optimization I intend to run on some tens of thousands of cores altogether. I found some ways to increase tuning speed. Note this is a totally scientific project; there is no direct 'benefit' from it in terms of Elo rating for the chess engine. So where scientifically it will maybe look great, most computer chess programmers will ignore it.

For the parameter tuning world it is very relevant, however, as only in the past few years has some progress been booked here.

Yet the combination of the two things (a lot of chess domain knowledge and accurate tuning) makes the chess engines a lot more human. Especially the tuning, as the majority really doesn't have a very big
evaluation function.

Yet one shouldn't overlook the fact that sometimes very big lookup tables can replace a lot of knowledge rules. On today's processors those tables are faster than evaluating all that code; in short, you can precalculate them.

A lookup, usually from L1 and sometimes from L2, costs on average at most a few cycles; in most cases the full latency even gets prefetched and hidden fully automatically. That's a great property of today's processors.

So it looks bloody fast, though the theory behind it is not so simple. That seemingly simple lookup table inside code that is searching millions of chess positions a second in reality replaces a lot of code.
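
A simple example of the precalculation idea (a generic engine helper, not any particular engine's table): compute a small table once at startup, so that a chain of branches becomes a single, usually L1-resident, load:

    /* Chebyshev distance between two squares, precomputed into a 4 KB table
     * that easily stays in L1; inside the search the per-node cost is then
     * just one array load instead of the arithmetic below. */
    #include <stdio.h>
    #include <stdlib.h>

    static unsigned char dist[64][64];

    static void init_tables(void) {
        for (int a = 0; a < 64; a++)
            for (int b = 0; b < 64; b++) {
                int df = abs((a & 7) - (b & 7));    /* file difference */
                int dr = abs((a >> 3) - (b >> 3));  /* rank difference */
                dist[a][b] = (unsigned char)(df > dr ? df : dr);
            }
    }

    int main(void) {
        init_tables();
        printf("distance a1-h8 = %d\n", dist[0][63]);   /* prints 7 */
        return 0;
    }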

Of course, combined with the deep searches they do nowadays, it plays really strongly. But that was already the case at the end of the 90s.

Yet there is still a lot possible there. The future will tell.

In today's computer chess, when not cut'n'pasting existing code but making your 'own' chess engine, it is really complicated to reach the top quickly, as it takes 10 years to build a strong engine if you haven't made one before. So very few start that challenge
nowadays, which of course hurts the field bigtime.

In the 80s and 90s it was still possible, without being algorithmically strong, to have some sort of impact in the field just by being a good low-level
programmer (say, in assembler). That's simply impossible nowadays.

In the long run I intend to publish something there, but I'm not 100% sure yet. A lot have tried climbing the Olympus and very few managed to reach the top.

Let's put it this way: I see possibilities, which I have already discussed with others (and they find it a great idea), to do parameter tuning in a generic, human-like manner, yet all these extreme experimental attempts are only possible thanks
to today's huge crunching power.

Don't forget that.

In the 90s most attempts typically had so little calculation power that the number of experiments done was so insignificant that even the world's most brilliant tuning algorithm would have failed, as statistical certainty was never
established that a new tuning of the parameters actually worked.
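
A back-of-the-envelope sketch of why (assuming independent games and a 50% baseline score; compile with -lm):

    /* How precisely can you measure engine strength from N games?
     * Roughly: 2 standard errors of the mean score is a ~95% bound.
     * The numbers ignore that draws reduce the variance somewhat. */
    #include <math.h>
    #include <stdio.h>

    int main(void) {
        double p = 0.5;   /* baseline score against the reference opponent */
        for (int games = 1; games <= 100000; games *= 10) {
            double se = sqrt(p * (1.0 - p) / games);
            printf("%6d games: score changes below about +/- %.3f are noise\n",
                   games, 2.0 * se);
        }
        return 0;
    }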

Most "automatic learning" attempts trying to approach the human manner of learning, like TD learning (temporal difference learning), already take action after a single game. That is of course way too few data points. Then the question is how human something is
if it basically randomly flips a few parameters.

That is IMHO a very weak manner of tuning which, expressed in big O, is exponential in the number of parameters;
that is, if you're feeling lucky enough to win the lotto anyway one day in your life.
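
For reference, the core of such a TD update, in a textbook-style sketch with a linear evaluation (the features and numbers are invented for illustration):

    /* TD(0)-style update: nudge the evaluation weights so that the score of
     * the current position moves toward the score of the successor position.
     * Doing this from a single game is where the "too few data points"
     * objection above applies. */
    #include <stdio.h>

    #define NWEIGHTS 4   /* real evaluations: thousands of terms */

    static double evaluate(const double w[NWEIGHTS], const double f[NWEIGHTS]) {
        double s = 0.0;
        for (int i = 0; i < NWEIGHTS; i++) s += w[i] * f[i];
        return s;
    }

    static void td_update(double w[NWEIGHTS], const double f[NWEIGHTS],
                          double eval_now, double eval_next, double alpha) {
        double td_error = eval_next - eval_now;
        for (int i = 0; i < NWEIGHTS; i++)
            w[i] += alpha * td_error * f[i];
    }

    int main(void) {
        double w[NWEIGHTS] = { 1.0, 0.5, -0.3, 0.2 };
        double f[NWEIGHTS] = { 1.0, 2.0,  0.0, 1.0 };   /* features of a position */
        double now  = evaluate(w, f);
        double next = now + 0.5;   /* pretend the successor position looked better */
        td_update(w, f, now, next, 0.01);
        printf("eval before %.3f, after update %.3f\n", now, evaluate(w, f));
        return 0;
    }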

Yet the bottom line is that today's huge number-crunching solutions, such as GPGPU with all its many cores, give huge possibilities here to really do a lot of experiments. In that case it is worth trying to tune thousands of parameters, which some years ago was just totally impossible.

There was just too little crunching power for artificial intelligence to let clever algorithms have an impact.

To quote Frans Morsch (who won the 1995 world championship, beating for example Deep Blue): "One simply doesn't have the system time for sophisticated algorithmic tricks in the last few plies,
    when each node gets far under 1000 clock cycles of the cpu".
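
(For scale: on a roughly 3 GHz core, under 1000 cycles per node means upwards of 3 million nodes per second per core, so near the leaves there is simply no cycle budget left for anything clever.)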

So while it is true that most money incentives have gone out of computer chess, the clever guys are still there and/or help others brainstorm about what to try and how to progress, and the increase in computing power gives rise to new algorithmic and tuning possibilities to approach
the human mind in manners that are really simple.

Because after all, let's be realistic: if computers can't play the game of chess perfectly using sophisticated chess knowledge (so I do not mean having a database with 10^43 possibilities), how is an artificially intelligent program EVER going to drive a car from your home to a destination
in a safe manner?

In that, computer chess will be a major contributor, simply because what happens there is public. Maybe researchers are not very talkative, but you CAN speak to them if you want to, and you DO see the progress they book.

Vincent

To say that all simulation of some portion of our thoughts is fruitless is incorrect, as I think some insight into the mind is possible through
codifying thought.  However, there exist far too many catch-22s and
logical fallacies in using the mind to understand the mind to ever fully
understand how it works from a scientific point of view.  Philosophy
will at some point have to step in to explain the (possibly huge) gaps
between even the future's fastest simulated brains and our own.

In a book by Thomas Nagel, "The View from Nowhere", I believe he puts it most poignantly by stating, "Eventually, I believe, current attempts to understand the mind by analogy with man-made computers that can perform
superbly some of the same external tasks as conscious beings will be
recognized as a gigantic waste of time".  This was written over twenty
years ago.  Science has given us tools to make our lives wonderfully
easier and has thereby proven to be useful, but it answers none of the
multitude of mind-body dilemmas, does not validate the reality of our
perception, and neither it nor any other reductionist theory will provide insight into the much more complex areas of cognition.  This is especially true
with the discovery of quantum mechanics, which makes the observer's
subjective perception absolutely necessary.  Full objectivity (or, in
this application, full codification of human thought) just isn't possible.

I wish it weren't so, for by study I am a computer scientist and by
hobby a philosopher; however, at present I remain skeptical.

Ellis Wilson








_______________________________________________
Beowulf mailing list, [email protected]
To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf


