Re: [Computer-go] semeai example of winning rate

2011-01-18 Thread Ingo Althöfer
--- Original Message ---
> Date: Mon, 17 Jan 2011 21:46:41 +0800
> From: Aja ajahu...@gmail.com
>
> Designing 19*19 problems was mainly because the board is big
> enough to hold more examples. But yes, I think 9*9 should be enough.


Side question: How difficult would it be to design
a program that generates such bot-difficult semeais
(or at least building blocks for such) automatically?

This program would not need to be a good go program...

Ingo


[Computer-go] semeai example of winning rate

2011-01-18 Thread Jacques Basaldúa

Ingo Althöfer wrote:

> How difficult would it be to design a program
> that generates such bot-difficult semeais
> (or at least building blocks for such)
> automatically?

Computer-generated problems are not new. Thomas
Wolf, the author of Go Tools, automatically generated
a set of tsumego problems that were very interesting
and played an important part in the development of
Go Tools.

He also shared the problem set with other
researchers.

I used it around 2005, before I switched to MCTS.
At the time I planned to implement something with
strong local searches linked by deterministic logic
that resolved redundancies (territories counted
more than once by different cluster searches).

The idea was abandoned a long time ago.
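
Roughly, the redundancy resolution I had in mind looked something like
this. A minimal sketch; the names and data layout are made up for
illustration, not from Go Tools or any real program:

def combine_local_results(local_results):
    """local_results: (color, set_of_points) territory claims, one
    per local cluster search."""
    black, white = set(), set()
    for color, points in local_results:
        if color == 'black':
            black |= points   # set union, so overlaps count only once
        else:
            white |= points
    disputed = black & white  # claimed by both sides: count for neither
    return len(black - disputed), len(white - disputed)

# Two overlapping black searches both claim (3, 3); it counts once.
print(combine_local_results([
    ('black', {(3, 3), (3, 4)}),
    ('black', {(3, 3), (4, 4)}),
    ('white', {(16, 16)}),
]))  # (3, 1)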


Jacques.







Re: [Computer-go] semeai example of winning rate

2011-01-18 Thread Brian Sheppard
> How difficult would it be to design a program that generates such
> bot-difficult semeais

We already have one: it is called CGOS. :-)

Seriously, whenever two bots play they will drift into a situation that they
disagree on. The loser is guaranteed to learn something.

Pebbles logs two positions from every loss:

1) The position in which it forecast the highest win rate for the
selected move.
	2) The last time it forecast a win rate > 50% for the selected move.

Sometimes these positions occur in the opening. In such cases, Pebbles
left its opening book, found that it was already losing, and never
changed its opinion. Those cases are not very interesting.

But often Pebbles thinks it will win 80% or more, yet still loses.

By playing overnight, Pebbles can create many more examples than I can
analyze in one day.
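
In rough code, the logging scheme looks something like this (the
record format is made up for illustration; Pebbles' real code is
different):

def positions_to_review(game_records, lost):
    """game_records: (position, forecast_win_rate) pairs, one per
    selected move, in game order. Returns the two positions worth
    reviewing after a loss."""
    if not lost or not game_records:
        return None
    # 1) The position with the highest forecast win rate.
    most_confident = max(game_records, key=lambda rec: rec[1])
    # 2) The last position with a forecast win rate > 50%.
    still_winning = None
    for record in game_records:
        if record[1] > 0.5:
            still_winning = record
    return most_confident, still_winning

records = [('move10.sgf', 0.62), ('move30.sgf', 0.81), ('move50.sgf', 0.44)]
print(positions_to_review(records, lost=True))
# (('move30.sgf', 0.81), ('move30.sgf', 0.81))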

Brian



Re: [Computer-go] semeai example of winning rate

2011-01-18 Thread Stefan Kaitschick



> Side question: How difficult would it be to design
> a program that generates such bot-difficult semeais
> (or at least building blocks for such) automatically?


Only when bots can solve the essential positions do you need random 
problems to make sure that the answers are robust.

Right now, there are a lot of basic positions that bots can't solve.
And when programmers try to add code that does, the question is: does 
this make the program stronger?
The main problem right now is that a lot of smart code actually 
weakens the program.
It would be nice if programmers reported more on their failures, so 
that the same promising dead end isn't entered multiple times.
But that's not exactly human nature. The most we hear is "I already 
tried that, it doesn't work" in response to suggestions.
The effect is that on the most difficult problems every programmer is 
a pioneer.


Stefan


Re: [Computer-go] semeai example of winning rate

2011-01-18 Thread Brian Sheppard
> It would be nice if programmers reported more on their failures, so
> that the same promising dead end isn't entered multiple times.

It would be nice to hear about more experiments.

There are three classes of researchers, listed from best to worst:

1) Undergraduates. These are the best researchers, because they must write a
paper regardless of whether they succeed or fail. So you always find out
what happened.

2) Other academics. Generally, these researchers only publish when they have
a success. If we are lucky, we might find out about something that failed,
but only if there is a success to report.

3) Non-academics. We will never tell our secrets. An academic will have to
discover them independently, and then we may insinuate that we tested that
idea years ago, but discarded it because we found something better.

:-)

I don't see much downside in trying ideas multiple times. In this field, the
devil is in the details. Let people try multiple things, in many ways,
from many angles. They might find something worthwhile. At a minimum they
will learn something.

Brian




Re: [Computer-go] semeai example of winning rate

2011-01-18 Thread Kahn Jonas

On Tue, 18 Jan 2011, Stefan Kaitschick wrote:




>> Side question: How difficult would it be to design
>> a program that generates such bot-difficult semeais
>> (or at least building blocks for such) automatically?
>
> Only when bots can solve the essential positions do you need random
> problems to make sure that the answers are robust.
>
> Right now, there are a lot of basic positions that bots can't solve.
> And when programmers try to add code that does, the question is: does
> this make the program stronger?
> The main problem right now is that a lot of smart code actually
> weakens the program.


You know, in this case and for life-and-death in general, maybe
programmers should be cautious about rejecting changes just because
they make the program (slightly) weaker.
Maybe they already are cautious, but what I mean is this:
we are essentially trying to patch a bot weakness here. The usual way
to test strength is by playing other bots. Since they share the same
weakness, and in any case won't steer the game in that direction,
patching that weakness does not make a huge difference against them,
and can easily be overshadowed by any collateral change, such as being
slightly slower. On the other hand, it might still make the program
stronger against humans.
I know that opponent non-transitivity is usually exaggerated, but in
this case, I would not be surprised if there were something like that.
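
To put rough numbers on it: detecting a small Elo change in bot-vs-bot
play takes a surprising number of games. A back-of-the-envelope
sketch, valid near a 50% win rate (the arithmetic is my own
illustration, not from anyone's test setup):

import math

# How many games to resolve a small strength change at the given
# number of standard errors? Valid near a 50% win rate.
def games_needed(elo_diff, sigmas=2.0):
    # Expected win rate for a small Elo advantage.
    p = 1.0 / (1.0 + 10.0 ** (-elo_diff / 400.0))
    delta = p - 0.5
    # Standard error of a measured win rate is about 0.5 / sqrt(n).
    return math.ceil((sigmas * 0.5 / delta) ** 2)

print(games_needed(5))   # ~19,000 games to resolve a 5 Elo change
print(games_needed(20))  # ~1,200 games for a 20 Elo change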

Jonas


Re: [Computer-go] semeai example of winning rate

2011-01-18 Thread Brian Sheppard
> You know, in this case and for life-and-death in general, maybe
> programmers should be cautious about rejecting changes just because
> they make the program (slightly) weaker.

What you say makes sense. But for life-and-death the real problem is that we
don't know of an effective procedure that integrates with the overall
framework.

Programmers realize this, and they want to address the problem in a
comprehensive way. So when a thorny problem comes up, say semeais with
ko or approach moves, we realize that we are not implementing a
general solution when we patch that special case. In that situation,
we implement the special case only if it seems to make the program
stronger.

It's a reasonable approach while we await the next breakthrough.

Keep in mind that general methods get far too much credit for success. The
real strength of our programs isn't exactly UCT or even MCTS. It is more
that those frameworks allow for domain-specific adaptations while preserving
asymptotic convergence properties under remarkably general circumstances.
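
For a concrete illustration (not any particular program's code), a
"progressive bias" term added to UCB1 selection injects domain
knowledge early but decays with visits, so the asymptotic behaviour is
plain UCT; the constants and the prior here are assumed values:

import math
from types import SimpleNamespace

UCT_C = 1.0        # exploration constant (assumed value)
BIAS_WEIGHT = 2.0  # weight of the domain prior (assumed value)

def uct_value(child, parent_visits):
    # child has .wins, .visits, and .prior (a heuristic score in
    # [0, 1], e.g. from Go move patterns).
    if child.visits == 0:
        return float('inf')  # always try unvisited children first
    exploitation = child.wins / child.visits
    exploration = UCT_C * math.sqrt(math.log(parent_visits) / child.visits)
    # Decays as 1/(visits + 1): plain UCB1 in the limit, so the usual
    # convergence argument still applies.
    progressive_bias = BIAS_WEIGHT * child.prior / (child.visits + 1)
    return exploitation + exploration + progressive_bias

def select_child(node):
    return max(node.children, key=lambda c: uct_value(c, node.visits))

# With few visits the prior dominates; with many, the statistics do.
a = SimpleNamespace(wins=6, visits=10, prior=0.9)
b = SimpleNamespace(wins=7, visits=10, prior=0.1)
root = SimpleNamespace(children=[a, b], visits=20)
print(select_child(root) is a)  # True: the prior outweighs the 10% win-rate gap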

The same is true of chess & alpha-beta, and neural networks &
backgammon, and double-dummy & bridge, and linear programming & poker,
and libraries & checkers. In every case it is the clever adaptation of
the framework to the domain that extracts full value.

Brian




Re: [Computer-go] semeai example of winning rate

2011-01-18 Thread Stuart A. Yeates
On Wed, Jan 19, 2011 at 6:41 AM, Brian Sheppard sheppar...@aol.com wrote:
> Seriously, whenever two bots play they will drift into a situation that they
> disagree on. The loser is guaranteed to learn something.
>
> Pebbles logs two positions from every loss:
>
>        1) The position in which it forecast the highest win rate for the
> selected move.
>        2) The last time it forecast a win rate > 50% for the selected move.
>
> Sometimes these positions occur in the opening. In such cases, Pebbles
> left its opening book, found that it was already losing, and never
> changed its opinion. Those cases are not very interesting.
>
> But often Pebbles thinks it will win 80% or more, yet still loses.

That's probably a great way to find positions that are interesting for
the development of Pebbles. Variations that would generate more
generally interesting positions include:

* Analysing dan-level games using those same metrics
* Using multiple bots to analyse dan-level games and keeping the
positions that several bots find interesting (this also works with
wildly different versions of the same bot); a sketch follows below
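
Something like the following, where the Engine wrapper and its
winrate() method are hypothetical, not a real API:

def interesting_positions(positions, engines, winner,
                          threshold=0.8, min_agreeing=2):
    """positions: (position, side_to_move) pairs from a finished
    dan-level game; winner: the side that actually won. Returns the
    positions where at least min_agreeing engines gave the eventual
    loser a win rate above threshold, i.e. positions that fooled
    several bots at once."""
    flagged = []
    for position, side in positions:
        if side == winner:
            continue  # only positions where the eventual loser moved
        confident = sum(1 for engine in engines
                        if engine.winrate(position, side) > threshold)
        if confident >= min_agreeing:
            flagged.append(position)
    return flagged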

cheers
stuart