Well... I think I have hunches just as you do. And I think we both express our hunches on here.
Diminishing returns is not really my theory... I am just looking at alternative ways of viewing the data points. Say you have two computers and both of them focus only on solving local situations. At first they both play at about the same level. Then you scale one of them in some way and mark the trend. We now know how one of them scales when solving local situations. If you then put that computer against a person, the person is no longer thinking only about the local situation. They are thinking about strategy; they might make certain moves specifically to probe the computer's goals. You have a whole different situation.

The little green men reference looks like a dangerous use of Occam's Razor. In that case, someone says there are little green men under his house and shows me a bunch of data points to suggest it. I don't have any data points against it, so my point is automatically invalidated? It seems there is a little more detail to Occam's Razor than that.

When I saw "proven to be scalable", my first thought was that it was proven to be practically scalable in some way: enough to beat a human, or perhaps to solve the game. You even mentioned that God would draw with the computer. That kind of scalability seems related to solving the game, not to beating a human. I saw no evidence of a practical scalability toward solving the game, and this seems like the kind of problem that could be intractable. Practicality is an issue.

Now the topic has moved to "scalable enough to beat a human", and I disagree with the interpretation of the data. We are both interpreting the same data; your interpretation doesn't count as a theory, while mine gets reduced to one that has no data behind it. Diminishing returns was just an example of something that could be a roadblock. I was questioning how the observed scaling necessarily carries over to play against humans. It seems more data is needed from MC programs vs. humans to make a rigorous theory of scalability.

So far, the only scalability that seems proven is a case for solving the game, not for beating humans. There is some point in between that would most likely, in my opinion, lead to humans being beaten: some amount of calculation short of actually solving the game. But the shape of that curve is something I am unsure of. It doesn't seem that unreasonable to question whether there is a practical scalability.
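To make the "shape of the curve" worry concrete, here is a toy model, purely illustrative and with made-up numbers, of one shape diminishing returns could take: each doubling of playouts buys a fraction of the Elo the previous doubling bought. Under that assumption the total gain is capped, so extrapolating the early trend overstates how strong the program ever gets.

```python
# Toy model (illustrative only, not measured data): suppose each
# doubling of playouts yields `decay` times the Elo of the previous
# doubling. The total gain then converges to a finite ceiling.

def elo_gain(doublings, first_gain=120.0, decay=0.8):
    """Total Elo over the base version after `doublings` doublings,
    if each doubling yields `decay` times the previous gain."""
    return sum(first_gain * decay**i for i in range(doublings))

def win_prob(elo_diff):
    """Standard logistic Elo model: P(win) given an Elo advantage."""
    return 1.0 / (1.0 + 10.0 ** (-elo_diff / 400.0))

for d in (1, 4, 8, 16):
    gap = elo_gain(d)
    print(f"{d:2d} doublings: +{gap:6.1f} Elo, "
          f"P(beat base) = {win_prob(gap):.3f}")
```

With these hypothetical numbers the gain can never exceed 120/(1-0.8) = 600 Elo no matter how much hardware is added, which is the point: whether the real curve looks like this or keeps climbing is exactly the open empirical question.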
_______________________________________________
computer-go mailing list
[email protected]
http://www.computer-go.org/mailman/listinfo/computer-go/
