Re: [computer-go] Presentation of my personal project: evolution of an artificial go player through random mutation and natural selection

2009-02-25 Thread Ernest Galbrun
Dave,
Thank you for taking the time to give me this advice. I will give you my
opinion about your last point first, because I think it is the most
important one, stressing what I really wish to achieve with this
project. I am perfectly aware that I am very naive and bold in my approach
to tackling this problem. That's the point. My project is not really a
computer-go project; it doesn't have much to do with AI. It's about natural
selection only, and the game of Go is a pretext. As such, my intent is not to
express my art in evolving an artificial neural network; it is to give my
players the same opportunities that our DNA ancestors had a few billion
years ago.

With that in mind, here is how I feel about the other points you mentioned:
- Testing with smaller boards would indeed be wise, and I am running a single
9x9 ecosystem (OpenGo doesn't support smaller boards). The problem is that I
only have limited resources, and I think it is much more fun to evolve real Go
players than ersatz practice-level contestants. Besides, if my approach ever
gives any results, a lot of meta-evolution needs to occur first
(evolution of the efficiency of evolution itself), and this will probably take
just as much time on a small board, time that would be totally lost when/if I
try to scale up.
- I will certainly try to test it against other computer-go players. I have to
implement a GTP interface for my players; this is on my TODO list.
- There is, in theory, a way for any internal function to be duplicated and
used elsewhere through the definition of genes in my neural network. The
players will have to find out how to use this. And yes, I intend the players
to discover the simplest Go principles by themselves; I think this is what
evolution is best at (you know, actually evolving).

Ernest Galbrun

On Tue, Feb 24, 2009 at 20:52, dhillism...@netscape.net wrote:

 Ernest,
 Fun stuff! I have a co-evolved neural net that used to play on KGS as
 “Antbot9x9”. I use the same net in the progressive widening part of my MCTS
 engine. I would guess that many people experiment along these lines but they
 rarely report results.

 Here are some suggestions that might be relevant:
 - If you test your approach on smaller board sizes you can get
 results orders of magnitude faster. 7x7 would be a good starting size. (If
 you use 5x5, make sure your super-ko handling is rock solid first.)
 - Take the strongest net at every generation and benchmark it
 against one or more computer opponents to measure progress over time.
 Suitable computer opponents would be light playouts (random), heavy playouts
 (a bit tougher), Wally (there’s nothing quite like getting trounced by the
 infamous Wally to goad one into a new burst of creativity), and Gnugo.  When
 you have a net you like, it can play against other bots online at CGOS and
 get a ranking.
 - Use a hierarchical architecture, or weight sharing or something
 to let your GA learn general principles that apply everywhere on the board.
 A self-atari move on one spot is going to be roughly as bad as on any other
 spot. You probably don’t want your GA to have to learn not to move into
 self-atari independently for every space on the board.
 - Use the “mercy rule” to end games early when one color has an
 overwhelming majority of the stones on the board.
 - Feed the net some simple features. To play well, it will have to
 be able to tell if a move would be self-atari, a rescue from atari, a
 capturing move,… Unless you think it might not need these after all, do you
 really want to wait for the net to learn things that are trivial to
 pre-calculate? You are probably reluctant to feed it any features at all. As
 a motivating exercise, you could try having the GA evolve a net to calculate
 one of those features directly.
 - GA application papers tend to convey the sense that the author
 threw a problem over a wall and the GA caught it and solved it for him.
 Really, there’s a lot of art to it and a lot of interactivity. Fortunately,
 that’s the fun part.
 - Dave Hillis



 -Original Message-
 From: Ernest Galbrun ernest.galb...@gmail.com
 To: computer-go computer-go@computer-go.org
 Sent: Tue, 24 Feb 2009 7:28 am
 Subject: Re: [computer-go] Presentation of my personal project: evolution of
 an artificial go player through random mutation and natural selection

   I read a paper a couple years ago about a genetic algorithm to evolve
  a neural network for Go playing (SANE I think it was called?).  The
 network would output a value from 0 to 1 for each board location, and
 the location that had the highest output value was played as the next
 move.  I had an idea that the outputs could be sorted to get the X
 best moves, and that that set of moves could be used to direct a
 minimax or Monte Carlo search.  I haven't had the chance to prototype
 this, but I think it would be an interesting and possibly effective
 way to combine neural networks with the current Go algorithms.

Re: [computer-go] Presentation of my personal project: evolution of an artificial go player through random mutation and natural selection

2009-02-24 Thread Colin Kern
On Tue, Feb 24, 2009 at 2:40 AM, Daniel Burgos dbur...@gmail.com wrote:
 Nice project!

 I worked on this some time ago. I did not use neural networks but patterns
 with feedback.

 The problem with feedback is that it is difficult to know when it reaches
 its final state. Usually you get oscillations and that state never happens.

 I tried to solve that using timeouts, but what I got were random players.

 How are you going to solve this?

 Dani

 2009/2/13 Ernest Galbrun ernest.galb...@gmail.com

 On Fri, Feb 13, 2009 at 22:42, Mark Boon tesujisoftw...@gmail.com wrote:

 Just curious, did you ever read 'On Intelligence' by Jeff Hawkins? After
 reading that I got rather sold on the idea that if you're ever going to
 attempt making a program with neural nets that behaves intelligently then it
 needs to have a lot of feed-back links. Not just the standard feed-forward
 type of networks. Some other good ideas in that book too IMO.
 Mark

 Oh, thank you for the advice; this is the kind of thing that can very
 smoothly be implemented in the program. I will surely a/ buy and read this
 book and b/ introduce some feedback interaction into my neural network.
 I have not introduced it so far because it seemed an inefficient use of
 computational power.


I read a paper a couple years ago about a genetic algorithm to evolve
a neural network for Go playing (SANE I think it was called?).  The
network would output a value from 0 to 1 for each board location, and
the location that had the highest output value was played as the next
move.  I had an idea that the outputs could be sorted to get the X
best moves, and that that set of moves could be used to direct a
minimax or Monte Carlo search.  I haven't had the chance to prototype
this, but I think it would be an interesting and possibly effective
way to combine neural networks with the current Go algorithms.
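
A rough sketch of that move-pruning idea in Python (illustrative only; the
names net_outputs, legal_moves, and evaluate_by_search are assumptions, not
any existing engine's API):

    def top_k_moves(net_outputs, legal_moves, k=5):
        """Rank legal board points by the network's output and keep the k best.

        net_outputs : maps a board point to the network's score in [0, 1]
        legal_moves : the points that are legal in the current position
        """
        ranked = sorted(legal_moves, key=lambda p: net_outputs[p], reverse=True)
        return ranked[:k]

    # A minimax or Monte Carlo search would then branch only on these candidates:
    #   for move in top_k_moves(outputs, legal, k=10):
    #       value = evaluate_by_search(position_after(move))  # hypothetical search call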

Colin
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] Presentation of my personal project: evolution of an artificial go player through random mutation and natural selection

2009-02-24 Thread Ernest Galbrun
 Nice project!

 I worked on this some time ago. I did not use neural networks but patterns
 with feedback.

 The problem with feedback is that it is difficult to know when it reaches
 its final state. Usually you get oscillations and that state never happens.

 I tried to solve that using timeouts, but what I got were random players.

 How are you going to solve this?


My philosophy with this project is that I don't try to solve anything myself;
I only provide the ecosystem with the means to tackle the problem, but I don't
have a clue how I would solve it.

The way I allow for feedback loops is that I process the network once for
every move, without clearing the internal states of the neurons, which can
generate partial feedback.
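
A toy illustration of that single pass per move (a sketch only, not the actual
implementation): the neuron states are kept from one move to the next, so
feedback arrives with a one-move delay.

    import math

    class RecurrentNet:
        def __init__(self, weights, thresholds):
            self.w = weights                      # w[i][j] = weight from neuron j to neuron i
            self.theta = thresholds               # one threshold per neuron
            self.state = [0.0] * len(thresholds)  # persists between moves

        def step(self, external_inputs):
            """One update per move; last move's activations feed back as extra input."""
            prev = self.state
            new_state = []
            for i, row in enumerate(self.w):
                total = external_inputs[i] + sum(w * prev[j] for j, w in enumerate(row))
                new_state.append(1.0 / (1.0 + math.exp(-(total - self.theta[i]))))
            self.state = new_state                # deliberately not cleared
            return new_state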

I have made a new version available, quite light and simple to run;
please feel free to try it if you are interested:
http://goia-hephaestos.blogspot.com/

Ernest.
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

Re: [computer-go] Presentation of my personal project: evolution of an artificial go player through random mutation and natural selection

2009-02-24 Thread Ernest Galbrun

 I read a paper a couple years ago about a genetic algorithm to evolve
 a neural network for Go playing (SANE I think it was called?).  The
 network would output a value from 0 to 1 for each board location, and
 the location that had the highest output value was played as the next
 move.  I had an idea that the outputs could be sorted to get the X
 best moves, and that that set of moves could be used to direct a
 minimax or Monte Carlo search.  I haven't had the chance to prototype
 this, but I think it would be an interesting and possibly effective
 way to combine neural networks with the current Go algorithms.

 Colin


This was a great achievement indeed, but although it might seem dumb, my
approach here is to be as ignorant as I can (not very difficult given my
knowledge of AI) of subtle and clever ways to make my players evolve. The
SANE algorithm has proven to be very powerful, but it needs some assumptions
to be true. However likely those assumptions are to be true, I prefer to make
none and watch a truly random evolution pattern.

Ernest.
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

Re: [computer-go] Presentation of my personal project: evolution of an artificial go player through random mutation and natural selection

2009-02-24 Thread Vlad Dumitrescu
On Tue, Feb 24, 2009 at 08:40, Daniel Burgos dbur...@gmail.com wrote:
 I worked on this some time ago. I did not use neural networks but patterns
 with feedback.

 The problem with feedback is that it is difficult to know when it reaches
 its final state. Usually you get oscillations and that state never happens.

 I tried to solve that using timeouts, but what I got were random players.

One way to handle this is to give the feedback loops a
significant attenuation, so that the system will eventually settle into
an equilibrium if the inputs don't change anymore. As with anything
else, YMMV.
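
A small numerical sketch of that attenuation (illustrative only; alpha and the
weights are arbitrary): if alpha times the feedback gain stays below 1, the
update is a contraction and the state settles once the inputs stop changing.

    def settle(inputs, feedback_weights, alpha=0.5, max_iters=100, tol=1e-6):
        """Iterate an attenuated linear feedback layer until it stops moving."""
        state = [0.0] * len(inputs)
        for _ in range(max_iters):
            new = [x + alpha * sum(w * s for w, s in zip(row, state))
                   for x, row in zip(inputs, feedback_weights)]
            if max(abs(a - b) for a, b in zip(new, state)) < tol:
                return new
            state = new
        return state  # with too little attenuation it may still oscillate

    # Example: two units feeding back into each other.
    print(settle([1.0, 0.5], [[0.0, 0.4], [0.3, 0.0]]))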

regards,
Vlad
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] Presentation of my personal project: evolution of an artificial go player through random mutation and natural selection

2009-02-24 Thread dhillismail
Ernest,

Fun stuff! I have a co-evolved neural net that used to play on KGS as 
“Antbot9x9”. I use the same net in the progressive widening part of my MCTS 
engine. I would guess that many people experiment along these lines but they 
rarely report results.

 

Here are some suggestions that might be relevant:

- If you test your approach on smaller board sizes you can get results 
orders of magnitude faster. 7x7 would be a good starting size. (If you use 5x5, 
make sure your super-ko handling is rock solid first.)

- Take the strongest net at every generation and benchmark it against 
one or more computer opponents to measure progress over time. Suitable computer 
opponents would be light playouts (random), heavy playouts (a bit tougher), 
Wally (there’s nothing quite like getting trounced by the infamous Wally to 
goad one into a new burst of creativity), and Gnugo.  When you have a net you 
like, it can play against other bots online at CGOS and get a ranking.

- Use a hierarchical architecture, or weight sharing or something to
let your GA learn general principles that apply everywhere on the board. A
self-atari move on one spot is going to be roughly as bad as on any other spot.
You probably don’t want your GA to have to learn not to move into self-atari
independently for every space on the board. (See the first sketch below.)

- Use the “mercy rule” to end games early when one color has an overwhelming
majority of the stones on the board. (See the second sketch below.)

- Feed the net some simple features. To play well, it will have to be
able to tell if a move would be self-atari, a rescue from atari, a capturing
move,… Unless you think it might not need these after all, do you really want
to wait for the net to learn things that are trivial to pre-calculate? You are
probably reluctant to feed it any features at all. As a motivating exercise,
you could try having the GA evolve a net to calculate one of those features
directly. (See the third sketch below.)

- GA application papers tend to convey the sense that the author threw 
a problem over a wall and the GA caught it and solved it for him. Really, 
there’s a lot of art to it and a lot of interactivity. Fortunately, that’s the 
fun part.
- Dave Hillis
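
On the weight-sharing suggestion, a bare-bones illustration of one shared 3x3
pattern evaluated at every board point (a sketch under an assumed encoding of
1 = own stone, -1 = opponent, 0 = empty; this is not Antbot's actual
architecture):

    def shared_pattern_scores(board, weights, bias):
        """Apply one shared 3x3 weight pattern at every point of the board.

        Because the same weights are reused everywhere, whatever the GA learns
        about (say) self-atari at one point carries over to every other point.
        """
        size = len(board)
        scores = [[bias] * size for _ in range(size)]
        for y in range(size):
            for x in range(size):
                for dy in (-1, 0, 1):
                    for dx in (-1, 0, 1):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < size and 0 <= nx < size:
                            scores[y][x] += weights[dy + 1][dx + 1] * board[ny][nx]
        return scores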
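
The mercy rule can be as simple as a stone-count check (the 40% threshold
here is arbitrary):

    def mercy_rule_triggered(board, threshold=0.4):
        """End the game early when one color holds an overwhelming share of stones.

        board: 2-D array with 1 for black stones, -1 for white, 0 for empty.
        """
        points = sum(len(row) for row in board)
        difference = sum(cell for row in board for cell in row)  # black minus white
        return abs(difference) > threshold * points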
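
And a sketch of pre-computing two of the simple features (capture and
self-atari) with a plain flood fill; it deliberately ignores that a capture
frees up liberties, so treat it as illustrative only:

    def group_and_liberties(board, y, x):
        """Flood-fill the group containing (y, x); return its stones and liberties."""
        color, size = board[y][x], len(board)
        stones, libs, todo = set(), set(), [(y, x)]
        while todo:
            cy, cx = todo.pop()
            if (cy, cx) in stones:
                continue
            stones.add((cy, cx))
            for ny, nx in ((cy - 1, cx), (cy + 1, cx), (cy, cx - 1), (cy, cx + 1)):
                if 0 <= ny < size and 0 <= nx < size:
                    if board[ny][nx] == 0:
                        libs.add((ny, nx))
                    elif board[ny][nx] == color:
                        todo.append((ny, nx))
        return stones, libs

    def move_features(board, y, x, color):
        """Capture and self-atari features for playing `color` (+1 or -1) at (y, x)."""
        size = len(board)
        board[y][x] = color                       # play the move tentatively
        captured = set()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < size and 0 <= nx < size and board[ny][nx] == -color:
                stones, libs = group_and_liberties(board, ny, nx)
                if not libs:
                    captured |= stones            # this neighbor group would die
        _, own_libs = group_and_liberties(board, y, x)
        board[y][x] = 0                           # undo the tentative move
        return {"captures": len(captured),
                "self_atari": not captured and len(own_libs) == 1}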


-Original Message-
From: Ernest Galbrun ernest.galb...@gmail.com
To: computer-go computer-go@computer-go.org
Sent: Tue, 24 Feb 2009 7:28 am
Subject: Re: [computer-go] Presentation of my personal project: evolution of
an artificial go player through random mutation and natural selection

I read a paper a couple years ago about a genetic algorithm to evolve
a neural network for Go playing (SANE I think it was called?).  The
network would output a value from 0 to 1 for each board location, and
the location that had the highest output value was played as the next
move.  I had an idea that the outputs could be sorted to get the X
best moves, and that that set of moves could be used to direct a
minimax or Monte Carlo search.  I haven't had the chance to prototype
this, but I think it would be an interesting and possibly effective
way to combine neural networks with the current Go algorithms.

Colin

 

This was a great achievement indeed, but although it might seem dumb, my
approach here is to be as ignorant as I can (not very difficult given my
knowledge of AI) of subtle and clever ways to make my players evolve. The SANE
algorithm has proven to be very powerful, but it needs some assumptions to be
true. However likely those assumptions are to be true, I prefer to make none
and watch a truly random evolution pattern.




Ernest.

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

Re: [computer-go] Presentation of my personal project: evolution of an artificial go player through random mutation and natural selection

2009-02-23 Thread Daniel Burgos
Nice project!

I worked on this some time ago. I did not use neural networks but patterns
with feedback.

The problem with feedback is that it is difficult to know when it reaches
its final state. Usually you get oscillations and that state never happens.

I tried to solve that using timeouts, but what I got were random players.

How are you going to solve this?

Dani

2009/2/13 Ernest Galbrun ernest.galb...@gmail.com


 On Fri, Feb 13, 2009 at 22:42, Mark Boon tesujisoftw...@gmail.com wrote:

 Just curious, did you ever read 'On Intelligence' by Jeff Hawkins? After
 reading that I got rather sold on the idea that if you're ever going to
 attempt making a program with neural nets that behaves intelligently then it
 needs to have a lot of feed-back links. Not just the standard feed-forward
 type of networks. Some other good ideas in that book too IMO.
 Mark


 Oh, thank you for the advice; this is the kind of thing that can very
 smoothly be implemented in the program. I will surely a/ buy and read this
 book and b/ introduce some feedback interaction into my neural network.

 I have not introduced it so far because it seemed an inefficient use of
 computational power.


___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

[computer-go] Presentation of my personal project: evolution of an artificial go player through random mutation and natural selection

2009-02-13 Thread Ernest Galbrun
Hello,
I would like to share my project with you: I have developed a program
trying to mimic evolution through the competition of artificial go players.
The players are made of totally mutable artificial neural networks, and they
compete against each other in a never-ending tournament, randomly mutating
and reproducing when they are successful. I have also implemented a way to
share innovations among all the programs. I am currently looking for
additional volunteers (we are four at the moment) to try this out.

If you are interested, please feel free to reply here or email me directly.
I have just created a blog whose purpose will be to explain how my program
works and to tell how it is going.

(As of now, it has been running continuously for about a month; the players
are still rather passive, trying to play patterns that assure them the
greatest possible territory.)

Here is the URL of my blog: http://goia-hephaestos.blogspot.com/

Ernest Galbrun
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

Re: [computer-go] Presentation of my personal project: evolution of an artificial go player through random mutation and natural selection

2009-02-13 Thread George Dahl
How do you perform the neuro-evolution?  What sort of genetic
operators do you have?  Do you have any sort of crossover?  How do you
represent the board and moves to the networks?

- George

On Fri, Feb 13, 2009 at 2:42 PM, Ernest Galbrun
ernest.galb...@gmail.com wrote:
 Hello,
 I would like to share my project with you: I have developed a program
 trying to mimic evolution through the competition of artificial go players.
 The players are made of totally mutable artificial neural networks, and they
 compete against each other in a never-ending tournament, randomly mutating
 and reproducing when they are successful. I have also implemented a way to
 share innovations among all the programs. I am currently looking for
 additional volunteers (we are four at the moment) to try this out.
 If you are interested, please feel free to reply here or email me directly.
 I have just created a blog whose purpose will be to explain how my program
 works and to tell how it is going.
 (As of now, it has been running continuously for about a month; the players
 are still rather passive, trying to play patterns that assure them the
 greatest possible territory.)
 Here is the URL of my blog: http://goia-hephaestos.blogspot.com/
 Ernest Galbrun



___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] Presentation of my personal project: evolution of an artificial go player through random mutation and natural selection

2009-02-13 Thread Ernest Galbrun
 How do you perform the neuro-evolution?  What sort of genetic
 operators do you have?  Do you have any sort of crossover?  How do you
 represent the board and moves to the networks?

 - George


- The evolution consists in the random mutation of each neuron: weight,
type of neuron, threshold, and input and output addresses; the mutation
probability of each property can mutate as well, so that an individual can
eventually lock in any important function. (A minimal sketch of this scheme
appears after this list.)

- What do you mean by genetic operator?

- Crossover is achieved through sexual reproduction. This method is always
tried first, and can only occur between individuals belonging to the same
species. When two individuals reproduce, they randomly share their genes,
each gene being defined as a set of neurons. If this leads to a network
error, the method is abandoned and the two players are tagged as belonging
to different species. (See the second sketch after this list.)

- The game is fully handled by the OpenGo library:
http://www.inventivity.com/OpenGo/
Concerning my players, the board is represented by a 19x19 integer array,
each entry feeding one or more neuron inputs; each point is also linked to
one neuron output, which decides the move at any given moment. (See the third
sketch after this list.)
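
As referenced above, a minimal sketch of such a self-adaptive mutation scheme
(the field names, ranges, and step sizes are guesses for illustration, not the
program's actual representation):

    import random

    def mutate_neuron(neuron, n_neurons=1000, n_types=3, rate_step=0.05):
        """Mutate one neuron; every property has its own mutation probability,
        and that probability drifts too, so a lineage can effectively lock a
        property by driving its rate toward zero."""
        for key in ("weight", "type", "threshold", "input_addr", "output_addr"):
            if random.random() < neuron["mut_prob"][key]:
                if key in ("weight", "threshold"):
                    neuron[key] += random.gauss(0.0, 0.1)
                elif key == "type":
                    neuron[key] = random.randrange(n_types)
                else:
                    neuron[key] = random.randrange(n_neurons)  # re-wire an address
            p = neuron["mut_prob"][key] + random.uniform(-rate_step, rate_step)
            neuron["mut_prob"][key] = min(1.0, max(0.0, p))    # clamp to [0, 1]
        return neuron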
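
A similarly minimal sketch of the gene-level crossover (network_is_valid
stands in for whatever wiring check the real program performs):

    import random

    def crossover(parent_a, parent_b, network_is_valid):
        """Gene-by-gene crossover between two individuals of the same species.

        Each gene is a set of neurons; the child takes every gene at random from
        one parent or the other.  If the result is not a valid network, the caller
        tags the two players as different species and skips sexual reproduction.
        """
        if parent_a["species"] != parent_b["species"]:
            return None
        genes = [random.choice(pair) for pair in zip(parent_a["genes"], parent_b["genes"])]
        child = {"species": parent_a["species"], "genes": genes}
        return child if network_is_valid(child) else None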
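
And the board interface as described, with made-up indices: the 19x19 integer
array is flattened into the input neurons, and each point is linked to one
output neuron whose activation decides the move.

    def board_to_inputs(board):
        """Flatten the 19x19 integer board into the network's input vector."""
        return [board[y][x] for y in range(19) for x in range(19)]

    def choose_move(output_activations, legal_points):
        """Each board point has one dedicated output neuron; play the legal
        point whose output neuron fires strongest."""
        return max(legal_points, key=lambda p: output_activations[p])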


Ernest.
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

Re: [computer-go] Presentation of my personal project: evolution of an artificial go player through random mutation and natural selection

2009-02-13 Thread Mark Boon
Just curious, did you ever read 'On Intelligence' by Jeff Hawkins?  
After reading that I got rather sold on the idea that if you're ever  
going to attempt making a program with neural nets that behaves  
intelligently then it needs to have a lot of feed-back links. Not just  
the standard feed-forward type of networks. Some other good ideas in  
that book too IMO.


Mark



On Feb 13, 2009, at 5:42 PM, Ernest Galbrun wrote:


Hello,

I would like to share my project with you: I have developed a program
trying to mimic evolution through the competition of artificial go players.
The players are made of totally mutable artificial neural networks, and they
compete against each other in a never-ending tournament, randomly mutating
and reproducing when they are successful. I have also implemented a way to
share innovations among all the programs. I am currently looking for
additional volunteers (we are four at the moment) to try this out.

If you are interested, please feel free to reply here or email me directly.
I have just created a blog whose purpose will be to explain how my program
works and to tell how it is going.

(As of now, it has been running continuously for about a month; the players
are still rather passive, trying to play patterns that assure them the
greatest possible territory.)

Here is the URL of my blog: http://goia-hephaestos.blogspot.com/

Ernest Galbrun

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

Re: [computer-go] Presentation of my personal project: evolution of an artificial go player through random mutation and natural selection

2009-02-13 Thread Ernest Galbrun
On Fri, Feb 13, 2009 at 22:42, Mark Boon tesujisoftw...@gmail.com wrote:

 Just curious, did you ever read 'On Intelligence' by Jeff Hawkins? After
 reading that I got rather sold on the idea that if you're ever going to
 attempt making a program with neural nets that behaves intelligently then it
 needs to have a lot of feed-back links. Not just the standard feed-forward
 type of networks. Some other good ideas in that book too IMO.
 Mark


Oh, thank you for the advice; this is the kind of thing that can very
smoothly be implemented in the program. I will surely a/ buy and read this
book and b/ introduce some feedback interaction into my neural network.

I have not introduced it so far because it seemed an inefficient use of
computational power.
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/