I have some good news and some bad news from my testing of the 2.1
release (the latest CVS version does not work for me; maybe this helps a bit):
 
The good news:
---------------
The connection-blocking behavior I saw when a failover happened was
caused by the failover_command not returning (in pgpool.conf:
failover_command = '. failover_cmd %h %d &'). I replaced it with a
wrapper script which in turn calls the intended command (without & at
the end). That way, existing connections keep working, although there
is a short pause when the failover occurs, which is not that bad.
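For reference, the wrapper approach can be sketched roughly like this
(the script name, the /usr/local/bin path, and the FAILOVER_CMD override
are my own hypothetical choices, not from pgpool itself):

```shell
#!/bin/sh
# failover_wrapper.sh -- minimal sketch of the wrapper approach.
# pgpool.conf would point at the wrapper with a trailing '&' so pgpool
# returns immediately and existing connections keep working:
#   failover_command = '/path/to/failover_wrapper.sh %h %d &'
# The wrapper then runs the real failover command in the foreground
# (no '&'), so the actual failover work still completes.

failover_wrapper() {
    failed_host="$1"   # %h: hostname of the failed backend
    node_id="$2"       # %d: id of the failed backend node

    # FAILOVER_CMD is an assumed override for testing; the default
    # path is hypothetical -- substitute your real failover command.
    "${FAILOVER_CMD:-/usr/local/bin/failover_cmd}" "$failed_host" "$node_id"
}
```

The point of the indirection is that pgpool only waits for the wrapper
to be launched, while the real command still runs to completion in the
foreground of the wrapper.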
 
The bad news:
--------------
When a failback happens, already-connected clients block forever,
whether or not you have a failback_command. Ideally, existing
connections should keep working without interruption. I found the code
below in pool_stream.c and added 'child_exit(1)' to see if I could at
least force the clients to exit and have them try to reconnect.
 
Inside both the 'char *pool_read2(POOL_CONNECTION *cp, int len)' and
'int pool_read(POOL_CONNECTION *cp, void *buf, int len)' functions:
 
[...]
 
else if (readlen == 0)
{
    if (cp->isbackend)
    {
        pool_error("pool_read2: EOF encountered with backend");
        child_exit(1); /* *** Added this to force clients to exit *** */
        return -1;
 
[...]
 
------------------------
This change worked for me. It's not ideal, but at least connected
clients no longer block forever. Is there a way to unblock the clients
instead of exiting them, so they can continue with their queries?
 
Thanks,
Daniel
_______________________________________________
Pgpool-general mailing list
[email protected]
http://pgfoundry.org/mailman/listinfo/pgpool-general
