> > > - Why doesn't pgpool monitor when a backend comes back ? Must we
> > > have external monitoring software which calls pcp_recovery_node ?
> >
> > This is a design decision from when pgpool was born. In my
> > understanding, many HA (High Availability) software systems do it
> > like this. The reason is that if we allowed automatic fail back,
> > some flaky hardware (for example a flaky network cable) might cause
> > repeated fail over/fail back actions.
>
> Ok, I understand :) Anyway, do you have any monitoring preference ? (Do
> you have / do you know any scripts to do the monitoring and call
> pcp_recovery_node ?)
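As an illustration only (nothing like this ships with pgpool): an
external check could watch a backend and trigger recovery along these
lines. The host, port, and credentials below are placeholders, and the
assumed pcp_* argument order is timeout, host, port, user, password,
node id, with pcp_node_info printing "hostname port status weight"
where status 3 means the node is down.

```shell
#!/bin/sh
# Sketch of an external monitor that calls pcp_recovery_node only when
# pgpool reports the backend as down. All connection values below are
# placeholders; adjust them to your pcp.conf setup.

PCP_HOST=localhost
PCP_PORT=9898
PCP_USER=postgres
PCP_PASS=secret

node_status() {
    # pcp_node_info prints "hostname port status weight";
    # status 3 means the node is in down status.
    pcp_node_info 10 "$PCP_HOST" "$PCP_PORT" "$PCP_USER" "$PCP_PASS" "$1" \
        | awk '{print $3}'
}

recover_if_down() {
    # pcp_recovery_node is only valid for a node that is actually down,
    # so check the status first.
    if [ "$(node_status "$1")" = "3" ]; then
        pcp_recovery_node 10 "$PCP_HOST" "$PCP_PORT" "$PCP_USER" "$PCP_PASS" "$1"
    else
        echo "node $1 is not down; nothing to do"
    fi
}
```

You could run recover_if_down from cron or a watchdog once you have
verified that the backend's PostgreSQL is reachable again.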
I'm confused. To call pcp_recovery_node, the target backend must be in
down status.

> > Besides recovery, having different master/slave settings on those
> > two pgpool servers might cause a deadlock situation. Consider the
> > following scenario: pgpool1 sends "LOCK t1" to backend1 first, since
> > backend1 is the master for pgpool1, then sends the same SQL to
> > backend2. At the same time, pgpool2 sends "LOCK t1" to backend2
> > first, since backend2 is the master for pgpool2, then sends the same
> > SQL to backend1. As a result, pgpool1 waits for completion of the
> > LOCK command on backend2, while pgpool2 waits for completion of the
> > LOCK command on backend1, which is a deadlock.
> >
> > I recommend you have the same settings for both pgpool1 and pgpool2:
> >
> > pgpool1 ==> master: backend1 , slave : backend2
> > pgpool2 ==> master: backend1 , slave : backend2
> >
> > Now for the recovery part. The answer is that either pgpool1 or
> > pgpool2 can perform the recovery. However, while pgpool1(2) performs
> > the recovery, pgpool2(1) *must not* access backend1 and backend2.
>
> Ok, so I need to have the same master on all nodes. I guess if I have
> more than 2 backends, I also need to keep them in the same order ?

Yes.

> When I tried the recovery, the 2nd pgpool didn't detect that the
> backend had come back.

Yeah, it's a limitation of pgpool-II for now.

> Must I perform pcp_recovery_node on both pgpools ?

No. Don't do it.

> Or do I just need (for example) one pcp_recovery_node and something
> like pcp_attach_node on the other pgpool ?

Without testing it, I would say doing pcp_attach_node is enough.

> > > - A more general question: my archive logs take a lot of space.
> > > How many archive log files must we keep ? Only the last ? Are
> > > there any PostgreSQL configuration capabilities to set this
> > > rotation, or must I use something like logrotate ?
> >
> > Actually, once you finish the recovery, you do not need to keep
> > taking archive logs. There are two ways to do this.
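As a sketch, the two ways would look like this in postgresql.conf (the
sample archive_command value is only an example):

```
# postgresql.conf -- illustrative values only

# Way 1: turn archiving off completely (requires a PostgreSQL restart)
archive_mode = off

# Way 2: keep archive_mode as-is but set an empty archive_command
# (picked up by "pg_ctl reload"; no restart needed)
archive_command = ''        # was e.g. 'cp %p /path/to/archive/%f'
```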
> > One is to change archive_mode to off (this requires restarting
> > PostgreSQL). The other is to set '' (an empty string) as
> > archive_command. This does not require restarting PostgreSQL; just
> > pg_ctl reload is enough. I'm not sure if this works for all
> > PostgreSQL versions, though.
>
> But what will happen if a server fails again ?

You need to set archive_command again and do pg_ctl reload.
--
Tatsuo Ishii
SRA OSS, Inc. Japan
_______________________________________________
Pgpool-general mailing list
[email protected]
http://pgfoundry.org/mailman/listinfo/pgpool-general
