Hi everyone, I've googled and found a couple of discussion threads on this topic, but I would still appreciate some more insight from the experts.

The context is a small website (50-70k hits per day, one pg_connect per page and on average 5 queries per page) with PostgreSQL's max_connections set to 100 (this seems high to me; I'll probably decrease it to 50 or so). All connections are made by the same user.
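For reference, each page does roughly this, going through pgpool rather than straight to PostgreSQL (port 9999 is pgpool's default; host, database and user names below are placeholders):

    <?php
    // Connect through pgpool (listening on its default port 9999)
    // rather than to PostgreSQL directly; names are placeholders.
    $db = pg_connect("host=localhost port=9999 dbname=mydb user=webuser")
        or die("connection failed");

    // ~5 queries per page on average, e.g.:
    $res = pg_query($db, "SELECT 1");

    // Disconnecting from pgpool returns the backend connection
    // to the child's cache (with connection_cache enabled).
    pg_close($db);
    ?>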
I'm aware of the rule:

    num_init_children * max_pool <= max_connections - superuser_reserved_connections

(no query cancellation here). However, keeping the default 32 * 4 significantly increases my memory footprint (by roughly 90 MB compared to standalone PostgreSQL), and I'm trying to reduce that (small website => small server box).

Now, I've read in an archived ML thread that max_pool > 1 mostly matters when connections come in under different user/database pairs. Since all my connections use the same user, in theory I could lower my max_pool to 1 without reducing throughput to PostgreSQL (though the memory footprint won't change much). Am I right?

To keep the memory footprint minimal, I guess I'm looking for something like Apache-style dynamic allocation of children when needed (say, when requests start filling the queues of the current children's pools). Does this make sense? Is there a way to replicate (pun intended) this behavior?

Regards,
ED
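P.S. For concreteness, here is a sketch of the pgpool.conf settings I have in mind (standard pgpool parameter names; the reserved-connections figure of 3 is PostgreSQL's default for superuser_reserved_connections):

    # pgpool.conf (excerpt)
    port = 9999                 # port pgpool listens on (default)
    connection_cache = true     # cache backend connections in each child

    # The defaults, 32 children * 4 pooled connections each, allow up to
    # 128 backend connections -- already over max_connections = 100.
    # With a single user/database pair, max_pool = 1 should suffice:
    num_init_children = 32      # max concurrent client connections
    max_pool = 1                # cached backend connections per child

    # Constraint: num_init_children * max_pool
    #               <= max_connections - superuser_reserved_connections
    # Here: 32 * 1 = 32 <= 100 - 3 = 97
    #       (or <= 50 - 3 = 47 if I lower max_connections to 50)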