Christopher> One of the things that they find likable is that by having the
Christopher> connection pool live in the framework alongside the application is
Christopher> that this makes it easy to attach hooks so that the pool can do
Christopher> intelligent things based on application-aware logic.

I'm afraid I do not follow you. Can you please provide an example?

TL;DR:
1) I think in-application pooling would be required for performance reasons
in any case.
2) Out-of-application pooling (in-backend or in-the-middle) is likely
needed as well.


JDBC clients use client-side connection pooling for performance reasons (a
sketch follows the list below):

1) Connection setup does have overhead:
1.1) TCP connection takes time to init/close
1.2) Startup queries involve a couple of roundtrips: "startup packet", then
"SET extra_float_digits = 3", then "SET application_name = '...' "
2) Binary formats on the wire are tied to oids. Clients have to cache the
oids somehow, and "cache per connection" is the current approach.
3) Application threads tend to augment "application_name", "search_path",
etc for their own purposes, and it would slow the application down
significantly if the JDBC driver reverted application_name/search_path/etc
for each and every "connection borrow".
4) I believe there's non-zero overhead for backend process startup
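
To make 1)-3) concrete, here is a minimal sketch of what a Java application
typically does today. It assumes HikariCP as the client-side pool and the
PostgreSQL JDBC driver on the classpath; the URL, credentials and the
application_name value are placeholders, not a recommendation of a specific
pool:

    import java.sql.Connection;
    import java.sql.Statement;
    import com.zaxxer.hikari.HikariConfig;
    import com.zaxxer.hikari.HikariDataSource;

    public class PooledClient {
        public static void main(String[] args) throws Exception {
            HikariConfig config = new HikariConfig();
            config.setJdbcUrl("jdbc:postgresql://localhost:5432/postgres"); // placeholder
            config.setUsername("app");       // placeholder
            config.setPassword("secret");    // placeholder
            config.setMaximumPoolSize(10);   // bounds the number of backend processes

            try (HikariDataSource ds = new HikariDataSource(config)) {
                // Borrowing reuses an already-authenticated physical connection,
                // so the TCP setup, startup packet and initial SET roundtrips
                // from 1.1/1.2 are paid once per connection, not once per borrow.
                try (Connection conn = ds.getConnection();
                     Statement st = conn.createStatement()) {
                    // Session state such as application_name/search_path (item 3)
                    // sticks to the physical connection between borrows.
                    st.execute("SET application_name = 'report-worker'"); // placeholder
                    st.execute("SELECT 1");
                }
            }
        }
    }

The flip side is that every such pool keeps its own set of (mostly idle)
physical connections, which leads to the scaling problem below.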

As Konstantin lists in the initial email, the problem is that the backend
itself does not scale well with lots of backend processes.
In other words: it is fine if PostgreSQL is accessed by a single Java
application, since the number of connections would be reasonable (limited by
the Java connection pool).
That, however, is not the case when the DB is accessed by lots of
applications (== lots of idle connections) and/or when the application uses
short-lived connections (== the in-app pool is missing, which forces backend
processes to come and go).
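
For contrast, here is a sketch of the short-lived-connection case (no in-app
pool; handleRequest is a hypothetical per-request entry point, and the URL
and credentials are placeholders). Every call opens and closes a physical
connection, i.e. a backend process is forked and torn down per request,
which is exactly the "come and go" pattern above:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class UnpooledClient {
        // Hypothetical per-request handler: no client-side pool, so each
        // request pays the full connection setup cost and churns a backend.
        static void handleRequest() throws Exception {
            try (Connection conn = DriverManager.getConnection(
                         "jdbc:postgresql://localhost:5432/postgres", "app", "secret");
                 Statement st = conn.createStatement()) {
                st.execute("SELECT 1");
            }
        }
    }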

Vladimir
