New version of the built-in connection pool is attached to this mail.
Now the client's startup packet is received by one of the listener workers, and
the postmaster knows the database/user name of the received connection and so is
able to marshal it to the proper connection pool. Right now SSL is not
supported.
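For readers trying to picture the routing step above, here is a minimal,
self-contained C sketch of the idea: once the startup packet has been parsed,
the (database, user) pair selects a session pool, so the same pair always
lands in the same pool. The struct, pool count and hash are illustrative
assumptions for this example only, not code from the attached patch.

#include <stdio.h>

#define SESSION_POOLS 8                 /* assumed number of session pools */

typedef struct StartupInfo
{
    const char *database;               /* from the startup packet */
    const char *user;
} StartupInfo;

/* Simple FNV-1a hash over the database and user names. */
static unsigned int
hash_pool_key(const StartupInfo *info)
{
    const char *parts[2] = {info->database, info->user};
    unsigned int h = 2166136261u;

    for (int p = 0; p < 2; p++)
        for (const char *c = parts[p]; *c; c++)
        {
            h ^= (unsigned char) *c;
            h *= 16777619u;
        }
    return h;
}

/* Route a parsed startup packet to one of the session pools. */
static int
choose_session_pool(const StartupInfo *info)
{
    return (int) (hash_pool_key(info) % SESSION_POOLS);
}

int
main(void)
{
    StartupInfo s = {"appdb", "appuser"};

    printf("connection for %s/%s -> pool %d\n",
           s.database, s.user, choose_session_pool(&s));
    return 0;
}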
On Thu, May 17, 2018 at 9:09 PM, Bruce Momjian wrote:
>> However, I think that's probably worrying about the wrong end of the
>> problem first. IMHO, what we ought to start by doing is considering
>> what a good architecture for this would be, and how to solve the
>> general problem of per-backen
On Fri, May 4, 2018 at 03:25:15PM -0400, Robert Haas wrote:
> On Fri, May 4, 2018 at 11:22 AM, Merlin Moncure wrote:
> > If we are breaking 1:1 backend:session relationship, what controls
> > would we have to manage resource consumption?
>
> I mean, if you have a large number of sessions open, i
On 05.05.2018 00:54, Merlin Moncure wrote:
On Fri, May 4, 2018 at 2:25 PM, Robert Haas wrote:
On Fri, May 4, 2018 at 11:22 AM, Merlin Moncure wrote:
If we are breaking 1:1 backend:session relationship, what controls
would we have to manage resource consumption?
I mean, if you have a large
On Fri, May 4, 2018 at 5:54 PM, Merlin Moncure wrote:
>> I mean, if you have a large number of sessions open, it's going to
>> take more memory in any design. If there are multiple sessions per
>> backend, there may be some possibility to save memory by allocating it
>> per-backend rather than pe
On 04.05.2018 18:22, Merlin Moncure wrote:
On Thu, May 3, 2018 at 12:01 PM, Robert Haas wrote:
On Fri, Apr 27, 2018 at 4:43 PM, Merlin Moncure wrote:
What _I_ (maybe not others) want is a
faster pgbouncer that is integrated into the database; IMO it does
everything exactly right.
I have to
On Fri, May 4, 2018 at 2:25 PM, Robert Haas wrote:
> On Fri, May 4, 2018 at 11:22 AM, Merlin Moncure wrote:
>> If we are breaking 1:1 backend:session relationship, what controls
>> would we have to manage resource consumption?
>
> I mean, if you have a large number of sessions open, it's going to
On Fri, May 4, 2018 at 11:22 AM, Merlin Moncure wrote:
> If we are breaking 1:1 backend:session relationship, what controls
> would we have to manage resource consumption?
I mean, if you have a large number of sessions open, it's going to
take more memory in any design. If there are multiple ses
On Thu, May 3, 2018 at 12:01 PM, Robert Haas wrote:
> On Fri, Apr 27, 2018 at 4:43 PM, Merlin Moncure wrote:
>> What _I_ (maybe not others) want is a
>> faster pgbouncer that is integrated into the database; IMO it does
>> everything exactly right.
>
> I have to admit that I find that an amazing
On 03.05.2018 20:01, Robert Haas wrote:
On Fri, Apr 27, 2018 at 4:43 PM, Merlin Moncure wrote:
What _I_ (maybe not others) want is a
faster pgbouncer that is integrated into the database; IMO it does
everything exactly right.
I have to admit that I find that an amazing statement. Not that
p
On Fri, Apr 27, 2018 at 4:43 PM, Merlin Moncure wrote:
> What _I_ (maybe not others) want is a
> faster pgbouncer that is integrated into the database; IMO it does
> everything exactly right.
I have to admit that I find that an amazing statement. Not that
pgbouncer is bad technology, but saying
On 27.04.2018 23:43, Merlin Moncure wrote:
On Fri, Apr 27, 2018 at 11:44 AM, Konstantin Knizhnik
wrote:
On 27.04.2018 18:33, Merlin Moncure wrote:
On Fri, Apr 27, 2018 at 10:05 AM, Konstantin Knizhnik
wrote:
On 27.04.2018 16:49, Merlin Moncure wrote:
I'm confused here...could be language
On Fri, Apr 27, 2018 at 11:44 AM, Konstantin Knizhnik
wrote:
>
>
> On 27.04.2018 18:33, Merlin Moncure wrote:
>> On Fri, Apr 27, 2018 at 10:05 AM, Konstantin Knizhnik
>> wrote:
>>> On 27.04.2018 16:49, Merlin Moncure wrote:
>> I'm confused here...could be language issues or terminology (I'll look
On 27.04.2018 18:33, Merlin Moncure wrote:
On Fri, Apr 27, 2018 at 10:05 AM, Konstantin Knizhnik
wrote:
On 27.04.2018 16:49, Merlin Moncure wrote:
*) How are you pinning client connections to an application managed
transaction? (IMNSHO, this feature is useless without being able to do
that)
On Fri, Apr 27, 2018 at 10:05 AM, Konstantin Knizhnik
wrote:
> On 27.04.2018 16:49, Merlin Moncure wrote:
>> *) How are you pinning client connections to an application managed
>> transaction? (IMNSHO, this feature is useless without being able to do
>> that)
>
> Sorry, I do not completely underst
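To make the "pinning" question concrete, here is a hedged libpq sketch of how
it usually works with transaction-level pooling: the pooler keeps the client
on one backend from BEGIN to COMMIT, so everything inside the explicit
transaction sees the same backend. The pooler port (6432) and database name
are assumptions; build with "cc pin.c -lpq".

#include <stdio.h>
#include <stdlib.h>
#include <libpq-fe.h>

static void
die(PGconn *conn, const char *msg)
{
    fprintf(stderr, "%s: %s", msg, PQerrorMessage(conn));
    PQfinish(conn);
    exit(1);
}

int
main(void)
{
    PGconn     *conn = PQconnectdb("host=localhost port=6432 dbname=appdb");
    PGresult   *res;

    if (PQstatus(conn) != CONNECTION_OK)
        die(conn, "connect failed");

    /* Everything between BEGIN and COMMIT runs on one pinned backend. */
    res = PQexec(conn, "BEGIN");
    if (PQresultStatus(res) != PGRES_COMMAND_OK)
        die(conn, "BEGIN failed");
    PQclear(res);

    res = PQexec(conn, "SELECT pg_backend_pid()");
    if (PQresultStatus(res) != PGRES_TUPLES_OK)
        die(conn, "SELECT failed");
    printf("pinned to backend pid %s for this transaction\n",
           PQgetvalue(res, 0, 0));
    PQclear(res);

    /* After COMMIT the backend may be given to another client. */
    PQclear(PQexec(conn, "COMMIT"));
    PQfinish(conn);
    return 0;
}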
On 27.04.2018 16:49, Merlin Moncure wrote:
On Thu, Apr 26, 2018 at 6:04 AM, Konstantin Knizhnik
wrote:
On 25.04.2018 20:02, Merlin Moncure wrote:
Yep. The main workaround today is to disable them. Having said that,
it's not that difficult to imagine hooking prepared statement creation
to a
On Wed, Apr 25, 2018 at 10:09 PM, Michael Paquier wrote:
> On Wed, Apr 25, 2018 at 03:42:31PM -0400, Robert Haas wrote:
>> The difficulty of finding them all is really the problem. If we had a
>> reliable way to list everything that needs to be moved into session
>> state, then we could try to co
On Thu, Apr 26, 2018 at 6:04 AM, Konstantin Knizhnik
wrote:
> On 25.04.2018 20:02, Merlin Moncure wrote:
>> Yep. The main workaround today is to disable them. Having said that,
>> it's not that difficult to imagine hooking prepared statement creation
>> to a backend starting up (feature: run X,Y
On 26.04.2018 05:09, Michael Paquier wrote:
On Wed, Apr 25, 2018 at 03:42:31PM -0400, Robert Haas wrote:
The difficulty of finding them all is really the problem. If we had a
reliable way to list everything that needs to be moved into session
state, then we could try to come up with a design
On 25.04.2018 20:02, Merlin Moncure wrote:
Would integrated pooling help the sharding case (genuinely curious)?
I don't quite have my head around the issue. I've always wanted
pgbouncer to be able to do things like round robin queries to
non-sharded replica for simple load balancing but it do
On Wed, Apr 25, 2018 at 03:42:31PM -0400, Robert Haas wrote:
> The difficulty of finding them all is really the problem. If we had a
> reliable way to list everything that needs to be moved into session
> state, then we could try to come up with a design to do that.
> Otherwise, we're just swattin
On Wed, Apr 25, 2018 at 2:58 PM, Robert Haas wrote:
> On Wed, Apr 25, 2018 at 10:00 AM, Merlin Moncure wrote:
>> systems. If we get that for free I'd be all for it but reading
>> Robert's email I'm skeptical there are easy wins here. So +1 for
>> further R&D and -1 for holding things up based o
On Wed, Apr 25, 2018 at 10:00 AM, Merlin Moncure wrote:
> systems. If we get that for free I'd be all for it but reading
> Robert's email I'm skeptical there are easy wins here. So +1 for
> further R&D and -1 for holding things up based on full
> transparency...no harm in shooting for that, but
On Tue, Apr 24, 2018 at 1:00 PM, Konstantin Knizhnik
wrote:
> My expectation is that there are very few of them which have session-level
> lifetime.
> Unfortunately it is not so easy to locate all such places. Once such
> variables are located, they can be saved in the session context and restored on
>
On Wed, Apr 25, 2018 at 9:43 AM, Christophe Pettus wrote:
>
>> On Apr 25, 2018, at 07:00, Merlin Moncure wrote:
>> The limitations headaches that I suffer with pgbouncer project (which
>> I love and use often) are mainly administrative and performance
>> related, not lack of session based server
> On Apr 25, 2018, at 07:00, Merlin Moncure wrote:
> The limitations headaches that I suffer with pgbouncer project (which
> I love and use often) are mainly administrative and performance
> related, not lack of session based server features.
For me, the most common issue I run into with pgboun
things
from a cost/benefit perspective (IMO).
merlin
I did more research and found several other things which will not work
with the current built-in connection pooling implementation.
One you have mentioned: the notification mechanism. Another one is advisory
locks. Right now I have no idea how to suppo
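To spell out why advisory locks are hard here, a small libpq illustration
(database name assumed, compile with -lpq): pg_advisory_lock() is owned by the
backend that executed it, not by the client session, so once sessions are
multiplexed over backends the lock owner no longer matches the application's
idea of "my session".

#include <stdio.h>
#include <libpq-fe.h>

int
main(void)
{
    PGconn     *conn = PQconnectdb("dbname=appdb");

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connect failed: %s", PQerrorMessage(conn));
        return 1;
    }

    /* Session-level lock: held by whatever backend executed this call. */
    PQclear(PQexec(conn, "SELECT pg_advisory_lock(42)"));

    /*
     * If the pool now hands this client's "session" to another backend, the
     * lock above is still held by the old backend, and the unlock below may
     * be executed by a backend that does not own it.
     */
    PQclear(PQexec(conn, "SELECT pg_advisory_unlock(42)"));

    /*
     * The transaction-scoped variant is safe under transaction-level
     * pooling, because it cannot outlive the pinned backend:
     *     BEGIN; SELECT pg_advisory_xact_lock(42); ... COMMIT;
     */
    PQfinish(conn);
    return 0;
}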
On Wed, Apr 25, 2018 at 12:34 AM, Christophe Pettus wrote:
>
>> On Apr 24, 2018, at 06:52, Merlin Moncure wrote:
>> Why does it have to be completely transparent?
>
> The main reason to move it into core is to avoid the limitations that a
> non-core pooler has.
The limitations headaches that I
s
widely using temporary tables.
So they cannot use pgbouncer, and the number of clients can be very large
(thousands).
Built-in connection pooling will satisfy their needs. And the fact that
random() in a pooled connection will return different values is absolutely
unimportant for them.
So my point
> On Apr 24, 2018, at 06:52, Merlin Moncure wrote:
> Why does it have to be completely transparent?
Well, we have non-transparent connection pooling now, in the form of pgbouncer,
and the huge fleet of existing application-stack poolers. The main reason to
move it into core is to avoid the l
On 23.04.2018 23:14, Robert Haas wrote:
On Wed, Apr 18, 2018 at 9:41 AM, Heikki Linnakangas wrote:
Well, maybe I missed something, but I do not know how to efficiently
support
1. Temporary tables
2. Prepared statements
3. Session GUCs
with any external connection pooler (with pooling level o
On Tue, Apr 24, 2018 at 9:52 AM, Merlin Moncure wrote:
>
> Why does it have to be completely transparent? As long as the feature
> is optional (say, a .conf setting) the tradeoffs can be managed. It's
> reasonable to expect to exchange some functionality for pooling;
> pgbouncer provides a 're
On Mon, Apr 23, 2018 at 3:14 PM, Robert Haas wrote:
> In other words, transparent connection pooling is going to require
> some new mechanism, which third-party code will have to know about,
> for tracking every last bit of session state that might need to be
> preserved or cleared. That's going
On 23.04.2018 21:56, Robert Haas wrote:
On Fri, Jan 19, 2018 at 11:59 AM, Tomas Vondra
wrote:
Hmmm, that's unfortunate. I guess you'll have to process the startup packet
in the main process, before it gets forked. At least partially.
I'm not keen on a design that would involve doing more stuff
On Mon, Apr 23, 2018 at 09:53:37PM -0400, Bruce Momjian wrote:
> On Mon, Apr 23, 2018 at 09:47:07PM -0400, Robert Haas wrote:
> > On Mon, Apr 23, 2018 at 7:59 PM, Bruce Momjian wrote:
> > > So, instead of trying to multiplex multiple sessions in a single
> > > operating system process, why don't w
On Mon, Apr 23, 2018 at 09:47:07PM -0400, Robert Haas wrote:
> On Mon, Apr 23, 2018 at 7:59 PM, Bruce Momjian wrote:
> > So, instead of trying to multiplex multiple sessions in a single
> > operating system process, why don't we try to reduce the overhead of
> > idle sessions that each have an ope
On Mon, Apr 23, 2018 at 7:59 PM, Bruce Momjian wrote:
> So, instead of trying to multiplex multiple sessions in a single
> operating system process, why don't we try to reduce the overhead of
> idle sessions that each have an operating system process? We already
> use procArray to reduce the numb
On Fri, Apr 20, 2018 at 11:40:59AM +0300, Konstantin Knizhnik wrote:
>
> Sorry, maybe we do not understand each other.
> There are the following facts:
> 1. There are some entities in Postgres which are local to a backend:
> temporary tables, GUCs, prepared statements, relation and catalog caches,...
On Wed, Apr 18, 2018 at 9:41 AM, Heikki Linnakangas wrote:
>> Well, maybe I missed something, but I do not know how to efficiently
>> support
>> 1. Temporary tables
>> 2. Prepared statements
>> 3. Session GUCs
>> with any external connection pooler (with pooling level other than
>> session).
>
>
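Since prepared statements come up repeatedly in this thread, here is a short
libpq sketch of the failure mode with plain transaction-level pooling (pooler
port 6432 and database name are assumptions): PREPARE creates per-backend
state, so a statement prepared in one transaction can be missing when the next
transaction is routed to a different backend. A session-aware built-in pool
would have to recreate or track this state.

#include <stdio.h>
#include <libpq-fe.h>

int
main(void)
{
    PGconn     *conn = PQconnectdb("host=localhost port=6432 dbname=appdb");
    PGresult   *res;
    const char *params[1] = {"1"};

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connect failed: %s", PQerrorMessage(conn));
        return 1;
    }

    /* Prepared on whichever backend currently serves this client. */
    res = PQprepare(conn, "get_one", "SELECT $1::int", 1, NULL);
    fprintf(stderr, "PREPARE: %s\n", PQresStatus(PQresultStatus(res)));
    PQclear(res);

    /*
     * With session pooling this works; with transaction pooling it can fail
     * with: prepared statement "get_one" does not exist.
     */
    res = PQexecPrepared(conn, "get_one", 1, params, NULL, NULL, 0);
    if (PQresultStatus(res) != PGRES_TUPLES_OK)
        fprintf(stderr, "EXECUTE failed: %s", PQresultErrorMessage(res));
    else
        printf("result: %s\n", PQgetvalue(res, 0, 0));
    PQclear(res);
    PQfinish(conn);
    return 0;
}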
On Fri, Jan 19, 2018 at 11:59 AM, Tomas Vondra
wrote:
> Hmmm, that's unfortunate. I guess you'll have to process the startup packet
> in the main process, before it gets forked. At least partially.
I'm not keen on a design that would involve doing more stuff in the
postmaster, because that would inc
> On 19 Apr 2018, at 23:59, Andres Freund wrote:
>
> I think there's plenty things that don't really make sense solving
> outside of postgres:
> - additional added hop / context switches due to external pooler
> - temporary tables
> - prepared statements
> - GUCs and other session state
+
>> I understand your customers like to have unlimited number of
>> connections. But my customers do not. (btw, even with normal
>> PostgreSQL, some of my customers are happily using over 1k, even 5k
>> max_connections).
>
> If you have a limited number of clients, then you do not need pooling at
> a
On Fri., 20 Apr. 2018, 06:59 Andres Freund, wrote:
> On 2018-04-19 15:01:24 -0400, Tom Lane wrote:
> > Only after you can say "there's nothing wrong with this that isn't
> > directly connected to its not being in-core" does it make sense to try
> > to push the logic into core.
>
> I think there's
On 20.04.2018 12:02, Tatsuo Ishii wrote:
I understand your customers like to have unlimited number of
connections. But my customers do not. (btw, even with normal
PostgreSQL, some of my customers are happily using over 1k, even 5k
max_connections).
If you have a limited number of clients, then
This is only applied to external process type pooler (like Pgpool-II).
> - temporary tables
> - prepared statements
> - GUCs and other session state
These are only applied to "non session based" pooler; sharing a
database connection with multiple client connections.
On 20.04.2018 11:16, Tatsuo Ishii wrote:
On 20.04.2018 01:58, Tatsuo Ishii wrote:
I think there's plenty things that don't really make sense solving
outside of postgres:
- additional added hop / context switches due to external pooler
This is only applied to external process type pooler (like
> On 20.04.2018 01:58, Tatsuo Ishii wrote:
>>> I think there's plenty things that don't really make sense solving
>>> outside of postgres:
>>> - additional added hop / context switches due to external pooler
>> This is only applied to external process type pooler (like Pgpool-II).
>>
>>> - temporar
notice that authentication is an area where I have
no experience at all.
So any suggestions or help in developing the right authentication mechanism
for built-in connection pooling is welcome.
Right now authentication of a pooled session by a shared backend is performed in
the same way as by a normal
On 20.04.2018 01:58, Tatsuo Ishii wrote:
I think there's plenty things that don't really make sense solving
outside of postgres:
- additional added hop / context switches due to external pooler
This is only applied to external process type pooler (like Pgpool-II).
- temporary tables
- prepar
espace, ...
Definitely the pooled session memory footprint depends on the size of the
catalog,
prepared statements, updated GUCs,... but 10-100kb seems to be a
reasonable estimation.
>
> BTW, you are doing various great work -- autoprepare,
multithreaded Postgres, built-in co
>Development in built-in connection pooling will be continued in
https://github.com/postgrespro/postgresql.builtin_pool.git
The branch (as of 0020c44195992c6dce26baec354a5e54ff30b33f) passes pgjdbc
tests: https://travis-ci.org/vlsi/pgjdbc/builds/368997672
Current tests are mostly single-threa
Christopher>One of the things that they find likable is that by having the connection pool live
Christopher>in the framework alongside the application is that this makes it easy to attach
Christopher>hooks so that the pool can do intelligent things based on application-aware logic.
I'm afraid I do
> On Fri, Apr 20, 2018 at 07:58:00AM +0900, Tatsuo Ishii wrote:
>> Yeah. Since SCRAM auth is implemented, some connection poolers
>> including Pgpool-II are struggling to adopt it.
>
> Er, well. pgpool is also taking advantage of MD5 weaknesses... While
> SCRAM fixes this class of problems, and
On Fri, Apr 20, 2018 at 07:58:00AM +0900, Tatsuo Ishii wrote:
> Yeah. Since SCRAM auth is implemented, some connection poolers
> including Pgpool-II are struggling to adopt it.
Er, well. pgpool is also taking advantage of MD5 weaknesses... While
SCRAM fixes this class of problems, and channel bi
> I think there's plenty things that don't really make sense solving
> outside of postgres:
> - additional added hop / context switches due to external pooler
This is only applied to external process type pooler (like Pgpool-II).
> - temporary tables
> - prepared statements
> - GUCs and other ses
On 2018-04-19 15:01:24 -0400, Tom Lane wrote:
> Only after you can say "there's nothing wrong with this that isn't
> directly connected to its not being in-core" does it make sense to try
> to push the logic into core.
I think there's plenty things that don't really make sense solving
outside of p
On Thu, 19 Apr 2018 at 10:27, Dave Cramer wrote:
> It would be useful to test with the JDBC driver
> We run into issues with many pool implementations due to our opinionated
nature
Absolutely.
And Java developers frequently have a further opinionated nature on this...
A bunch of Java framework
Stephen Frost writes:
> Greetings,
> * Andres Freund (and...@anarazel.de) wrote:
>> On 2018-04-18 06:36:38 -0400, Heikki Linnakangas wrote:
>>> However, I suspect that dealing with *all* of the issues is going to be hard
>>> and tedious. And if there are any significant gaps, things that don't wor
Greetings,
* Andres Freund (and...@anarazel.de) wrote:
> On 2018-04-18 06:36:38 -0400, Heikki Linnakangas wrote:
> > On 18/04/18 06:10, Konstantin Knizhnik wrote:
> > But there are still use cases which can not be covered by an external
> > connection pooler.
> >
> > Can you name some? I understa
On 2018-04-18 06:36:38 -0400, Heikki Linnakangas wrote:
> On 18/04/18 06:10, Konstantin Knizhnik wrote:
> > But there are still use cases which can not be covered by an external
> > connection pooler.
>
> Can you name some? I understand that the existing external connection
> poolers all have their li
g,
> prepared statements, updated GUCs,... but 10-100kb seems to be a
> reasonable estimation.
>
>
> >
> > BTW, you are doing various great work -- autoprepare, multithreaded
> Postgres, built-in connection pooling, etc. etc., aren't you? Are you
> doing all of t
namespace, ...
Definitely the pooled session memory footprint depends on the size of the catalog,
prepared statements, updated GUCs,... but 10-100kb seems to be a
reasonable estimation.
BTW, you are doing various great work -- autoprepare, multithreaded Postgres,
built-in connection pooling, etc. etc., aren
des Database Resident Connection Pooling (DRCP). I guessed you
were inspired by this.
https://docs.oracle.com/cd/B28359_01/server.111/b28310/manproc002.htm#ADMIN12348
BTW, you are doing various great work -- autoprepare, multithreaded Postgres,
built-in connection pooling, etc. etc., aren
> On 18 Apr 2018, at 16:24, David Fetter wrote:
>
> On Wed, Apr 18, 2018 at 02:52:39PM +0300, Konstantin Knizhnik wrote:
>> Yandex team is following this approach with their Odysseus
>> (multithreaded version of pgbouncer with many of pgbouncer issues
>> fixed).
>
> Have they opened the so
On 18.04.2018 16:41, Heikki Linnakangas wrote:
On 18/04/18 07:52, Konstantin Knizhnik wrote:
On 18.04.2018 13:36, Heikki Linnakangas wrote:
On 18/04/18 06:10, Konstantin Knizhnik wrote:
But there are still use cases which can not be covered by an external
connection pooler.
Can you name some
On 18/04/18 07:52, Konstantin Knizhnik wrote:
On 18.04.2018 13:36, Heikki Linnakangas wrote:
On 18/04/18 06:10, Konstantin Knizhnik wrote:
But there are still use cases which can not be covered by an external
connection pooler.
Can you name some? I understand that the existing external connecti
On 18.04.2018 16:24, David Fetter wrote:
On Wed, Apr 18, 2018 at 02:52:39PM +0300, Konstantin Knizhnik wrote:
Yandex team is following this approach with their Odysseus
(multithreaded version of pgbouncer with many of pgbouncer issues
fixed).
Have they opened the source to Odysseus? If not, d
will work
incorrectly. But still I do not think that making built-in connection
pooling really reliable is something unreachable.
--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company
On Wed, Apr 18, 2018 at 02:52:39PM +0300, Konstantin Knizhnik wrote:
> Yandex team is following this approach with their Odysseus
> (multithreaded version of pgbouncer with many of pgbouncer issues
> fixed).
Have they opened the source to Odysseus? If not, do they have plans to?
Best,
David.
--
On 18 April 2018 at 19:52, Konstantin Knizhnik
wrote:
> As far as I know most of DBMSes have some kind of internal connection
> pooling.
> In Oracle, for example, you can create dedicated and non-dedicated backends.
> I wonder why we do not want to have something similar in Postgres.
I want to, and
On 18.04.2018 13:36, Heikki Linnakangas wrote:
On 18/04/18 06:10, Konstantin Knizhnik wrote:
But there are still use cases which can not be covered by an external
connection pooler.
Can you name some? I understand that the existing external connection
poolers all have their limitations. But are
On 18/04/18 06:10, Konstantin Knizhnik wrote:
But there are still use cases which can not be covered by an external
connection pooler.
Can you name some? I understand that the existing external connection
poolers all have their limitations. But are there some fundamental
issues that can *only* be
users which I know myself,
lack of standard (built-in?) connection pooling is one of the main
drawbacks of PostgreSQL.
Right now we have pgbouncer, which is small, fast and reliable, but:
- Doesn't allow you to use prepared statements, temporary tables and
session variables.
- Is single threade
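The session-variable limitation can be shown the same way; a hedged libpq
sketch (pooler port and database name are assumptions): SET stores state in
the backend, so a later SHOW can report the default value if the pooler has
silently moved the client to another backend between transactions.

#include <stdio.h>
#include <libpq-fe.h>

int
main(void)
{
    PGconn     *conn = PQconnectdb("host=localhost port=6432 dbname=appdb");
    PGresult   *res;

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connect failed: %s", PQerrorMessage(conn));
        return 1;
    }

    /* Backend-local state: lives only in the backend that ran the SET. */
    PQclear(PQexec(conn, "SET application_name = 'report-worker'"));

    /*
     * Under transaction pooling the next statement may run on a different
     * backend and report the default application_name instead.
     */
    res = PQexec(conn, "SHOW application_name");
    if (PQresultStatus(res) == PGRES_TUPLES_OK)
        printf("application_name is now: %s\n", PQgetvalue(res, 0, 0));
    PQclear(res);
    PQfinish(conn);
    return 0;
}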
>>
>> Development in built-in connection pooling will be continued in
>> https://github.com/postgrespro/postgresql.builtin_pool.git
>> I am not going to send new patches to hackers mailing list any more.
>>
>
> Why?
>
>
> Just do not want to spam hackers
On 13.04.2018 19:07, Nikolay Samokhvalov wrote:
On Fri, Apr 13, 2018 at 2:59 AM, Konstantin Knizhnik
<k.knizh...@postgrespro.ru> wrote:
Development in built-in connection pooling will be continued in
https://github.com/postgrespro/postgresql.builtin_pool.git
On Fri, Apr 13, 2018 at 2:59 AM, Konstantin Knizhnik <
k.knizh...@postgrespro.ru> wrote:
>
> Development in built-in connection pooling will be continued in
> https://github.com/postgrespro/postgresql.builtin_pool.git
> I am not going to send new patches to hackers mailing list any more.
>
Why?
several ports for
accepting pooled connections, while leaving the main Postgres port for
dedicated backends.
Each session pool is intended to be used for a particular database/user
combination.
Sorry, the wrong patch was attached.
Development in built-in connection pooling will be continued in
https
On 06.04.2018 20:00, Konstantin Knizhnik wrote:
Attached please find a new version of the patch with several bug fixes
+ support of more than one session pool associated with different ports.
Now it is possible to make the postmaster listen on several ports for
accepting pooled connections, while lea
Attached please find a new version of the patch with several bug fixes +
support of more than one session pool associated with different ports.
Now it is possible to make the postmaster listen on several ports for accepting
pooled connections, while leaving the main Postgres port for dedicated backends.
Eac
On Fri, Feb 9, 2018 at 4:14 PM, Shay Rojansky wrote:
> Am a bit late to this thread, sorry if I'm slightly rehashing things. I'd
> like to go back to basics on this.
>
> Unless I'm mistaken, at least in the Java and .NET world, clients are
> almost always expected to have their own connection
Am a bit late to this thread, sorry if I'm slightly rehashing things. I'd
like to go back to basics on this.
Unless I'm mistaken, at least in the Java and .NET world, clients are
almost always expected to have their own connection pooling, either
implemented inside the driver (ADO.NET model) or
Attached please find a new version of the built-in connection pooling patch
supporting temporary tables and session GUCs.
Also Win32 support was added.
--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company
diff --git a/src/backend/catalog/namespace.c b/src
Konstantin>I do not have an explanation of the performance degradation in the
case of this particular workload.
A) Mongo Java Client uses a connection-pool of 100 connections by default.
That is, it does not follow "connection per client" (in YCSB terms), but it
is capped at 100 connections. I think it can be
On 01.02.2018 23:28, Vladimir Sitnikov wrote:
> config/pgjsonb-local.dat
Do you use standard "workload" configuration values?
(e.g. recordcount=1000, maxscanlength=100)
Yes, I used the default values for the workload. For example, workload A has the
following settings:
# Yahoo! Cloud System Bench
> config/pgjsonb-local.dat
Do you use standard "workload" configuration values?
(e.g. recordcount=1000, maxscanlength=100)
Could you share ycsb output (e.g. for workload a)?
I mean lines like
[TOTAL_GC_TIME], Time(ms), xxx
[TOTAL_GC_TIME_%], Time(%), xxx
>postgresql-9.4.1212.jar
Ok, you have re
On 01.02.2018 16:33, Vladimir Sitnikov wrote:
Konstantin>I have not built YCSB myself; I used an existing installation.
Which pgjdbc version was in use?
postgresql-9.4.1212.jar
Konstantin>One of the main problems of Postgres is a significant degradation
of performance in the case of concurrent write access
Konstantin>I have not built YCSB myself; I used an existing installation.
Which pgjdbc version was in use?
Konstantin>One of the main problems of Postgres is a significant degradation of
performance in the case of concurrent write access by multiple transactions to
the same rows.
I would consider that a workload
On 01.02.2018 15:21, Vladimir Sitnikov wrote:
Konstantin>I have obtained more results with YCSB benchmark and
built-in connection pooling
Could you provide more information on the benchmark setup you have used?
For instance: benchmark library versions, PostgreSQL client version,
additio
Konstantin>I have obtained more results with YCSB benchmark and built-in
connection pooling
Could you provide more information on the benchmark setup you have used?
For instance: benchmark library versions, PostgreSQL client version,
additional/default benchmark parameters.
Konstantin>Po
I have obtained more results with the YCSB benchmark and built-in connection
pooling.
An explanation of the benchmark and all results for vanilla Postgres and
Mongo are available in Oleg Bartunov's presentation about JSON (at the
end of the presentation):
http://www.sai.msu.su/~megera/postgres/talks
Bruce>Well, we could have the connection pooler disconnect those, right?
I agree. Do you think we could rely on all the applications being
configured in a sane way?
A fallback configuration at DB level could still be useful to ensure the DB
keeps running in case multiple applications access it. It
On Mon, Jan 29, 2018 at 04:02:22PM +, Vladimir Sitnikov wrote:
> Bruce>Yes, it would impact applications and you are right most applications
> could not handle that cleanly.
>
> I would disagree here.
> We are discussing applications that produce "lots of idle" connections, aren't
> we? That t
Bruce>Yes, it would impact applications and you are right most applications
could not handle that cleanly.
I would disagree here.
We are discussing applications that produce "lots of idle" connections,
aren't we? That typically comes from an application-level connection pool.
Most of the connectio
On Mon, Jan 29, 2018 at 11:57:36AM +0300, Konstantin Knizhnik wrote:
> Right now, if you hit max_connections, we start rejecting new
> connections. Would it make sense to allow an option to exit idle
> connections when this happens so new users can connect?
>
> It will require changes
On 28.01.2018 03:40, Bruce Momjian wrote:
On Mon, Jan 22, 2018 at 06:51:08PM +0100, Tomas Vondra wrote:
Yes, external connection pooling is more flexible. It allows pooling to be
performed either at the client side or at the server side (or even
to combine the two approaches).
Also external connection poolin
On Sun, Jan 28, 2018 at 03:11:25PM -0800, Ivan Novick wrote:
> > The simplest thing sounds like a GUC that will automatically end a connection
>
> > idle for X seconds.
>
> Uh, we already have idle_in_transaction_session_timeout so we would just
> need a simpler version.
>
>
> Oh i s
> The simplest thing sounds like a GUC that will automatically end a connection
> > idle for X seconds.
>
> Uh, we already have idle_in_transaction_session_timeout so we would just
> need a simpler version.
>
Oh I see it's in 9.6, AWESOME!
Cheers
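For completeness, a minimal libpq sketch of the existing 9.6+ knob mentioned
above (database name is an assumption): idle_in_transaction_session_timeout
terminates sessions that sit idle inside an open transaction, which is
narrower than the plain "idle for X seconds" GUC being discussed.

#include <stdio.h>
#include <libpq-fe.h>

int
main(void)
{
    PGconn     *conn = PQconnectdb("dbname=appdb");

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connect failed: %s", PQerrorMessage(conn));
        return 1;
    }

    /* Per-session setting; can also be set per role or in postgresql.conf. */
    PQclear(PQexec(conn, "SET idle_in_transaction_session_timeout = '60s'"));

    /*
     * If we now open a transaction and stay idle for more than 60 seconds,
     * the server terminates this session and the next command fails.
     */
    PQfinish(conn);
    return 0;
}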
On Sun, Jan 28, 2018 at 02:01:07PM -0800, Ivan Novick wrote:
> On Sat, Jan 27, 2018 at 4:40 PM, Bruce Momjian wrote:
>
> On Mon, Jan 22, 2018 at 06:51:08PM +0100, Tomas Vondra wrote:
> Right now, if you hit max_connections, we start rejecting new
> connections. Would it make sense to
On Sat, Jan 27, 2018 at 4:40 PM, Bruce Momjian wrote:
> On Mon, Jan 22, 2018 at 06:51:08PM +0100, Tomas Vondra wrote:
> Right now, if you hit max_connections, we start rejecting new
> connections. Would it make sense to allow an option to exit idle
> connections when this happens so new users ca
On Mon, Jan 22, 2018 at 06:51:08PM +0100, Tomas Vondra wrote:
> > Yes, external connection pooling is more flexible. It allows pooling to be
> > performed either at the client side or at the server side (or even
> > to combine the two approaches).
> > Also external connection pooling for PostgreSQL is not limit