Robert Haas robertmh...@gmail.com writes:
Yeah, that was my concern, too, though Tom seems skeptical (perhaps
rightly). And I'm not really sure why the PROCLOCKs need to be in a
hash table anyway - if we know the PROC and LOCK we can surely look up
the PROCLOCK pretty inexpensively by following
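The list-walk lookup being suggested can be sketched as follows: if each PROC carries a linked list of its PROCLOCKs, finding the entry for a given LOCK needs no hash computation at all. All struct layouts below are invented for illustration; the real PROCLOCK, LOCK and PGPROC definitions live in src/include/storage/lock.h and proc.h and carry far more state.

```c
#include <stddef.h>

/* Simplified stand-ins for PostgreSQL's structures; invented layout. */
typedef struct Lock { int id; } Lock;

typedef struct ProcLock {
    Lock *lock;               /* which lock this entry refers to */
    struct ProcLock *next;    /* next PROCLOCK owned by the same PROC */
} ProcLock;

typedef struct Proc {
    ProcLock *proclocks;      /* head of this backend's PROCLOCK list */
} Proc;

/* Walk the PROC's own list instead of computing a hash: O(n) in the
 * number of locks this backend holds, but with no hashing overhead. */
static ProcLock *find_proclock(Proc *proc, Lock *lock)
{
    for (ProcLock *pl = proc->proclocks; pl != NULL; pl = pl->next)
        if (pl->lock == lock)
            return pl;
    return NULL;
}
```

Since a backend rarely holds more than a handful of heavyweight locks at once, the linear walk can be cheaper in practice than computing a hash and probing a shared table.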
2010/12/8 Tom Lane t...@sss.pgh.pa.us:
Robert Haas robertmh...@gmail.com writes:
Yeah, that was my concern, too, though Tom seems skeptical (perhaps
rightly). And I'm not really sure why the PROCLOCKs need to be in a
hash table anyway - if we know the PROC and LOCK we can surely look up
the
Robert Haas robertmh...@gmail.com writes:
2010/12/8 Tom Lane t...@sss.pgh.pa.us:
Now, it's possible that you could avoid *ever* needing to search for a
specific PROCLOCK, in which case eliminating the hash calculation
overhead might be worth it.
That seems like it might be feasible. The
2010/12/8 Tom Lane t...@sss.pgh.pa.us:
Robert Haas robertmh...@gmail.com writes:
2010/12/8 Tom Lane t...@sss.pgh.pa.us:
Now, it's possible that you could avoid *ever* needing to search for a
specific PROCLOCK, in which case eliminating the hash calculation
overhead might be worth it.
That
Robert Haas robertmh...@gmail.com writes:
I wonder if it would be possible to have a very short critical section
where we grab the partition lock, acquire the heavyweight lock, and
release the partition lock; and then only as a second step record (in
the form of a PROCLOCK) the fact that we
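Robert's two-step idea can be sketched as below. This is a hypothetical illustration of splitting the critical section, not PostgreSQL code; the field names are invented.

```c
#include <pthread.h>
#include <stdbool.h>

/* Toy state standing in for one lock-manager partition. */
typedef struct {
    pthread_mutex_t partition_lock;
    int granted;        /* holders of the heavyweight lock */
    int recorded;       /* bookkeeping entries (the PROCLOCK analogue) */
} Partition;

static bool acquire_two_phase(Partition *p)
{
    /* Phase 1: only the grant decision happens under the partition
     * lock, keeping the critical section as short as possible. */
    pthread_mutex_lock(&p->partition_lock);
    p->granted++;
    pthread_mutex_unlock(&p->partition_lock);

    /* Phase 2: record the acquisition after the partition lock is
     * released, so other backends are no longer blocked on it.  In a
     * real system this step needs its own (cheaper) synchronization. */
    p->recorded++;
    return true;
}
```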
Hi Tom
I suspect I may be missing something here, but I think it's a pretty
universal truism that cache lines are aligned to power-of-2 memory
addresses, so it would suffice to ensure during setup that the lower order n
bits of the object address are all zeros for each critical object; if the
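Ivan's observation can be checked mechanically: because cache-line sizes are powers of two, an object is line-aligned exactly when the low-order bits of its address are all zero. A small sketch (64 bytes is an assumption typical of x86; real code should query the line size at runtime):

```c
#define _POSIX_C_SOURCE 200112L
#include <stdint.h>
#include <stdlib.h>

#define CACHE_LINE 64   /* assumed line size; not guaranteed by the C standard */

/* Allocate an object so the low log2(CACHE_LINE) bits of its address
 * are zero, i.e. it starts exactly on a cache-line boundary. */
static void *alloc_cacheline_aligned(size_t size)
{
    void *p = NULL;
    if (posix_memalign(&p, CACHE_LINE, size) != 0)
        return NULL;
    return p;
}

/* With a power-of-2 line size, "aligned" just means the low bits are 0. */
static int is_cacheline_aligned(const void *p)
{
    return ((uintptr_t)p & (CACHE_LINE - 1)) == 0;
}
```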
On 7 December 2010 18:37, Robert Haas robertmh...@gmail.com wrote:
On Mon, Dec 6, 2010 at 9:59 PM, Jignesh Shah jks...@gmail.com wrote:
That's exactly what I concluded when I was doing the sysbench simple
read-only test. I had also tried with different lock partitions and it
did not help since
On Tue, Dec 7, 2010 at 12:50 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Robert Haas robertmh...@gmail.com writes:
I wonder if it would be possible to have a very short critical section
where we grab the partition lock, acquire the heavyweight lock, and
release the partition lock; and then only as
On Tue, Dec 7, 2010 at 1:08 PM, Ivan Voras ivo...@freebsd.org wrote:
On 7 December 2010 18:37, Robert Haas robertmh...@gmail.com wrote:
On Mon, Dec 6, 2010 at 9:59 PM, Jignesh Shah jks...@gmail.com wrote:
That's exactly what I concluded when I was doing the sysbench simple
read-only test. I
On 7 December 2010 19:10, Robert Haas robertmh...@gmail.com wrote:
I'm not very familiar with PostgreSQL code but if we're
brainstorming... if you're only trying to protect against a small
number of expensive operations (like DROP, etc.) that don't really
happen often, wouldn't an atomic
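The atomic-counter idea being floated can be sketched as follows: frequent shared operations touch only an atomic counter, while the rare expensive operation (DROP and friends) raises a flag and waits for readers to drain. This is an invented, simplified illustration; it is neither fair nor complete.

```c
#include <stdatomic.h>
#include <stdbool.h>

typedef struct {
    atomic_int readers;            /* shared holders currently inside */
    atomic_bool exclusive_pending; /* a rare expensive op wants in */
} RWGate;

static bool shared_enter(RWGate *g)
{
    if (atomic_load(&g->exclusive_pending))
        return false;                       /* back off, exclusive op queued */
    atomic_fetch_add(&g->readers, 1);
    if (atomic_load(&g->exclusive_pending)) {
        atomic_fetch_sub(&g->readers, 1);   /* re-check after publishing */
        return false;
    }
    return true;
}

static void shared_exit(RWGate *g)
{
    atomic_fetch_sub(&g->readers, 1);
}

/* The rare path: announce intent, then succeed only once no shared
 * holders remain (a real caller would spin or sleep until then). */
static bool exclusive_try_enter(RWGate *g)
{
    atomic_store(&g->exclusive_pending, true);
    return atomic_load(&g->readers) == 0;
}
```

The common case costs one atomic increment and two loads, with no lock-manager partition traffic at all; only the rare DROP-like operation pays for coordination.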
2010/12/7 Robert Haas robertmh...@gmail.com
On Tue, Dec 7, 2010 at 1:08 PM, Ivan Voras ivo...@freebsd.org wrote:
I'm not very familiar with PostgreSQL code but if we're
brainstorming... if you're only trying to protect against a small
number of expensive operations (like DROP, etc.) that
2010/12/7 Віталій Тимчишин tiv...@gmail.com:
2010/12/7 Robert Haas robertmh...@gmail.com
On Tue, Dec 7, 2010 at 1:08 PM, Ivan Voras ivo...@freebsd.org wrote:
I'm not very familiar with PostgreSQL code but if we're
brainstorming... if you're only trying to protect against a small
number
2010/12/7 Віталій Тимчишин tiv...@gmail.com:
As far as I can see from the source, there is a lot of code executed under
the partition lock protection, like two hash searches (and possibly
allocations).
Yeah, that was my concern, too, though Tom seems skeptical (perhaps
rightly). And I'm not
2010/12/7 Robert Haas robertmh...@gmail.com:
2010/12/7 Віталій Тимчишин tiv...@gmail.com:
As far as I can see from the source, there is a lot of code executed under
the partition lock protection, like two hash searches (and possibly
allocations).
Yeah, that was my concern, too, though Tom
On Sun, Nov 21, 2010 at 7:15 PM, Ivan Voras ivo...@freebsd.org wrote:
The sbwait part is from FreeBSD - IPC sockets, but so much blocking on
semwait indicates large contention in PostgreSQL.
I can reproduce this. I suspect, but cannot yet prove, that this is
contention over the lock manager
On Mon, Dec 6, 2010 at 12:10 PM, Robert Haas robertmh...@gmail.com wrote:
On Sun, Nov 21, 2010 at 7:15 PM, Ivan Voras ivo...@freebsd.org wrote:
The sbwait part is from FreeBSD - IPC sockets, but so much blocking on
semwait indicates large contention in PostgreSQL.
I can reproduce this. I
On Tue, Dec 7, 2010 at 1:10 AM, Robert Haas robertmh...@gmail.com wrote:
On Sun, Nov 21, 2010 at 7:15 PM, Ivan Voras ivo...@freebsd.org wrote:
The sbwait part is from FreeBSD - IPC sockets, but so much blocking on
semwait indicates large contention in PostgreSQL.
I can reproduce this. I
On Tue, Dec 7, 2010 at 10:59 AM, Jignesh Shah jks...@gmail.com wrote:
On Tue, Dec 7, 2010 at 1:10 AM, Robert Haas robertmh...@gmail.com wrote:
On Sun, Nov 21, 2010 at 7:15 PM, Ivan Voras ivo...@freebsd.org wrote:
The sbwait part is from FreeBSD - IPC sockets, but so much blocking on
semwait
On 11/22/10 18:47, Kevin Grittner wrote:
Ivan Voras ivo...@freebsd.org wrote:
It looks like a hack
Not to everyone. In the referenced section, Hellerstein,
Stonebraker and Hamilton say:
any good multi-user system has an admission control policy
In the case of PostgreSQL I understand the
Ivan Voras wrote:
PostgreSQL 9.0.1, 10 GB shared buffers, using pgbench with a scale
factor of 500 (7.5 GB database)
with pgbench -S (SELECT-queries only) the performance curve is:
 -c | result (tps)
----+-------------
  4 | 33549
  8 | 64864
 12 | 79491
 16 | 79887
 20 | 66957
 24 | 52576
 28 | 50406
 32 | 49491
 40 |
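For reference, the run described above corresponds roughly to this pgbench invocation; the database name and run duration are assumptions, since only the -S mode and the scale factor are stated in the report.

```shell
# Initialize a scale-factor-500 database (~7.5 GB), then run the
# SELECT-only workload at a given client count; repeat for each -c value.
pgbench -i -s 500 bench
pgbench -S -c 16 -T 60 bench
```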
On 26 November 2010 03:00, Greg Smith g...@2ndquadrant.com wrote:
Two suggestions to improve your results here:
1) Don't set shared_buffers to 10GB. There are some known issues with large
settings for that which may or may not be impacting your results. Try 4GB
instead, just to make sure
On 24.11.10 02:11, Craig Ringer wrote:
On 11/22/2010 11:38 PM, Ivan Voras wrote:
On 11/22/10 16:26, Kevin Grittner wrote:
Ivan Voras ivo...@freebsd.org wrote:
On 11/22/10 02:47, Kevin Grittner wrote:
Ivan Voras wrote:
After 16 clients (which is still good since there are only 12
real
Vitalii Tymchyshyn tiv...@gmail.com wrote:
the simplest option that will make most people happy would be to
have a limit (waitable semaphore) on backends actively executing
the query.
That's very similar to the admission control policy I proposed,
except that I suggested a limit on the
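The waitable-semaphore limit proposed above can be sketched with a plain POSIX semaphore. The names are invented; this illustrates the proposal, not anything PostgreSQL actually implements.

```c
#include <semaphore.h>
#include <stdbool.h>

/* At most `limit` backends may be actively executing a query;
 * the rest block in query_begin() until a slot frees up. */
typedef struct {
    sem_t slots;
} Admission;

static bool admission_init(Admission *a, unsigned limit)
{
    return sem_init(&a->slots, 0, limit) == 0;
}

/* Block until an execution slot is free, then take it. */
static void query_begin(Admission *a)  { sem_wait(&a->slots); }

/* Release the slot so a waiting backend may proceed. */
static void query_end(Admission *a)    { sem_post(&a->slots); }

/* Non-blocking variant: false when the limit is already reached. */
static bool query_try_begin(Admission *a)
{
    return sem_trywait(&a->slots) == 0;
}
```

This is essentially what external connection poolers achieve today, moved inside the server so that admitted connections can still be held open while their queries queue.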
On 11/22/2010 11:38 PM, Ivan Voras wrote:
On 11/22/10 16:26, Kevin Grittner wrote:
Ivan Voras ivo...@freebsd.org wrote:
On 11/22/10 02:47, Kevin Grittner wrote:
Ivan Voras wrote:
After 16 clients (which is still good since there are only 12
real cores in the system), the performance drops
On 24 November 2010 01:11, Craig Ringer cr...@postnewspapers.com.au wrote:
On 11/22/2010 11:38 PM, Ivan Voras wrote:
It looks like a hack (and one which is already implemented by connection
pool software); the underlying problem should be addressed.
My (poor) understanding is that addressing
Hi Ivan,
We have the same issue on our database machines (which are 2x6
Intel(R) Xeon(R) CPU X5670 @ 2.93GHz with 24 logical cores and 144Gb
of RAM) -- they run RHEL 5. The issue occurs with our normal OLTP
workload, so it's not just pgbench.
We use pgbouncer to limit total connections to 15
Ivan Voras ivo...@freebsd.org wrote:
On 11/22/10 02:47, Kevin Grittner wrote:
Ivan Voras wrote:
After 16 clients (which is still good since there are only 12
real cores in the system), the performance drops sharply
Yet another data point to confirm the importance of connection
pooling.
On 11/22/10 16:26, Kevin Grittner wrote:
Ivan Voras ivo...@freebsd.org wrote:
On 11/22/10 02:47, Kevin Grittner wrote:
Ivan Voras wrote:
After 16 clients (which is still good since there are only 12
real cores in the system), the performance drops sharply
Yet another data point to confirm
Ivan Voras ivo...@freebsd.org wrote:
It looks like a hack
Not to everyone. In the referenced section, Hellerstein,
Stonebraker and Hamilton say:
any good multi-user system has an admission control policy
In the case of PostgreSQL I understand the counter-argument,
although I'm inclined
This is not a request for help but a report, in case it helps developers
or someone in the future. The setup is:
AMD64 machine, 24 GB RAM, 2x6-core Xeon CPU + HTT (24 logical CPUs)
FreeBSD 8.1-stable, AMD64
PostgreSQL 9.0.1, 10 GB shared buffers, using pgbench with a scale
factor of 500 (7.5
Ivan Voras wrote:
After 16 clients (which is still good since there are only 12
real cores in the system), the performance drops sharply
Yet another data point to confirm the importance of connection
pooling. :-)
-Kevin
--
Sent via pgsql-performance mailing list
On 11/22/10 02:47, Kevin Grittner wrote:
Ivan Voras wrote:
After 16 clients (which is still good since there are only 12
real cores in the system), the performance drops sharply
Yet another data point to confirm the importance of connection
pooling. :-)
I agree, connection pooling will
On Sun, Nov 21, 2010 at 9:18 PM, Ivan Voras ivo...@freebsd.org wrote:
On 11/22/10 02:47, Kevin Grittner wrote:
Ivan Voras wrote:
After 16 clients (which is still good since there are only 12
real cores in the system), the performance drops sharply
Yet another data point to confirm the