2010/12/8 Tom Lane :
> Robert Haas writes:
>> 2010/12/8 Tom Lane :
>>> Now, it's possible that you could avoid *ever* needing to search for a
>>> specific PROCLOCK, in which case eliminating the hash calculation
>>> overhead might be worth it.
>
>> That seems like it might be feasible. The backend that holds the lock
>> ought
Robert Haas writes:
> 2010/12/8 Tom Lane :
>> Now, it's possible that you could avoid *ever* needing to search for a
>> specific PROCLOCK, in which case eliminating the hash calculation
>> overhead might be worth it.
> That seems like it might be feasible. The backend that holds the lock
> ought
2010/12/8 Tom Lane :
> Robert Haas writes:
>>> Yeah, that was my concern, too, though Tom seems skeptical (perhaps
>>> rightly). And I'm not really sure why the PROCLOCKs need to be in a
>>> hash table anyway - if we know the PROC and LOCK we can surely look up
>>> the PROCLOCK pretty expensively by following the PROC SHM_QUEUE
Robert Haas writes:
>> Yeah, that was my concern, too, though Tom seems skeptical (perhaps
>> rightly). And I'm not really sure why the PROCLOCKs need to be in a
>> hash table anyway - if we know the PROC and LOCK we can surely look up
>> the PROCLOCK pretty expensively by following the PROC SHM_QUEUE
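A minimal sketch of that lookup, with simplified stand-in structs
rather than the real PGPROC/PROCLOCK/SHM_QUEUE definitions from the
source: each backend keeps its PROCLOCKs on a per-PROC linked list, so
the rare "find a specific PROCLOCK" case becomes a plain list walk
with no hash calculation at all.

    #include <stddef.h>

    typedef struct ProcLock ProcLock;
    struct ProcLock
    {
        const void *lock;       /* which LOCK this entry covers */
        int         holdMask;   /* modes held */
        ProcLock   *procLink;   /* next PROCLOCK owned by this backend */
    };

    typedef struct
    {
        ProcLock *myProcLocks;  /* head of this backend's PROCLOCK list */
    } Proc;

    /* Linear, hence "pretty expensive" per call -- but acceptable if
     * it is only reached on rare paths such as release-all or deadlock
     * checking, which is the point being made above. */
    static ProcLock *
    find_proclock(Proc *proc, const void *lock)
    {
        ProcLock *pl;

        for (pl = proc->myProcLocks; pl != NULL; pl = pl->procLink)
            if (pl->lock == lock)
                return pl;
        return NULL;
    }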
2010/12/7 Robert Haas :
> 2010/12/7 Віталій Тимчишин :
>> As far as I can see from the source, there is a lot of code executed under
>> the partition lock protection, like two hash searches (and possibly
>> allocations).
>
> Yeah, that was my concern, too, though Tom seems skeptical (perhaps
> rightly).
2010/12/7 Віталій Тимчишин :
> As far as I can see from the source, there is a lot of code executed under
> the partition lock protection, like two hash searches (and possibly
> allocations).
Yeah, that was my concern, too, though Tom seems skeptical (perhaps
rightly). And I'm not really sure why the PROCLOCKs need to be in a
hash table anyway - if we know the PROC and LOCK we can surely look up
the PROCLOCK pretty expensively by following the PROC SHM_QUEUE
2010/12/7 Віталій Тимчишин :
>
>
> 2010/12/7 Robert Haas
>>
>> On Tue, Dec 7, 2010 at 1:08 PM, Ivan Voras wrote:
>>
>> > I'm not very familiar with PostgreSQL code but if we're
>> > brainstorming... if you're only trying to protect against a small
>> > number of expensive operations (like DROP, etc.) that don't really
>> > happen often, wouldn't an atomic reference counter
2010/12/7 Robert Haas
> On Tue, Dec 7, 2010 at 1:08 PM, Ivan Voras wrote:
>
> > I'm not very familiar with PostgreSQL code but if we're
> > brainstorming... if you're only trying to protect against a small
> > number of expensive operations (like DROP, etc.) that don't really
> > happen often, wouldn't an atomic reference counter
On 7 December 2010 19:10, Robert Haas wrote:
>> I'm not very familiar with PostgreSQL code but if we're
>> brainstorming... if you're only trying to protect against a small
>> number of expensive operations (like DROP, etc.) that don't really
>> happen often, wouldn't an atomic reference counter
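To make the counter idea concrete, here is an illustrative C11 sketch
(not PostgreSQL code; all names invented, and the spin-wait is a
simplification of what a real version would do): readers atomically
bump a count, and the rare expensive operation flips a flag and waits
for the count to drain.

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <sched.h>

    static atomic_int  users = 0;       /* backends using the object */
    static atomic_bool blocked = false; /* set while a DROP-like op runs */

    bool
    enter_shared(void)
    {
        if (atomic_load(&blocked))
            return false;               /* caller must retry/sleep */
        atomic_fetch_add(&users, 1);
        if (atomic_load(&blocked))      /* re-check: a DROP may have begun */
        {
            atomic_fetch_sub(&users, 1);
            return false;
        }
        return true;
    }

    void
    leave_shared(void)
    {
        atomic_fetch_sub(&users, 1);
    }

    void
    enter_exclusive(void)               /* the rare, expensive path */
    {
        atomic_store(&blocked, true);
        while (atomic_load(&users) != 0)
            sched_yield();              /* spin; a real version would sleep */
    }

    void
    leave_exclusive(void)
    {
        atomic_store(&blocked, false);
    }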
On Tue, Dec 7, 2010 at 1:08 PM, Ivan Voras wrote:
> On 7 December 2010 18:37, Robert Haas wrote:
>> On Mon, Dec 6, 2010 at 9:59 PM, Jignesh Shah wrote:
>>> That's exactly what I concluded when I was doing the sysbench simple
>>> read-only test. I had also tried with different lock partitions and
>>> it did not help since they all go after the same table.
On Tue, Dec 7, 2010 at 12:50 PM, Tom Lane wrote:
> Robert Haas writes:
>> I wonder if it would be possible to have a very short critical section
>> where we grab the partition lock, acquire the heavyweight lock, and
>> release the partition lock; and then only as a second step record (in
>> the form of a PROCLOCK) the fact that we got it.
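A sketch of that two-step dance, with no-op stand-ins for the real
primitives (the actual LWLock API takes different arguments; this is
only to show the shape of the proposal, not working lock.c code):

    typedef struct LWLock LWLock;
    typedef struct Lock   Lock;
    typedef struct Proc   Proc;

    /* No-op stand-ins so the sketch is self-contained. */
    static void LWLockAcquire(LWLock *l) { (void) l; }
    static void LWLockRelease(LWLock *l) { (void) l; }
    static void grant_lock(Lock *lock, int mode) { (void) lock; (void) mode; }
    static void record_proclock(Proc *me, Lock *lock, int mode)
    { (void) me; (void) lock; (void) mode; }

    static void
    lock_acquire_two_step(LWLock *partitionLock, Lock *lock,
                          Proc *me, int mode)
    {
        /* Step 1: keep the partition-lock critical section minimal --
         * touch only the shared LOCK state. */
        LWLockAcquire(partitionLock);
        grant_lock(lock, mode);
        LWLockRelease(partitionLock);

        /* Step 2: record that we hold it, outside the partition lock.
         * The sticking point raised in the thread: until this runs,
         * other backends (e.g. the deadlock checker or error-cleanup
         * paths) cannot see our PROCLOCK. */
        record_proclock(me, lock, mode);
    }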
On 7 December 2010 18:37, Robert Haas wrote:
> On Mon, Dec 6, 2010 at 9:59 PM, Jignesh Shah wrote:
>> That's exactly what I concluded when I was doing the sysbench simple
>> read-only test. I had also tried with different lock partitions and it
>> did not help since they all go after the same table. I think one way
>> to kind of avoid the problem
Hi Tom
I suspect I may be missing something here, but I think it's a pretty
universal truism that cache lines are aligned to power-of-2 memory
addresses, so it would suffice to ensure during setup that the lower order n
bits of the object address are all zeros for each critical object; if the
malloc
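To make the power-of-2 point concrete, a small standalone C program
(the 64-byte line size is an assumption; real hardware varies): an
address whose low-order bits are zero starts a fresh cache line, so
two such objects can never false-share a line.

    #define _POSIX_C_SOURCE 200112L
    #include <stdio.h>
    #include <stdlib.h>

    #define CACHE_LINE 64   /* assumed power-of-2 line size */

    int
    main(void)
    {
        void *p;

        /* posix_memalign guarantees the low-order bits are zero. */
        if (posix_memalign(&p, CACHE_LINE, 128) != 0)
            return 1;

        printf("addr %p, low bits %#lx\n",
               p, (unsigned long) p & (CACHE_LINE - 1));  /* always 0 */
        free(p);
        return 0;
    }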
Robert Haas writes:
> I wonder if it would be possible to have a very short critical section
> where we grab the partition lock, acquire the heavyweight lock, and
> release the partition lock; and then only as a second step record (in
> the form of a PROCLOCK) the fact that we got it.
[ confused.
On Mon, Dec 6, 2010 at 9:59 PM, Jignesh Shah wrote:
> That's exactly what I concluded when I was doing the sysbench simple
> read-only test. I had also tried with different lock partitions and it
> did not help since they all go after the same table. I think one way
> to kind of avoid the problem
On Tue, Dec 7, 2010 at 10:59 AM, Jignesh Shah wrote:
> On Tue, Dec 7, 2010 at 1:10 AM, Robert Haas wrote:
>> On Sun, Nov 21, 2010 at 7:15 PM, Ivan Voras wrote:
>>> The "sbwait" part is from FreeBSD - IPC sockets, but so much blocking on
>>> semwait indicates large contention in PostgreSQL.
>>
>> I can reproduce this. I suspect, but cannot yet prove, that this is
>> contention over the lock manager partition locks
On Tue, Dec 7, 2010 at 1:10 AM, Robert Haas wrote:
> On Sun, Nov 21, 2010 at 7:15 PM, Ivan Voras wrote:
>> The "sbwait" part is from FreeBSD - IPC sockets, but so much blocking on
>> semwait indicates large contention in PostgreSQL.
>
> I can reproduce this. I suspect, but cannot yet prove, that this is
> contention over the lock manager partition locks
On Mon, Dec 6, 2010 at 12:10 PM, Robert Haas wrote:
> On Sun, Nov 21, 2010 at 7:15 PM, Ivan Voras wrote:
>> The "sbwait" part is from FreeBSD - IPC sockets, but so much blocking on
>> semwait indicates large contention in PostgreSQL.
>
> I can reproduce this. I suspect, but cannot yet prove, that this is
> contention over the lock manager partition locks
On Sun, Nov 21, 2010 at 7:15 PM, Ivan Voras wrote:
> The "sbwait" part is from FreeBSD - IPC sockets, but so much blocking on
> semwait indicates large contention in PostgreSQL.
I can reproduce this. I suspect, but cannot yet prove, that this is
contention over the lock manager partition locks.
On 26 November 2010 03:00, Greg Smith wrote:
> Two suggestions to improve your results here:
>
> 1) Don't set shared_buffers to 10GB. There are some known issues with large
> settings for that which may or may not be impacting your results. Try 4GB
> instead, just to make sure you're not even o
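In postgresql.conf terms, Greg's first suggestion is a one-line change
(the 4GB value is his; tune from there):

    # postgresql.conf -- back off from 10GB per the suggestion above
    shared_buffers = 4GB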
Ivan Voras wrote:
> PostgreSQL 9.0.1, 10 GB shared buffers, using pgbench with a scale
> factor of 500 (7.5 GB database)
> with pgbench -S (SELECT-queries only) the performance curve is:
>
> -c#   result
>   4    33549
>   8    64864
>  12    79491
>  16    79887
>  20    66957
>  24    52576
>  28    50406
>  32    49491
>  40
On 11/22/10 18:47, Kevin Grittner wrote:
> Ivan Voras wrote:
>> It looks like a hack
> Not to everyone. In the referenced section, Hellerstein,
> Stonebraker and Hamilton say:
> "any good multi-user system has an admission control policy"
> In the case of PostgreSQL I understand the counter-argument,
Vitalii Tymchyshyn wrote:
> the simplest option that will make most people happy would be to
> have a limit (waitable semaphore) on backends actively executing
> the query.
That's very similar to the admission control policy I proposed,
except that I suggested a limit on the number of active database
transactions
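A minimal sketch of the "waitable semaphore" idea in plain POSIX C
(illustrative only, not a proposed patch; MAX_ACTIVE = 12 is assumed
here to match the 12 "real" cores mentioned in the thread):

    #include <semaphore.h>

    #define MAX_ACTIVE 12           /* assumed: number of real cores */

    static sem_t active_slots;

    void
    admission_init(void)
    {
        /* pshared = 1: in a real server this would live in shared
         * memory so all backends see the same semaphore. */
        sem_init(&active_slots, 1, MAX_ACTIVE);
    }

    void
    run_query(void (*exec)(void))
    {
        sem_wait(&active_slots);    /* block while MAX_ACTIVE are busy */
        exec();
        sem_post(&active_slots);    /* free a slot for a waiter */
    }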
24.11.10 02:11, Craig Ringer wrote:
> On 11/22/2010 11:38 PM, Ivan Voras wrote:
>> On 11/22/10 16:26, Kevin Grittner wrote:
>>> Ivan Voras wrote:
>>>> On 11/22/10 02:47, Kevin Grittner wrote:
>>>>> Ivan Voras wrote:
>>>>>> After 16 clients (which is still good since there are only 12
>>>>>> "real" cores in the system)
On 24 November 2010 01:11, Craig Ringer wrote:
> On 11/22/2010 11:38 PM, Ivan Voras wrote:
>> It looks like a hack (and one which is already implemented by connection
>> pool software); the underlying problem should be addressed.
>
> My (poor) understanding is that addressing the underlying problem
On 11/22/2010 11:38 PM, Ivan Voras wrote:
> On 11/22/10 16:26, Kevin Grittner wrote:
>> Ivan Voras wrote:
>>> On 11/22/10 02:47, Kevin Grittner wrote:
>>>> Ivan Voras wrote:
>>>>> After 16 clients (which is still good since there are only 12
>>>>> "real" cores in the system), the performance drops sharply
>>>> Yet another data point to confirm the importance of connection
>>>> pooling. :-)
Ivan Voras wrote:
> It looks like a hack
Not to everyone. In the referenced section, Hellerstein,
Stonebraker and Hamilton say:
"any good multi-user system has an admission control policy"
In the case of PostgreSQL I understand the counter-argument,
although I'm inclined to think that it'
On 11/22/10 16:26, Kevin Grittner wrote:
> Ivan Voras wrote:
>> On 11/22/10 02:47, Kevin Grittner wrote:
>>> Ivan Voras wrote:
>>>> After 16 clients (which is still good since there are only 12
>>>> "real" cores in the system), the performance drops sharply
>>> Yet another data point to confirm the importance of connection
>>> pooling. :-)
Ivan Voras wrote:
> On 11/22/10 02:47, Kevin Grittner wrote:
>> Ivan Voras wrote:
>>
>>> After 16 clients (which is still good since there are only 12
>>> "real" cores in the system), the performance drops sharply
>>
>> Yet another data point to confirm the importance of connection
>> pooling. :-)
Hi Ivan,
We have the same issue on our database machines (which are 2x6
Intel(R) Xeon(R) CPU X5670 @ 2.93GHz with 24 logical cores and 144Gb
of RAM) -- they run RHEL 5. The issue occurs with our normal OLTP
workload, so it's not just pgbench.
We use pgbouncer to limit total connections to 15 (thi
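For reference, a pgbouncer setup along those lines might look like the
snippet below; the values are illustrative, and only the 15 comes from
the message above (pool_mode, max_client_conn and default_pool_size
are standard pgbouncer.ini settings):

    [pgbouncer]
    pool_mode = transaction
    max_client_conn = 1000    ; clients may connect freely...
    default_pool_size = 15    ; ...but only 15 server connections per db/user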
On Sun, Nov 21, 2010 at 9:18 PM, Ivan Voras wrote:
> On 11/22/10 02:47, Kevin Grittner wrote:
>>
>> Ivan Voras wrote:
>>
>>> After 16 clients (which is still good since there are only 12
>>> "real" cores in the system), the performance drops sharply
>>
>> Yet another data point to confirm the importance of connection
>> pooling. :-)
On 11/22/10 02:47, Kevin Grittner wrote:
> Ivan Voras wrote:
>> After 16 clients (which is still good since there are only 12
>> "real" cores in the system), the performance drops sharply
> Yet another data point to confirm the importance of connection
> pooling. :-)
I agree, connection pooling will
Ivan Voras wrote:
> After 16 clients (which is still good since there are only 12
> "real" cores in the system), the performance drops sharply
Yet another data point to confirm the importance of connection
pooling. :-)
-Kevin
This is not a request for help but a report, in case it helps developers
or someone in the future. The setup is:
AMD64 machine, 24 GB RAM, 2x6-core Xeon CPU + HTT (24 logical CPUs)
FreeBSD 8.1-stable, AMD64
PostgreSQL 9.0.1, 10 GB shared buffers, using pgbench with a scale
factor of 500 (7.5 GB database)
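The pgbench invocations implied by that setup would be roughly as
follows (the database name "bench" and the run length are made up; -i,
-s, -S, -c and -T are standard pgbench options):

    pgbench -i -s 500 bench          # initialize at scale factor 500 (~7.5 GB)
    pgbench -S -c 16 -T 60 bench     # SELECT-only run with 16 clients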