Re: Lock contention high

2021-10-13 Thread Laurenz Albe
On Thu, 2021-10-14 at 11:33 +0530, Ashkil Dighin wrote:
> Captured the concurrent sessions with netstat and pg_stat_activity. Is this
> procedure the right way to capture concurrent sessions in PostgreSQL?
> 
> select * from pg_stat_activity;

[some two dozen sessions]

That doesn't look like you would get into trouble just from the
sheer number of sessions, so it must be something else.
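
If it helps, rather than eyeballing the full pg_stat_activity output you
could count the sessions directly, e.g. something like:

  select state, count(*)
  from pg_stat_activity
  where backend_type = 'client backend'
  group by state
  order by count(*) desc;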

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com





Re: Lock contention high

2021-10-13 Thread Peter Geoghegan
On Wed, Oct 13, 2021 at 6:54 PM Jeremy Schneider
 wrote:
> only half a GB of memory for autovacuum? (it will have a mandatory run as soon
> as you hit 200 million XIDs; seems like you'd want the full max of 1GB for it)

While anti-wraparound vacuums will become a problem for TPC-C (unless
you tune for it), it's not too sensitive to maintenance_work_mem. You
just don't end up accumulating too many TIDs to delete from indexes in
practice, even though the overhead from VACUUM is a concern. The new
autovacuum instrumentation in Postgres 14 makes this far clearer.
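
To actually see that instrumentation you need autovacuum logging enabled,
e.g. something like this (a value of 0 logs every autovacuum run):

  ALTER SYSTEM SET log_autovacuum_min_duration = 0;
  SELECT pg_reload_conf();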

-- 
Peter Geoghegan




Re: Lock contention high

2021-10-13 Thread Jeremy Schneider
Your settings are interesting; I'm curious what the goal is for this
particular HammerDB exercise.

A few comments inline


On 10/12/21 00:35, Ashkil Dighin wrote:
> 
> postgresql.conf used on bare metal
> 
> maintenance_work_mem = 512MB           

only half a GB of memory for autovacuum? (it will have a mandatory run as soon
as you hit 200 million XIDs; seems like you'd want the full max of 1GB for it)
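
If you want to watch how close you are to that mandatory run, something
like this shows the XID age per database (the forced vacuum kicks in at
autovacuum_freeze_max_age, 200 million by default):

  SELECT datname, age(datfrozenxid) AS xid_age
  FROM pg_database
  ORDER BY xid_age DESC;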

> synchronous_commit = off            
> checkpoint_timeout = 1h         
> checkpoint_completion_target = 1       
> checkpoint_warning = 0         

Curious about this; it seems you're just looking to see how much throughput
you can get with a config that would not be used on a real system.
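
For contrast, a rough sketch of what you'd more typically run outside a
benchmark (exact values depend entirely on workload and hardware):

  synchronous_commit = on
  checkpoint_timeout = 15min
  checkpoint_completion_target = 0.9
  checkpoint_warning = 30s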

> autovacuum = off                       

I assume you understand that autovacuum will still run when you hit 200
million XIDs. This setting seems incongruent with the previous settings,
because it seemed like you were going for throughput, which generally
requires autovacuum to be more aggressive rather than less aggressive.
Assuming the benchmark runs for a sufficiently long time, this setting
will slow things down because of accumulating bloat.
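
As a rough illustration of "more aggressive" (not a recommendation for
your specific hardware), that usually means something along these lines:

  autovacuum = on
  autovacuum_max_workers = 8
  autovacuum_naptime = 15s
  autovacuum_vacuum_scale_factor = 0.05
  autovacuum_vacuum_cost_limit = 2000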

Just a few opinions, I might be wrong, hope the feedback is helpful.  :)

-Jeremy


-- 
http://about.me/jeremy_schneider




Re: Lock contention high

2021-10-13 Thread Peter Geoghegan
On Tue, Oct 12, 2021 at 12:45 AM Ashkil Dighin  wrote:
> High lock contention observed in PostgreSQL v13.3
> The source code was compiled with GCC v11.x
> PostgreSQL version: 13.3
> Operating system: RHEL 8.3
> Kernel name: 4.18.0-305.10.2.el8_4.x86_64
> RAM size: 512GB
> SSD: 1TB
> The environment is IBM bare metal; the benchmark tool is HammerDB v4.2
> Test case: TPC-C

You didn't say how many TPC-C warehouses you used. In my experience,
people sometimes run TPC-C with relatively few, which will tend to
result in extreme contention on certain B-Tree leaf pages. (My
experiences are with BenchmarkSQL, but I can't imagine HammerDB is too
much different.)

Assuming that's the case here, for you, then it's not clear that you
have a real problem. You're really not supposed to run the benchmark
in that way, per the TPC-C spec, which strictly limits the number of
transactions per minute per warehouse -- for better or worse, valid
results generally require that you use lots of warehouses to get a
very large database (think terabytes). If you run the benchmark with
100 warehouses or less, on a big server, then the contention you'll
see will be out of all proportion to what you're ever likely to see in
the real world.
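
If you want to sanity-check the scale you're actually running at, something
like this should do it (assuming the usual HammerDB TPC-C schema with a
"warehouse" table):

  SELECT count(*) AS warehouses FROM warehouse;
  SELECT pg_size_pretty(pg_database_size(current_database()));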

-- 
Peter Geoghegan




Re: Lock contention high

2021-10-13 Thread Paul Friedman
Are you using PostGIS?

If so, there is an issue with TOAST table locking having these symptoms.
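
A rough way to check whether that's what you're hitting is to look at the
locks held or awaited on TOAST tables while the benchmark runs, e.g.:

  SELECT c.relname, l.mode, l.granted, count(*)
  FROM pg_locks l
  JOIN pg_class c ON c.oid = l.relation
  WHERE c.relname LIKE 'pg_toast%'
  GROUP BY 1, 2, 3;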


---Paul


On Wed, Oct 13, 2021 at 11:15 AM MichaelDBA  wrote:

> 1. Is there a way to tune the lock contention?
> 2. Are there any recommendations to tune/reduce lock contention via postgresql.conf?
>
> I think you'd want to find *which* LW locks are being waited on, to see if
> it's something that can be easily tuned.
>
> You can check pg_stat_activity, or maybe create a cronjob to record its
> content for later analysis.
>
>
> Hello,
>
> Also turn on log_lock_waits so you can evaluate the actual SQL causing
> the problems in the PG log files.  Thinking ahead, you may want to consider
> whether using advisory locks from the application side of things might be
> helpful for managing locks in a more pessimistic way.  Also, join with the
> pg_locks table to find out the specific resources that are in contention.
>
> Regards,
> Michael Vitale
>
>
>


Re: Lock contention high

2021-10-13 Thread MichaelDBA

1. Is there a way to tune the lock contention?
2. Are there any recommendations to tune/reduce lock contention via postgresql.conf?

I think you'd want to find *which* LW locks are being waited on, to see if it's
something that can be easily tuned.

You can check pg_stat_activity, or maybe create a cronjob to record its content
for later analysis.


Hello,

Also turn on log_lock_waits so you can evaluate the actual SQL causing 
the problems in the PG log files.  Thinking ahead, you may want to 
consider whether using advisory locks from the application side of things 
might be helpful for managing locks in a more pessimistic way.  Also, join 
with the pg_locks table to find out the specific resources that are in 
contention.
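
For example, roughly (log_lock_waits logs anything waiting longer than
deadlock_timeout, and the join shows who is currently stuck on what):

  ALTER SYSTEM SET log_lock_waits = on;
  SELECT pg_reload_conf();

  SELECT a.pid, a.wait_event_type, a.wait_event,
         l.locktype, l.relation::regclass, l.mode, a.query
  FROM pg_stat_activity a
  JOIN pg_locks l ON l.pid = a.pid
  WHERE NOT l.granted;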


Regards,
Michael Vitale




Re: Lock contention high

2021-10-13 Thread Justin Pryzby
On Tue, Oct 12, 2021 at 01:05:12PM +0530, Ashkil Dighin wrote:
> Hi,
> High lock contention observed in PostgreSQL v13.3
> The source code was compiled with GCC v11.x
> PostgreSQL version: 13.3
> Operating system: RHEL 8.3
> Kernel name: 4.18.0-305.10.2.el8_4.x86_64
> RAM size: 512GB
> SSD: 1TB
> The environment is IBM bare metal; the benchmark tool is HammerDB v4.2
> Test case: TPC-C
> 
> Perf data for 24 VU (TPC-C)
> 
> 
>   18.99%  postgres  postgres  [.] LWLockAcquire
>    7.09%  postgres  postgres  [.] _bt_compare
>    8.66%  postgres  postgres  [.] LWLockRelease
...
> 1. Is there a way to tune the lock contention?
> 2. Are there any recommendations to tune/reduce lock contention via postgresql.conf?

I think you'd want to find *which* LW locks are being waited on, to see if it's
something that can be easily tuned.

You can check pg_stat_activity, or maybe create a cronjob to record its content
for later analysis.
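
For example (untested sketch), a snapshot table plus a psql call from cron
(database name is a placeholder):

  CREATE TABLE activity_snapshot AS
  SELECT now() AS captured_at, * FROM pg_stat_activity WHERE false;

  -- then from cron, e.g. once a minute:
  --   psql -d yourdb -c "INSERT INTO activity_snapshot
  --     SELECT now(), * FROM pg_stat_activity"

Afterwards you can aggregate the wait events over time:

  SELECT wait_event_type, wait_event, count(*)
  FROM activity_snapshot
  GROUP BY 1, 2
  ORDER BY 3 DESC;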

-- 
Justin