On 2017-10-20 11:54, Sokolov Yura wrote:
Hello,
On 2017-10-19 19:46, Andres Freund wrote:
On 2017-10-19 14:36:56 +0300, Sokolov Yura wrote:
> > + init_local_spin_delay();
>
> The way you moved this around has the disadvantage that we now do this -
> a number of writes - even in the very
Hello,
On 2017-10-19 19:46, Andres Freund wrote:
On 2017-10-19 14:36:56 +0300, Sokolov Yura wrote:
> > + init_local_spin_delay();
>
> The way you moved this around has the disadvantage that we now do this -
> a number of writes - even in the very common case where the lwlock can
> be
On 2017-10-19 14:36:56 +0300, Sokolov Yura wrote:
> > > + init_local_spin_delay();
> >
> > The way you moved this around has the disadvantage that we now do this -
> > a number of writes - even in the very common case where the lwlock can
> > be acquired directly.
>
> Excuse me, I don't
On 2017-10-19 02:28, Andres Freund wrote:
On 2017-06-05 16:22:58 +0300, Sokolov Yura wrote:
The algorithm for LWLockWaitForVar is also refactored. The new version is:
1. If the lock is not held by anyone, it exits immediately.
2. Otherwise it checks whether the WaitList lock can be taken, because the variable
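Step 1 of that refactoring is simple to picture. Here is a minimal standalone sketch of the early exit, written with C11 atomics instead of PostgreSQL's pg_atomic_* API; var_lock_t and EXCLUSIVE_BIT are hypothetical stand-ins for lwlock.c's state word and LW_VAL_EXCLUSIVE:

#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

#define EXCLUSIVE_BIT ((uint32_t) 1 << 24)  /* hypothetical bit layout */

typedef struct
{
    _Atomic uint32_t state;   /* lock word; EXCLUSIVE_BIT set while held */
    uint64_t         value;   /* variable protected by the wait-list lock */
} var_lock_t;

/*
 * Step 1: if nobody holds the lock, the waiter can return at once,
 * without touching the wait-list lock or the protected variable.
 * Step 2 (trying the wait-list lock so the variable can be read
 * safely) would follow in the caller.
 */
bool
lock_is_free(var_lock_t *lock)
{
    uint32_t state = atomic_load_explicit(&lock->state,
                                          memory_order_acquire);

    return (state & EXCLUSIVE_BIT) == 0;
}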
Hi,
On 2017-10-19 03:03, Andres Freund wrote:
Hi,
On 2017-09-08 22:35:39 +0300, Sokolov Yura wrote:
 /*
  * Internal function that tries to atomically acquire the lwlock in the passed
- * in mode.
+ * in mode. If it could not grab the lock, it doesn't put the proc into the
+ * wait queue.
  *
-
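The sentence added to that comment states a contract worth spelling out: the attempt function only performs the atomic acquisition, and queueing on failure is entirely the caller's job. A rough caller-side sketch of that division of labor (attempt_lock, queue_self, dequeue_self, and sleep_until_woken are illustrative names, not lwlock.c's actual functions):

#include <stdbool.h>

/* Illustrative declarations, not PostgreSQL's actual API. */
typedef struct lock lock_t;
typedef enum { MODE_SHARED, MODE_EXCLUSIVE } lock_mode_t;

bool attempt_lock(lock_t *lock, lock_mode_t mode); /* CAS only; never queues */
void queue_self(lock_t *lock, lock_mode_t mode);
void dequeue_self(lock_t *lock);
void sleep_until_woken(lock_t *lock);

/*
 * Because the attempt function never puts the process on the wait
 * queue, the acquire path must queue, recheck, and sleep itself.
 */
void
acquire(lock_t *lock, lock_mode_t mode)
{
    while (!attempt_lock(lock, mode))
    {
        queue_self(lock, mode);

        /* Recheck after queueing, or a concurrent release could be missed. */
        if (attempt_lock(lock, mode))
        {
            dequeue_self(lock);   /* got the lock after all */
            return;
        }
        sleep_until_woken(lock);
    }
}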
Hi,
On 2017-09-08 22:35:39 +0300, Sokolov Yura wrote:
> From 722a8bed17f82738135264212dde7170b8c0f397 Mon Sep 17 00:00:00 2001
> From: Sokolov Yura
> Date: Mon, 29 May 2017 09:25:41 +
> Subject: [PATCH 1/6] More correct use of spinlock inside LWLockWaitListLock.
Hi,
On 2017-06-05 16:22:58 +0300, Sokolov Yura wrote:
> The patch changes the way an LWLock is acquired.
>
> Before the patch, the lock is acquired using the following procedure:
> 1. First, LWLock->state is checked for the ability to take the lock, and a
> CAS is performed (either the lock could be acquired or not). And it is
On 09/11/2017 11:01 AM, Jesper Pedersen wrote:
Thanks for working on this!
Moved to "Ready for Committer".
Best regards,
Jesper
Hi,
On 09/08/2017 03:35 PM, Sokolov Yura wrote:
I'm seeing
-M prepared: Up to 11% improvement
-M prepared -S: No improvement, no regression ("noise")
-M prepared -N: Up to 12% improvement
For all runs, the improvement shows up the closer you get to the number
of CPU threads, or above. Although
Hi, Jesper
Thank you for reviewing.
On 2017-09-08 18:33, Jesper Pedersen wrote:
Hi,
On 07/18/2017 01:20 PM, Sokolov Yura wrote:
I'm sending a rebased version with a couple of one-line tweaks
(less skip_wait_list on shared lock, and don't update spin-stat on
acquiring).
I have been running
Hi,
On 07/18/2017 01:20 PM, Sokolov Yura wrote:
I'm sending a rebased version with a couple of one-line tweaks
(less skip_wait_list on shared lock, and don't update spin-stat on
acquiring).
I have been running this patch on a 2S/28C/56T/256Gb w/ 2 x RAID10 SSD
setup (1 to 375 clients on logged
On 2017-07-18 20:20, Sokolov Yura wrote:
On 2017-06-05 16:22, Sokolov Yura wrote:
Good day, everyone.
This patch improves the performance of contended LWLocks.
The patch makes lock acquisition a single CAS loop:
1. LWLock->state is read, and the ability to acquire the lock is detected.
If there is
On 2017-06-05 16:22, Sokolov Yura wrote:
Good day, everyone.
This patch improves the performance of contended LWLocks.
The patch makes lock acquisition a single CAS loop:
1. LWLock->state is read, and the ability to acquire the lock is detected.
If there is a possibility to take the lock, a CAS is tried.
If the CAS
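The single-CAS-loop shape described in step 1 can be sketched in portable C11, assuming a hypothetical state layout (one exclusive bit above a shared-holder count) rather than lwlock.c's actual LW_VAL_EXCLUSIVE / LW_SHARED_MASK values:

#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

#define VAL_EXCLUSIVE ((uint32_t) 1 << 24)  /* hypothetical bit layout */

typedef struct { _Atomic uint32_t state; } lw_t;

bool
try_acquire(lw_t *lock, bool exclusive)
{
    uint32_t old = atomic_load_explicit(&lock->state, memory_order_relaxed);

    for (;;)
    {
        uint32_t next;

        /* 1. Detect from the snapshot whether the lock can be taken. */
        if (exclusive)
        {
            if (old != 0)
                return false;    /* held in any mode; caller must wait */
            next = VAL_EXCLUSIVE;
        }
        else
        {
            if (old & VAL_EXCLUSIVE)
                return false;    /* exclusively held; caller must wait */
            next = old + 1;      /* one more shared holder */
        }

        /* 2. Try the CAS; on failure 'old' is refreshed and we loop. */
        if (atomic_compare_exchange_weak_explicit(&lock->state, &old, next,
                                                  memory_order_acquire,
                                                  memory_order_relaxed))
            return true;
    }
}

In the patch as described, a failed detection step would fall through to queueing inside the same loop; this sketch stops at the acquire-or-give-up decision.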
On Mon, Jun 5, 2017 at 9:22 AM, Sokolov Yura wrote:
> Good day, everyone.
>
> This patch improves the performance of contended LWLocks.
> It was tested on a 4-socket, 72-core x86 server (144 HT), CentOS 7.1,
> gcc 4.8.5.
>
> The patch makes lock acquisition a single CAS loop:
> 1.
Good day, everyone.
This patch improves the performance of contended LWLocks.
It was tested on a 4-socket, 72-core x86 server (144 HT), CentOS 7.1,
gcc 4.8.5.
Results:
pgbench -i -s 300 + pgbench --skip-some-updates
Clients | master | patched
========+========+========
     30 |    32k |    32k
     50