Re: [HACKERS] Speedup twophase transactions

2016-03-11 Thread Jesper Pedersen
On 01/26/2016 07:43 AM, Stas Kelvich wrote: Thanks for reviews and commit! As Simon and Andres already mentioned in this thread, replay of twophase transactions is significantly slower than the same operations in normal mode. The major reason is that each state file is fsynced during replay and w

Re: [HACKERS] WAL log only necessary part of 2PC GID

2016-03-09 Thread Jesper Pedersen
On 03/08/2016 11:54 PM, Pavan Deolasee wrote: On Fri, Mar 4, 2016 at 9:16 PM, Jesper Pedersen wrote: I can confirm the marginal speed up in tps due to the new WAL size. The TWOPHASE_MAGIC constant should be changed, as the file header has changed definition, right? Thanks for looking at

Re: [HACKERS] WAL log only necessary part of 2PC GID

2016-03-04 Thread Jesper Pedersen
On 02/29/2016 08:45 AM, Pavan Deolasee wrote: Hello Hackers, The maximum size of the GID, used as a 2PC identifier, is currently defined as 200 bytes (see src/backend/access/transam/twophase.c). The actual GID used by the applications though may be much smaller than that. So IMO instead of WAL lo

Re: [HACKERS] Speedup twophase transactions

2016-01-11 Thread Jesper Pedersen
On 01/10/2016 04:15 AM, Simon Riggs wrote: One concern that came to my mind while reading the updated patch is about creating an extra bool field in the GlobalTransactionData structure. While this improves readability, it also increases the size of that structure, and that size has an impact on performance on sy

Re: [HACKERS] [PATCH] Refactoring of LWLock tranches

2016-01-05 Thread Jesper Pedersen
On 01/05/2016 08:04 AM, Amit Kapila wrote: I am not aware of such cases, however the reason I have kept it was for backward compatibility, but now I have removed it in the attached patch. Apart from that, I have updated the docs to reflect the changes related to the new APIs. xfunc.sgml: +

Re: [HACKERS] [PATCH] Refactoring of LWLock tranches

2015-12-31 Thread Jesper Pedersen
On 12/31/2015 06:36 AM, Amit Kapila wrote: Going further on this work, I have written a patch for separating the tranches for extensions. The basic idea is to expose two new APIs, first to request a new tranche and second to assign a lock from that tranche. RequestAddinLWLockTranche(const char

Re: [HACKERS] Additional LWLOCK_STATS statistics

2015-12-20 Thread Jesper Pedersen
On 12/18/2015 01:16 PM, Robert Haas wrote: Is this just for informational purposes, or is this something you are looking to have committed? I originally thought the former, but now I'm wondering if I misinterpreted your intent. I have a hard time getting excited about committing something that

Re: [HACKERS] Move PinBuffer and UnpinBuffer to atomics

2015-11-10 Thread Jesper Pedersen
Hi, On 11/09/2015 05:10 PM, Andres Freund wrote: Each graph has a full initdb + pgbench -i cycle now. That looks about as we'd expect: the lock-free pinning doesn't matter and synchronous commit is beneficial. I think our bottlenecks in write workloads are sufficiently elsewhere that it's unl

Re: [HACKERS] Move PinBuffer and UnpinBuffer to atomics

2015-11-09 Thread Jesper Pedersen
Hi, On 11/06/2015 03:47 PM, Jesper Pedersen wrote: Did you initdb between tests? Pgbench -i? Restart the database? I didn't initdb / pgbench -i between the tests, so that is likely it. Each graph has a full initdb + pgbench -i cycle now. I know, I have a brown paper bag some

Re: [HACKERS] Move PinBuffer and UnpinBuffer to atomics

2015-11-06 Thread Jesper Pedersen
Hi, On 11/06/2015 03:38 PM, Andres Freund wrote: While I saw an improvement for the 'synchronous_commit = on' case - there is a small regression for 'off', using -M prepared + Unix Domain Socket. If that is something that should be considered right now. What tests were you running, in which o

Re: [HACKERS] Move PinBuffer and UnpinBuffer to atomics

2015-11-06 Thread Jesper Pedersen
On 10/29/2015 01:18 PM, Alexander Korotkov wrote: We got a consensus with Andres that we should commit the CAS version first and look to other optimizations. Refactored version of atomic state patch is attached. The changes are following: 1) Macros are used for access refcount and usagecount. 2)

Re: [HACKERS] Speed up Clog Access by increasing CLOG buffers

2015-09-21 Thread Jesper Pedersen
On 09/18/2015 11:11 PM, Amit Kapila wrote: I have done various runs on an Intel Xeon 28C/56T w/ 256GB mem and 2 x RAID10 SSD (data + xlog) with Min(64,). The benefit with this patch could be seen at somewhat higher client-count as you can see in my initial mail, can you please once try with cl

Re: [HACKERS] Reducing ClogControlLock contention

2015-09-18 Thread Jesper Pedersen
On 08/31/2015 07:34 AM, Amit Kapila wrote: I have updated the patch (attached with mail) to show you what I have in mind. I haven't been able to get a successful run with _v5 using pgbench. TransactionIdSetStatusBit assumes an exclusive lock on CLogControlLock when called, but that part is re

Re: [HACKERS] Speed up Clog Access by increasing CLOG buffers

2015-09-18 Thread Jesper Pedersen
On 09/11/2015 10:31 AM, Amit Kapila wrote: Updated comments and the patch (increate_clog_bufs_v2.patch) containing the same is attached. I have done various runs on an Intel Xeon 28C/56T w/ 256GB mem and 2 x RAID10 SSD (data + xlog) with Min(64,). Kept the shared_buffers=64GB and effective_

Re: [HACKERS] Additional LWLOCK_STATS statistics

2015-09-16 Thread Jesper Pedersen
Hi, On 09/16/2015 12:26 PM, Andres Freund wrote: On 2015-09-16 10:37:43 -0400, Jesper Pedersen wrote: #ifdef LWLOCK_STATS lwstats->spin_delay_count += SpinLockAcquire(&lock->mutex); + + /* +* We scan the list of waiters from the back in order to find +

Re: [HACKERS] Additional LWLOCK_STATS statistics

2015-09-16 Thread Jesper Pedersen
On 09/16/2015 10:25 AM, Jesper Pedersen wrote: Likely from LWLOCK_STATS' own lwlock.c::print_lwlock_stats, which would make sense. Version 3 attached, which ignores entries from MainLWLockArray[0]. Best regards, Jesper *** /tmp/NTwtmh_lwlock.c 2015-09-16 10:34:02.955957192 -0400 --

Re: [HACKERS] Additional LWLOCK_STATS statistics

2015-09-16 Thread Jesper Pedersen
On 09/16/2015 10:13 AM, Jesper Pedersen wrote: On 09/15/2015 03:51 PM, Jesper Pedersen wrote: It would be nice to get a better sense of how *long* we block on various locks. It's hard to tell whether some other lock might have fewer blocking events but for a much longer average dur

Re: [HACKERS] Additional LWLOCK_STATS statistics

2015-09-16 Thread Jesper Pedersen
On 09/15/2015 03:51 PM, Jesper Pedersen wrote: It would be nice to get a better sense of how *long* we block on various locks. It's hard to tell whether some other lock might have fewer blocking events but for a much longer average duration. I did a run with the attached

Re: [HACKERS] Additional LWLOCK_STATS statistics

2015-09-15 Thread Jesper Pedersen
On 09/15/2015 03:42 PM, Robert Haas wrote: I haven't really, just the email. But it seems like a neat concept. So if I understand this correctly: 74.05% of spin delays are attributable to CLogControlLock, 20.01% to ProcArrayLock, and 3.39% to XidGenLock. Incredibly, the queue length reaches the

Re: [HACKERS] Additional LWLOCK_STATS statistics

2015-09-15 Thread Jesper Pedersen
On 09/15/2015 03:11 PM, Robert Haas wrote: If there is an interest I'll add the patch to the next CommitFest. Thanks for considering, and any feedback is most welcomed. Seems neat, but I can't understand how to read the flame graphs. X-axis is sort of "up in the air" with flame graphs -- si

[HACKERS] Additional LWLOCK_STATS statistics

2015-09-15 Thread Jesper Pedersen
Hi, I have been using the attached patch to look at how each LWLock relates to each other in various types of runs. The patch adds the following fields to a LWLOCK_STATS build: sh_acquire_max (int): The maximum shared locks in series for the lock ex_acquire_max (int): The maximum e

Re: [HACKERS] Reduce ProcArrayLock contention

2015-08-07 Thread Jesper Pedersen
On 08/07/2015 02:03 PM, Andres Freund wrote: but you will have to use a 9.5 pgbench to see it, especially with higher client counts. Hm, you were using -P X, is that right? This bisects down to 1bc90f7a7b7441a88e2c6d4a0e9b6f9c1499ad30 - "Remove thread-emulation support from pgbench." And th

Re: [HACKERS] Reduce ProcArrayLock contention

2015-08-07 Thread Jesper Pedersen
On 08/07/2015 11:40 AM, Robert Haas wrote: On Fri, Aug 7, 2015 at 10:30 AM, Jesper Pedersen wrote: Just thought I would post it in this thread, because this change does help on the performance numbers compared to 9.5 :) So are you saying that the performance was already worse before this

Re: [HACKERS] Reduce ProcArrayLock contention

2015-08-07 Thread Jesper Pedersen
On 08/07/2015 10:47 AM, Amit Kapila wrote: Fyi, there is something in pgbench that has caused a testing regression - haven't tracked down what yet. Against 9.6 server (846f8c9483a8f31e45bf949db1721706a2765771) 9.6 pgbench: progress: 10.0 s, 53525.0 tps, lat 1.485 ms stddev 0.523 pro

Re: [HACKERS] Reduce ProcArrayLock contention

2015-08-07 Thread Jesper Pedersen
On 08/07/2015 12:41 AM, Amit Kapila wrote: On Thu, Aug 6, 2015 at 9:36 PM, Robert Haas wrote: OK, committed. Thank you. Fyi, there is something in pgbench that has caused a testing regression - haven't tracked down what yet. Against 9.6 server (846f8c9483a8f31e45bf949db1721706a2765771

Re: [HACKERS] RequestAddinLWLocks(int n)

2015-07-31 Thread Jesper Pedersen
On 07/31/2015 01:35 PM, Robert Haas wrote: On Fri, Jul 31, 2015 at 11:56 AM, Jesper Pedersen wrote: Currently Max(lock_addin_request, NUM_USER_DEFINED_LWLOCKS); LWLocks are added during startup for extensions. However, this presents a problem if an extension doesn't specify t

[HACKERS] RequestAddinLWLocks(int n)

2015-07-31 Thread Jesper Pedersen
Hi, Currently Max(lock_addin_request, NUM_USER_DEFINED_LWLOCKS); LWLocks are added during startup for extensions. However, this presents a problem if an extension doesn't specify the correct number of LWLocks needed, if the total number is <= 4. The attached patch requires extensions to

Re: [HACKERS] SE-PostgreSQL?

2009-07-18 Thread Jesper Pedersen
On Saturday 18 July 2009 12:31:34 Andres Freund wrote: > > 2. Apart from Kohei-san and Stephen Frost, is anybody actually > > interested in having this feature at all? > > I would definitely be interested. > +1 Best regards, Jesper -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgre

[HACKERS] Summer of Code idea

2006-04-27 Thread Jesper Pedersen
Hi. I have been thinking about this for a while and now that Google Summer of Code is coming I thought I would share this idea. The GCC people have replaced their bison/flex parser with a hand-written recursive-descent parser for a nice speed up. So it would be interesting to see if PostgreSQL wou
