On 01/26/2016 07:43 AM, Stas Kelvich wrote:
Thanks for the reviews and the commit!
As Simon and Andres already mentioned in this thread, replay of a two-phase
transaction is significantly slower than the same operations in normal mode.
The major reason is that each state file is fsynced during replay and w
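A minimal sketch of the per-file fsync cost under discussion, loosely modeled on twophase.c's RecreateTwoPhaseFile(); the path layout, error handling, and function name here are simplified assumptions, not the server code:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

static void
recreate_twophase_file_sketch(unsigned int xid, const void *content, size_t len)
{
	char		path[64];
	int			fd;

	snprintf(path, sizeof(path), "pg_twophase/%08X", xid);
	fd = open(path, O_CREAT | O_TRUNC | O_WRONLY, 0600);
	if (fd < 0)
		return;					/* the real code would ereport() */

	(void) write(fd, content, len);

	/*
	 * This fsync() is paid once per prepared transaction during replay,
	 * which is the slowdown described above.
	 */
	fsync(fd);
	close(fd);
}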
On 03/08/2016 11:54 PM, Pavan Deolasee wrote:
On Fri, Mar 4, 2016 at 9:16 PM, Jesper Pedersen
wrote:
I can confirm the marginal speed up in tps due to the new WAL size.
The TWOPHASE_MAGIC constant should be changed, as the file header has
changed its definition, right?
Thanks for looking at
On 02/29/2016 08:45 AM, Pavan Deolasee wrote:
Hello Hackers,
The maximum size of the GID, used as a 2PC identifier, is currently defined
as 200 bytes (see src/backend/access/transam/twophase.c). The actual GID
used by applications, though, may be much smaller than that. So IMO
instead of WAL lo
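For reference, the constant in question, plus a hypothetical sketch of the variable-length idea; the struct below is illustrative, with assumed field names, not the committed record layout:

/* The actual constant in src/backend/access/transam/twophase.c: */
#define GIDSIZE 200

/*
 * Hypothetical sketch of the proposal: record the real GID length in
 * the 2PC record and WAL-log only strlen(gid) + 1 bytes instead of the
 * full 200-byte buffer.  Field names are assumptions, not the final
 * layout.  (uint16 is PostgreSQL's c.h typedef.)
 */
typedef struct TwoPhaseGidSketch
{
	uint16		gidlen;			/* actual GID length, including the NUL */
	/* gidlen bytes of GID data follow the fixed-size header */
} TwoPhaseGidSketch;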
On 01/10/2016 04:15 AM, Simon Riggs wrote:
One concern that came to mind while reading the updated
patch is about creating an extra bool field in the GlobalTransactionData
structure. While this improves readability, it
also increases the size of that structure, and that size has an impact on
performance on sy
On 01/05/2016 08:04 AM, Amit Kapila wrote:
I am not aware of such cases; however, the reason I had kept it was
backward compatibility, but now I have removed it in the attached patch.
Apart from that, I have updated the docs to reflect the changes related
to the new APIs.
xfunc.sgml:
+
On 12/31/2015 06:36 AM, Amit Kapila wrote:
Going further on this work, I have written a patch for separating the
tranches for extensions. The basic idea is to expose two new APIs:
first to request a new tranche, and second to assign a lock from that
tranche.
RequestAddinLWLockTranche(const char
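A sketch of how an extension might use the proposed pair of APIs; the signature above is truncated in the archive, so both names and argument lists here are assumptions modeled on the described behavior, not the patch itself:

#include "postgres.h"
#include "miscadmin.h"
#include "storage/lwlock.h"

static LWLock *my_ext_lock;

void
_PG_init(void)
{
	if (!process_shared_preload_libraries_in_progress)
		return;

	/* 1) request a tranche of one lock by name (assumed signature) */
	RequestAddinLWLockTranche("my_extension", 1);
}

static void
my_ext_shmem_startup(void)
{
	/* 2) later, obtain the lock from that tranche (assumed name) */
	my_ext_lock = GetAddinLWLockTranche("my_extension");
}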
On 12/18/2015 01:16 PM, Robert Haas wrote:
Is this just for informational purposes, or is this something you are
looking to have committed? I originally thought the former, but now
I'm wondering if I misinterpreted your intent. I have a hard time
getting excited about committing something that
Hi,
On 11/09/2015 05:10 PM, Andres Freund wrote:
Each graph has a full initdb + pgbench -i cycle now.
That looks about as we'd expect: the lock-free pinning doesn't matter
and synchronous commit is beneficial. I think our bottlenecks in write
workloads are sufficiently elsewhere that it's unl
Hi,
On 11/06/2015 03:47 PM, Jesper Pedersen wrote:
Did you initdb between tests? Pgbench -i? Restart the database?
I didn't initdb / pgbench -i between the tests, so that is likely it.
Each graph has a full initdb + pgbench -i cycle now.
I know, I have a brown paper bag some
Hi,
On 11/06/2015 03:38 PM, Andres Freund wrote:
While I saw an improvement for the 'synchronous_commit = on' case,
there is a small regression for 'off' using -M prepared + Unix Domain
Socket. I'm not sure if that is something that should be considered right now.
What tests were you running, in which o
On 10/29/2015 01:18 PM, Alexander Korotkov wrote:
We got a consensus with Andres that we should commit the CAS version first
and look at other optimizations.
A refactored version of the atomic state patch is attached. The changes are
as follows:
1) Macros are used for accessing refcount and usagecount.
2)
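A minimal sketch of the CAS-based approach being committed, assuming the buffer header state is packed into a single pg_atomic_uint32 with the refcount in the low bits; the macro name is an illustrative stand-in for the accessors mentioned above, not the patch's actual one:

#include "postgres.h"
#include "port/atomics.h"

#define BUF_REFCOUNT_ONE	1	/* refcount assumed to live in the low bits */

static void
pin_buffer_cas(pg_atomic_uint32 *state)
{
	uint32		old_state = pg_atomic_read_u32(state);

	for (;;)
	{
		uint32		new_state = old_state + BUF_REFCOUNT_ONE;

		/*
		 * Retry until no other backend modified the state concurrently;
		 * on failure, old_state is refreshed with the current value.
		 */
		if (pg_atomic_compare_exchange_u32(state, &old_state, new_state))
			break;
	}
}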
On 09/18/2015 11:11 PM, Amit Kapila wrote:
I have done various runs on an Intel Xeon 28C/56T w/ 256 GB mem and 2 x
RAID10 SSD (data + xlog) with Min(64,).
The benefit with this patch could be seen at somewhat higher
client counts, as you can see in my initial mail; can you please
once try with cl
On 08/31/2015 07:34 AM, Amit Kapila wrote:
I have updated the patch (attached with this mail) to show
you what I have in mind.
I haven't been able to get a successful run with _v5 using pgbench.
TransactionIdSetStatusBit assumes an exclusive lock on CLogControlLock
when called, but that part is re
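A sketch of the guard this review comment implies: callers of TransactionIdSetStatusBit must already hold CLogControlLock exclusively, and asserting that up front would catch the _v5 breakage. Illustrative only, not the patch's actual code:

#include "postgres.h"
#include "storage/lwlock.h"

static void
TransactionIdSetStatusBitSketch(void)
{
	/*
	 * Caller is required to hold the lock already.  (LWLockHeldByMe does
	 * not distinguish shared from exclusive mode in this era, so this
	 * only checks that the lock is held at all.)
	 */
	Assert(LWLockHeldByMe(CLogControlLock));

	/* ... flip the commit/abort bits under the held lock ... */
}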
On 09/11/2015 10:31 AM, Amit Kapila wrote:
Updated comments and the patch (increase_clog_bufs_v2.patch)
containing the same is attached.
I have done various runs on an Intel Xeon 28C/56T w/ 256 GB mem and 2 x
RAID10 SSD (data + xlog) with Min(64,).
Kept shared_buffers=64GB and effective_
Hi,
On 09/16/2015 12:26 PM, Andres Freund wrote:
On 2015-09-16 10:37:43 -0400, Jesper Pedersen wrote:
#ifdef LWLOCK_STATS
lwstats->spin_delay_count += SpinLockAcquire(&lock->mutex);
+
+ /*
+ * We scan the list of waiters from the back in order to find
+
On 09/16/2015 10:25 AM, Jesper Pedersen wrote:
Likely from LWLOCK_STATS' own lwlock.c::print_lwlock_stats, which would
make sense.
Version 3 attached, which ignores entries from MainLWLockArray[0].
Best regards,
Jesper
*** /tmp/NTwtmh_lwlock.c 2015-09-16 10:34:02.955957192 -0400
--
On 09/16/2015 10:13 AM, Jesper Pedersen wrote:
On 09/15/2015 03:51 PM, Jesper Pedersen wrote:
It
would be nice to get a better sense of how *long* we block on various
locks. It's hard to tell whether some other lock might have fewer
blocking events but for a much longer average dur
On 09/15/2015 03:51 PM, Jesper Pedersen wrote:
It
would be nice to get a better sense of how *long* we block on various
locks. It's hard to tell whether some other lock might have fewer
blocking events but for a much longer average duration.
I did a run with the attached
On 09/15/2015 03:42 PM, Robert Haas wrote:
I haven't really, just the email. But it seems like a neat concept.
So if I understand this correctly:
74.05% of spin delays are attributable to CLogControlLock, 20.01% to
ProcArrayLock, and 3.39% to XidGenLock. Incredibly, the queue length
reaches the
On 09/15/2015 03:11 PM, Robert Haas wrote:
If there is interest, I'll add the patch to the next CommitFest.
Thanks for considering it, and any feedback is most welcome.
Seems neat, but I can't understand how to read the flame graphs.
X-axis is sort of "up in the air" with flame graphs -- si
Hi,
I have been using the attached patch to look at how each LWLock relates
to the others in various types of runs.
The patch adds the following fields to an LWLOCK_STATS build:
sh_acquire_max (int):
The maximum number of shared lock acquisitions in series for the lock
ex_acquire_max (int):
The maximum e
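A sketch of the counters this patch adds to lwlock.c's LWLOCK_STATS bookkeeping; the struct is abbreviated and the update logic is an assumption based on the field descriptions above:

typedef struct lwlock_stats_sketch
{
	/* ... existing counters: sh_acquire_count, ex_acquire_count, ... */
	int			sh_acquire_max;	/* longest run of consecutive shared acquisitions */
	int			ex_acquire_max;	/* longest run of consecutive exclusive acquisitions */
} lwlock_stats_sketch;

/*
 * Assumed update on each shared acquisition (mirror image for exclusive):
 *
 *	if (++sh_series > stats->sh_acquire_max)
 *		stats->sh_acquire_max = sh_series;
 *	ex_series = 0;
 */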
On 08/07/2015 02:03 PM, Andres Freund wrote:
but you will have to use a 9.5 pgbench to see it, especially with higher
client counts.
Hm, you were using -P X, is that right?
This bisects down to 1bc90f7a7b7441a88e2c6d4a0e9b6f9c1499ad30 - "Remove
thread-emulation support from pgbench."
And th
On 08/07/2015 11:40 AM, Robert Haas wrote:
On Fri, Aug 7, 2015 at 10:30 AM, Jesper Pedersen
wrote:
Just thought I would post it in this thread, because this change does help
on the performance numbers compared to 9.5 :)
So are you saying that the performance was already worse before this
On 08/07/2015 10:47 AM, Amit Kapila wrote:
Fyi, there is something in pgbench that has caused a testing regression -
haven't tracked down what yet.
Against 9.6 server (846f8c9483a8f31e45bf949db1721706a2765771)
9.6 pgbench:
progress: 10.0 s, 53525.0 tps, lat 1.485 ms stddev 0.523
pro
On 08/07/2015 12:41 AM, Amit Kapila wrote:
On Thu, Aug 6, 2015 at 9:36 PM, Robert Haas wrote:
OK, committed.
Thank you.
Fyi, there is something in pgbench that has caused a testing regression
- haven't tracked down what yet.
Against 9.6 server (846f8c9483a8f31e45bf949db1721706a2765771
On 07/31/2015 01:35 PM, Robert Haas wrote:
On Fri, Jul 31, 2015 at 11:56 AM, Jesper Pedersen
wrote:
Currently,
Max(lock_addin_request, NUM_USER_DEFINED_LWLOCKS)
LWLocks are added during startup for extensions.
However, this presents a problem if an extension doesn't specify t
Hi,
Currently,
Max(lock_addin_request, NUM_USER_DEFINED_LWLOCKS)
LWLocks are added during startup for extensions.
However, this presents a problem if an extension doesn't specify the
correct number of LWLocks needed, if the total number is <= 4.
The attached patch requires extensions to
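For context, the allocation pattern involved, using the addin LWLock APIs of that era; the lock count and names here are illustrative:

#include "postgres.h"
#include "storage/lwlock.h"

static LWLock *my_lock_a;
static LWLock *my_lock_b;

void
_PG_init(void)
{
	/*
	 * An extension must request its locks at preload time.  If it
	 * forgets (or requests too few), things can still appear to work
	 * whenever the total requested stays within the padding added by
	 * Max(lock_addin_request, NUM_USER_DEFINED_LWLOCKS) -- the trap
	 * described above.
	 */
	RequestAddinLWLocks(2);
}

static void
my_shmem_startup(void)
{
	/* Assign the requested locks once shared memory exists. */
	my_lock_a = LWLockAssign();
	my_lock_b = LWLockAssign();
}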
On Saturday 18 July 2009 12:31:34 Andres Freund wrote:
> > 2. Apart from Kohei-san and Stephen Frost, is anybody actually
> > interested in having this feature at all?
>
> I would definitely be interested.
>
+1
Best regards,
Jesper
Hi.
I have been thinking about this for a while, and now that Google Summer of Code
is coming, I thought I would share this idea.
The GCC people have replaced their bison/flex parser with a hand-written
recursive-descent parser for a nice speedup.
So it would be interesting to see if PostgreSQL wou
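As a toy illustration of the technique (nothing like PostgreSQL's real grammar), a hand-written recursive-descent parser for integer addition expressions:

#include <ctype.h>
#include <stdio.h>

static const char *p;			/* cursor into the input string */

static long
parse_number(void)				/* number := digit+ */
{
	long		v = 0;

	while (isdigit((unsigned char) *p))
		v = v * 10 + (*p++ - '0');
	return v;
}

static long
parse_expr(void)				/* expr := number ('+' number)* */
{
	long		v = parse_number();

	while (*p == '+')
	{
		p++;
		v += parse_number();
	}
	return v;
}

int
main(void)
{
	p = "1+2+3";
	printf("%ld\n", parse_expr());	/* prints 6 */
	return 0;
}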