On Tue, Nov 11, 2014 at 3:00 PM, Andres Freund and...@2ndquadrant.com
wrote:
On 2014-11-11 09:29:22 +0000, Thom Brown wrote:
On 26 September 2014 12:40, Amit Kapila amit.kapil...@gmail.com wrote:
On Tue, Sep 23, 2014 at 10:31 AM, Robert Haas robertmh...@gmail.com
wrote:
But
On 26 September 2014 12:40, Amit Kapila amit.kapil...@gmail.com wrote:
On Tue, Sep 23, 2014 at 10:31 AM, Robert Haas robertmh...@gmail.com
wrote:
But this gets at another point: the way we're benchmarking this right
now, we're really conflating the effects of three different things:
On 2014-11-11 09:29:22 +0000, Thom Brown wrote:
On 26 September 2014 12:40, Amit Kapila amit.kapil...@gmail.com wrote:
On Tue, Sep 23, 2014 at 10:31 AM, Robert Haas robertmh...@gmail.com
wrote:
But this gets at another point: the way we're benchmarking this right
now, we're really
On Tue, Oct 14, 2014 at 3:24 PM, Amit Kapila amit.kapil...@gmail.com
wrote:
On Thu, Oct 9, 2014 at 6:17 PM, Amit Kapila amit.kapil...@gmail.com
wrote:
On Fri, Sep 26, 2014 at 7:04 PM, Robert Haas robertmh...@gmail.com
wrote:
On another point, I think it would be a good idea to rebase the
On Tue, Oct 14, 2014 at 3:32 PM, Andres Freund and...@2ndquadrant.com
wrote:
On 2014-10-14 15:24:57 +0530, Amit Kapila wrote:
After that I observed that contention for LW_SHARED has reduced
for this load, but it didn't help much in terms of performance, so I
again
rechecked the profile and
On Thu, Oct 9, 2014 at 6:17 PM, Amit Kapila amit.kapil...@gmail.com wrote:
On Fri, Sep 26, 2014 at 7:04 PM, Robert Haas robertmh...@gmail.com
wrote:
On another point, I think it would be a good idea to rebase the
bgreclaimer patch over what I committed, so that we have a
clean patch
On 2014-10-14 15:24:57 +0530, Amit Kapila wrote:
After that I observed that contention for LW_SHARED has reduced
for this load, but it didn't help much in terms of performance, so I again
rechecked the profile, and this time most of the contention has moved
to the spinlock used in dynahash for buf
On Fri, Oct 10, 2014 at 1:08 AM, Andres Freund and...@2ndquadrant.com
wrote:
On 2014-10-09 16:01:55 +0200, Andres Freund wrote:
I don't think OLTP really is the best test case for this. Especially not
pgbench with relatively small rows *and* a uniform distribution of
access.
Try
On 2014-10-10 12:28:13 +0530, Amit Kapila wrote:
On Fri, Oct 10, 2014 at 1:08 AM, Andres Freund and...@2ndquadrant.com
wrote:
On 2014-10-09 16:01:55 +0200, Andres Freund wrote:
I don't think OLTP really is the best test case for this. Especially not
pgbench with relatively small rows
On 2014-10-09 18:17:09 +0530, Amit Kapila wrote:
On Fri, Sep 26, 2014 at 7:04 PM, Robert Haas robertmh...@gmail.com wrote:
On another point, I think it would be a good idea to rebase the
bgreclaimer patch over what I committed, so that we have a
clean patch against master to test with.
On Thu, Oct 9, 2014 at 7:31 PM, Andres Freund and...@2ndquadrant.com
wrote:
On 2014-10-09 18:17:09 +0530, Amit Kapila wrote:
On Fri, Sep 26, 2014 at 7:04 PM, Robert Haas robertmh...@gmail.com
wrote:
On another point, I think it would be a good idea to rebase the
bgreclaimer patch over
On 2014-10-09 16:01:55 +0200, Andres Freund wrote:
On 2014-10-09 18:17:09 +0530, Amit Kapila wrote:
On Fri, Sep 26, 2014 at 7:04 PM, Robert Haas robertmh...@gmail.com wrote:
On another point, I think it would be a good idea to rebase the
bgreclaimer patch over what I committed, so that
On 2014-09-25 10:42:29 -0400, Robert Haas wrote:
On Thu, Sep 25, 2014 at 10:24 AM, Andres Freund and...@2ndquadrant.com
wrote:
On 2014-09-25 10:22:47 -0400, Robert Haas wrote:
On Thu, Sep 25, 2014 at 10:14 AM, Andres Freund and...@2ndquadrant.com
wrote:
That leads me to wonder: Have
On Thu, Oct 2, 2014 at 10:36 AM, Andres Freund and...@2ndquadrant.com wrote:
OK.
Given that the results look good, do you plan to push this?
By this, you mean the increase in the number of buffer mapping
partitions to 128, and a corresponding increase in MAX_SIMUL_LWLOCKS?
If so, and if you
On 2014-10-02 10:40:30 -0400, Robert Haas wrote:
On Thu, Oct 2, 2014 at 10:36 AM, Andres Freund and...@2ndquadrant.com wrote:
OK.
Given that the results look good, do you plan to push this?
By this, you mean the increase in the number of buffer mapping
partitions to 128, and a
On Thu, Oct 2, 2014 at 10:44 AM, Andres Freund and...@2ndquadrant.com wrote:
On 2014-10-02 10:40:30 -0400, Robert Haas wrote:
On Thu, Oct 2, 2014 at 10:36 AM, Andres Freund and...@2ndquadrant.com
wrote:
OK.
Given that the results look good, do you plan to push this?
By this, you mean
On 2014-10-02 10:56:05 -0400, Robert Haas wrote:
On Thu, Oct 2, 2014 at 10:44 AM, Andres Freund and...@2ndquadrant.com wrote:
On 2014-10-02 10:40:30 -0400, Robert Haas wrote:
On Thu, Oct 2, 2014 at 10:36 AM, Andres Freund and...@2ndquadrant.com
wrote:
OK.
Given that the results
On 10/02/2014 05:40 PM, Robert Haas wrote:
On Thu, Oct 2, 2014 at 10:36 AM, Andres Freund and...@2ndquadrant.com wrote:
OK.
Given that the results look good, do you plan to push this?
By this, you mean the increase in the number of buffer mapping
partitions to 128, and a corresponding
On 2014-10-02 20:04:58 +0300, Heikki Linnakangas wrote:
On 10/02/2014 05:40 PM, Robert Haas wrote:
On Thu, Oct 2, 2014 at 10:36 AM, Andres Freund and...@2ndquadrant.com
wrote:
OK.
Given that the results look good, do you plan to push this?
By this, you mean the increase in the number of
On Thu, Oct 2, 2014 at 1:07 PM, Andres Freund and...@2ndquadrant.com wrote:
Do a make check-world and it'll hopefully fail ;). Check
pg_buffercache_pages.c.
Yep. Committed, with an update to the comments in lwlock.c to allude
to the pg_buffercache issue.
--
Robert Haas
EnterpriseDB:
On 2014-10-01 20:54:39 +0200, Andres Freund wrote:
Here we go.
Postgres was configured with:
-c shared_buffers=8GB \
-c log_line_prefix=[%m %p] \
-c log_min_messages=debug1 \
-p 5440 \
-c checkpoint_segments=600 \
-c max_connections=200
Robert reminded me that I forgot to report
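For reference, a select-only pgbench run against a server started with those
options would be invoked along these lines (client count, thread count,
duration, and database name below are placeholders, not values reported in
this thread):

  pgbench -S -n -M prepared -c 64 -j 64 -T 300 -p 5440 pgbench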
Part of this patch was already committed, and the overall patch has had
its fair share of review for this commitfest, so I'm marking this as
Returned with feedback. The benchmark results for the bgreclaimer
showed a fairly small improvement, so it doesn't seem like anyone's
going to commit the
On Tue, Sep 23, 2014 at 10:31 AM, Robert Haas robertmh...@gmail.com
wrote:
But this gets at another point: the way we're benchmarking this right
now, we're really conflating the effects of three different things:
1. Changing the locking regimen around the freelist and clocksweep.
2. Adding
On 09/25/2014 05:40 PM, Andres Freund wrote:
There's two reasons for that: a) dynahash just isn't very good and it
does a lot of things that will never be necessary for these hashes. b)
the key into the hash table is *far* too wide. A significant portion of
the time is spent comparing
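To make the "too wide" point concrete, here is a minimal sketch of the
buffer-tag key of that era (typedefs simplified; the static assert is
illustrative, not from the sources):

  #include <stdint.h>

  typedef uint32_t Oid;
  typedef uint32_t BlockNumber;
  typedef int32_t ForkNumber;

  typedef struct RelFileNode
  {
      Oid spcNode;               /* tablespace */
      Oid dbNode;                /* database */
      Oid relNode;               /* relation */
  } RelFileNode;

  typedef struct BufferTag
  {
      RelFileNode rnode;         /* physical relation identifier */
      ForkNumber  forkNum;       /* main fork, FSM, visibility map, ... */
      BlockNumber blockNum;      /* block number relative to fork start */
  } BufferTag;

  /* 20 bytes that dynahash must hash, and memcmp, on every lookup. */
  _Static_assert(sizeof(BufferTag) == 20, "5 fields x 4 bytes");

Every probe of the buffer mapping table compares all 20 bytes of the tag,
which is the comparison cost showing up in the profiles.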
On 2014-09-26 15:04:54 +0300, Heikki Linnakangas wrote:
On 09/25/2014 05:40 PM, Andres Freund wrote:
There's two reasons for that: a) dynahash just isn't very good and it
does a lot of things that will never be necessary for these hashes. b)
the key into the hash table is *far* too wide. A
On Fri, Sep 26, 2014 at 7:40 AM, Amit Kapila amit.kapil...@gmail.com
wrote:
First of all, thanks for committing part-1 of these changes; it
seems you are planning to commit part-3 based on results of tests
which Andres is planning to do, and for the remaining part (part-2), today
I have tried some
On 09/26/2014 03:26 PM, Andres Freund wrote:
On 2014-09-26 15:04:54 +0300, Heikki Linnakangas wrote:
On 09/25/2014 05:40 PM, Andres Freund wrote:
There's two reasons for that: a) dynahash just isn't very good and it
does a lot of things that will never be necessary for these hashes. b)
the key
On Fri, Sep 26, 2014 at 8:26 AM, Andres Freund and...@2ndquadrant.com wrote:
Neither, really. The hash calculation is visible in the profile, but not
that pronounced yet. The primary thing noticeable in profiles (besides
cache efficiency) is the comparison of the full tag after locating a
On Fri, Sep 26, 2014 at 3:26 PM, Andres Freund and...@2ndquadrant.com wrote:
Neither, really. The hash calculation is visible in the profile, but not
that pronounced yet. The primary thing noticeable in profiles (besides
cache efficiency) is the comparison of the full tag after locating a
On Fri, Sep 26, 2014 at 7:04 PM, Robert Haas robertmh...@gmail.com wrote:
On Fri, Sep 26, 2014 at 7:40 AM, Amit Kapila amit.kapil...@gmail.com
wrote:
First of all, thanks for committing part-1 of these changes; it
seems you are planning to commit part-3 based on results of tests
which
On 2014-09-26 17:01:52 +0300, Ants Aasma wrote:
On Fri, Sep 26, 2014 at 3:26 PM, Andres Freund and...@2ndquadrant.com wrote:
Neither, really. The hash calculation is visible in the profile, but not
that pronounced yet. The primary thing noticeable in profiles (besides
cache efficiency) is
On 2014-09-26 09:59:41 -0400, Robert Haas wrote:
On Fri, Sep 26, 2014 at 8:26 AM, Andres Freund and...@2ndquadrant.com wrote:
Neither, really. The hash calculation is visible in the profile, but not
that pronounced yet. The primary thing noticeable in profiles (besides
cache efficiency) is
On Tue, Sep 23, 2014 at 5:50 PM, Robert Haas robertmh...@gmail.com wrote:
The patch I attached the first time was just the last commit in the
git repository where I wrote the patch, rather than the changes that I
made on top of that commit. So, yes, the results from the previous
message are
On Thu, Sep 25, 2014 at 8:51 AM, Robert Haas robertmh...@gmail.com wrote:
1. To see the effect of reduce-replacement-locking.patch, compare the
first TPS number in each line to the third, or the second to the
fourth. At scale factor 1000, the patch wins in all of the cases with
32 or more
On Thu, Sep 25, 2014 at 10:02 AM, Merlin Moncure mmonc...@gmail.com wrote:
On Thu, Sep 25, 2014 at 8:51 AM, Robert Haas robertmh...@gmail.com wrote:
1. To see the effect of reduce-replacement-locking.patch, compare the
first TPS number in each line to the third, or the second to the
fourth.
On 2014-09-25 09:51:17 -0400, Robert Haas wrote:
On Tue, Sep 23, 2014 at 5:50 PM, Robert Haas robertmh...@gmail.com wrote:
The patch I attached the first time was just the last commit in the
git repository where I wrote the patch, rather than the changes that I
made on top of that commit.
On 2014-09-25 09:02:25 -0500, Merlin Moncure wrote:
On Thu, Sep 25, 2014 at 8:51 AM, Robert Haas robertmh...@gmail.com wrote:
1. To see the effect of reduce-replacement-locking.patch, compare the
first TPS number in each line to the third, or the second to the
fourth. At scale factor 1000,
On Thu, Sep 25, 2014 at 10:14 AM, Andres Freund and...@2ndquadrant.com wrote:
That leads me to wonder: Have you measured different, lower, number of
buffer mapping locks? 128 locks is, if we align them properly as we
should, 8KB of memory. Common L1 cache sizes are around 32k...
Amit has
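The arithmetic behind the 8KB figure: padded to one 64-byte cacheline each,
128 locks occupy 128 x 64 = 8192 bytes, a quarter of a typical 32KB L1 data
cache. A minimal illustration (the padded struct is hypothetical, not
PostgreSQL's actual LWLock layout):

  #include <stdio.h>

  /* One lock per cacheline, to avoid false sharing between partitions. */
  typedef struct PaddedLock
  {
      _Alignas(64) volatile int state;
  } PaddedLock;

  int main(void)
  {
      PaddedLock locks[128];

      /* 128 * 64 = 8192 bytes = 8KB */
      printf("array size: %zu bytes\n", sizeof locks);
      return 0;
  }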
On 2014-09-25 10:22:47 -0400, Robert Haas wrote:
On Thu, Sep 25, 2014 at 10:14 AM, Andres Freund and...@2ndquadrant.com
wrote:
That leads me to wonder: Have you measured different, lower, number of
buffer mapping locks? 128 locks is, if we align them properly as we
should, 8KB of
On 2014-09-25 10:09:30 -0400, Robert Haas wrote:
I think the long-term solution here is that we need a lock-free hash
table implementation for our buffer mapping tables, because I'm pretty
sure that just cranking the number of locks up and up is going to
start to have unpleasant side effects
On Thu, Sep 25, 2014 at 9:14 AM, Andres Freund and...@2ndquadrant.com wrote:
Why stop at 128 mapping locks? Theoretical downsides to having more
mapping locks have been mentioned a few times but has this ever been
measured? I'm starting to wonder if the # mapping locks should be
dependent
On Thu, Sep 25, 2014 at 10:24 AM, Andres Freund and...@2ndquadrant.com wrote:
On 2014-09-25 10:22:47 -0400, Robert Haas wrote:
On Thu, Sep 25, 2014 at 10:14 AM, Andres Freund and...@2ndquadrant.com
wrote:
That leads me to wonder: Have you measured different, lower, number of
buffer mapping
On 2014-09-25 09:34:57 -0500, Merlin Moncure wrote:
On Thu, Sep 25, 2014 at 9:14 AM, Andres Freund and...@2ndquadrant.com wrote:
Why stop at 128 mapping locks? Theoretical downsides to having more
mapping locks have been mentioned a few times but has this ever been
measured? I'm starting
On Fri, Sep 19, 2014 at 7:21 AM, Amit Kapila amit.kapil...@gmail.com
wrote:
Specific numbers of both the configurations for which I have
posted data in previous mail are as follows:
Scale Factor - 800
Shared_Buffers - 12286MB (Total db size is 12288MB)
Client and Thread Count = 64
Hi,
On 2014-09-23 10:31:24 -0400, Robert Haas wrote:
I suggest we count these things:
1. The number of buffers the reclaimer has put back on the free list.
2. The number of times a backend has run the clocksweep.
3. The number of buffers past which the reclaimer has advanced the clock
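A sketch of what those counters could look like in shared memory (struct and
field names are invented for illustration; updates would piggyback on locks
already held at each event):

  #include <stdint.h>

  typedef struct BufFreelistStats
  {
      uint64_t reclaimer_freed;      /* 1. buffers the reclaimer put back on the freelist */
      uint64_t backend_clocksweeps;  /* 2. clock-sweep runs performed by backends */
      uint64_t reclaimer_swept;      /* 3. buffers the reclaimer advanced the sweep past */
  } BufFreelistStats;

Exposing them through a system view would then just need a small
SQL-callable function that reads the struct.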
On Tue, Sep 23, 2014 at 10:31 AM, Robert Haas robertmh...@gmail.com wrote:
[ review ]
Oh, by the way, I noticed that this patch breaks pg_buffercache. If
we're going to have 128 lock partitions, we need to bump
MAX_SIMUL_LWLOCKS.
But this gets at another point: the way we're benchmarking this
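The pg_buffercache connection: to get a consistent snapshot, it acquires
every buffer-mapping partition lock at once, so MAX_SIMUL_LWLOCKS (the cap
on how many LWLocks one backend may hold simultaneously) must exceed the
partition count. Abridged sketch, not the verbatim 9.4 code:

  /* Lock all partitions in order, so the buffer-to-page mapping cannot
   * change while we scan the buffer descriptors. With 128 partitions,
   * a backend now holds 128 LWLocks here at the same time. */
  for (i = 0; i < NUM_BUFFER_PARTITIONS; i++)
      LWLockAcquire(BufMappingPartitionLockByIndex(i), LW_SHARED);

  /* ... copy out each buffer header ... */

  for (i = NUM_BUFFER_PARTITIONS - 1; i >= 0; i--)
      LWLockRelease(BufMappingPartitionLockByIndex(i));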
On Tue, Sep 23, 2014 at 10:55 AM, Robert Haas robertmh...@gmail.com wrote:
But this gets at another point: the way we're benchmarking this right
now, we're really conflating the effects of three different things:
1. Changing the locking regimen around the freelist and clocksweep.
2. Adding a
On Tue, Sep 23, 2014 at 4:29 PM, Robert Haas robertmh...@gmail.com wrote:
I did some more experimentation on this. Attached is a patch that
JUST does #1, and, ...
...and that was the wrong patch. Thanks to Heikki for pointing that out.
Second try.
--
Robert Haas
EnterpriseDB:
Robert Haas robertmh...@gmail.com writes:
On Tue, Sep 23, 2014 at 4:29 PM, Robert Haas robertmh...@gmail.com wrote:
I did some more experimentation on this. Attached is a patch that
JUST does #1, and, ...
...and that was the wrong patch. Thanks to Heikki for pointing that out.
Second try.
On Tue, Sep 23, 2014 at 5:43 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Robert Haas robertmh...@gmail.com writes:
On Tue, Sep 23, 2014 at 4:29 PM, Robert Haas robertmh...@gmail.com wrote:
I did some more experimentation on this. Attached is a patch that
JUST does #1, and, ...
...and that was
On 9/23/14, 10:31 AM, Robert Haas wrote:
I suggest we count these things:
1. The number of buffers the reclaimer has put back on the free list.
2. The number of times a backend has run the clocksweep.
3. The number of buffers past which the reclaimer has advanced the
clock sweep (i.e. the
On 2014-09-23 16:29:16 -0400, Robert Haas wrote:
On Tue, Sep 23, 2014 at 10:55 AM, Robert Haas robertmh...@gmail.com wrote:
But this gets at another point: the way we're benchmarking this right
now, we're really conflating the effects of three different things:
1. Changing the locking
On Tue, Sep 23, 2014 at 6:02 PM, Gregory Smith gregsmithpg...@gmail.com wrote:
On 9/23/14, 10:31 AM, Robert Haas wrote:
I suggest we count these things:
1. The number of buffers the reclaimer has put back on the free list.
2. The number of times a backend has run the clocksweep.
3. The
On Tue, Sep 23, 2014 at 6:54 PM, Andres Freund and...@2ndquadrant.com wrote:
Am I understanding you correctly that you also measured context switches
for spinlocks? If so, I don't think that's a valid comparison. LWLocks
explicitly yield the CPU as soon as there's any contention while
On 2014-09-23 19:21:10 -0400, Robert Haas wrote:
On Tue, Sep 23, 2014 at 6:54 PM, Andres Freund and...@2ndquadrant.com wrote:
I think it might be possible to construct some cases where the spinlock
performs worse than the lwlock. But I think those will be clearly in the
minority. And at
On Tue, Sep 23, 2014 at 7:42 PM, Andres Freund and...@2ndquadrant.com wrote:
It will actually be far worse than that, because we'll acquire and
release the spinlock for every buffer over which we advance the clock
sweep, instead of just once for the whole thing.
I said double, because we
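Side by side, the difference in spinlock traffic (a sketch; the lock name
follows the committed patch's buffer_strategy_lock):

  /* One acquire/release per buffer the sweep advances over ... */
  for (i = 0; i < n; i++)
  {
      SpinLockAcquire(&StrategyControl->buffer_strategy_lock);
      victim = (victim + 1) % NBuffers;
      SpinLockRelease(&StrategyControl->buffer_strategy_lock);
  }

  /* ... versus a single acquire/release for the whole advance. */
  SpinLockAcquire(&StrategyControl->buffer_strategy_lock);
  victim = (victim + n) % NBuffers;
  SpinLockRelease(&StrategyControl->buffer_strategy_lock);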
On 9/23/14, 7:13 PM, Robert Haas wrote:
I think we expose far too little information in our system views. Just
to take one example, we expose no useful information about lwlock
acquire or release, but a lot of real-world performance problems are
caused by lwlock contention.
I sent over a
On Mon, Sep 22, 2014 at 10:43 AM, Gregory Smith gregsmithpg...@gmail.com
wrote:
On 9/16/14, 8:18 AM, Amit Kapila wrote:
I think the main reason for the slight difference is that
when the size of shared buffers is almost the same as the data size, the number
of buffers it needs from the clock sweep is very
On 9/16/14, 8:18 AM, Amit Kapila wrote:
I think the main reason for the slight difference is that
when the size of shared buffers is almost the same as the data size, the number
of buffers it needs from the clock sweep is very small; as an example, in the first
case (when the size of shared buffers is 12286MB), it
On Tue, Sep 16, 2014 at 10:21 PM, Robert Haas robertmh...@gmail.com wrote:
On Tue, Sep 16, 2014 at 8:18 AM, Amit Kapila amit.kapil...@gmail.com
wrote:
In most cases performance with the patch is slightly lower compared
to HEAD, and the difference is generally less than 1%, and in a case
or two
On Sun, Sep 14, 2014 at 12:23 PM, Amit Kapila amit.kapil...@gmail.com
wrote:
On Fri, Sep 12, 2014 at 11:55 AM, Amit Kapila amit.kapil...@gmail.com
wrote:
On Thu, Sep 11, 2014 at 4:31 PM, Andres Freund and...@2ndquadrant.com
wrote:
On 2014-09-10 12:17:34 +0530, Amit Kapila wrote:
I will
On Tue, Sep 16, 2014 at 8:18 AM, Amit Kapila amit.kapil...@gmail.com
wrote:
In most cases performance with the patch is slightly lower compared
to HEAD, and the difference is generally less than 1%, and in a case
or two close to 2%. I think the main reason for the slight difference is that
when the size
On Fri, Sep 12, 2014 at 11:55 AM, Amit Kapila amit.kapil...@gmail.com
wrote:
On Thu, Sep 11, 2014 at 4:31 PM, Andres Freund and...@2ndquadrant.com
wrote:
On 2014-09-10 12:17:34 +0530, Amit Kapila wrote:
+++ b/src/backend/postmaster/bgreclaimer.c
A fair number of comments in that file
On Fri, Sep 12, 2014 at 11:09 PM, Gregory Smith gregsmithpg...@gmail.com
wrote:
This looks like it's squashed one of the very fundamental buffer
scaling issues though; well done Amit.
Thanks.
I'll go back to my notes and try to recreate the pathological cases
that plagued both the 8.3 BGW
On 14/09/14 19:00, Amit Kapila wrote:
On Fri, Sep 12, 2014 at 11:09 PM, Gregory Smith
gregsmithpg...@gmail.com wrote:
This looks like it's squashed one of the very fundamental buffer
scaling issues though; well done Amit.
Thanks.
I'll go back to my notes
On Thu, Sep 11, 2014 at 4:31 PM, Andres Freund and...@2ndquadrant.com
wrote:
On 2014-09-10 12:17:34 +0530, Amit Kapila wrote:
+++ b/src/backend/postmaster/bgreclaimer.c
A fair number of comments in that file refer to bgwriter...
Will fix.
@@ -0,0 +1,302 @@
On Thu, Sep 11, 2014 at 4:22 PM, Andres Freund and...@2ndquadrant.com wrote:
Hm. Perhaps we should do a bufHdr->refcount != zero check without
locking here? The atomic op will transfer the cacheline exclusively to
the reclaimer's CPU. Even though it very shortly afterwards will be
touched
On 2014-09-12 12:38:48 +0300, Ants Aasma wrote:
On Thu, Sep 11, 2014 at 4:22 PM, Andres Freund and...@2ndquadrant.com wrote:
Hm. Perhaps we should do a bufHdr->refcount != zero check without
locking here? The atomic op will transfer the cacheline exclusively to
the reclaimer's CPU. Even
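Sketched, the unlocked pre-check looks like this (refcount and usage_count
as in the 9.4 buffer headers; found_victim is an invented local):

  /* Cheap unlocked filter: a pinned buffer cannot be reclaimed anyway, so
   * skip it without pulling its cacheline in exclusive mode. The read may
   * be stale; the authoritative test is repeated under the lock. */
  if (bufHdr->refcount != 0)
      continue;

  LockBufHdr(bufHdr);                /* buffer-header spinlock */
  if (bufHdr->refcount == 0 && bufHdr->usage_count == 0)
      found_victim = true;           /* safe to push onto the freelist */
  UnlockBufHdr(bufHdr);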
On 9/11/14, 7:01 AM, Andres Freund wrote:
I'm not convinced that these changes can be made without also changing
the bgwriter logic. Have you measured whether there are differences in
how effective the bgwriter is? Not that it's very effective right now :)
The current background writer tuning
Hi,
On 2014-09-10 12:17:34 +0530, Amit Kapila wrote:
include $(top_srcdir)/src/backend/common.mk
diff --git a/src/backend/postmaster/bgreclaimer.c
b/src/backend/postmaster/bgreclaimer.c
new file mode 100644
index 000..3df2337
--- /dev/null
+++ b/src/backend/postmaster/bgreclaimer.c
Thanks for reviewing, Andres.
On Thu, Sep 11, 2014 at 7:01 AM, Andres Freund and...@2ndquadrant.com wrote:
+static void bgreclaim_quickdie(SIGNAL_ARGS);
+static void BgreclaimSigHupHandler(SIGNAL_ARGS);
+static void ReqShutdownHandler(SIGNAL_ARGS);
+static void
On Thu, Sep 11, 2014 at 6:32 PM, Robert Haas robertmh...@gmail.com wrote:
Thanks for reviewing, Andres.
On Thu, Sep 11, 2014 at 7:01 AM, Andres Freund and...@2ndquadrant.com
wrote:
+static void bgreclaim_quickdie(SIGNAL_ARGS);
+static void BgreclaimSigHupHandler(SIGNAL_ARGS);
+static
On 2014-09-11 09:02:34 -0400, Robert Haas wrote:
Thanks for reviewing, Andres.
On Thu, Sep 11, 2014 at 7:01 AM, Andres Freund and...@2ndquadrant.com wrote:
+static void bgreclaim_quickdie(SIGNAL_ARGS);
+static void BgreclaimSigHupHandler(SIGNAL_ARGS);
+static void
We really need a more centralized way to handle error cleanup in
auxiliary processes. The current state of affairs is really pretty
helter-skelter. But for this patch, I think we should aim to mimic
the existing style, as ugly as it is. I'm not sure whether Amit's got
the logic
On Thu, Sep 11, 2014 at 6:59 PM, Andres Freund and...@2ndquadrant.com
wrote:
We really need a more centralized way to handle error cleanup in
auxiliary processes. The current state of affairs is really pretty
helter-skelter. But for this patch, I think we should aim to mimic
the
On Thu, Sep 11, 2014 at 9:22 AM, Andres Freund and...@2ndquadrant.com wrote:
It's exactly the same as what bgwriter.c does.
So what? There's no code in common, so I see no reason to have one
signal handler using underscores and the next one camelcase names.
/me shrugs.
It's not always
On 2014-09-11 09:48:10 -0400, Robert Haas wrote:
On Thu, Sep 11, 2014 at 9:22 AM, Andres Freund and...@2ndquadrant.com wrote:
I wonder if we should recheck the number of freelist items before
sleeping. As the latch currently is reset before sleeping (IIRC) we
might miss being woken up soon.
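The underlying idiom: reset the latch first, then recheck shared state, so
any wakeup arriving after the recheck makes the next WaitLatch() return
immediately. A minimal sketch of such a loop for the reclaimer (the two
helpers are invented names):

  for (;;)
  {
      ResetLatch(&MyProc->procLatch);

      /* Recheck AFTER resetting: whoever adds work after this test will
       * set the latch, so the sleep below falls through at once. */
      while (freelist_needs_refill())   /* invented helper */
          reclaim_some_buffers();       /* invented helper */

      WaitLatch(&MyProc->procLatch,
                WL_LATCH_SET | WL_POSTMASTER_DEATH,
                -1L /* no timeout */);
  }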
On Thu, Sep 11, 2014 at 10:03 AM, Andres Freund and...@2ndquadrant.com wrote:
On 2014-09-11 09:48:10 -0400, Robert Haas wrote:
On Thu, Sep 11, 2014 at 9:22 AM, Andres Freund and...@2ndquadrant.com
wrote:
I wonder if we should recheck the number of freelist items before
sleeping. As the
On Tue, Sep 9, 2014 at 12:16 AM, Robert Haas robertmh...@gmail.com wrote:
On Fri, Sep 5, 2014 at 9:19 AM, Amit Kapila amit.kapil...@gmail.com
wrote:
On Fri, Sep 5, 2014 at 5:17 PM, Amit Kapila amit.kapil...@gmail.com
wrote:
Apart from above, I think for this patch, cat version bump is
On Wed, Sep 10, 2014 at 5:46 AM, Mark Kirkwood
mark.kirkw...@catalyst.net.nz wrote:
On 05/09/14 23:50, Amit Kapila wrote:
On Fri, Sep 5, 2014 at 8:42 AM, Mark Kirkwood
FWIW below are some test results on the 60 core beast with this patch
applied to 9.4. I'll need to do more runs to iron
On 10/09/14 18:54, Amit Kapila wrote:
On Wed, Sep 10, 2014 at 5:46 AM, Mark Kirkwood
mark.kirkw...@catalyst.net.nz
wrote:
In terms of the effect of the patch - looks pretty similar to the
scale 2000 results for read-write, but read-only is a different and
On Tue, Sep 9, 2014 at 3:11 AM, Thom Brown t...@linux.com wrote:
On 5 September 2014 14:19, Amit Kapila amit.kapil...@gmail.com wrote:
On Fri, Sep 5, 2014 at 5:17 PM, Amit Kapila amit.kapil...@gmail.com
wrote:
Apart from above, I think for this patch, cat version bump is required
as I have
On 05/09/14 23:50, Amit Kapila wrote:
On Fri, Sep 5, 2014 at 8:42 AM, Mark Kirkwood
mark.kirkw...@catalyst.net.nz
wrote:
On 04/09/14 14:42, Amit Kapila wrote:
On Thu, Sep 4, 2014 at 8:00 AM, Mark Kirkwood
mark.kirkw...@catalyst.net.nz
On Tue, Sep 9, 2014 at 3:46 AM, Robert Haas robertmh...@gmail.com wrote:
On Fri, Sep 5, 2014 at 9:19 AM, Amit Kapila amit.kapil...@gmail.com wrote:
One regression test failed on Linux due to a spacing issue, which is
fixed in the attached patch.
I just read the latest patch out of curiosity; wouldn't it make
On Fri, Sep 5, 2014 at 6:47 AM, Amit Kapila amit.kapil...@gmail.com wrote:
Client Count/Patch_Ver (tps)      8      16      32      64     128
HEAD                          58614  107370  140717  104357   65010
Patch                         60092  113564  165014  213848  216065
This data is the median of 3 runs; a detailed report is attached with the mail.
I have not repeated the
On Fri, Sep 5, 2014 at 9:19 AM, Amit Kapila amit.kapil...@gmail.com wrote:
On Fri, Sep 5, 2014 at 5:17 PM, Amit Kapila amit.kapil...@gmail.com wrote:
Apart from above, I think for this patch, cat version bump is required
as I have modified the system catalog. However I have not done the
same in
On 5 September 2014 14:19, Amit Kapila amit.kapil...@gmail.com wrote:
On Fri, Sep 5, 2014 at 5:17 PM, Amit Kapila amit.kapil...@gmail.com
wrote:
Apart from above, I think for this patch, cat version bump is required
as I have modified the system catalog. However I have not done the
same in
On Mon, Sep 8, 2014 at 10:12 PM, Merlin Moncure mmonc...@gmail.com wrote:
On Fri, Sep 5, 2014 at 6:47 AM, Amit Kapila amit.kapil...@gmail.com
wrote:
Client Count/Patch_Ver (tps)      8      16      32      64     128
HEAD                          58614  107370  140717  104357   65010
Patch                         60092  113564  165014  213848  216065
This data is
On Wed, Sep 3, 2014 at 1:45 AM, Robert Haas robertmh...@gmail.com wrote:
On Thu, Aug 28, 2014 at 7:11 AM, Amit Kapila amit.kapil...@gmail.com
wrote:
I have updated the patch to address the feedback. Main changes are:
1. For populating freelist, have a separate process (bgreclaimer)
On Fri, Sep 5, 2014 at 8:42 AM, Mark Kirkwood mark.kirkw...@catalyst.net.nz
wrote:
On 04/09/14 14:42, Amit Kapila wrote:
On Thu, Sep 4, 2014 at 8:00 AM, Mark Kirkwood
mark.kirkw...@catalyst.net.nz
wrote:
Hi Amit,
Results look pretty good. Does it help in the read-write case too?
On Wed, Sep 3, 2014 at 8:03 PM, Robert Haas robertmh...@gmail.com wrote:
On Wed, Sep 3, 2014 at 7:27 AM, Amit Kapila amit.kapil...@gmail.com
wrote:
+while (tmp_num_to_free > 0)
I am not sure it's a good idea for this value to be fixed at loop
start and then just decremented.
It is
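Concretely, the concern is that the count is latched once at loop entry and
then counted down, so the loop cannot react to demand changing underneath
it. A hedged sketch of the two shapes (helper names invented):

  /* As in the patch: decide the amount up front, then just decrement. */
  tmp_num_to_free = freelist_target - current_freelist_len();
  while (tmp_num_to_free > 0)
  {
      free_one_buffer();
      tmp_num_to_free--;
  }

  /* Alternative: re-read demand each pass, reacting to backends draining
   * or refilling the freelist concurrently. */
  while (current_freelist_len() < freelist_target)
      free_one_buffer();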
On Wed, Sep 3, 2014 at 9:45 AM, Amit Kapila amit.kapil...@gmail.com wrote:
Performance Data:
-----------------
Configuration and Db Details
IBM POWER-7 16 cores, 64 hardware threads
RAM = 64GB
Database Locale = C
checkpoint_segments=256
checkpoint_timeout=15min
On Thu, Sep 4, 2014 at 7:25 AM, Amit Kapila amit.kapil...@gmail.com wrote:
It's not difficult to handle such cases, but it can also have a downside
for the cases where demand from backends is not high.
Consider in above case if instead of 500 more allocations, it just
does 5 more allocations,
Robert Haas robertmh...@gmail.com wrote:
On Thu, Sep 4, 2014 at 7:25 AM, Amit Kapila amit.kapil...@gmail.com wrote:
It's not difficult to handle such cases, but it can also have a downside
for the cases where demand from backends is not high.
Consider in above case if instead of 500 more
Robert Haas wrote:
On Wed, Sep 3, 2014 at 7:27 AM, Amit Kapila amit.kapil...@gmail.com wrote:
+Background Reclaimer's Processing
+---------------------------------
I suggest titling this section Background Reclaim.
I don't mind changing it, but the currently used title is based on a similar
On 04/09/14 14:42, Amit Kapila wrote:
On Thu, Sep 4, 2014 at 8:00 AM, Mark Kirkwood mark.kirkw...@catalyst.net.nz
wrote:
Hi Amit,
Results look pretty good. Does it help in the read-write case too?
Last time I ran the tpc-b test of pgbench (results of which are
posted earlier in this
On Wed, Sep 3, 2014 at 1:45 AM, Robert Haas robertmh...@gmail.com wrote:
On Thu, Aug 28, 2014 at 7:11 AM, Amit Kapila amit.kapil...@gmail.com
wrote:
I have updated the patch to address the feedback. Main changes are:
1. For populating freelist, have a separate process (bgreclaimer)
On Wed, Sep 3, 2014 at 7:27 AM, Amit Kapila amit.kapil...@gmail.com wrote:
+Background Reclaimer's Processing
+---------------------------------
I suggest titling this section Background Reclaim.
I don't mind changing it, but the currently used title is based on the similar
title Background
On 03/09/14 16:22, Amit Kapila wrote:
On Wed, Sep 3, 2014 at 9:45 AM, Amit Kapila amit.kapil...@gmail.com wrote:
On Thu, Aug 28, 2014 at 4:41 PM, Amit Kapila amit.kapil...@gmail.com
wrote:
I have yet to collect data under varying loads, however I have
collected performance data for 8GB
On Thu, Sep 4, 2014 at 8:00 AM, Mark Kirkwood mark.kirkw...@catalyst.net.nz
wrote:
Hi Amit,
Results look pretty good. Does it help in the read-write case too?
Last time I ran the tpc-b test of pgbench (results of which are
posted earlier in this thread), there doesn't seem to be any major