On 03/04/2014 01:58 PM, Amit Kapila wrote:
On Mon, Mar 3, 2014 at 7:57 PM, Heikki Linnakangas
hlinnakan...@vmware.com wrote:
On 02/16/2014 01:51 PM, Amit Kapila wrote:
On Wed, Feb 5, 2014 at 5:29 PM, Heikki Linnakangas
hlinnakan...@vmware.com wrote:
Thanks. I have to agree with Robert
On Wed, Mar 12, 2014 at 5:30 PM, Heikki Linnakangas
hlinnakan...@vmware.com wrote:
Ok, great. Committed!
Awesome.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes
On Tue, Mar 4, 2014 at 4:13 PM, Heikki Linnakangas
hlinnakan...@vmware.com wrote:
Agreed. Amit, do you have the test setup at hand, can you check the
performance of this one more time?
Are you expecting more performance numbers than I have posted?
Is there anything more left for the patch which you
On 03/03/2014 04:57 PM, Andres Freund wrote:
On 2014-03-03 16:27:05 +0200, Heikki Linnakangas wrote:
Attached is a rewritten version, which does the prefix/suffix tests directly
in heapam.c, and adds the prefix/suffix lengths directly as fields in the
WAL record. If you could take one more look
On 2014-03-04 12:43:48 +0200, Heikki Linnakangas wrote:
This ought to be tested with the new logical decoding stuff, as it modified
the WAL update record format on which the logical decoding stuff also relies,
but I don't know anything about that.
Hm, I think all it needs to do is disable delta
On Mon, Mar 3, 2014 at 7:57 PM, Heikki Linnakangas
hlinnakan...@vmware.com wrote:
On 02/16/2014 01:51 PM, Amit Kapila wrote:
On Wed, Feb 5, 2014 at 5:29 PM, Heikki Linnakangas
hlinnakan...@vmware.com wrote:
Thanks. I have to agree with Robert though that using the pglz encoding when
we're
On 02/16/2014 01:51 PM, Amit Kapila wrote:
On Wed, Feb 5, 2014 at 5:29 PM, Heikki Linnakangas
hlinnakan...@vmware.com wrote:
I'm pretty sure the overhead of that would be negligible, so we could always
enable it. There are certainly a lot of scenarios where prefix/suffix
detection alone
On 2014-03-03 16:27:05 +0200, Heikki Linnakangas wrote:
Thanks. I have to agree with Robert though that using the pglz encoding when
we're just checking for a common prefix/suffix is a pretty crappy way of
going about it [1].
As the patch stands, it includes the NULL bitmap when checking for
On 2014-03-03 10:35:03 -0500, Robert Haas wrote:
On Mon, Mar 3, 2014 at 9:57 AM, Andres Freund and...@2ndquadrant.com wrote:
Hm, I think all it needs to do is disable delta encoding if
need_tuple_data (which is dependent on wal_level=logical).
Why does it need to do that? The logical
On Mon, Mar 3, 2014 at 9:57 AM, Andres Freund and...@2ndquadrant.com wrote:
Hm, I think all it needs to do is disable delta encoding if
need_tuple_data (which is dependent on wal_level=logical).
Why does it need to do that? The logical decoding stuff should be
able to reverse out the delta
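Assuming the delta is just (prefix length, suffix length, changed middle), reversing it out during redo or decoding is mechanical. A minimal sketch under that assumption — names and layout here are illustrative, not the actual WAL update record format:

```c
#include <assert.h>
#include <string.h>
#include <stddef.h>

/*
 * Sketch: reconstruct the new tuple from the old tuple plus a
 * (prefix_len, suffix_len, middle) delta, as redo or logical decoding
 * would have to.  Returns the length of the reconstructed tuple.
 * Hypothetical names, not the actual WAL record layout.
 */
static size_t
apply_delta(const char *oldp, size_t oldlen,
            size_t prefix_len, size_t suffix_len,
            const char *middle, size_t middle_len,
            char *out)
{
    memcpy(out, oldp, prefix_len);                      /* shared prefix */
    memcpy(out + prefix_len, middle, middle_len);       /* changed bytes */
    memcpy(out + prefix_len + middle_len,
           oldp + oldlen - suffix_len, suffix_len);     /* shared suffix */
    return prefix_len + middle_len + suffix_len;
}
```

Note that the reconstruction needs the old tuple at hand, which redo has (the old page) but which constrains what logical decoding can assume — hence the question of disabling the encoding when the full tuple data is needed.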
On Mon, Mar 3, 2014 at 10:38 AM, Andres Freund and...@2ndquadrant.com wrote:
On 2014-03-03 10:35:03 -0500, Robert Haas wrote:
On Mon, Mar 3, 2014 at 9:57 AM, Andres Freund and...@2ndquadrant.com wrote:
Hm, I think all it needs to do is disable delta encoding if
need_tuple_data (which is
Hi,
Some quick review comments:
On 2014-02-13 18:14:54 +0530, Amit Kapila wrote:
+ /*
+ * EWT can be generated for all new tuple versions created by Update
+ * operation. Currently we do it when both the old and new tuple versions
+ * are on same page, because during
On Sat, Feb 15, 2014 at 8:21 PM, Andres Freund and...@2ndquadrant.com wrote:
Hi,
Some quick review comments:
Thanks for the review, I shall handle/reply to comments with the
updated version in which I am planing to fix a bug (right now preparing a
test to reproduce it) in this code.
Bug:
Tag
On 2014-02-15 21:01:07 +0530, Amit Kapila wrote:
More importantly I don't think doing the compression on
this level is that interesting. I know Heikki argued for it, but I think
extending the bitmap that's computed for HOT to cover all columns and
doing this on a column level sounds much
On Thu, Feb 13, 2014 at 10:20:46AM +0530, Amit Kapila wrote:
Why not *only* prefix/suffix?
To represent a prefix/suffix match, we at least need a way to record
the offset and length of the matched bytes, and then the length of the
unmatched bytes we have copied.
I agree that a simpler
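The representation question above presupposes finding the common prefix and suffix first. A minimal sketch of that detection step, with illustrative names rather than the patch's actual heapam.c code:

```c
#include <assert.h>
#include <stddef.h>

/*
 * Sketch: find the longest common prefix and suffix of the old and new
 * tuple bodies.  Only the unmatched middle of the new tuple (plus the
 * two lengths) would then need to go into the WAL record.  Hypothetical
 * names, not the actual patch code.
 */
static void
find_prefix_suffix(const char *oldp, size_t oldlen,
                   const char *newp, size_t newlen,
                   size_t *prefix_len, size_t *suffix_len)
{
    size_t  minlen = (oldlen < newlen) ? oldlen : newlen;
    size_t  p = 0;
    size_t  s = 0;

    while (p < minlen && oldp[p] == newp[p])
        p++;
    /* don't let the suffix overlap the prefix in the shorter tuple */
    while (s < minlen - p && oldp[oldlen - 1 - s] == newp[newlen - 1 - s])
        s++;

    *prefix_len = p;
    *suffix_len = s;
}
```

Two linear scans, no hash table or pglz tag machinery — which is why the thread argues this is far cheaper than forcing prefix/suffix detection through the pglz encoder.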
On Wed, Feb 12, 2014 at 10:02:32AM +0530, Amit Kapila wrote:
By issue, I assume you mean which compression algorithm is
best for this patch.
For this patch, currently we have 2 algorithms for which results have been
posted. As far as I understand Heikki is pretty sure that the latest
On Wed, Feb 12, 2014 at 8:19 PM, Bruce Momjian br...@momjian.us wrote:
On Wed, Feb 12, 2014 at 10:02:32AM +0530, Amit Kapila wrote:
I think 99.9% of users are never going to adjust this so we had better
choose something we are happy to enable for effectively everyone. In my
reading,
On Thu, Feb 13, 2014 at 1:20 AM, Amit Kapila amit.kapil...@gmail.com wrote:
Here one of the improvements which can be done is that after a prefix/suffix
match, instead of doing a byte-by-byte copy as per the LZ format, we can directly
copy all the remaining part of the tuple, but I think that would require
On Thu, Feb 13, 2014 at 10:07 AM, Claudio Freire klaussfre...@gmail.com wrote:
On Thu, Feb 13, 2014 at 1:20 AM, Amit Kapila amit.kapil...@gmail.com wrote:
Here one of the improvements which can be done is that after a prefix/suffix
match, instead of doing a byte-by-byte copy as per the LZ format, we can
On Tue, Feb 11, 2014 at 11:37 AM, Bruce Momjian br...@momjian.us wrote:
Yes, that was my point. I thought the compression of full-page images
was a huge win and that compression was pretty straightforward, except
for the compression algorithm. If the compression algorithm issue is
resolved,
On Mon, Feb 10, 2014 at 10:02 AM, Amit Kapila amit.kapil...@gmail.com wrote:
I think if we want to change the LZ format, it will be a bit more work, and
verification for decoding has to be done much more rigorously.
I don't think it'll be that big of a deal. And anyway, the evidence
here suggests
On Wed, Feb 5, 2014 at 10:57:57AM -0800, Peter Geoghegan wrote:
On Wed, Feb 5, 2014 at 12:50 AM, Heikki Linnakangas
hlinnakan...@vmware.com wrote:
I think there's zero overlap. They're completely complementary features.
It's not like normal WAL records have an irrelevant volume.
On Tue, Feb 11, 2014 at 10:07 PM, Bruce Momjian br...@momjian.us wrote:
On Wed, Feb 5, 2014 at 10:57:57AM -0800, Peter Geoghegan wrote:
On Wed, Feb 5, 2014 at 12:50 AM, Heikki Linnakangas
hlinnakan...@vmware.com wrote:
I think there's zero overlap. They're completely complementary features.
On Thu, Feb 6, 2014 at 5:57 PM, Amit Kapila amit.kapil...@gmail.com wrote:
On Thu, Feb 6, 2014 at 9:13 AM, Amit Kapila amit.kapil...@gmail.com wrote:
Considering the above change as correct, I have tried to see the worst
case overhead for this patch by having a new tuple such that after
25% or so of
On Thu, Feb 6, 2014 at 9:13 AM, Amit Kapila amit.kapil...@gmail.com wrote:
On Wed, Feb 5, 2014 at 8:50 PM, Heikki Linnakangas
hlinnakan...@vmware.com wrote:
On 02/05/2014 04:48 PM, Amit Kapila wrote:
I have done one test where there is a large suffix match, but
not large enough that it can
On Wed, Feb 5, 2014 at 8:56 PM, Amit Kapila amit.kapil...@gmail.com wrote:
On Wed, Feb 5, 2014 at 5:13 PM, Heikki Linnakangas
hlinnakan...@vmware.com wrote:
On 02/05/2014 07:54 AM, Amit Kapila wrote:
That's not the worst case, by far.
First, note that the skipping while scanning new tuple
On 02/04/2014 11:58 PM, Andres Freund wrote:
On February 4, 2014 10:50:10 PM CET, Peter Geoghegan p...@heroku.com wrote:
On Tue, Feb 4, 2014 at 11:11 AM, Andres Freund and...@2ndquadrant.com
wrote:
Does this feature relate to compression of WAL page images at all?
No.
So the obvious
On 02/05/2014 07:54 AM, Amit Kapila wrote:
On Tue, Feb 4, 2014 at 11:58 PM, Robert Haas robertmh...@gmail.com wrote:
On Tue, Feb 4, 2014 at 12:39 PM, Amit Kapila amit.kapil...@gmail.com wrote:
Now there is approximately a 1.4~5% CPU gain for the
'hundred tiny fields, half nulled' case
Assuming that
On 01/30/2014 08:53 AM, Amit Kapila wrote:
On Wed, Jan 29, 2014 at 8:13 PM, Heikki Linnakangas
hlinnakan...@vmware.com wrote:
On 01/29/2014 02:21 PM, Amit Kapila wrote:
The main reason to process in chunks as much as possible is to save
CPU cycles. For example if we build a hash table
On Wed, Feb 5, 2014 at 5:29 PM, Heikki Linnakangas
hlinnakan...@vmware.com wrote:
On 01/30/2014 08:53 AM, Amit Kapila wrote:
Is it possible to do for both prefix and suffix together? Basically
the question I
have in mind is what will be the deciding factor for switching from the hash table
mechanism
On Wed, Feb 5, 2014 at 5:13 PM, Heikki Linnakangas
hlinnakan...@vmware.com wrote:
On 02/05/2014 07:54 AM, Amit Kapila wrote:
On Tue, Feb 4, 2014 at 11:58 PM, Robert Haas robertmh...@gmail.com
wrote:
On Tue, Feb 4, 2014 at 12:39 PM, Amit Kapila amit.kapil...@gmail.com
wrote:
Now there is
On 02/05/2014 04:48 PM, Amit Kapila wrote:
I have done one test where there is a large suffix match, but
not large enough that it can compress more than 75% of the string;
the CPU overhead with wal-update-prefix-suffix-encode-1.patch is
not much, but there is no I/O reduction either.
Hmm, it's
On Wed, Feb 5, 2014 at 8:50 PM, Heikki Linnakangas
hlinnakan...@vmware.com wrote:
On 02/05/2014 04:48 PM, Amit Kapila wrote:
I have done one test where there is a large suffix match, but
not large enough that it can compress more than 75% of string,
the CPU overhead with
On Wed, Feb 5, 2014 at 12:50 AM, Heikki Linnakangas
hlinnakan...@vmware.com wrote:
I think there's zero overlap. They're completely complementary features.
It's not like normal WAL records have an irrelevant volume.
Correct. Compressing a full-page image happens on the first update after a
On Wed, Feb 5, 2014 at 6:59 AM, Heikki Linnakangas
hlinnakan...@vmware.com wrote:
Attached is a quick patch for that, if you want to test it.
But if we really just want to do prefix/suffix compression, this is a
crappy and expensive way to do it. We needn't force everything
through the pglz tag
On Wed, Feb 5, 2014 at 8:50 PM, Heikki Linnakangas
hlinnakan...@vmware.com wrote:
On 02/05/2014 04:48 PM, Amit Kapila wrote:
I have done one test where there is a large suffix match, but
not large enough that it can compress more than 75% of string,
the CPU overhead with
On Wed, Feb 5, 2014 at 6:43 AM, Heikki Linnakangas
hlinnakan...@vmware.com wrote:
So, I came up with the attached worst case test, modified from your latest
test suite.
unpatched:
testname | wal_generated | duration
On 06/02/14 16:59, Robert Haas wrote:
On Wed, Feb 5, 2014 at 6:43 AM, Heikki Linnakangas
hlinnakan...@vmware.com wrote:
So, I came up with the attached worst case test, modified from your latest
test suite.
unpatched:
testname | wal_generated | duration
On Tue, Feb 4, 2014 at 12:39 PM, Amit Kapila amit.kapil...@gmail.com wrote:
Now there is approximately a 1.4~5% CPU gain for the
'hundred tiny fields, half nulled' case
I don't want to advocate too strongly for this patch because, number
one, Amit is a colleague and more importantly, number two, I
On Tue, Feb 4, 2014 at 01:28:38PM -0500, Robert Haas wrote:
Meanwhile, in friendlier cases, like one short and one long field, no
change, we're seeing big improvements. That particular case shows a
speedup of 21% and a WAL reduction of 36%. That's a pretty big deal,
and I think not
On 2014-02-04 14:09:57 -0500, Bruce Momjian wrote:
On Tue, Feb 4, 2014 at 01:28:38PM -0500, Robert Haas wrote:
Meanwhile, in friendlier cases, like one short and one long field, no
change, we're seeing big improvements. That particular case shows a
speedup of 21% and a WAL reduction of
On Tue, Feb 4, 2014 at 08:11:18PM +0100, Andres Freund wrote:
On 2014-02-04 14:09:57 -0500, Bruce Momjian wrote:
On Tue, Feb 4, 2014 at 01:28:38PM -0500, Robert Haas wrote:
Meanwhile, in friendlier cases, like one short and one long field, no
change, we're seeing big improvements. That
On Tue, Feb 4, 2014 at 11:11 AM, Andres Freund and...@2ndquadrant.com wrote:
Does this feature relate to compression of WAL page images at all?
No.
So the obvious question is: where, if anywhere, do the two efforts
(this patch, and Fujii's patch) overlap? Does Fujii have any concerns
about
On February 4, 2014 10:50:10 PM CET, Peter Geoghegan p...@heroku.com wrote:
On Tue, Feb 4, 2014 at 11:11 AM, Andres Freund and...@2ndquadrant.com
wrote:
Does this feature relate to compression of WAL page images at all?
No.
So the obvious question is: where, if anywhere, do the two efforts
On Tue, Feb 4, 2014 at 1:58 PM, Andres Freund and...@2ndquadrant.com wrote:
I think there's zero overlap. They're completely complementary features. It's
not like normal WAL records have an irrelevant volume.
I'd have thought so too, but I would not like to assume. Like many
people commenting
On Tue, Feb 4, 2014 at 11:58 PM, Robert Haas robertmh...@gmail.com wrote:
On Tue, Feb 4, 2014 at 12:39 PM, Amit Kapila amit.kapil...@gmail.com wrote:
Now there is approximately a 1.4~5% CPU gain for the
'hundred tiny fields, half nulled' case
Assuming that the logic isn't buggy, a point in need of
On Fri, Jan 31, 2014 at 12:33 AM, Amit Kapila amit.kapil...@gmail.com wrote:
On Thu, Jan 30, 2014 at 12:23 PM, Amit Kapila amit.kapil...@gmail.com wrote:
On Wed, Jan 29, 2014 at 8:13 PM, Heikki Linnakangas
hlinnakan...@vmware.com wrote:
After basic verification of
On Thu, Jan 30, 2014 at 12:23 PM, Amit Kapila amit.kapil...@gmail.com wrote:
On Wed, Jan 29, 2014 at 8:13 PM, Heikki Linnakangas
hlinnakan...@vmware.com wrote:
Few observations in patch (back-to-pglz-like-delta-encoding-1):
1.
+#define pgrb_hash_unroll(_p, hindex) \
+ hindex = hindex ^
On 01/28/2014 07:01 PM, Heikki Linnakangas wrote:
On 01/27/2014 07:03 PM, Amit Kapila wrote:
I have tried to improve the algorithm in another way so that we can get
the benefit of the same chunks during match finding (something similar to LZ).
The main change is to consider chunks at a fixed boundary (4 bytes)
On Wed, Jan 29, 2014 at 3:41 PM, Heikki Linnakangas
hlinnakan...@vmware.com wrote:
On 01/28/2014 07:01 PM, Heikki Linnakangas wrote:
On 01/27/2014 07:03 PM, Amit Kapila wrote:
I have tried to improve the algorithm in another way so that we can get
the benefit of the same chunks during match finding
On 01/29/2014 02:21 PM, Amit Kapila wrote:
On Wed, Jan 29, 2014 at 3:41 PM, Heikki Linnakangas
hlinnakan...@vmware.com wrote:
For example, if the new tuple is an exact copy of the old tuple,
except for one additional byte in the beginning, the algorithm would fail to
recognize that. It would be
On 01/27/2014 07:03 PM, Amit Kapila wrote:
I have tried to improve the algorithm in another way so that we can get
the benefit of the same chunks during match finding (something similar to LZ).
The main change is to consider chunks at a fixed boundary (4 bytes)
and after finding a match, try to find if there is a
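The chunk-then-extend shape described above can be sketched as follows; this is a naive linear scan with hypothetical names (the actual patch hashes the chunks instead of scanning), just to show the idea of matching at fixed 4-byte boundaries and then growing the match byte by byte:

```c
#include <assert.h>
#include <string.h>
#include <stddef.h>

#define CHUNK 4     /* fixed chunk boundary, per the description above */

/*
 * Sketch: look for a CHUNK-sized run of the new tuple (at 'pos') anywhere
 * among the old tuple's fixed-boundary chunks; on a hit, extend the match
 * forward byte by byte.  Returns the match length and sets *match_off,
 * or returns 0 if no chunk matches.  Hypothetical names, not patch code.
 */
static size_t
find_chunk_match(const char *oldp, size_t oldlen,
                 const char *newp, size_t newlen, size_t pos,
                 size_t *match_off)
{
    if (pos + CHUNK > newlen)
        return 0;

    /* scan old tuple chunk by chunk (a hash table would avoid this scan) */
    for (size_t off = 0; off + CHUNK <= oldlen; off += CHUNK)
    {
        if (memcmp(oldp + off, newp + pos, CHUNK) == 0)
        {
            /* found a chunk match; extend it as far as possible */
            size_t  len = CHUNK;

            while (off + len < oldlen && pos + len < newlen &&
                   oldp[off + len] == newp[pos + len])
                len++;
            *match_off = off;
            return len;
        }
    }
    return 0;
}
```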
I think sorting by a string column is slower during a merge join; maybe the comparison
function in the sort needs to be refined to save some cycles. It's the hot function
when doing the sort.
Heikki Linnakangas hlinnakan...@vmware.com wrote:
On 01/27/2014 07:03 PM, Amit Kapila wrote:
I have tried to improve algorithm in
On Mon, Jan 27, 2014 at 12:03 PM, Amit Kapila amit.kapil...@gmail.com wrote:
I think that's a good thing to try. Can you code it up?
I have tried to improve the algorithm in another way so that we can get
the benefit of the same chunks during match finding (something similar to LZ).
The main change is to
On Tue, Jan 21, 2014 at 2:00 AM, Amit Kapila amit.kapil...@gmail.com wrote:
On Mon, Jan 20, 2014 at 9:49 PM, Robert Haas robertmh...@gmail.com wrote:
I ran Heikki's test suite on latest master and latest master plus
pgrb_delta_encoding_v4.patch on a PPC64 machine, but the results
didn't look
On Thu, Jan 16, 2014 at 12:07 AM, Amit Kapila amit.kapil...@gmail.com wrote:
Okay, got your point.
Another minor thing is that in the latest patch which I sent yesterday,
I have modified it such that during formation of chunks, if there is data
at the end of the string which doesn't have
On Mon, Jan 20, 2014 at 9:49 PM, Robert Haas robertmh...@gmail.com wrote:
I ran Heikki's test suite on latest master and latest master plus
pgrb_delta_encoding_v4.patch on a PPC64 machine, but the results
didn't look too good. The only tests where the WAL volume changed by
more than half a
On Mon, Nov 25, 2013 at 6:55 PM, Robert Haas robertmh...@gmail.com wrote:
But even if that doesn't
pan out, I think the fallback position should not be OK, well, if we
can't get decreased I/O for free then forget it but rather OK, if we
can't get decreased I/O for free then let's get decreased
On Thu, Jan 16, 2014 at 12:07 AM, Amit Kapila amit.kapil...@gmail.com wrote:
On Thu, Jan 16, 2014 at 12:49 AM, Robert Haas robertmh...@gmail.com wrote:
On Wed, Jan 15, 2014 at 7:28 AM, Amit Kapila amit.kapil...@gmail.com wrote:
Unpatched
---------
testname
On Wed, Jan 15, 2014 at 5:58 PM, Amit Kapila amit.kapil...@gmail.com wrote:
On Fri, Jan 10, 2014 at 9:12 PM, Robert Haas robertmh...@gmail.com wrote:
Performance Data
-
Non-default settings:
autovacuum =off
checkpoint_segments =128
checkpoint_timeout = 10min
On Wed, Jan 15, 2014 at 7:28 AM, Amit Kapila amit.kapil...@gmail.com wrote:
Unpatched
---------
testname | wal_generated | duration
---------+---------------+---------
On Thu, Jan 16, 2014 at 12:49 AM, Robert Haas robertmh...@gmail.com wrote:
On Wed, Jan 15, 2014 at 7:28 AM, Amit Kapila amit.kapil...@gmail.com wrote:
Unpatched
---------
testname | wal_generated | duration
On Tue, Jan 14, 2014 at 1:16 AM, Amit Kapila amit.kapil...@gmail.com wrote:
On Tue, Jan 14, 2014 at 2:16 AM, Robert Haas robertmh...@gmail.com wrote:
On Sat, Jan 11, 2014 at 1:08 AM, Amit Kapila amit.kapil...@gmail.com wrote:
Yes, currently this applies to update, what I have in mind is that
On Sat, Jan 11, 2014 at 1:08 AM, Amit Kapila amit.kapil...@gmail.com wrote:
Yes, currently this applies to update; what I have in mind is that
in future if someone wants to use WAL compression for any other
operation like 'full_page_writes', then it can be easily extended.
To be honest, I
On Tue, Jan 14, 2014 at 2:16 AM, Robert Haas robertmh...@gmail.com wrote:
On Sat, Jan 11, 2014 at 1:08 AM, Amit Kapila amit.kapil...@gmail.com wrote:
Yes, currently this applies to update, what I have in mind is that
in future if some one wants to use WAL compression for any other
operation
2. Provide a new reloption to specify WAL compression
for the update operation on a table:
Create table tbl(c1 char(100)) With (compress_wal = true);
Alternative options:
a. compress_wal can take input as operation, e.g. 'insert', 'update',
b. use alternate syntax:
On Fri, Jan 10, 2014 at 9:12 PM, Robert Haas robertmh...@gmail.com wrote:
2. Provide a new reloption to specify WAL compression
for the update operation on a table:
Create table tbl(c1 char(100)) With (compress_wal = true);
Alternative options:
a. compress_wal can take input as
On Fri, Dec 6, 2013 at 6:41 PM, Amit Kapila amit.kapil...@gmail.com wrote:
Agreed, summarization of data for LZ/Chunkwise encoding especially for
non-compressible (hundred tiny fields, all changed/half changed) or less
compressible data (hundred tiny fields, half nulled) w.r.t CPU
On Thu, Dec 12, 2013 at 12:27 AM, Amit Kapila amit.kapil...@gmail.com wrote:
On Thu, Dec 12, 2013 at 3:43 AM, Peter Eisentraut pete...@gmx.net wrote:
This patch fails the regression tests; see attachment.
Thanks for reporting the diffs. The reason for the failures is that
decoding for
This patch fails the regression tests; see attachment.
***
/var/lib/jenkins/jobs/postgresql_commitfest_world/workspace/src/test/regress/expected/arrays.out
2013-08-24 01:24:42.0 +
---
/var/lib/jenkins/jobs/postgresql_commitfest_world/workspace/src/test/regress/results/arrays.out
On Thu, Dec 12, 2013 at 3:43 AM, Peter Eisentraut pete...@gmx.net wrote:
This patch fails the regression tests; see attachment.
Thanks for reporting the diffs. The reason for the failures is that
decoding for the tuple is still not done, as mentioned in the Notes section of the
mail
On Fri, Dec 6, 2013 at 3:39 PM, Haribabu kommi
haribabu.ko...@huawei.com wrote:
On 06 December 2013 12:29 Amit Kapila wrote:
Note -
a. Performance data is taken on my laptop; needs to be tested on
some better m/c. b. Attached patch is just a prototype of the chunkwise
concept; code needs to
On 05 December 2013 21:16 Amit Kapila wrote:
Note -
a. Performance data is taken on my laptop; needs to be tested on
some better m/c. b. Attached patch is just a prototype of the chunkwise
concept; code needs to be improved, and decode
handling/test is pending.
I ran the performance test
On Fri, Dec 6, 2013 at 12:10 PM, Haribabu kommi
haribabu.ko...@huawei.com wrote:
On 05 December 2013 21:16 Amit Kapila wrote:
Note -
a. Performance data is taken on my laptop; needs to be tested on
some better m/c. b. Attached patch is just a prototype of the chunkwise
concept; code needs to be
On Mon, Dec 2, 2013 at 7:40 PM, Haribabu kommi
haribabu.ko...@huawei.com wrote:
On 29 November 2013 03:05 Robert Haas wrote:
On Wed, Nov 27, 2013 at 9:31 AM, Amit Kapila amit.kapil...@gmail.com
wrote:
I tried modifying the existing patch to support the dynamic rollup as follows.
For every 32
On Wed, Nov 27, 2013 at 9:31 AM, Amit Kapila amit.kapil...@gmail.com wrote:
Sure, but to explore (a), the scope is a bit bigger. We have the below
options to explore (a):
1. try to optimize the existing algorithm as used in the patch, which we have
tried, but of course we can spend some more time to see if
On Wed, Nov 27, 2013 at 12:56 AM, Amit Kapila amit.kapil...@gmail.com wrote:
- There is a comment TODO: It would be nice to behave like the
history and the source strings were concatenated, so that you could
compress using the new data, too. If we're not already doing that,
then how are we
On Wed, Nov 27, 2013 at 7:35 PM, Robert Haas robertmh...@gmail.com wrote:
On Wed, Nov 27, 2013 at 12:56 AM, Amit Kapila amit.kapil...@gmail.com wrote:
The basic idea is that you use a rolling hash function to divide up
the history data into chunks of a given average size. So we scan the
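The rolling-hash idea described above can be illustrated with a toy content-defined chunker. The hash update and mask here are made up for illustration only; the point is that boundaries depend on content rather than position, so identical data always chunks identically, and the mask width sets the average chunk size:

```c
#include <assert.h>
#include <stddef.h>

#define TARGET_MASK 0x3     /* toy value: average chunk ~4 bytes */

/*
 * Sketch: cut a chunk boundary whenever the low bits of a byte-wise
 * rolling hash are zero.  Fills 'bounds' with the offsets just past each
 * chunk and returns how many boundaries were found.  Hypothetical toy
 * hash, not the patch's actual rolling hash.
 */
static size_t
chunk_boundaries(const unsigned char *data, size_t len,
                 size_t *bounds, size_t max_bounds)
{
    unsigned int hash = 0;
    size_t  n = 0;

    for (size_t i = 0; i < len && n < max_bounds; i++)
    {
        hash = (hash << 1) ^ data[i];   /* toy rolling update */
        if ((hash & TARGET_MASK) == 0)
        {
            bounds[n++] = i + 1;        /* chunk ends after byte i */
            hash = 0;                   /* restart hash for next chunk */
        }
    }
    return n;
}
```

Because the boundaries are a pure function of the bytes, an insertion near the start of the tuple shifts chunk positions but not chunk contents, so later chunks can still be matched against the history.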
On Tue, Nov 26, 2013 at 8:25 AM, Robert Haas robertmh...@gmail.com wrote:
On Mon, Jul 22, 2013 at 2:31 PM, Greg Smith g...@2ndquadrant.com wrote:
I spent a little time running the tests from Heikki's script under
perf. On all three two short fields tests and also on the ten tiny
fields, all
On Mon, Jul 22, 2013 at 2:31 PM, Greg Smith g...@2ndquadrant.com wrote:
On the Mac, the only case that seems to have a slowdown now is hundred tiny
fields, half nulled. It would be nice to understand just what is going on
with that one. I got some ugly results in two short fields, no change
On Tuesday, July 23, 2013 12:02 AM Greg Smith wrote:
The v3 patch applies perfectly here now. Attached is a spreadsheet
with test results from two platforms, a Mac laptop and a Linux server.
I used systems with high disk speed because that seemed like a worst
case for this improvement. The
On Tuesday, July 23, 2013 12:27 AM Andres Freund wrote:
On 2013-07-19 10:40:01 +0530, Hari Babu wrote:
On Friday, July 19, 2013 4:11 AM Greg Smith wrote:
On 7/9/13 12:09 AM, Amit Kapila wrote:
I think the first thing to verify is whether the results posted
can be validated in some
On 2013-07-23 18:59:11 +0530, Amit Kapila wrote:
* I'd be very surprised if this doesn't make WAL replay of update heavy
workloads slower by at least factor of 2.
Yes, if you just consider the cost of replay, but it involves other
operations as well,
like for the standby case
On Tuesday, July 23, 2013 7:06 PM Andres Freund wrote:
On 2013-07-23 18:59:11 +0530, Amit Kapila wrote:
* I'd be very surprised if this doesn't make WAL replay of update
heavy
workloads slower by at least factor of 2.
Yes, if you just consider the cost of replay, but it involves
The v3 patch applies perfectly here now. Attached is a spreadsheet with
test results from two platforms, a Mac laptop and a Linux server. I
used systems with high disk speed because that seemed like a worst case
for this improvement. The actual improvement for shrinking WAL should
be even
On 2013-07-19 10:40:01 +0530, Hari Babu wrote:
On Friday, July 19, 2013 4:11 AM Greg Smith wrote:
On 7/9/13 12:09 AM, Amit Kapila wrote:
I think the first thing to verify is whether the results posted can be
validated in some other environment setup by another person.
The testcase
On 7/22/13 2:57 PM, Andres Freund wrote:
* I'd be very surprised if this doesn't make WAL replay of update heavy
workloads slower by at least factor of 2.
I was thinking about what a benchmark of WAL replay would look like last
year. I don't think that data is captured very well yet, and
On 7/9/13 12:09 AM, Amit Kapila wrote:
I think the first thing to verify is whether the results posted can be
validated in some other environment setup by another person.
The testcase used is posted at below link:
http://www.postgresql.org/message-id/51366323.8070...@vmware.com
That
Greg,
* Greg Smith (g...@2ndquadrant.com) wrote:
That seems easy enough to do here, Heikki's test script is
excellent. The latest patch Hari posted on July 2 has one hunk that
doesn't apply anymore now. Inside
src/backend/utils/adt/pg_lzcompress.c the patch tries to change this
code:
-
On Friday, July 19, 2013 4:11 AM Greg Smith wrote:
On 7/9/13 12:09 AM, Amit Kapila wrote:
I think the first thing to verify is whether the results posted can be
validated in some other environment setup by another person.
The testcase used is posted at below link:
The only environment I have available at the moment is a virtual box.
That's probably not going to be very helpful for performance testing.
Mike Blackwell | Technical Analyst, Distribution Services/Rollout
On Wednesday, July 10, 2013 6:32 AM Mike Blackwell wrote:
The only environment I have available at the moment is a virtual box. That's
probably not going to be very helpful for performance testing.
It's okay. Anyway thanks for doing the basic test for patch.
With Regards,
Amit Kapila.
I can't comment on further direction for the patch, but since it was marked
as Needs Review in the CF app I took a quick look at it.
It patches and compiles clean against the current Git HEAD, and 'make
check' runs successfully.
Does it need documentation for the GUC variable
On 07/08/2013 02:21 PM, Mike Blackwell wrote:
I can't comment on further direction for the patch, but since it was marked
as Needs Review in the CF app I took a quick look at it.
It patches and compiles clean against the current Git HEAD, and 'make
check' runs successfully.
Does it need
On Tuesday, July 09, 2013 2:52 AM Mike Blackwell wrote:
I can't comment on further direction for the patch, but since it was marked
as Needs Review in the CF app I took a quick look at it.
Thanks for looking into it.
Last time Heikki found test scenarios where the original patch was
On 2013-03-05 23:26:59 +0200, Heikki Linnakangas wrote:
On 04.03.2013 06:39, Amit Kapila wrote:
The stats look fairly sane. I'm a little concerned about the apparent
trend of falling TPS in the patched vs original tests for the 1-client
test as record size increases, but it's only
On Wednesday, March 06, 2013 2:57 AM Heikki Linnakangas wrote:
On 04.03.2013 06:39, Amit Kapila wrote:
On Sunday, March 03, 2013 8:19 PM Craig Ringer wrote:
On 02/05/2013 11:53 PM, Amit Kapila wrote:
Performance data for the patch is attached with this mail.
Conclusions from the readings
On 02/05/2013 11:53 PM, Amit Kapila wrote:
Performance data for the patch is attached with this mail.
Conclusions from the readings (these are the same as for my previous patch):
1. With original pgbench there is a max 7% WAL reduction with not much
performance difference.
2. With 250 record pgbench
On Sunday, March 03, 2013 8:19 PM Craig Ringer wrote:
On 02/05/2013 11:53 PM, Amit Kapila wrote:
Performance data for the patch is attached with this mail.
Conclusions from the readings (these are the same as for my previous patch):
1. With original pgbench there is a max 7% WAL reduction with not
On Wednesday, January 30, 2013 8:32 PM Amit Kapila wrote:
On Tuesday, January 29, 2013 7:42 PM Amit Kapila wrote:
On Tuesday, January 29, 2013 3:53 PM Heikki Linnakangas wrote:
On 29.01.2013 11:58, Amit Kapila wrote:
Can there be another way with which current patch code can be
made