better I/O subsystem, so I've rerun the tests with
the data directory in tmpfs, but that produced almost the same results.
Of course, this observation is unrelated to this patch.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Hi Amit,
On 12/13/2016 09:45 AM, Amit Langote wrote:
On 2016/12/13 0:17, Tomas Vondra wrote:
On 12/12/2016 07:37 AM, Amit Langote wrote:
Hi Tomas,
On 2016/12/12 10:02, Tomas Vondra wrote:
2) I'm wondering whether having 'table' in the catalog name (and also in
the new relkind) is too
On 12/12/2016 11:39 PM, Tomas Vondra wrote:
On 12/12/2016 05:05 AM, Petr Jelinek wrote:
I'd be happy with this patch now (as in committer ready) except that it
does have some merge conflicts after the recent commits, so rebase is
needed.
Attached is a rebased version of the patch, resolving
slab-allocators-v7.tgz
Description: application/compressed-tar
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
On 12/12/2016 07:37 AM, Amit Langote wrote:
Hi Tomas,
On 2016/12/12 10:02, Tomas Vondra wrote:
2) I'm wondering whether having 'table' in the catalog name (and also in
the new relkind) is too limiting. I assume we'll have partitioned indexes
one day, for example - do we expect to use
artitioning patch.
I'm mentioning it here because I think the new partitioning will
hopefully get more efficient and handle large partition counts more
efficiently (the inheritance only really works for ~100 partitions,
which is probably why no one complained about OOM during UPDATEs).
good candidate).
Trying to fix this by adding more GUCs seems a bit strange to me.
>
> In general, I have a positive outlook on this patch, since it appears
> to compete well with similar implementations in other systems
> scalability-wise. It does what it's supposed to do.
>
+1 t
On 11/27/2016 at 11:02 PM, Andres Freund wrote:
On 2016-11-27 22:21:49 +0100, Petr Jelinek wrote:
On 27/11/16 21:47, Andres Freund wrote:
Hi,
+typedef struct SlabBlockData *SlabBlock; /* forward reference */
+typedef struct SlabChunkData *SlabChunk;
Can we please not
On 11/27/2016 07:25 PM, Petr Jelinek wrote:
On 15/11/16 01:44, Tomas Vondra wrote:
Attached is v6 of the patch series, fixing most of the points:
* common bits (valgrind/randomization/wipe) moved to memdebug.h/c
Instead of introducing a new header file, I've added the prototypes to
memdebug.h
s put the burden on initdb to fill in
the correct value by modifying postgresql.conf.sample appropriately.
It seems like that could be done easily here too. And it'd be a
back-patchable fix.
I hadn't realized initdb could do that. I agree that would be the best
solution.
here. Some GUCs use -1 as "use
default value" and others using it as "disable". Picking one of those
does not really increase the confusion, and it fixes the issue of having
a default mismatching the commented-out example.
regards
comments,
maybe something like:
#checkpoint_flush_after = ... # default is 256kB on linux, 0 otherwise
# where 0 disables flushing
Yeah, something like that.
regards
use "-1" to specify the default value should be used, and use
that in the sample file. This won't break any user configuration.
If that's considered not acceptable, perhaps we should at least improve
the comments, so make this clearer.
regards
files, but those were instances with hundreds
of databases, each with many thousands of objects.
regards
On 11/21/2016 11:10 PM, Robert Haas wrote:
[ reviving an old multivariate statistics thread ]
On Thu, Nov 13, 2014 at 6:31 AM, Simon Riggs <si...@2ndquadrant.com> wrote:
On 12 October 2014 23:00, Tomas Vondra <t...@fuzzy.cz> wrote:
It however seems to be working suffi
ny reasonable case when it would be measurable, and I
don't expect this to be even measurable in practice.
regards
slab-allocators-v6.tgz
Description: applicati
-empty entry.
+     */
+    if (set->minFreeChunks == 0)
+        for (idx = 1; idx <= set->chunksPerBlock; idx++)
+            if (set->freelist[idx])
+            {
+                set->minFreeChunks = idx;
+                break;
+            }
Meh, ignore this report - I've just realized I've been running the
pg_xlogdump binary built for 8kB pages, so the failures are kinda
expected. Sorry about the confusion.
regards
On 11/12/2016 07:52 PM, Tomas Vondra wrote:
Hi,
I'm running some tests on a cluster with 4kB blocks, and it seems
d it never triggered this error.
FWIW the tests were done on bfcd07b4, so fairly recent code.
regards
On 11/03/2016 03:59 PM, Robert Haas wrote:
On Wed, Nov 2, 2016 at 12:49 PM, Tomas Vondra
<tomas.von...@2ndquadrant.com> wrote:
On 11/01/2016 08:32 PM, Robert Haas wrote:
On Tue, Nov 1, 2016 at 10:58 AM, Tomas Vondra
<tomas.von...@2ndquadrant.com> wrote:
Damn! You're right of c
On 11/02/2016 11:56 PM, Tomas Vondra wrote:
On 11/02/2016 09:00 PM, Tom Lane wrote:
Tomas Vondra <tomas.von...@2ndquadrant.com> writes:
while eye-balling some explain plans for parallel queries, I got a bit
confused by the row count estimates. I wonder whether I'm alone.
I got co
On 11/02/2016 09:00 PM, Tom Lane wrote:
Tomas Vondra <tomas.von...@2ndquadrant.com> writes:
while eye-balling some explain plans for parallel queries, I got a bit
confused by the row count estimates. I wonder whether I'm alone.
I got confused by that a minute ago, so no you're not
cuting a given node? How will that work once we get
parallel nested loops and index scans?
regards
On 11/02/2016 05:52 PM, Amit Kapila wrote:
On Wed, Nov 2, 2016 at 9:01 AM, Tomas Vondra
<tomas.von...@2ndquadrant.com> wrote:
On 11/01/2016 08:13 PM, Robert Haas wrote:
On Mon, Oct 31, 2016 at 5:48 PM, Tomas Vondra
<tomas.von...@2ndquadrant.com> wrote:
The one rem
On 11/01/2016 08:32 PM, Robert Haas wrote:
On Tue, Nov 1, 2016 at 10:58 AM, Tomas Vondra
<tomas.von...@2ndquadrant.com> wrote:
Damn! You're right of course. Who'd guess I need more coffee this early?
Attached is a fix replacing the flag with an array of flags, i
On 11/01/2016 08:13 PM, Robert Haas wrote:
On Mon, Oct 31, 2016 at 5:48 PM, Tomas Vondra
<tomas.von...@2ndquadrant.com> wrote:
Honestly, I have no idea what to think about this ...
I think a lot of the details here depend on OS scheduler behavior.
For example, here's one of the
On 11/01/2016 03:29 PM, Robert Haas wrote:
On Tue, Nov 1, 2016 at 10:21 AM, Tomas Vondra
<tomas.von...@2ndquadrant.com> wrote:
Clearly we need to pass some information to the worker processes, so that
they know whether to instrument the query or not. I don't know if there's a
good non-in
On 11/01/2016 02:15 PM, Robert Haas wrote:
On Mon, Oct 31, 2016 at 6:35 PM, Tomas Vondra
<tomas.von...@2ndquadrant.com> wrote:
While debugging something on 9.6, I've noticed that auto_explain handles
parallel queries in a slightly strange way - both the leader and all the
worke
s needs to be
decided in the leader, and communicated to the workers somehow.
regards
On 10/31/2016 02:24 PM, Tomas Vondra wrote:
On 10/31/2016 05:01 AM, Jim Nasby wrote:
On 10/30/16 1:32 PM, Tomas Vondra wrote:
Now, maybe this has nothing to do with PostgreSQL itself, but maybe it's
some sort of CPU / OS scheduling artifact. For example, the system has
36 physical cores, 72
On 10/31/2016 08:43 PM, Amit Kapila wrote:
On Mon, Oct 31, 2016 at 7:58 PM, Tomas Vondra
<tomas.von...@2ndquadrant.com> wrote:
On 10/31/2016 02:51 PM, Amit Kapila wrote:
And moreover, this setup (single device for the whole cluster) is very
common, we can't just neglect it.
But my main
On 10/31/2016 02:51 PM, Amit Kapila wrote:
On Mon, Oct 31, 2016 at 12:02 AM, Tomas Vondra
<tomas.von...@2ndquadrant.com> wrote:
Hi,
On 10/27/2016 01:44 PM, Amit Kapila wrote:
I've read that analysis, but I'm not sure I see how it explains the "zig
zag" behavior. I do understa
On 10/30/2016 07:32 PM, Tomas Vondra wrote:
Hi,
On 10/27/2016 01:44 PM, Amit Kapila wrote:
On Thu, Oct 27, 2016 at 4:15 AM, Tomas Vondra
<tomas.von...@2ndquadrant.com> wrote:
FWIW I plan to run the same test with logged tables - if it shows
similar
regression, I'll be much more w
On 10/31/2016 05:01 AM, Jim Nasby wrote:
On 10/30/16 1:32 PM, Tomas Vondra wrote:
Now, maybe this has nothing to do with PostgreSQL itself, but maybe it's
some sort of CPU / OS scheduling artifact. For example, the system has
36 physical cores, 72 virtual ones (thanks to HT). I find it strange
Hi,
On 10/27/2016 01:44 PM, Amit Kapila wrote:
On Thu, Oct 27, 2016 at 4:15 AM, Tomas Vondra
<tomas.von...@2ndquadrant.com> wrote:
FWIW I plan to run the same test with logged tables - if it shows similar
regression, I'll be much more worried, because that's a fairly typical
scenario (
On 10/25/2016 06:10 AM, Amit Kapila wrote:
On Mon, Oct 24, 2016 at 2:48 PM, Dilip Kumar <dilipbal...@gmail.com> wrote:
On Fri, Oct 21, 2016 at 7:57 AM, Dilip Kumar <dilipbal...@gmail.com> wrote:
On Thu, Oct 20, 2016 at 9:03 PM, Tomas Vondra
<tomas.von...@2ndquad
On 10/23/2016 05:26 PM, Petr Jelinek wrote:
On 23/10/16 16:26, Tomas Vondra wrote:
On 10/22/2016 08:30 PM, Tomas Vondra wrote:
...
Moreover, the slab/gen allocators proposed here seem like a better
fit for reorderbuffer, e.g. because they release memory. I haven't
looked at sb_alloc too closely
On 10/22/2016 08:30 PM, Tomas Vondra wrote:
On 10/20/2016 04:43 PM, Robert Haas wrote:
>>
...
The sb_alloc allocator I proposed a couple of years ago would work
well for this case, I think.
Maybe, but it does not follow the Memory Context design at all, if I
understand it correc
enefit from the
"same lifespan" assumption. I don't think sb_alloc can do that.
regards
On 10/21/2016 08:13 AM, Amit Kapila wrote:
On Fri, Oct 21, 2016 at 6:31 AM, Robert Haas <robertmh...@gmail.com> wrote:
On Thu, Oct 20, 2016 at 4:04 PM, Tomas Vondra
<tomas.von...@2ndquadrant.com> wrote:
I then started a run at 96 clients which I accidentally killed sh
tbucket.org/#pgbench-300-unlogged-sync-skip
However, it seems I can also reproduce fairly bad regressions, like for
example this case with data set exceeding shared_buffers:
* http://tvondra.bitbucket.org/#pgbench-3000-unlogged-sync-skip
regards
-64
There's a small benefit (~20% on the same client count), and the
performance drop only happens after 72 clients. The patches also
significantly increase variability of the results, particularly for
large client counts.
regards
On 10/19/2016 02:51 PM, Tomas Vondra wrote:
...
>
Yeah. There are three contexts in reorder buffers:
- changes (fixed size)
- txns (fixed size)
- tuples (variable size)
The first two work perfectly fine with Slab.
The last one (tuples) is used to allocate variable-sized bits, so I've
tr
On 10/19/2016 12:27 AM, Petr Jelinek wrote:
> On 18/10/16 22:25, Robert Haas wrote:
>> On Wed, Oct 5, 2016 at 12:22 AM, Tomas Vondra
>> <tomas.von...@2ndquadrant.com> wrote:
>>> attached is v3 of the patches, with a few minor fixes in Slab, and much
>>>
es on this
machine (and possibly running the same tests on the other one, if I
manage to get access to it again). But I'll leave further analysis of
the collected data up to the patch authors, or some volunteers.
regards
On 10/11/2016 05:56 PM, Andres Freund wrote:
On 2016-10-11 04:29:31 +0200, Tomas Vondra wrote:
On 10/11/2016 04:07 AM, Andres Freund wrote:
On 2016-10-10 17:46:22 -0700, Andres Freund wrote:
TPC-DS (tpcds.ods)
--
In this case, I'd say the results are less convincing
e to fix).
regards
scans or hash aggregates). But
the difference is there, even when running the query alone (so it's not
merely due to the randomized ordering).
I wonder whether this is again due to compiler moving stuff around.
regards
On 10/08/2016 07:47 AM, Amit Kapila wrote:
On Fri, Oct 7, 2016 at 3:02 PM, Tomas Vondra
<tomas.von...@2ndquadrant.com> wrote:
>
> ...
>
In total, I plan to test combinations of:
(a) Dilip's workload and pgbench (regular and -N)
(b) logged and unlogged tables
(c) scale 300 and s
On 10/05/2016 10:03 AM, Amit Kapila wrote:
On Wed, Oct 5, 2016 at 12:05 PM, Tomas Vondra
<tomas.von...@2ndquadrant.com> wrote:
Hi,
After collecting a lot more results from multiple kernel versions, I can
confirm that I see a significant improvement with 128 and 192 clients,
roughly
On 10/06/2016 07:36 AM, Pavan Deolasee wrote:
On Wed, Oct 5, 2016 at 1:43 PM, Tomas Vondra
<tomas.von...@2ndquadrant.com <mailto:tomas.von...@2ndquadrant.com>> wrote:
...
I can confirm the significant speedup, often by more than 75%
(depending on number of indexes, whet
:
TupleSort main: 33278738504 total in 263 blocks; 78848 free (23 chunks);
33278659656 used
regards
/0x70
SyS_socketcall+0x2a0/0x440
syscall_exit+0x0/0x7c
You should probably talk to SuSe or whoever supports that system.
regards
or 9 of them?
So I think we'll need two counters to track WARM - number of index
tuples we've added, and number of index tuples we've skipped. So
something like blks_hit and blks_read. I'm not sure whether we should
replace the n_tup_hot_upd entirely, or keep it for backwards
compatibility.
no-content-lock  56182  62442  61234
group-update     55019  61587  60485
I haven't done much more testing (e.g. with -N to eliminate collisions
on branches) yet, let's see if it changes anything.
regards
and chunkSize, GenSlabCreate() now
accepts three parameters - minBlockSize, minChunkCount and chunkSize,
and computes the minimum block size (>= minBlockSize), sufficient to
store minChunkCount chunks, each chunkSize bytes. This works much better
in the auto-tuning scenario.
regards
oc() methods - both for Slab and GenSlab. The
current use case (reorderbuffer) does not need that, and it seems like a
can of worms to me.
regards
ectly to SlabRealloc() or
AllocSetRealloc().
The best solution I can think of is adding an alternate version of
AllocSetMethods, pointing to a different AllocSetReset implementation.
regards
g, which did not
include chunk headers etc.).
So I won't fight for this, but I don't see why not to account for it.
regards
ing it!
;-)
On 10/02/2016 01:53 AM, Jim Nasby wrote:
On 9/26/16 9:10 PM, Tomas Vondra wrote:
Attached is v2 of the patch, updated based on the review. That means:
+/* make sure the block can store at least one chunk (with 1B for a bitmap)? */
(and the comment below it)
I find the question
On 10/01/2016 09:59 PM, Andres Freund wrote:
Hi,
On 2016-10-01 20:19:21 +0200, Tomas Vondra wrote:
On 10/01/2016 02:44 AM, Andres Freund wrote:
Hi,
On 2016-07-26 17:43:33 -0700, Andres Freund wrote:
In the attached patch I've attached simplehash.h, which can be
customized by a bunch
n profiles - but I think the above two
conversions are plenty to start with.
regards
t to look into it,
you're welcome.
regards
On 09/29/2016 03:47 PM, Robert Haas wrote:
On Wed, Sep 28, 2016 at 9:10 PM, Tomas Vondra
<tomas.von...@2ndquadrant.com> wrote:
I feel like we must be missing something here. If Dilip is seeing
huge speedups and you're seeing nothing, something is different, and
we don't know what it is.
On 09/29/2016 01:59 AM, Robert Haas wrote:
On Wed, Sep 28, 2016 at 6:45 PM, Tomas Vondra
<tomas.von...@2ndquadrant.com> wrote:
So, is 300 too little? I don't think so, because Dilip saw some benefit from
that. Or what scale factor do we think is needed to reproduce the benefit?
My machi
On 09/28/2016 05:39 PM, Robert Haas wrote:
On Tue, Sep 27, 2016 at 5:15 PM, Tomas Vondra
<tomas.von...@2ndquadrant.com> wrote:
So, I got the results from 3.10.101 (only the pgbench data), and it looks
like this:
3.10.101 1 8 16 32 64 128
On 09/26/2016 08:48 PM, Tomas Vondra wrote:
On 09/26/2016 07:16 PM, Tomas Vondra wrote:
The averages (over the 10 runs, 5 minute each) look like this:
3.2.80 1 8 16 32 64 128 192
of tracking them
in freelist, those chunks got freed immediately.
regards
0001-simple-slab-allocator-fixed-size-allocations.patch
Description: binary/octet-st
On 09/26/2016 07:16 PM, Tomas Vondra wrote:
The averages (over the 10 runs, 5 minute each) look like this:
3.2.80 1 8 16 32 64 128 192
granular-locking 1567 12146 26341 44188
On 09/25/2016 08:48 PM, Petr Jelinek wrote:
Hi Tomas,
On 02/08/16 17:44, Tomas Vondra wrote:
This patch actually includes two new memory allocators (not one). Very
brief summary (for more detailed explanation of the ideas, see comments
at the beginning of slab.c and genslab.c):
Slab
On 09/25/2016 08:33 PM, Oleg Bartunov wrote:
On Sat, Sep 24, 2016 at 11:32 PM, Tomas Vondra
<tomas.von...@2ndquadrant.com> wrote:
On 09/22/2016 07:37 PM, Tom Lane wrote:
Tomas Vondra <tomas.von...@2ndquadrant.com> writes:
... I've tried increasing the cache size to 768
entrie
On 09/22/2016 07:37 PM, Tom Lane wrote:
Tomas Vondra <tomas.von...@2ndquadrant.com> writes:
... I've tried increasing the cache size to 768
entries, with vast majority of them (~600) allocated to leaf pages.
Sadly, this seems to only increase the CREATE INDEX duration a bit,
without
On 09/24/2016 06:06 AM, Amit Kapila wrote:
On Fri, Sep 23, 2016 at 8:22 PM, Tomas Vondra
<tomas.von...@2ndquadrant.com> wrote:
...
>>
So I'm using 16GB shared buffers (so with scale 300 everything fits into
shared buffers), min_wal_size=16GB, max_wal_size=128GB, checkpoint timeou
On 09/23/2016 02:59 PM, Pavan Deolasee wrote:
On Fri, Sep 23, 2016 at 6:05 PM, Tomas Vondra
<tomas.von...@2ndquadrant.com <mailto:tomas.von...@2ndquadrant.com>> wrote:
On 09/23/2016 05:10 AM, Amit Kapila wrote:
On Fri, Sep 23, 2016 at 5:14 AM, Tomas Vondra
On 09/23/2016 03:07 PM, Amit Kapila wrote:
On Fri, Sep 23, 2016 at 6:16 PM, Tomas Vondra
<tomas.von...@2ndquadrant.com> wrote:
On 09/23/2016 01:44 AM, Tomas Vondra wrote:
...
The 4.5 kernel clearly changed the results significantly:
...
(c) Although it's not visible in the results,
On 09/23/2016 01:44 AM, Tomas Vondra wrote:
...
The 4.5 kernel clearly changed the results significantly:
...
>
(c) Although it's not visible in the results, 4.5.5 almost perfectly
eliminated the fluctuations in the results. For example when 3.2.80
produced this results (10 runs with the s
On 09/23/2016 05:10 AM, Amit Kapila wrote:
On Fri, Sep 23, 2016 at 5:14 AM, Tomas Vondra
<tomas.von...@2ndquadrant.com> wrote:
On 09/21/2016 08:04 AM, Amit Kapila wrote:
(c) Although it's not visible in the results, 4.5.5 almost perfectly
eliminated the fluctuations in the r
On 09/23/2016 03:20 AM, Robert Haas wrote:
On Thu, Sep 22, 2016 at 7:44 PM, Tomas Vondra
<tomas.von...@2ndquadrant.com> wrote:
I don't dare to suggest rejecting the patch, but I don't see how
we could commit any of the patches at this point. So perhaps
"returned with feedback"
On 09/21/2016 08:04 AM, Amit Kapila wrote:
On Wed, Sep 21, 2016 at 3:48 AM, Tomas Vondra
<tomas.von...@2ndquadrant.com> wrote:
...
I'll repeat the test on the 4-socket machine with a newer kernel,
but that's probably the last benchmark I'll do for this patch for
now.
Attached are r
On 08/25/2016 03:26 AM, Tomas Vondra wrote:
On 08/25/2016 01:45 AM, Tom Lane wrote:
Over in the thread about the SP-GiST inet opclass, I threatened to
post a patch like this, and here it is.
The basic idea is to track more than just the very latest page
we've used in each of the page
ng too far.
+1 from me to only locking the buffer headers. IMHO that's perfectly
fine for the purpose of this extension.
regards
On 09/05/2016 06:19 PM, Ivan Kartyshov wrote:
On 09/03/2016 05:04 AM, Tomas Vondra wrote:
This patch needs a rebase, as 06d7fd6e bumped the version to 1.2.
Thank you for a valuable hint.
So, will we get a rebased patch? I see the patch is back in 'needs
review' but there's no new version
solutely no idea what parameters they were using, except that
they were running with synchronous_commit=off. Pgbench shows no such
improvements (at least for me), at least with reasonable parameters.
regards
On 09/18/2016 06:08 AM, Amit Kapila wrote:
On Sat, Sep 17, 2016 at 11:25 PM, Tomas Vondra
<tomas.von...@2ndquadrant.com> wrote:
On 09/17/2016 07:05 AM, Amit Kapila wrote:
On Sat, Sep 17, 2016 at 9:17 AM, Tomas Vondra
<tomas.von...@2ndquadrant.com> wrote:
On 09/14/2016 05:29 PM,
On 09/17/2016 07:05 AM, Amit Kapila wrote:
On Sat, Sep 17, 2016 at 9:17 AM, Tomas Vondra
<tomas.von...@2ndquadrant.com> wrote:
On 09/14/2016 05:29 PM, Robert Haas wrote:
...
Sure, but you're testing at *really* high client counts here.
Almost nobody is going to benefit from a 5% impro
latency more than
throughput.
So while it's nice to improve throughput in those cases, it's a bit like
a tree falling in the forest without anyone around.
regards
On 09/17/2016 05:23 AM, Amit Kapila wrote:
On Sat, Sep 17, 2016 at 6:54 AM, Tomas Vondra
<tomas.von...@2ndquadrant.com> wrote:
On 09/14/2016 06:04 PM, Dilip Kumar wrote:
...
(I've also ran it with 100M rows, called "large" in the results), and
pgbench is running this transa
mark, I see he only ran
the test for 10 seconds, and I'm not sure how many runs he did, warmup
etc. Dilip, can you provide additional info?
I'll ask someone else to redo the benchmark after the weekend to make
sure it's not actually some stupid mistake of mine.
regards
On 09/15/2016 06:40 PM, Robert Haas wrote:
On Thu, Sep 15, 2016 at 12:22 PM, Tom Lane <t...@sss.pgh.pa.us> wrote:
Tomas Vondra <tomas.von...@2ndquadrant.com> writes:
On 09/14/2016 07:57 PM, Tom Lane wrote:
People who are vacuuming because they are out of disk space will be very
On 09/14/2016 05:17 PM, Robert Haas wrote:
I am kind of doubtful about this whole line of investigation because
we're basically trying pretty hard to fix something that I'm not sure
is broken. I do agree that, all other things being equal, the TID
lookups will probably be faster with a
On 09/14/2016 07:57 PM, Tom Lane wrote:
Pavan Deolasee writes:
On Wed, Sep 14, 2016 at 10:53 PM, Alvaro Herrera
wrote:
One thing not quite clear to me is how do we create the bitmap
representation starting from the array representation in
Hi,
Thanks for looking into this!
On 09/12/2016 04:08 PM, Dean Rasheed wrote:
On 3 August 2016 at 02:58, Tomas Vondra <tomas.von...@2ndquadrant.com> wrote:
Attached is v19 of the "multivariate stats" patch series
Hi,
I started looking at this - just at a very high level
On 09/07/2016 01:13 PM, Amit Kapila wrote:
> On Wed, Sep 7, 2016 at 1:08 AM, Tomas Vondra
> <tomas.von...@2ndquadrant.com> wrote:
>> On 09/06/2016 04:49 AM, Amit Kapila wrote:
>>> On Mon, Sep 5, 2016 at 11:34 PM, Tomas Vondra
>>> <tomas.von...@2ndquadrant.c
On 09/06/2016 04:49 AM, Amit Kapila wrote:
> On Mon, Sep 5, 2016 at 11:34 PM, Tomas Vondra
> <tomas.von...@2ndquadrant.com> wrote:
>>
>>
>> On 09/05/2016 06:03 AM, Amit Kapila wrote:
>>> So, in short we have to compare three
>>> approaches here.
On 09/05/2016 06:03 AM, Amit Kapila wrote:
> On Mon, Sep 5, 2016 at 3:18 AM, Tomas Vondra
> <tomas.von...@2ndquadrant.com> wrote:
>> Hi,
>>
>> This thread started a year ago, different people contributed various
>> patches, some of which already go
some tests on the hardware I have available, but
I'm not willing to spend my time untangling the discussion.
thanks
into the block where they are used. But the latter is
probably matter of personal taste, I guess.
regards
indexonlyscan5-tomas.patch
Description: binary/o