leaf pages will be created but merely that the inserts
will touch a different (possibly existing) leaf page. That's a direct
consequence of the inherent UUID randomness.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Hi,
On 10/27/2017 09:34 AM, Simon Riggs wrote:
> On 27 October 2017 at 07:20, Robert Haas wrote:
>> On Thu, Oct 19, 2017 at 10:15 PM, Tomas Vondra
>> wrote:
>>> Let's see a query like this:
>>>
>>> select * from bloom_test
>>>
Hi,
On 10/27/2017 07:17 PM, Nico Williams wrote:
> On Thu, Oct 19, 2017 at 10:15:32PM +0200, Tomas Vondra wrote:
>
> A bloom filter index would, indeed, be wonderful.
>
> Comments:
>
> + * We use an optimisation that initially we store the uint32 values directly,
Hi,
On 10/27/2017 05:22 PM, Sokolov Yura wrote:
>
> Hi, Tomas
>
> BRIN bloom index is a really cool feature, that definitely should be in
> core distribution (either in contrib or builtin)!!!
>
> Small suggestion for algorithm:
>
> It is well known practice
Hi,
On 10/28/2017 02:41 AM, Nico Williams wrote:
> On Fri, Oct 27, 2017 at 10:06:58PM +0200, Tomas Vondra wrote:
>>> + * We use an optimisation that initially we store the uint32 values
>>> directly,
>>> + * without the extra hashing step. And only later fillin
th the message, only the value (offset) changes.
The stack trace always looks exactly the same - see the attachment.
At first it seemed the idxrel is always the index on 'e' (i.e. the UUID
column), but it seems I also got failures on the other indexes.
regards
FWIW I can reproduce this on REL_10_STABLE, but not on REL9_6_STABLE. So
it seems to be due to something that changed in the last release.
regards
rm Michael's findings - I've been unable to
reproduce the issue on 1a4be103a5 even after 20 minutes, and on
24992c6db9 it failed after only 2.
regards
https://www.postgresql.org/message-id/5d78b774-7e9c-c94e-12cf-fef51cc89b1a%402ndquadrant.com
0001-Pass-all-keys-to-BRIN-consistent-function-at-once.patch.gz
then the gains become much
more modest - not because the device could not handle more, but because
of the prefetch/processing ratio reached the optimal value.
But all this is actually per-process, if you can run multiple backends
(particularly when doing bitmap index scans), I'm sure you'l
or handling / killing
suspended workers (which didn't occur to me before as a possible issue
at all, so thanks for pointing that out). But that's a significantly
more limited issue to fix than all the parallel-unsafe bits.
Now, I agree this is somewhat more limited than I hoped fo
select count(*) from brin_test where a = 0;
count
---
9062
(1 row)
test=# set enable_bitmapscan = off;
SET
test=# select count(*) from brin_test where a = 0;
count
---
9175
(1 row)
Attached is a SQL script with commands I used. You'll need
On 10/31/2017 11:44 PM, Tomas Vondra wrote:
> ...
> Unfortunately, I think we still have a problem ... I've been wondering
> if we end up producing correct indexes, so I've done a simple test.
>
> 1) create the table as before
>
> 2) let the insert + vacuum run
Hi,
On 11/02/2017 06:45 PM, Alvaro Herrera wrote:
> Tomas Vondra wrote:
>
>> Unfortunately, I think we still have a problem ... I've been wondering
>> if we end up producing correct indexes, so I've done a simple test.
>
> Here's a proposed patch that s
On 09/28/2016 05:39 PM, Robert Haas wrote:
On Tue, Sep 27, 2016 at 5:15 PM, Tomas Vondra
wrote:
So, I got the results from 3.10.101 (only the pgbench data), and it looks
like this:
3.10.101      1      8     16     32     64    128    192
On 09/29/2016 01:59 AM, Robert Haas wrote:
On Wed, Sep 28, 2016 at 6:45 PM, Tomas Vondra
wrote:
So, is 300 too little? I don't think so, because Dilip saw some benefit from
that. Or what scale factor do we think is needed to reproduce the benefit?
My machine has 256GB of ram, so I can e
On 09/29/2016 03:47 PM, Robert Haas wrote:
On Wed, Sep 28, 2016 at 9:10 PM, Tomas Vondra
wrote:
I feel like we must be missing something here. If Dilip is seeing
huge speedups and you're seeing nothing, something is different, and
we don't know what it is. Even if the test case is
y accounting, so if you want to look into it,
you're welcome.
regards
ast hash
table and frequently shows up in profiles - but I think the above two
conversions are plenty to start with.
regards
On 10/01/2016 09:59 PM, Andres Freund wrote:
Hi,
On 2016-10-01 20:19:21 +0200, Tomas Vondra wrote:
On 10/01/2016 02:44 AM, Andres Freund wrote:
Hi,
On 2016-07-26 17:43:33 -0700, Andres Freund wrote:
In the attached patch I've attached simplehash.h, which can be
customized by a bun
On 10/02/2016 01:53 AM, Jim Nasby wrote:
On 9/26/16 9:10 PM, Tomas Vondra wrote:
Attached is v2 of the patch, updated based on the review. That means:
+/* make sure the block can store at least one chunk (with 1B for a
bitmap)? */
(and the comment below it)
I find the question to be
Nice patch, I enjoyed reading it!
;-)
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
ven if we
don't include that into work_mem it's a reasonably small error (easily
smaller than errors in the pre-9.5 HashJoin accounting, which did not
include chunk headers etc.).
So I won't fight for this, but I don't see why not to account for it.
regards
) call does not go
through GenSlabRealloc() at all, but directly to SlabRealloc() or
AllocSetRealloc().
The best solution I can think of is adding an alternate version of
AllocSetMethods, pointing to a different AllocSetReset implementation.
regards
elog(ERROR) in the realloc() methods - both for Slab and GenSlab. The
current use case (reorderbuffer) does not need that, and it seems like a
can of worms to me.
regards
and chunkSize, GenSlabCreate() now
accepts three parameters - minBlockSize, minChunkCount and chunkSize,
and computes the minimum block size (>= minBlockSize), sufficient to
store minChunkCount chunks, each chunkSize bytes. This works much better
in the auto-tuning scenario.
regards
juqrYzQf=icsdh3u4...@mail.gmail.com
regards
11
no-content-lock   56182   62442   61234
group-update      55019   61587   60485
I haven't done much more testing (e.g. with -N to eliminate collisions
on branches) yet, let's see if it changes anything.
regards
s that mean the update added index tuple to 1 index
or 9 of them?
So I think we'll need two counters to track WARM - number of index
tuples we've added, and number of index tuples we've skipped. So
something like blks_hit and blks_read. I'm not sure whether we should
replace
0
SyS_send+0x50/0x70
SyS_socketcall+0x2a0/0x440
syscall_exit+0x0/0x7c
You should probably talk to SuSe or whoever supports that system.
regards
:
TupleSort main: 33278738504 total in 263 blocks; 78848 free (23 chunks);
33278659656 used
regards
On 10/06/2016 07:36 AM, Pavan Deolasee wrote:
On Wed, Oct 5, 2016 at 1:43 PM, Tomas Vondra
mailto:tomas.von...@2ndquadrant.com>> wrote:
...
I can confirm the significant speedup, often by more than 75%
(depending on number of indexes, whether the data set fits into RAM,
On 10/05/2016 10:03 AM, Amit Kapila wrote:
On Wed, Oct 5, 2016 at 12:05 PM, Tomas Vondra
wrote:
Hi,
After collecting a lot more results from multiple kernel versions, I can
confirm that I see a significant improvement with 128 and 192 clients,
roughly by 30%:
64
On 10/08/2016 07:47 AM, Amit Kapila wrote:
On Fri, Oct 7, 2016 at 3:02 PM, Tomas Vondra
wrote:
>
> ...
>
In total, I plan to test combinations of:
(a) Dilip's workload and pgbench (regular and -N)
(b) logged and unlogged tables
(c) scale 300 and scale 3000 (both fit
similar plans (no bitmap index scans or hash aggregates). But
the difference is there, even when running the query alone (so it's not
merely due to the randomized ordering).
I wonder whether this is again due to compiler moving stuff around.
regards
d to think about
improving the join estimates, somehow. Because it's by far the most
significant source of issues (and the hardest one to fix).
regards
On 10/11/2016 05:56 PM, Andres Freund wrote:
On 2016-10-11 04:29:31 +0200, Tomas Vondra wrote:
On 10/11/2016 04:07 AM, Andres Freund wrote:
On 2016-10-10 17:46:22 -0700, Andres Freund wrote:
TPC-DS (tpcds.ods)
--
In this case, I'd say the results are less convincing.
to do.
I'll take care of collecting data for the remaining cases on this
machine (and possibly running the same tests on the other one, if I
manage to get access to it again). But I'll leave further analysis of
the collected data up to the patch authors, or some volunteers.
regards
On 10/19/2016 12:27 AM, Petr Jelinek wrote:
> On 18/10/16 22:25, Robert Haas wrote:
>> On Wed, Oct 5, 2016 at 12:22 AM, Tomas Vondra
>> wrote:
>>> attached is v3 of the patches, with a few minor fixes in Slab, and much
>>> larger fixes in G
On 10/19/2016 02:51 PM, Tomas Vondra wrote:
...
>
Yeah. There are three contexts in reorder buffers:
- changes (fixed size)
- txns (fixed size)
- tuples (variable size)
The first two work perfectly fine with Slab.
The last one (tuples) is used to allocate variable-sized bits, so I'
g/#pgbench-3000-unlogged-sync-skip-64
* http://tvondra.bitbucket.org/#pgbench-3000-unlogged-sync-noskip-64
There's a small benefit (~20% on the same client count), and the
performance drop only happens after 72 clients. The patches also
significantly increase variability of the results,
On 10/20/2016 07:59 PM, Robert Haas wrote:
On Thu, Oct 20, 2016 at 11:45 AM, Robert Haas wrote:
On Thu, Oct 20, 2016 at 3:36 AM, Dilip Kumar wrote:
On Thu, Oct 13, 2016 at 12:25 AM, Robert Haas wrote:
>>
...
So here's my theory. The whole reason why Tomas is having difficulty
On 10/21/2016 08:13 AM, Amit Kapila wrote:
On Fri, Oct 21, 2016 at 6:31 AM, Robert Haas wrote:
On Thu, Oct 20, 2016 at 4:04 PM, Tomas Vondra
wrote:
I then started a run at 96 clients which I accidentally killed shortly
before it was scheduled to finish, but the results are not much
different
"same lifespan" assumption. I don't think sb_alloc can do that.
regards
On 10/22/2016 08:30 PM, Tomas Vondra wrote:
On 10/20/2016 04:43 PM, Robert Haas wrote:
>>
...
The sb_alloc allocator I proposed a couple of years ago would work
well for this case, I think.
Maybe, but it does not follow the Memory Context design at all, if I
understand it correctly.
On 10/23/2016 05:26 PM, Petr Jelinek wrote:
On 23/10/16 16:26, Tomas Vondra wrote:
On 10/22/2016 08:30 PM, Tomas Vondra wrote:
...
Moreover, the slab/gen allocators proposed here seem like a better
fit for reorderbuffer, e.g. because they release memory. I haven't
looked at sb_alloc too cl
On 10/25/2016 06:10 AM, Amit Kapila wrote:
On Mon, Oct 24, 2016 at 2:48 PM, Dilip Kumar wrote:
On Fri, Oct 21, 2016 at 7:57 AM, Dilip Kumar wrote:
On Thu, Oct 20, 2016 at 9:03 PM, Tomas Vondra
wrote:
In the results you've posted on 10/12, you've mentioned a regression with
Hi,
On 10/27/2016 01:44 PM, Amit Kapila wrote:
On Thu, Oct 27, 2016 at 4:15 AM, Tomas Vondra
wrote:
FWIW I plan to run the same test with logged tables - if it shows similar
regression, I'll be much more worried, because that's a fairly typical
scenario (logged tables, data se
On 10/31/2016 05:01 AM, Jim Nasby wrote:
On 10/30/16 1:32 PM, Tomas Vondra wrote:
Now, maybe this has nothing to do with PostgreSQL itself, but maybe it's
some sort of CPU / OS scheduling artifact. For example, the system has
36 physical cores, 72 virtual ones (thanks to HT). I find it st
On 10/30/2016 07:32 PM, Tomas Vondra wrote:
Hi,
On 10/27/2016 01:44 PM, Amit Kapila wrote:
On Thu, Oct 27, 2016 at 4:15 AM, Tomas Vondra
wrote:
FWIW I plan to run the same test with logged tables - if it shows
similar
regression, I'll be much more worried, because that's a fair
On 10/31/2016 02:51 PM, Amit Kapila wrote:
On Mon, Oct 31, 2016 at 12:02 AM, Tomas Vondra
wrote:
Hi,
On 10/27/2016 01:44 PM, Amit Kapila wrote:
I've read that analysis, but I'm not sure I see how it explains the "zig
zag" behavior. I do understand that shifting the cont
On 10/31/2016 08:43 PM, Amit Kapila wrote:
On Mon, Oct 31, 2016 at 7:58 PM, Tomas Vondra
wrote:
On 10/31/2016 02:51 PM, Amit Kapila wrote:
And moreover, this setup (single device for the whole cluster) is very
common, we can't just neglect it.
But my main point here really is that the
On 10/31/2016 02:24 PM, Tomas Vondra wrote:
On 10/31/2016 05:01 AM, Jim Nasby wrote:
On 10/30/16 1:32 PM, Tomas Vondra wrote:
Now, maybe this has nothing to do with PostgreSQL itself, but maybe it's
some sort of CPU / OS scheduling artifact. For example, the system has
36 physical core
nable
instrumentation only for sample queries. So I guess this needs to be
decided in the leader, and communicated to the workers somehow.
regards
On 11/01/2016 02:15 PM, Robert Haas wrote:
On Mon, Oct 31, 2016 at 6:35 PM, Tomas Vondra
wrote:
While debugging something on 9.6, I've noticed that auto_explain handles
parallel queries in a slightly strange way - both the leader and all the
workers log their chunk of the query (i.e
On 11/01/2016 03:29 PM, Robert Haas wrote:
On Tue, Nov 1, 2016 at 10:21 AM, Tomas Vondra
wrote:
Clearly we need to pass some information to the worker processes, so that
they know whether to instrument the query or not. I don't know if there's a
good non-invasive way to do th
On 11/01/2016 08:13 PM, Robert Haas wrote:
On Mon, Oct 31, 2016 at 5:48 PM, Tomas Vondra
wrote:
Honestly, I have no idea what to think about this ...
I think a lot of the details here depend on OS scheduler behavior.
For example, here's one of the first scalability graphs I ever did:
On 11/01/2016 08:32 PM, Robert Haas wrote:
On Tue, Nov 1, 2016 at 10:58 AM, Tomas Vondra
wrote:
Damn! You're right of course. Who'd guess I need more coffee this early?
Attached is a fix replacing the flag with an array of flags, indexed by
ParallelMasterBackendId. Hopefully tha
On 11/02/2016 05:52 PM, Amit Kapila wrote:
On Wed, Nov 2, 2016 at 9:01 AM, Tomas Vondra
wrote:
On 11/01/2016 08:13 PM, Robert Haas wrote:
On Mon, Oct 31, 2016 at 5:48 PM, Tomas Vondra
wrote:
The one remaining thing is the strange zig-zag behavior, but that might
easily be due to
number of
workers executing a given node? How will that work if once we get
parallel nested loops and index scans?
regards
On 11/02/2016 09:00 PM, Tom Lane wrote:
Tomas Vondra writes:
while eye-balling some explain plans for parallel queries, I got a bit
confused by the row count estimates. I wonder whether I'm alone.
I got confused by that a minute ago, so no you're not alone. The problem
is even wor
On 11/02/2016 11:56 PM, Tomas Vondra wrote:
On 11/02/2016 09:00 PM, Tom Lane wrote:
Tomas Vondra writes:
while eye-balling some explain plans for parallel queries, I got a bit
confused by the row count estimates. I wonder whether I'm alone.
I got confused by that a minute ago, so no y
On 11/03/2016 03:59 PM, Robert Haas wrote:
On Wed, Nov 2, 2016 at 12:49 PM, Tomas Vondra
wrote:
On 11/01/2016 08:32 PM, Robert Haas wrote:
On Tue, Nov 1, 2016 at 10:58 AM, Tomas Vondra
wrote:
Damn! You're right of course. Who'd guess I need more coffee this early?
Attache
s many times and it never triggered this error.
FWIW the tests were done on bfcd07b4, so fairly recent code.
regards
Meh, ignore this report - I've just realized I've been running the
pg_xlogdump binary built for 8kB pages, so the failures are kinda
expected. Sorry about the confusion.
regards
On 11/12/2016 07:52 PM, Tomas Vondra wrote:
Hi,
I'm running some tests on a cluster with 4kB block
y need to do that when the block
+* got full (otherwise we know the current block is the right one).
+* We'll simply walk the freelist until we find a non-empty entry.
+*/
+ if (set->minFreeChunks == 0)
+ for (idx = 1; idx <= set->chunksPerBlock; idx++)
+
s in SlabAlloc - I
haven't found any reasonable case when it would be measurable, and I
don't expect this to be even measurable in practice.
regards
On 11/21/2016 11:10 PM, Robert Haas wrote:
[ reviving an old multivariate statistics thread ]
On Thu, Nov 13, 2014 at 6:31 AM, Simon Riggs wrote:
On 12 October 2014 23:00, Tomas Vondra wrote:
It however seems to be working sufficiently well at this point, enough
to get some useful feedback
) (see Section 24.3.3).
which is pretty damn useless, when you're investigating an issue. And
the referenced section (Making a Base Backup Using the Low Level API)
does not clearly explain how this maps to pg_start_backup(_,?).
What about adding a paragraph into pg_basebackup docs, explai
ng declarations as warnings,
which confuses configure:
https://bugs.llvm.org//show_bug.cgi?id=20820
regards
how would the block look if it
was written somewhere else, for example).
BTW I've noticed the pageinspect version is 1.6, but we only have
pageinspect--1.5.sql (and upgrade script to 1.6). Not sure that's
entirely intentional?
regards
On 02/11/2017 01:38 AM, Tomas Vondra wrote:
Incidentally, I've been dealing with a checksum failure reported by a
customer last week, and based on the experience I tend to agree that we
don't have the tools needed to deal with checksum failures. I think such
tooling should be a '
are really orthogonal
features. Even with expression indexes, the statistics are per
attribute, and the attributes are treated as independent.
There was a proposal to also allow creating statistics on expressions
(without having to create an index), but that's not supported yet.
regards
5 17500180160
10 15380330
20 ?750670
Although the results are quite noisy.
regards
el Core" (and perhaps the even older P6).
On AMD, it's a bit worse - the first micro-architecture with SSE4.1 was
Bulldozer (late 2011). So quite a few CPUs out there, even if most
people use Intel.
In any case, we can't just build x86-64 packages with compile-ti
seems like the culprit - the condition seems wrong. I wonder
why I haven't seen it during my tests, though ...
I'll work on getting slab committed first, and then review / edit /
commit generation.c later. One first note there is that I'm wondering
if generation.c is a too ge
ot really
give much time to look at it.
The changes seem fine to me, thanks for spending time on this.
Thanks
that slab.c's idea of
StandardChunkHeader's size doesn't match what mcxt.c think it is
(because slab.c simply embeds StandardChunkHeader, but mcxt uses
MAXALIGN(sizeof(StandardChunkHeader))). That's not good, but I don't
quite see how that'd cause the issue, since
v7l) that fails with the same issue,
so if you need to test the patch, let me know.
While building, I've also noticed a bunch of warnings about string
formatting, attached is a patch that that fixes those.
regards
On 02/27/2017 04:07 PM, Andres Freund wrote:
On February 27, 2017 6:14:20 AM PST, Tomas Vondra
wrote:
On 02/27/2017 01:02 PM, Andres Freund wrote:
Hi,
On 2017-02-27 03:17:32 -0800, Andres Freund wrote:
I'll work on getting slab committed first, and then review /
edit / commit generat
orward with something like this patch in all
branches, and only use Tomas' patch in master, because they're
considerably larger.
So you've tried to switch hashjoin to the slab allocators? Or what have
you compared?
regards
it fixes the test_deconding tests
on the rpi3 board I'm using for testing.
regards
slab-fix.patch
places.
It also removes the comment attribution to "bjm" from syscache.c,
because after modifying it, it's no longer the original comment.
regards
cat
Given this, routines like pfree their corresponding context ...
- missing "find" or "determine"
I also see you've explicitly mentioned the callbacks were added in 9.5.
Doesn't that somewhat reintroduce the historical account?
regards
lling MemoryContextContains()
assume they can't receive memory not allocated as a simple chunk by
palloc(). If that's not the case, it's likely broken.
regards
On 03/02/2017 07:42 AM, Kyotaro HORIGUCHI wrote:
Hello,
At Thu, 2 Mar 2017 04:05:34 +0100, Tomas Vondra wrote
in
OK,
attached is v24 of the patch series, addressing most of the reported
issues and comments (at least I believe so). The main changes are:
Unfortunately, 0002 conflicts with
cks when they get empty.
It's not entirely FIFO though, because the transactions interleave, so
later blocks may be released first. But the "allocated close, freed
close" is still there. So perhaps something like "TemporalSet" or
something like that would be a better name?
On 03/03/2017 05:09 AM, Robert Haas wrote:
On Mon, Feb 20, 2017 at 9:43 PM, Tomas Vondra
wrote:
BTW I've noticed the pageinspect version is 1.6, but we only have
pageinspect--1.5.sql (and upgrade script to 1.6). Not sure that's entirely
intentional?
Actually, that's th
On 03/04/2017 02:58 AM, Andres Freund wrote:
On 2017-03-01 22:19:30 -0800, Andres Freund wrote:
On 2017-03-02 04:36:23 +0100, Tomas Vondra wrote:
I've noticed two minor typos:
1) That is solved this by creating ...
- extra "this"
2) Given this, routines like pfree thei
On 03/04/2017 02:08 PM, Peter Eisentraut wrote:
On 3/3/17 09:03, Tomas Vondra wrote:
Damn. In my defense, the patch was originally created for an older
PostgreSQL version (to investigate issue on a production system), which
used that approach to building values. Should have noticed it, though.
only use MemoryContextContains() for 'flag=true' chunks.
The question however is whether this won't make the optimization
pointless. I also, wonder how much we save by this optimization and how
widely it's used? Can someone point me to some numbers?
regards
On 03/06/2017 10:13 PM, Peter Eisentraut wrote:
On 3/3/17 09:03, Tomas Vondra wrote:
Attached is v2, fixing both issues.
I wonder if
+ bytea *raw_page = PG_GETARG_BYTEA_P(0);
+ uargs->page = VARDATA(raw_page);
is expected to work reliably, without copying the argument t
On 03/06/2017 08:08 PM, Andres Freund wrote:
Hi,
On 2017-03-06 19:49:56 +0100, Tomas Vondra wrote:
On 03/06/2017 07:05 PM, Robert Haas wrote:
On Mon, Mar 6, 2017 at 12:44 PM, Andres Freund wrote:
On 2017-03-06 12:40:18 -0500, Robert Haas wrote:
On Wed, Mar 1, 2017 at 5:55 PM, Andres Freund
On 03/07/2017 12:19 AM, Andres Freund wrote:
On 2017-03-02 22:51:09 +0100, Tomas Vondra wrote:
Attaches is the last part of the patch series, rebased to current master and
adopting the new chunk header approach.
Something seems to have gone awry while sending that - the attachement
is a
s fixing, I guess.
regards
e block number at all.
5) Code duplication in bt_page_items() and bt_page_items_bytea() needs
to be handled.
Yes. If we adopt the approach proposed by Peter Eisentraut (redirecting
the old bt_page_items using a SQL function calling the new one), it will
also make the error messages consisten
On 03/13/2017 09:03 AM, Andres Freund wrote:
Hi,
On 2017-03-12 05:40:51 +0100, Tomas Vondra wrote:
I wanted to do a bit of testing and benchmarking on this, but 0004 seems to
be a bit broken.
Well, "broken" in the sense that it's already outdated, because other
stuff that got
iously.
Thanks for the work on the patch, BTW.
regards
On 15.5.2014 00:41, Tomas Vondra wrote:
> On 13.5.2014 20:42, Tomas Vondra wrote:
>> On 10.5.2014 20:21, Tomas Vondra wrote:
>>> On 9.5.2014 00:47, Tomas Vondra wrote:
>>>
>>> And I've requested 6 more animals - two for each compiler. One set for
>>>