On Thu, Aug 31, 2017 at 1:52 AM, Jeff Davis wrote:
> Updated patch attached. Changelog:
>
> * Rebased
> * Changed MJCompare to return an enum as suggested, but it has 4
> possible values rather than 3.
> * Added support for joining on contains or contained by (@> or <@) a
m as suggested, but it has 4
possible values rather than 3.
* Added support for joining on contains or contained by (@> or <@) and
updated tests.
Regards,
Jeff Davis
diff --git a/doc/src/sgml/rangetypes.sgml b/doc/src/sgml/rangetypes.sgml
index 9557c16..84578a7 100644
*** a/doc/src/sgml/r
).
* Better integration with the catalog so that users could add their
own types that support range merge join.
Thank you for the review.
Regards,
Jeff Davis
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
o the latter.
Once we support more pushdowns to partitions, the only question is:
what are your join keys and what are your grouping keys?
Text is absolutely a normal join key or group key. Consider joins on a
user ID or grouping by a model number.
Regards,
Jeff Davis
fine with either option.
> 2. Add an option like --dump-partition-data-with-parent. I'm not sure
> who originally proposed this, but it seems that everybody likes it.
> What we disagree about is the degree to which it's sufficient. Jeff
> Davis thinks it doesn't go
releases later we make the typical cases work out of the box. I'm fine with
it as long as we don't paint ourselves into a corner.
Of course we still have work to do on the hash functions. We should solve
at least the most glaring portability problems, and try to harmonize the
hash opfamilies. If you agree, I can put together a patch or two.
Regards,
Jeff Davis
. partitioning
on date)
* both offer some maintenance benefits (e.g. reindex one partition at
a time), though range partitioning seems like it offers better
flexibility here in some cases
I lean toward separating the concepts, but Robert is making some
reasonable arguments and I could be convinced.
't mean that we should necessarily forbid them, but it should
make us question whether combining range and hash partitions is really
the right design.
Regards,
Jeff Davis
ould take care of the naming problem.
> If Java has portable hash functions, why can't we?
Java standardizes on a particular unicode encoding (utf-16). Are you
suggesting that we do the same? Or is there another solution that I am
missing?
Regards,
Jeff Davis
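To make the encoding point concrete: any byte-oriented hash gives different answers for the same logical string under different encodings, which is why Java's fixed UTF-16 string representation makes its hashes portable. A minimal Python sketch (illustrative only, not PostgreSQL's hash code):

```python
import hashlib

s = "café"  # same logical string

# The byte representation differs by encoding...
utf8_bytes = s.encode("utf-8")       # b'caf\xc3\xa9'
utf16_bytes = s.encode("utf-16-le")  # b'c\x00a\x00f\x00\xe9\x00'

# ...so any hash computed over the bytes differs too.
h_utf8 = hashlib.sha256(utf8_bytes).hexdigest()
h_utf16 = hashlib.sha256(utf16_bytes).hexdigest()
```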
't surprised, I think
users will understand why these aren't quite the same concepts.
Regards,
Jeff Davis
oot as Andres and
others suggested, and disable a lot of logical partitioning
capabilities. I'd be a little worried about what we do with
attaching/detaching, though.
Regards,
Jeff Davis
ybe hash partitions are really a
"semi-logical" partitioning that the optimizer understands, but where
things like per-partition check constraints don't make sense.
Regards,
Jeff Davis
/message-id/CAMp0ubfNMSGRvZh7N7TRzHHN5tz0ZeFP13Aq3sv6b0H37fdcPg%40mail.gmail.com
Regards,
Jeff Davis
ot always desirable for parallelism.
Hash partitioning doesn't have these issues and goes very nicely with
parallel query.
Regards,
Jeff Davis
ard
currently.
But hash partitioning is too valuable to give up on entirely. I think
we should consider supporting a limited subset of types for now with
something not based on the hash am.
Regards,
Jeff Davis
very
important. If we run out of bits, we can just salt the hash function
differently and get more hash bits. This is not urgent and I believe
we should just implement salts when and if some algorithm needs them.
Regards,
Jeff Davis
[1] You can see a kind of mirroring in the hash outputs indicati
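The salting idea above can be sketched as follows: re-hashing the same value under a different salt yields an effectively independent set of hash bits, so the output can be widened on demand. A hypothetical Python illustration (sha256 stands in for whatever hash primitive is actually used):

```python
import hashlib

def salted_hash64(value: bytes, salt: int) -> int:
    """Return 64 hash bits for value; a different salt yields an
    effectively independent set of bits for the same input."""
    digest = hashlib.sha256(salt.to_bytes(4, "little") + value).digest()
    return int.from_bytes(digest[:8], "little")

key = b"some partition key"
h0 = salted_hash64(key, 0)
h1 = salted_hash64(key, 1)
# If 64 bits ever run out, salt again: 128 bits total here.
wide_hash = (h1 << 64) | h0
```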
On Tue, May 2, 2017 at 7:01 PM, Robert Haas wrote:
> On Tue, May 2, 2017 at 9:01 PM, Jeff Davis wrote:
>> 1. Consider a partition-wise join of two hash-partitioned tables. If
>> that's a hash join, and we just use the hash opclass, we immediately
>> lose some useful
th better push-downs, which
will be good for parallel query.
Regards,
Jeff Davis
use in this
> function)
>&op_strategy,
>
Looks like filterdiff destroyed my patch from git. Attaching unified
version against master 3820c63d.
Thanks!
Jeff Davis
diff --git a/doc/src/sgml/rangetypes.sgml b/doc/src/sgml/rangetypes.sgml
index 9557c16..84578a7 100644
--- a/doc
rees are faster.
I don't quite follow. I don't think any of these proposals uses btree,
right? Range merge join doesn't need any index, your proposal uses
gist, and PgSphere's crossmatch uses gist.
Regards,
Jeff Davis
On Tue, Apr 11, 2017 at 8:35 AM, Alexander Korotkov
wrote:
> On Tue, Apr 11, 2017 at 5:46 PM, Jeff Davis wrote:
>> Do you have a sense of how this might compare with range merge join?
>
>
> If you have GiST indexes over ranges for both sides of join, then this
> method co
ome experimental evaluation here:
> http://www.adass2016.inaf.it/images/presentations/10_Korotkov.pdf
Do you have a sense of how this might compare with range merge join?
Regards,
Jeff Davis
On Tue, Apr 11, 2017 at 12:17 AM, Jeff Davis wrote:
> Version 2 attached. Fixed a few issues, expanded tests, added docs.
It looks like the CF app only listed my perf test script. Re-attaching
rangejoin-v2.patch so that it appears in the CF app. Identical to
other rangejoin-v2.patch.
Regards,
en if
the input relations are subqueries.
Regards,
Jeff Davis
On Thu, Apr 6, 2017 at 1:43 AM, Jeff Davis wrote:
>
> Example:
>
>
> Find different people using the same website at the same time:
>
> create table session(sessionid text, username text, duri
ow (though
more investigation might be useful here). Also, it doesn't provide any
alternative to the nestloop-with-inner-index we already offer at the
leaf level today.
Regards,
Jeff Davis
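For readers following along, the merge-style join on range overlap discussed in this thread can be sketched as below. This is a simplified Python illustration under the assumption that both inputs are sorted by lower bound; it is not the patch's actual executor logic, which must also handle NULLs, empty ranges, and rescans:

```python
def range_merge_join(left, right):
    """Merge-style join on range overlap: both inputs are lists of
    (lo, hi) half-open intervals, each sorted by lower bound."""
    out = []
    start = 0  # earliest right tuple that can still overlap
    for llo, lhi in left:
        # Right ranges ending at or before llo can never overlap
        # this (or any later) left range, since left is sorted.
        while start < len(right) and right[start][1] <= llo:
            start += 1
        i = start
        while i < len(right) and right[i][0] < lhi:
            rlo, rhi = right[i]
            if llo < rhi and rlo < lhi:  # overlap test
                out.append(((llo, lhi), (rlo, rhi)))
            i += 1
    return out

pairs = range_merge_join([(1, 5), (4, 9)],
                         [(0, 2), (3, 4), (8, 10)])
```

Unlike a nestloop with an inner index, neither input needs an index here; both just need to arrive sorted by lower bound.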
diff --git a/src/backend/commands/explain.c b/src/backend/commands/explain.c
index a18ab43..1110c1e
d the assembly we wanted
rather than how much it actually improved performance. Can someone
please point me to the numbers? Do they refute the conclusions in the
paper, or are we concerned about a wider range of processors?
Regards,
Jeff Davis
On Sun, Jan 22, 2017 at 10:32 PM, Jeff Davis wrote:
> On Sat, Jan 21, 2017 at 4:25 AM, Andrew Borodin wrote:
> One idea I had that might be simpler is to use a two-stage page
> delete. The first stage would remove the link from the parent and mark
> the page deleted, but leave th
d recycle it in a new place in the
tree.
Regards,
Jeff Davis
't see a problem in your patch, but again, we are
breaking an assumption that future developers might make.
Your patch solves a real problem (a 90-second stall is clearly not
good) and I don't want to dismiss that. But I'd like to consider some
alternatives that may not have these do
med. I think this would give good concurrency even for K=2.
I just had this idea now, so I didn't think it through very well.
What do you think?
Regards,
Jeff Davis
ng tree, or leftmost of the subtree that
you are removing pages from?
* In order to keep out concurrent reads, you need to lock/unlock the
left page while holding exclusive lock on the page being deleted, but
I didn't see how that happens exactly in the code. Where does that
happen?
Regards,
tion.
Perhaps I didn't understand your point?
Regards,
Jeff Davis
upport in CREATE FUNCTION.
I think the execution is pretty good, except that (a) we need to keep
the state in fn_extra rather than the winstate; and (b) we should get
rid of the bitmaps and just do a naive scan unless we really think
non-constant offsets will be important. We can always optimize
hed. Please take a quick look.
Regards,
Jeff Davis
*** a/src/backend/utils/error/elog.c
--- b/src/backend/utils/error/elog.c
***
*** 143,152 static int errordata_stack_depth = -1; /* index of topmost active frame */
static int recursion_depth = 0; /* to detect
'm' format) and
once for the regular log ('m', 'n', or 't'). If the regular log uses
'm', that would be some wasted cycles formatting it the same way twice.
Is it worth a little extra ugliness to cache both the timeval and the
formatted string?
Regards,
On Mon, 2015-09-07 at 17:47 -0300, Alvaro Herrera wrote:
> Jeff Davis wrote:
> > On Sun, 2015-03-22 at 19:47 +0100, Andres Freund wrote:
> > > On 2015-03-22 00:47:12 +0100, Tomas Vondra wrote:
> > > > from time to time I need to correlate PostgreSQL logs to other lo
ugh nothing is free, the cost seems very low, and at least
three people have expressed interest in this patch.
What tips the balance is that we expose the unix epoch in the pgbench
logs, as Tomas points out.
Regards,
Jeff Davis
review!
Regards,
Jeff Davis
roceed with the HashAgg patch, with a
heuristic for internal types.
Regards,
Jeff Davis
be a single palloc'd chunk. But since we can't
> spill those aggregates to disk *anyway*, that doesn't really matter.
So would it be acceptable to just ignore the memory consumed by
"internal", or come up with some heuristic?
Regards,
Jeff Davis
things.
After talking with a few people at PGCon, I gather that small noisy
differences in CPU timings can appear for almost any tweak to the code,
and aren't necessarily cause for major concern.
Regards,
Jeff Davis
[1] pgbench -i -s 300, then do the following 3 times each for master,
v11, and v1
this been explored already?
>
That's a good idea, as it would be faster than recursing. Also, the
number of parents can't change, so we can just make an array, and that
would be quite fast to update. Unless I'm missing something, this sounds
like a nice solution. I
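The array idea can be sketched like this: since a context's parent chain is fixed at creation, it can be captured once as a flat array and iterated on every accounting update instead of recursing. A toy Python model (the real code would be C in the memory-context machinery; the names here are invented for illustration):

```python
class MemoryContext:
    """Toy model of propagating an allocation total up the
    context tree without a recursive walk."""

    def __init__(self, parent=None):
        self.parent = parent
        self.mem_allocated = 0
        # The parent chain never changes, so capture it once.
        self.ancestors = [] if parent is None else parent.ancestors + [parent]

    def account(self, nbytes: int) -> None:
        self.mem_allocated += nbytes
        for ctx in self.ancestors:  # iterate, don't recurse
            ctx.mem_allocated += nbytes

root = MemoryContext()
child = MemoryContext(root)
grandchild = MemoryContext(child)
grandchild.account(1024)
```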
ny
complexity to the patch (if it does, let me know).
Regards,
Jeff Davis
ion among parallel workers.
Some specifics of the Funnel operator seem to be a part of tqueue, which
doesn't make sense to me. For instance, reading from the set of queues
in a round-robin fashion is part of the Funnel algorithm, and doesn't
seem suitable for a generic tuple communicati
re changes in tqueue.c. But "tqueue" is a
generic name for the file, so something seems off. Either we should
explicitly make it the supporting routines for the Funnel operator, or
we should try to generalize it a little.
I still have quite a bit to look at, but this is a start.
Regards,
heapam.c, tqueue.c, etc. and all other generic
> (non-nodes specific) code.
Did you consider passing tuples through the tqueue by reference rather
than copying? The page should be pinned by the worker process, but
perhaps that's a bad assumption to make?
Regards,
Jeff Davis
also (I hope) be convenient for Simon and David Rowley, who
have been hacking on aggregates in general.
Anyone see a reason I shouldn't give this a try?
Regards,
Jeff Davis
cific machine.
Regards,
Jeff Davis
eemed solvable. What do you see as a major unsolved
issue?
If I recall, you were concerned about things like array_agg, where an
individual state could get larger than work_mem. That's a valid concern,
but it's not the problem I was trying to solve.
Regards,
Jeff Davis
[1]
htt
er and more accurate.
Regards,
Jeff Davis
*** a/src/backend/utils/mmgr/aset.c
--- b/src/backend/utils/mmgr/aset.c
***
*** 500,505 AllocSetContextCreate(MemoryContext parent,
--- 500,508
errdetail("Failed while creating memory context \"%s\".",
ustification at all for this behavior -- postgres
should not decide to coerce it in the first place if it's going to
fail.
Regards,
Jeff Davis
backwards compatibility.
> Distinguishing between "untyped" literals and "unknown type" literals
> seems a promising concept to aid in understanding the difference in the
> face of not being able (or wanting) to actually change the behavior.
Not sure I understand that proposal, can you elaborate?
Regards,
Jeff Davis
Moving thread to -hackers.
On Wed, Apr 8, 2015 at 11:18 PM, Jeff Davis wrote:
> That example was just for illustration. My other example didn't require
> creating a table at all:
>
> SELECT a=b FROM (SELECT ''::text, ' ') x(a,b);
>
> it's f
ion so that it doesn't need to be done for every
input tuple of HashAgg.
Thoughts?
Regards,
Jeff Davis
one
context per group is essentially a bug, so we don't need to optimize for
that case.
Your approach may be better, though.
Thank you for reviewing. I'll update my patches and resubmit for the
next CF.
Regards,
Jeff Davis
On Sun, 2015-02-22 at 00:07 -0500, Tom Lane wrote:
> If you want to have just *one* variable but change its name and type,
> I'd be ok with that.
Thank you for taking a quick look. Committed as a simple rename from
"context" to "set".
Regards,
Jeff Davis
ould that happen when the result of array_agg() is passed
> to the COUNT()? Also, how could that allocate huge amounts of memory and
> get killed by OOM, which happens easily with this query?
Oops, I misread that as "COUNT(*)". COUNT(x) will force array_agg() to
be executed.
Regards,
commit the cleanup at least.
Regards,
Jeff Davis
*** a/src/backend/utils/mmgr/aset.c
--- b/src/backend/utils/mmgr/aset.c
***
*** 438,451 AllocSetContextCreate(MemoryContext parent,
Size initBlockSize,
Size maxBlockSize)
{
! AllocSet context;
/* D
e 1M tuples
will run out of memory for small groups without the patch.
Committed.
Regards,
Jeff Davis
array_agg_test.sql
Description: application/sql
On Sat, 2015-02-07 at 16:08 -0800, Jeff Davis wrote:
> I believe Inclusion Constraints will be important for postgres.
I forgot to credit Darren Duncan with the name of this feature:
http://www.postgresql.org/message-id/4f8bb9b0.5090...@darrenduncan.net
Regards,
Jeff Davis
rk
- catalog work
- dump/reload support
- compare performance of a trivial inclusion constraint to a FK
- ensure that deadlocks are not too common
- add tests
Any takers?
Regards,
Jeff Davis
*** a/doc/src/sgml/ref/create_table.sgml
--- b/doc/src/sgml/ref/create_table.sgml
**
time we want in the future, but
> it's impossible to "unbreak it" ;-)
We can't break the old API, and I'm not suggesting that we do. I was
hoping to find some alternative.
Regards,
Jeff Davis
On Tue, Jan 20, 2015 at 6:44 AM, Tom Lane wrote:
> Jeff Davis writes:
>> Tom (tgl),
>> Is my reasoning above acceptable?
>
> Uh, sorry, I've not been paying any attention to this thread for awhile.
> What's the remaining questions at issue?
This patch is tr
On Sun, Dec 28, 2014 at 11:53 PM, Jeff Davis wrote:
> On Tue, 2014-04-01 at 13:08 -0400, Tom Lane wrote:
>> I think a patch that stood a chance of getting committed would need to
>> detect whether the aggregate was being called in simple or grouped
>> contexts, and apply d
n the original accumArrayResult(). That cure might be worse than the
disease though.
Regards,
Jeff Davis
> reduce the volume, you could just compress the whole WAL stream.
Was this point addressed? How much benefit is there to compressing the
data before it goes into the WAL stream versus after?
Regards,
Jeff Davis
see is from reducing the initial allocation from 64 to some lower
number. But if we're doubling each time, it won't take long to get
there; and because it's the simple context, we only need to do it once.
Regards,
Jeff Davis
ot of
third-party code? We might want to provide new functions to avoid a
breaking change.
Regards,
Jeff Davis
"shared memory context", because
> it sounds too much like it means "a context in shared memory". I see
> that the patch itself doesn't use that phrase, which is good, but can
> we come up with some other phrase for talking about it?
>
"Common memory
On Sun, 2014-12-28 at 12:37 -0800, Jeff Davis wrote:
> I feel like I made a mistake -- can someone please do a
> sanity check on my numbers?
I forgot to randomize the inputs, which doesn't matter much for hashagg
but does matter for sort. New data script attached. The results are e
On Thu, 2014-12-11 at 02:46 -0800, Jeff Davis wrote:
> On Sun, 2014-08-10 at 14:26 -0700, Jeff Davis wrote:
> > This patch requires the Memory Accounting patch, or something similar
> > to track memory usage.
> >
> > The attached patch enables hashagg to spil
On Tue, 2014-12-23 at 01:16 -0800, Jeff Davis wrote:
> New patch attached (rebased, as well).
>
> I also see your other message about adding regression testing. I'm
> hesitant to slow down the tests for everyone to run through this code
> path though. Should I add regression tests, and then remove them later
> after we're more comfortable that it works?
Regards,
Jeff Davis
*** a/doc/src/sgml/config.sgml
--- b/doc/src/sgml/config.sgml
***
*** 304
On Sun, 2014-08-10 at 14:26 -0700, Jeff Davis wrote:
> This patch requires the Memory Accounting patch, or something similar
> to track memory usage.
>
> The attached patch enables hashagg to spill to disk, which means that
> hashagg will contain itself to work_mem even if the
On Sun, 2014-11-30 at 17:49 -0800, Peter Geoghegan wrote:
> On Mon, Nov 17, 2014 at 11:39 PM, Jeff Davis wrote:
> > I can also just move isReset there, and keep mem_allocated as a uint64.
> > That way, if I find later that I want to track the aggregated value for
> > the chil
cessShare lock, and A2 tries to acquire an exclusive
lock. B is waiting on A2. That's still a deadlock, right?
Regards,
Jeff Davis
argue that we know we're headed for this problem, and
therefore we should solve it now. I disagree. You are assuming that
sharing exclusive heavyweight locks among a group will be a fundamental
part of everything postgres does with parallelism; but not every design
requires it.
Regards,
d I think we'll make a better decision about the exact form it
takes.
In other words: lock groups is important, but I don't see the rush for
lock sharing specifically.
Regards,
Jeff Davis
ve isReset there, and keep mem_allocated as a uint64.
That way, if I find later that I want to track the aggregated value for
the child contexts as well, I can split it into two uint32s. I'll hold
off on any such optimizations until I see some numbers from HashAgg
though.
Attached new versio
which would be
one less context to visit. And if those don't work, perhaps I could
resort to a sampling method of some kind, as you allude to above.
Regards,
Jeff Davis
[1] I'm fairly sure I tested something very similar on Robert's POWER
machine a while ago, and it was
On Thu, Nov 13, 2014 at 11:26 AM, Robert Haas wrote:
>
> On Thu, Nov 13, 2014 at 3:38 AM, Jeff Davis wrote:
> > If two backends both have an exclusive lock on the relation for a join
> > operation, that implies that they need to do their own synchronization,
> > be
Exclusion Constraints, I went to a lot of effort
to make deadlocks impossible, and was quite proud. When Tom saw it, he
told me not to bother, and to do it the simple way instead, because
deadlocks can happen even with UNIQUE constraints (which I didn't even
know).
We should use the sam
, it would see:
(A1 A2) -> B -> (A1 A2)
which is a cycle, and can be detected regardless of the synchronization
method used between A1 and A2. There are some details to work out to
avoid false positives, of course.
Is that about right?
Regards,
Jeff Davis
tly with an extra field in the lock tag,
and perhaps some catalog knowledge?
Regards,
Jeff Davis
exts to check whether the memory limit has been exceeded.
As Tomas pointed out, that could be a lot of work in the case of
array_agg with many groups.
Regards,
Jeff Davis
*** a/src/backend/utils/mmgr/aset.c
--- b/src/backend/utils/mmgr/aset.c
***
*** 438,451 Alloc
Tomas), but I
have a bit more microoptimization and testing to do. I'll mark it
"returned with feedback" for now, though if I find the time I'll do more
testing to see if the performance concerns are fully addressed.
Regards,
Jeff Davis
On Tue, 2014-08-26 at 22:13 -0700, Jeff Davis wrote:
> Attached a patch implementing the same idea though: only use the
> multibyte path if *both* the escape char and the current character from
> the pattern are multibyte.
Forgot to mention: with this patch, the test completes in about
would have
looked at xip). Would more discussion help here or do we need to wait
for performance numbers?
Regards,
Jeff Davis
ged the comment to more clearly state the behavior upon which
we're relying. I hope what I said is accurate.
Regards,
Jeff Davis
*** a/src/backend/utils/adt/regexp.c
--- b/src/backend/utils/adt/regexp.c
***
*** 688,698 similar_escape(PG_FUNCTION_ARGS)
elen = VARS
you can read data directly out of the heap page and use it
> without doing any additional I/O.
If the data is that static, then the visibility information would be
highly compressible, and surely in shared_buffers already.
(Yes, it would need to be pinned, which has a cost.)
Regards,
Jeff Davis
needs to address some
performance issues, but there's a chance of wrapping those up quickly.
Regards,
Jeff Davis
still need
to do a lookup, but no locks or contention). There would be some
challenges around invalidation (for xid wraparound) and pre-warming the
cache (so establishing a lot of connections doesn't cause a lot of CLOG
access).
Regards,
Jeff Davis
snapshot->snapshotlsn < LsnMax);
There would need to be some handling for locked tuples, or tuples
related to the current transaction, of course. But I still think it
would turn out simpler; perhaps by enough to save a few cycles.
Regards,
Jeff Davis
seem to be useful at all).
I'm not complaining, and I hope this is not a showstopper for this
patch, but I think it's worth discussing.
Regards,
Jeff Davis
only have about 1/P chance of saving the skew group
(where P is the ultimate number of partitions). With my approach, we'd
always keep the skew group in memory (unless we're very unlucky, and the
hash table fills up before we even see the skew value).
Regards,
Jeff Davis
eproduce the regression with the old patch, but the results were still
noisy.
Regards,
Jeff Davis
*** a/src/backend/utils/mmgr/aset.c
--- b/src/backend/utils/mmgr/aset.c
***
*** 242,247 typedef struct AllocChunkData
--- 242,249
#define AllocChunkGetPointer(chk) \
with the idea of tracking space for an entire hierarchy.
Also, as I pointed out in my reply to Robert, adding too many fields to
MemoryContextData may be the cause of the regression. Your idea requires
only one field, which doesn't show the same regression in my tests.
Regards,
Jeff
s like it might be an important threshold.
Regards,
Jeff Davis
ch is (potentially)
commit-worthy, and your statement that it (potentially) solves a real
problem is a big help.
Regards,
Jeff Davis
[1]
http://blogs.msdn.com/b/craigfr/archive/2008/01/18/partial-aggregation.aspx
e hash table fills up, in which case
HashAgg is just as bad as Sort.)
That being said, we can hold out for an array_agg fix if desired. As I
pointed out in another email, my proposal is compatible with the idea of
dumping groups out of the hash table, and does take some steps in that
directio