On Thu, Aug 31, 2017 at 1:52 AM, Jeff Davis <pg...@j-davis.com> wrote:
> Updated patch attached. Changelog:
>
> * Rebased
> * Changed MJCompare to return an enum as suggested, but it has 4
> possible values rather than 3.
> * Added support for joining on contains or contained by (@> or <@) and
> updated tests.
Regards,
Jeff Davis
diff --git a/doc/src/sgml/rangetypes.sgml b/doc/src/sgml/rangetypes.sgml
index 9557c16..84578a7 100644
*** a/doc/s
").
* Better integration with the catalog so that users could add their
own types that support range merge join.
Thank you for the review.
Regards,
Jeff Davis
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
> reasons to do the latter.
Once we support more pushdowns to partitions, the only question is:
what are your join keys and what are your grouping keys?
Text is absolutely a normal join key or group key. Consider joins on a
user ID or grouping by a model number.
Regards,
Jeff Davis
making any changes.
I am fine with either option.
> 2. Add an option like --dump-partition-data-with-parent. I'm not sure
> who originally proposed this, but it seems that everybody likes it.
> What we disagree about is the degree to which it's sufficient. Jeff
> Davis thinks it do
the typical cases work out of the box. I'm fine with
it as long as we don't paint ourselves into a corner.
Of course we still have work to do on the hash functions. We should solve
at least the most glaring portability problems, and try to harmonize the
hash opfamilies. If you agree, I can put together a patch or two.
Regards,
Jeff Davis
could be convinced.
Regards,
Jeff Davis
It doesn't mean that we should necessarily forbid them, but it should
make us question whether combining range and hash partitions is really
the right design.
Regards,
Jeff Davis
y as we do TOAST?
That should take care of the naming problem.
> If Java has portable hash functions, why can't we?
Java standardizes on a particular unicode encoding (utf-16). Are you
suggesting that we do the same? Or is there another solution that I am
missing?
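A tiny illustration of the portability problem (Python standing in here purely for illustration; the string and hash choice are made up): any hash defined over raw bytes disagrees with itself as soon as the encoding of the "same" text differs.

```python
import hashlib

# The same logical string yields different byte sequences under
# different encodings, so a byte-oriented hash of it also differs.
s = "résumé"
h_utf8 = hashlib.md5(s.encode("utf-8")).hexdigest()
h_utf16 = hashlib.md5(s.encode("utf-16-le")).hexdigest()
print(h_utf8 == h_utf16)  # False
```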
Regards,
Jeff Davis
surprised, I think
users will understand why these aren't quite the same concepts.
Regards,
Jeff Davis
res and
others suggested, and disable a lot of logical partitioning
capabilities. I'd be a little worried about what we do with
attaching/detaching, though.
Regards,
Jeff Davis
e root. Or maybe hash partitions are really a
"semi-logical" partitioning that the optimizer understands, but where
things like per-partition check constraints don't make sense.
Regards,
Jeff Davis
Regards,
Jeff Davis
g/message-id/CAMp0ubfNMSGRvZh7N7TRzHHN5tz0ZeFP13Aq3sv6b0H37fdcPg%40mail.gmail.com
Regards,
Jeff Davis
acuum) but not always desirable for parallelism.
Hash partitioning doesn't have these issues and goes very nicely with
parallel query.
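A toy sketch of why it parallelizes so well (illustrative Python, not PostgreSQL's actual hash opclasses; the key names and partition count are made up): the partition for a row depends only on its key, and keys spread roughly evenly, so parallel workers can each take a partition with balanced work.

```python
import hashlib

def partition_for(key: str, nparts: int) -> int:
    # Stable hash of the key; the modulus picks the partition.
    h = int.from_bytes(hashlib.sha256(key.encode()).digest()[:8], "big")
    return h % nparts

counts = [0] * 4
for i in range(1000):
    counts[partition_for(f"user{i}", 4)] += 1
print(counts)  # four roughly equal buckets summing to 1000
```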
Regards,
Jeff Davis
But hash partitioning is too valuable to give up on entirely. I think
we should consider supporting a limited subset of types for now with
something not based on the hash am.
Regards,
Jeff Davis
just salt the hash function
differently and get more hash bits. This is not urgent and I believe
we should just implement salts when and if some algorithm needs them.
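Sketched concretely (an illustrative Python model, not the patch; the keyed-BLAKE2b choice is my stand-in for "salt the hash function differently"): running the same 32-bit hash under two salts yields 64 bits without ever defining a native 64-bit hash.

```python
import hashlib

def hash32(data: bytes, salt: int) -> int:
    # Keyed BLAKE2b standing in for a salted 32-bit hash function.
    h = hashlib.blake2b(data, key=salt.to_bytes(8, "little"), digest_size=4)
    return int.from_bytes(h.digest(), "little")

def hash64(data: bytes) -> int:
    # Two differently salted 32-bit values concatenate into 64 bits.
    return (hash32(data, salt=1) << 32) | hash32(data, salt=0)
```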
Regards,
Jeff Davis
[1] You can see a kind of mirroring in the hash outputs indicating bad mixing:
postgres=# select hashint8((2
On Tue, May 2, 2017 at 7:01 PM, Robert Haas <robertmh...@gmail.com> wrote:
> On Tue, May 2, 2017 at 9:01 PM, Jeff Davis <pg...@j-davis.com> wrote:
>> 1. Consider a partition-wise join of two hash-partitioned tables. If
>> that's a hash join, and we just use the hash opcl
sh-downs, which
will be good for parallel query.
Regards,
Jeff Davis
rror: ‘op_strategy’ undeclared (first use in this
> function)
>_strategy,
>
Looks like filterdiff destroyed my patch from git. Attaching unified
version against master 3820c63d.
Thanks!
Jeff Davis
diff --git a/doc/src/sgml/rangetypes.sgml b/doc/src/sgml/rangetypes.sgml
index 9557c1
tgreSQL, and B-trees are faster.
I don't quite follow. I don't think any of these proposals uses btree,
right? Range merge join doesn't need any index, your proposal uses
gist, and PgSphere's crossmatch uses gist.
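For reference, the core of a range merge join needs nothing beyond both inputs being sorted by lower bound (an illustrative Python sketch of the general algorithm, not the patch's executor code; range representation is made up):

```python
def range_merge_join(left, right):
    """Join two lists of (lo, hi) ranges on overlap; both lists must be
    sorted by lower bound. No index on either side is required."""
    results = []
    start = 0  # first right range that can still overlap a left range
    for llo, lhi in left:
        # Right ranges ending before this left range can never match again.
        while start < len(right) and right[start][1] < llo:
            start += 1
        i = start
        while i < len(right) and right[i][0] <= lhi:
            rlo, rhi = right[i]
            if rhi >= llo:  # overlap check
                results.append(((llo, lhi), (rlo, rhi)))
            i += 1
    return results

pairs = range_merge_join([(1, 3), (2, 5), (7, 8)], [(0, 2), (4, 9)])
print(pairs)  # the four overlapping pairs
```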
Regards,
Jeff Davis
On Tue, Apr 11, 2017 at 8:35 AM, Alexander Korotkov
<a.korot...@postgrespro.ru> wrote:
> On Tue, Apr 11, 2017 at 5:46 PM, Jeff Davis <pg...@j-davis.com> wrote:
>> Do you have a sense of how this might compare with range merge join?
>
>
> If you have GiST indexes over
ode
>
> You also can find some experimental evaluation here:
> http://www.adass2016.inaf.it/images/presentations/10_Korotkov.pdf
Do you have a sense of how this might compare with range merge join?
Regards,
Jeff Davis
On Tue, Apr 11, 2017 at 12:17 AM, Jeff Davis <pg...@j-davis.com> wrote:
> Version 2 attached. Fixed a few issues, expanded tests, added docs.
It looks like the CF app only listed my perf test script. Re-attaching
rangejoin-v2.patch so that it appears in the CF app. Identical to
other ran
if
the input relations are subqueries.
Regards,
Jeff Davis
On Thu, Apr 6, 2017 at 1:43 AM, Jeff Davis <pg...@j-davis.com> wrote:
>
> Example:
>
>
> Find different people using the same website at the same time:
>
> create table session(sessionid
stigation might be useful here). Also, it doesn't provide any
alternative to the nestloop-with-inner-index we already offer at the
leaf level today.
Regards,
Jeff Davis
diff --git a/src/backend/commands/explain.c b/src/backend/commands/explain.c
index a18ab43..1110c1e 100644
*** a/src/backend
the assembly we wanted
rather than how much it actually improved performance. Can someone
please point me to the numbers? Do they refute the conclusions in the
paper, or are we concerned about a wider range of processors?
Regards,
Jeff Davis
On Sun, Jan 22, 2017 at 10:32 PM, Jeff Davis <pg...@j-davis.com> wrote:
> On Sat, Jan 21, 2017 at 4:25 AM, Andrew Borodin <boro...@octonica.com> wrote:
> One idea I had that might be simpler is to use a two-stage page
> delete. The first stage would remove the link fro
e all references to the page and recycle it in a new place in the
tree.
Regards,
Jeff Davis
r). I don't see a problem in your patch, but again, we are
breaking an assumption that future developers might make.
Your patch solves a real problem (a 90-second stall is clearly not
good) and I don't want to dismiss that. But I'd like to consider some
alternatives that may not have these downsides.
Reg
vacuumed. I think this would give good concurrency even for K=2.
I just had this idea now, so I didn't think it through very well.
What do you think?
Regards,
Jeff Davis
ee, or leftmost of the subtree that
you are removing pages from?
* In order to keep out concurrent reads, you need to lock/unlock the
left page while holding exclusive lock on the page being deleted, but
I didn't see how that happens exactly in the code. Where does that
happen?
Regards,
Jeff Davis
would if we allowed them the option.
Perhaps I didn't understand your point?
Regards,
Jeff Davis
n CREATE FUNCTION.
I think the execution is pretty good, except that (a) we need to keep
the state in fn_extra rather than the winstate; and (b) we should get
rid of the bitmaps and just do a naive scan unless we really think
non-constant offsets will be important. We can always optimize more
la
e a quick look.
Regards,
Jeff Davis
*** a/src/backend/utils/error/elog.c
--- b/src/backend/utils/error/elog.c
***
*** 143,152 static int errordata_stack_depth = -1; /* index of topmost active frame */
static int recursion_depth = 0; /* to detect actual recu
On Mon, 2015-09-07 at 17:47 -0300, Alvaro Herrera wrote:
> Jeff Davis wrote:
> > On Sun, 2015-03-22 at 19:47 +0100, Andres Freund wrote:
> > > On 2015-03-22 00:47:12 +0100, Tomas Vondra wrote:
> > > > from time to time I need to correlate PostgreSQL logs to other lo
) and
once for the regular log ('m', 'n', or 't'). If the regular log uses
'm', that would be some wasted cycles formatting it the same way twice.
Is it worth a little extra ugliness to cache both the timeval and the
formatted string?
Regards,
Jeff Davis
[1] As of minutes ago, after I missed
g is free, the cost seems very low, and at least
three people have expressed interest in this patch.
What tips the balance is that we expose the unix epoch in the pgbench
logs, as Tomas points out.
Regards,
Jeff Davis
On Fri, 2015-07-17 at 15:52 +1200, David Rowley wrote:
Should we mark the patch as returned with feedback in the commitfest
app then?
I believe the memory accounting patch has been rejected. Instead, the
work will be done in the HashAgg patch.
Thank you for the review!
Regards,
Jeff
palloc'd chunk. But since we can't
spill those aggregates to disk *anyway*, that doesn't really matter.
So would it be acceptable to just ignore the memory consumed by
internal, or come up with some heuristic?
Regards,
Jeff Davis
patch, with a
heuristic for internal types.
Regards,
Jeff Davis
noisy differences in CPU
timings can appear for almost any tweak to the code, and aren't
necessarily cause for major concern.
Regards,
Jeff Davis
[1] pgbench -i -s 300, then do the following 3 times each for master,
v11, and v12, and take the median of logged traces:
start server; set
a nice solution. It would require more space in MemoryContextData,
but that might not be a problem.
Regards,
Jeff Davis
(if it does, let me know).
Regards,
Jeff Davis
to me. For instance, reading from the set of queues
in a round-robin fashion is part of the Funnel algorithm, and doesn't
seem suitable for a generic tuple communication mechanism (that would
never allow order-sensitive reading, for example).
Regards,
Jeff Davis
for the file, so something seems off. Either we should
explicitly make it the supporting routines for the Funnel operator, or
we should try to generalize it a little.
I still have quite a bit to look at, but this is a start.
Regards,
Jeff Davis
,tqueue.c, etc and all other generic
(non-nodes specific) code.
Did you consider passing tuples through the tqueue by reference rather
than copying? The page should be pinned by the worker process, but
perhaps that's a bad assumption to make?
Regards,
Jeff Davis
machine.
Regards,
Jeff Davis
also (I hope) be convenient for Simon and David Rowley, who
have been hacking on aggregates in general.
Anyone see a reason I shouldn't give this a try?
Regards,
Jeff Davis
,
Jeff Davis
*** a/src/backend/utils/mmgr/aset.c
--- b/src/backend/utils/mmgr/aset.c
***
*** 500,505 AllocSetContextCreate(MemoryContext parent,
--- 500,508
errdetail("Failed while creating memory context \"%s\".",
name)));
}
+
+ ((MemoryContext) set
as a major unsolved
issue?
If I recall, you were concerned about things like array_agg, where an
individual state could get larger than work_mem. That's a valid concern,
but it's not the problem I was trying to solve.
Regards,
Jeff Davis
[1]
http://www.postgresql.org/message-id
if it's going to
fail.
Regards,
Jeff Davis
literals and unknown type literals
seems a promising concept to aid in understanding the difference in the
face of not being able (or wanting) to actually change the behavior.
Not sure I understand that proposal, can you elaborate?
Regards,
Jeff Davis
Moving thread to -hackers.
On Wed, Apr 8, 2015 at 11:18 PM, Jeff Davis pg...@j-davis.com wrote:
That example was just for illustration. My other example didn't require
creating a table at all:
SELECT a=b FROM (SELECT ''::text, ' ') x(a,b);
it's fine with me if we want that to fail, but I
need to be done for every
input tuple of HashAgg.
Thoughts?
Regards,
Jeff Davis
for the
next CF.
Regards,
Jeff Davis
the cleanup at least.
Regards,
Jeff Davis
*** a/src/backend/utils/mmgr/aset.c
--- b/src/backend/utils/mmgr/aset.c
***
*** 438,451 AllocSetContextCreate(MemoryContext parent,
Size initBlockSize,
Size maxBlockSize)
{
! AllocSet context;
/* Do the type
On Sun, 2015-02-22 at 00:07 -0500, Tom Lane wrote:
If you want to have just *one* variable but change its name and type,
I'd be ok with that.
Thank you for taking a quick look. Committed as a simple rename from
context to set.
Regards,
Jeff Davis
to the COUNT()? Also, how could that allocate huge amounts of memory and
get killed by OOM, which happens easily with this query?
Oops, I misread that as COUNT(*). Count(x) will force array_agg() to
be executed.
Regards,
Jeff Davis
,
Jeff Davis
array_agg_test.sql
Description: application/sql
On Sat, 2015-02-07 at 16:08 -0800, Jeff Davis wrote:
I believe Inclusion Constraints will be important for postgres.
I forgot to credit Darren Duncan with the name of this feature:
http://www.postgresql.org/message-id/4f8bb9b0.5090...@darrenduncan.net
Regards,
Jeff Davis
that deadlocks are not too common
- add tests
Any takers?
Regards,
Jeff Davis
*** a/doc/src/sgml/ref/create_table.sgml
--- b/doc/src/sgml/ref/create_table.sgml
***
*** 63,69 CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } | UNLOGGED ] TABLE [ IF NOT EXI
PRIMARY
break the old API, and I'm not suggesting that we do. I was
hoping to find some alternative.
Regards,
Jeff Davis
On Tue, Jan 20, 2015 at 6:44 AM, Tom Lane t...@sss.pgh.pa.us wrote:
Jeff Davis pg...@j-davis.com writes:
Tom (tgl),
Is my reasoning above acceptable?
Uh, sorry, I've not been paying any attention to this thread for awhile.
What's the remaining questions at issue?
This patch is trying
be worse than the
disease though.
Regards,
Jeff Davis
On Sun, Dec 28, 2014 at 11:53 PM, Jeff Davis pg...@j-davis.com wrote:
On Tue, 2014-04-01 at 13:08 -0400, Tom Lane wrote:
I think a patch that stood a chance of getting committed would need to
detect whether the aggregate was being called in simple or grouped
contexts, and apply different
, you could just compress the whole WAL stream.
Was this point addressed? How much benefit is there to compressing the
data before it goes into the WAL stream versus after?
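A back-of-envelope illustration of that question (Python's zlib standing in for any compressor; the sample records are made up): cross-record redundancy is only exploitable when compression happens after the records have entered the stream.

```python
import zlib

# Hypothetical record payloads; highly repetitive, like real WAL traffic.
records = [b"INSERT INTO t VALUES (%d)" % i for i in range(100)]

# Compress each record independently, before it enters the stream...
per_record = sum(len(zlib.compress(r)) for r in records)
# ...versus compressing the concatenated stream afterwards.
whole_stream = len(zlib.compress(b"".join(records)))

print(per_record > whole_stream)  # True: shared context pays off
```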
Regards,
Jeff Davis
On Tue, 2014-12-23 at 01:16 -0800, Jeff Davis wrote:
New patch attached (rebased, as well).
I also see your other message about adding regression testing. I'm
hesitant to slow down the tests for everyone to run through this code
path though. Should I add regression tests, and then remove
On Thu, 2014-12-11 at 02:46 -0800, Jeff Davis wrote:
On Sun, 2014-08-10 at 14:26 -0700, Jeff Davis wrote:
This patch requires the Memory Accounting patch, or something similar
to track memory usage.
The attached patch enables hashagg to spill to disk, which means that
hashagg
On Sun, 2014-12-28 at 12:37 -0800, Jeff Davis wrote:
I feel like I made a mistake -- can someone please do a
sanity check on my numbers?
I forgot to randomize the inputs, which doesn't matter much for hashagg
but does matter for sort. New data script attached. The results are even
*better
it means a context in shared memory. I see
that the patch itself doesn't use that phrase, which is good, but can
we come up with some other phrase for talking about it?
Common memory context?
Regards,
Jeff Davis
.
Regards,
Jeff Davis
allocation from 64 to some lower
number. But if we're doubling each time, it won't take long to get
there; and because it's the simple context, we only need to do it once.
Regards,
Jeff Davis
though. Should I add regression tests, and then remove them later
after we're more comfortable that it works?
Regards,
Jeff Davis
*** a/doc/src/sgml/config.sgml
--- b/doc/src/sgml/config.sgml
***
*** 3045,3050 include_dir 'conf.d'
--- 3045,3065
/listitem
On Sun, 2014-08-10 at 14:26 -0700, Jeff Davis wrote:
This patch requires the Memory Accounting patch, or something similar
to track memory usage.
The attached patch enables hashagg to spill to disk, which means that
hashagg will contain itself to work_mem even if the planner makes a
bad
On Sun, 2014-11-30 at 17:49 -0800, Peter Geoghegan wrote:
On Mon, Nov 17, 2014 at 11:39 PM, Jeff Davis pg...@j-davis.com wrote:
I can also just move isReset there, and keep mem_allocated as a uint64.
That way, if I find later that I want to track the aggregated value for
the child contexts
it now. I disagree. You are assuming that
sharing exclusive heavyweight locks among a group will be a fundamental
part of everything postgres does with parallelism; but not every design
requires it.
Regards,
Jeff Davis
is waiting on A2. That's still a deadlock, right?
Regards,
Jeff Davis
.
That way, if I find later that I want to track the aggregated value for
the child contexts as well, I can split it into two uint32s. I'll hold
off on any such optimizations until I see some numbers from HashAgg
though.
Attached new version.
Regards,
Jeff Davis
*** a/src/backend/utils/mmgr
the same strategy here. When we see deadlocks becoming a
problem for any reasonable workload, we make a series of tweaks (perhaps
some to the lock manager itself) to reduce them.
Regards,
Jeff Davis
On Thu, Nov 13, 2014 at 11:26 AM, Robert Haas robertmh...@gmail.com wrote:
On Thu, Nov 13, 2014 at 3:38 AM, Jeff Davis pg...@j-davis.com wrote:
If two backends both have an exclusive lock on the relation for a join
operation, that implies that they need to do their own synchronization
between A1 and A2. There are some details to work out to
avoid false positives, of course.
Is that about right?
Regards,
Jeff Davis
some catalog knowledge?
Regards,
Jeff Davis
out, that could be a lot of work in the case of
array_agg with many groups.
Regards,
Jeff Davis
*** a/src/backend/utils/mmgr/aset.c
--- b/src/backend/utils/mmgr/aset.c
***
*** 438,451 AllocSetContextCreate(MemoryContext parent,
Size initBlockSize,
Size
more microoptimization and testing to do. I'll mark it
returned with feedback for now, though if I find the time I'll do more
testing to see if the performance concerns are fully addressed.
Regards,
Jeff Davis
this is not a showstopper for this
patch, but I think it's worth discussing.
Regards,
Jeff Davis
handling for locked tuples, or tuples
related to the current transaction, of course. But I still think it
would turn out simpler; perhaps by enough to save a few cycles.
Regards,
Jeff Davis
-warming the
cache (so establishing a lot of connections doesn't cause a lot of CLOG
access).
Regards,
Jeff Davis
to address some
performance issues, but there's a chance of wrapping those up quickly.
Regards,
Jeff Davis
out of the heap page and use it
without doing any additional I/O.
If the data is that static, then the visibility information would be
highly compressible, and surely in shared_buffers already.
(Yes, it would need to be pinned, which has a cost.)
Regards,
Jeff Davis
relying. I hope what I said is accurate.
Regards,
Jeff Davis
*** a/src/backend/utils/adt/regexp.c
--- b/src/backend/utils/adt/regexp.c
***
*** 688,698 similar_escape(PG_FUNCTION_ARGS)
elen = VARSIZE_ANY_EXHDR(esc_text);
if (elen == 0)
e = NULL; /* no escape
help here or do we need to wait
for performance numbers?
Regards,
Jeff Davis
On Tue, 2014-08-26 at 22:13 -0700, Jeff Davis wrote:
Attached a patch implementing the same idea though: only use the
multibyte path if *both* the escape char and the current character from
the pattern are multibyte.
Forgot to mention: with this patch, the test completes in about 720ms,
so
, we'd
always keep the skew group in memory (unless we're very unlucky, and the
hash table fills up before we even see the skew value).
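A toy model of that behavior (my sketch of the general spilling scheme, not the actual patch; the counting aggregate and names are made up): once the table is full, tuples for groups already present, including a frequent skew value seen early, still aggregate in place, and only unseen keys are deferred.

```python
def hashagg_with_spill(keys, max_groups):
    """Count per key; once the table holds max_groups entries,
    tuples for unseen keys are spilled for a later pass."""
    counts = {}
    spilled = []
    for key in keys:
        if key in counts:
            counts[key] += 1          # existing groups keep aggregating
        elif len(counts) < max_groups:
            counts[key] = 1           # room for a new group
        else:
            spilled.append(key)       # defer this tuple to a later pass
    return counts, spilled

# "a" is the skew value, seen early, so it never spills.
counts, spilled = hashagg_with_spill(
    ["a", "a", "b", "a", "c", "a", "d", "b"], max_groups=2)
print(counts, spilled)  # {'a': 4, 'b': 2} ['c', 'd']
```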
Regards,
Jeff Davis
,
Jeff Davis
*** a/src/backend/utils/mmgr/aset.c
--- b/src/backend/utils/mmgr/aset.c
***
*** 242,247 typedef struct AllocChunkData
--- 242,249
#define AllocChunkGetPointer(chk) \
((AllocPointer)(((char *)(chk)) + ALLOC_CHUNKHDRSZ))
+ static void update_allocation
) solves a real
problem is a big help.
Regards,
Jeff Davis
[1]
http://blogs.msdn.com/b/craigfr/archive/2008/01/18/partial-aggregation.aspx
it might be an important threshold.
Regards,
Jeff Davis
of tracking space for an entire hierarchy.
Also, as I pointed out in my reply to Robert, adding too many fields to
MemoryContextData may be the cause of the regression. Your idea requires
only one field, which doesn't show the same regression in my tests.
Regards,
Jeff Davis
, and then allowing
each work item to create its own set of additional partitions effectively
renders the HASH_DISK_MAX_PARTITIONS futile.
It's the number of active partitions that matter, because that's what
causes the random I/O.
Regards,
Jeff Davis
to that partition. I don't see that there's any special case
here.
HashJoin only deals with tuples. With HashAgg, you have to deal with a
mix of tuples and partially-computed aggregate state values. Not
impossible, but it is a little more awkward than HashJoin.
Regards,
Jeff Davis
to test your approach because
we'd actually need a way to write out the partially-computed state, and
the algorithm itself seems a little more complex. So I'm not really sure
how to proceed.
Regards,
Jeff Davis