To provide more context: in cstore_fdw, creating the storage is easy; we
only need to hook into CREATE FOREIGN TABLE using event triggers. Removing
the storage is not as easy. For DROP FOREIGN TABLE we can use event
triggers, but when we do DROP EXTENSION, the event triggers don't get fired.
On Wed, Sep 13, 2017 at 12:12 AM, Michael Paquier wrote:
>
> Foreign tables do not have physical storage assigned to by default. At
> least heap_create() tells so, create_storage being set to false for a
> foreign table. So there is nothing to clean up normally. Or is
Motivation for this patch is that some FDWs (notably cstore_fdw) try to
utilize PostgreSQL's internal storage. PostgreSQL assigns relfilenodes to
foreign tables, but doesn't clean up storage for foreign tables when
dropping them. Therefore, in cstore_fdw we have to do some tricks to
handle
Hello Hackers,
The attached patch moves declarations of
ExplainOpenGroup()/ExplainCloseGroup() from explain.c to explain.h.
This can be useful for extensions that need explain groups in their
custom-scan explain output.
For example, Citus uses groups in its custom explain outputs [1]. But it
Title should have been "Make ExplainOpenGroup()/ExplainCloseGroup()
public.".
Sorry for the misspelling.
Hello,
The attached patch moves declarations of
ExplainBeginGroup()/ExplainEndGroup() from explain.c to explain.h.
This can be useful for extensions that need explain groups in their
custom-scan explain output.
For example, Citus uses groups in its custom explain outputs [1]. But it
achieves it
Hello,
The attached patch improves the performance of array_length() by detoasting
only the overhead part of the datum.
Here is a test case:
postgres=# create table array_length_test as select array_agg(a) a from
generate_series(1, 1) a, generate_series(1, 1) b group by b;
Without the
Thanks for looking into this.
With that in mind, I was surprised that your test case showed any
improvement at all --- it looks like the arrays aren't getting compressed
for some reason.
You are right, it seems that they were not getting compressed, probably
because the arrays were seq
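The idea in the first message above, computing array_length() by detoasting only the header rather than the whole datum, can be sketched generically in Python. This is a toy format with invented names, not PostgreSQL's actual varlena/TOAST layout:

```python
import struct

# Toy "varlena"-like encoding: a 4-byte little-endian element count,
# followed by the payload. (Invented layout for illustration only;
# PostgreSQL's real on-disk format is different.)
def encode_array(values):
    payload = b"".join(struct.pack("<d", v) for v in values)
    return struct.pack("<I", len(values)) + payload

def array_length(datum):
    # Read only the 4-byte header instead of "detoasting" the whole
    # datum -- the essence of the optimization discussed above.
    (n,) = struct.unpack("<I", datum[:4])
    return n
```

The point is that the answer lives entirely in the fixed-size header, so the (potentially large, compressed) payload never needs to be fetched.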
Hello Dilip,
Query: select count(*) from t1,t2 where t1.b < t2.b and t1.b < 12000;
Test Result:
Nest Loop Join with Index Scan : 1653.506 ms
Sort Merge Join for (seq scan) : 610.257 ms
This looks like a great improvement. Repeating Nicolas's
On second thought I noticed that it makes CREATE FOREIGN TABLE wrongly
include an OID column in newly created foreign tables when the
default_with_oids parameter is set to on. Please find attached a patch.
The fix makes sense to me, since in ALTER TABLE SET WITH OIDS we check that
the
Hi Stefan,
On Tue, Apr 8, 2014 at 9:28 AM, Stefan Keller sfkel...@gmail.com wrote:
Hi Hadi
Do you think that cstore_fdw is also well suited for storing and
retrieving linked data (RDF)?
I am not very familiar with RDF. Note that cstore_fdw doesn't change the
query language of
Dear Hackers,
We at Citus Data have been developing a columnar store extension for
PostgreSQL. Today we are excited to open source it under the Apache v2.0
license.
This columnar store extension uses the Optimized Row Columnar (ORC) format
for its data layout, which improves upon the RCFile
Hello,
The comments in pg_lzcompress.c say that:
* If VARSIZE(x) == rawsize + sizeof(PGLZ_Header), then the data
* is stored uncompressed as plain bytes. Thus, the decompressor
* simply copies rawsize bytes from the location after the
* header to the destination.
But pg_lzdecompress doesn't
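The convention quoted from pg_lzcompress.c above can be illustrated with a toy round-trip in Python. This is a sketch with invented names and an assumed 8-byte header; the real PGLZ_Header layout and the compressed path are not reproduced here:

```python
HEADER_SIZE = 8  # stands in for sizeof(PGLZ_Header); an assumption

def toy_compress(raw: bytes) -> bytes:
    """Store the data uncompressed, mimicking the fallback described
    in the pg_lzcompress.c comment: header records the raw size."""
    header = len(raw).to_bytes(HEADER_SIZE, "little")
    # stored size == rawsize + HEADER_SIZE signals "plain bytes"
    return header + raw

def toy_decompress(stored: bytes) -> bytes:
    rawsize = int.from_bytes(stored[:HEADER_SIZE], "little")
    if len(stored) == rawsize + HEADER_SIZE:
        # The convention from the comment: data after the header is
        # plain, so just copy rawsize bytes from after the header.
        return stored[HEADER_SIZE:]
    raise NotImplementedError("compressed case not sketched here")
```

The decompressor's only job in this branch is the size check followed by a memcpy-style copy, which is what the quoted comment promises.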
Hello,
There is a callback function in FDWs which should also set estimates for
startup and total costs for each path. Assume an FDW adds only one path
(e.g. in file_fdw). I am trying to understand what can go wrong if we do a
bad job of estimating these costs.
Since we have only one scan path
Hello,
int, float, double: 26829 ms (26675 ms) -- 0.5% slower .. statistical error .. cleaner code
numeric sum: 6490 ms (7224 ms) -- 10% faster
numeric avg: 6487 ms (12023 ms) -- 46% faster
I also got very similar results.
On the other hand, initially I was receiving SIGSEGVs whenever I
wanted
Hi Tom,
Tom Lane t...@sss.pgh.pa.us wrote:
After thinking about that for awhile: if we pursue this type of
optimization, what would probably be appropriate is to add an aggregate
property (stored in pg_aggregate) that allows direct specification of
the size that the planner should assume for
now, because there are
more and more warehouses where CPU is the bottleneck.
Regards
Pavel
2013/3/18 Hadi Moshayedi h...@moshayedi.net:
Hi Pavel,
Thanks a lot for your feedback.
I'll work more on this patch this week, and will send a more complete
patch
later this week.
I'll also
writes:
Hadi Moshayedi h...@moshayedi.net wrote:
I also noticed that this patch makes matview test fail. It seems
that it just changes the ordering of rows for queries like
SELECT * FROM tv;. Does this seem like a bug in my patch, or
should we add ORDER BY clauses to this test to make
...@gmail.com wrote:
2013/3/16 Hadi Moshayedi h...@moshayedi.net:
Revisiting:
http://www.postgresql.org/message-id/45661be7.4050...@paradise.net.nz
I think the reasons why the numeric average was slow were:
(1) Using Numeric for count, which is slower than int8 to increment,
(2
Revisiting:
http://www.postgresql.org/message-id/45661be7.4050...@paradise.net.nz
I think the reasons why the numeric average was slow were:
(1) Using Numeric for count, which is slower than int8 to increment,
(2) Constructing/deconstructing arrays at each transition step.
This is also
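The two costs named above, keeping the count as a Numeric and packing/unpacking an array on every transition, can be illustrated with a toy transition state in Python. Names and structure are invented for illustration; the real code is C in numeric.c:

```python
class AvgState:
    """Toy aggregate transition state: keep the count as a machine
    integer (the analogue of int8, cheap to increment) and mutate the
    state in place instead of rebuilding an array each transition."""
    __slots__ = ("count", "total")

    def __init__(self):
        self.count = 0
        self.total = 0.0

    def transition(self, value):
        self.count += 1      # cheap int8-style increment, not Numeric
        self.total += value  # accumulate the running sum in place
        return self

    def final(self):
        return self.total / self.count if self.count else None

state = AvgState()
for v in (1.0, 2.0, 3.0):
    state.transition(v)
```

The naive scheme this replaces would wrap {count, sum} in a freshly built array on every row; mutating a dedicated state object avoids that per-row construct/deconstruct overhead.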