There is a hook, post_parse_analyze_hook, but I think it comes too
late, as it runs after the analyze step, which is when Postgres looks
up the schema information for every relation mentioned in the query.
What you would need is a post_parse_hook that would work on the raw
parse tree before the
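The ordering problem can be sketched abstractly. Below is a toy Python model (the stage names are illustrative only, not Postgres internals or C code) showing that a hook firing after analyze only ever sees the query once the catalog lookups have already happened, while the proposed earlier hook would run before them:

```python
def run_pipeline(query, post_parse_hook=None, post_analyze_hook=None):
    """Toy model of the Postgres front end: raw parse -> (proposed
    post_parse_hook) -> analyze with catalog lookups -> (existing
    post_parse_analyze_hook).  Returns the result and a trace of the
    stages that ran, in order."""
    trace = ["raw_parse"]
    raw_tree = ("raw", query)
    if post_parse_hook:
        raw_tree = post_parse_hook(raw_tree)    # rewrite before any catalog access
        trace.append("post_parse_hook")
    trace.append("catalog_lookup")              # analyze resolves every relation here
    analyzed = ("analyzed", raw_tree)
    if post_analyze_hook:
        analyzed = post_analyze_hook(analyzed)  # too late to avoid the lookups
        trace.append("post_parse_analyze_hook")
    return analyzed, trace
```

The trace makes the complaint concrete: any rewrite done in post_parse_analyze_hook happens strictly after catalog_lookup.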
On 2014.02.15 at 0:46, Greg Stark st...@mit.edu wrote:
On Fri, Feb 14, 2014 at 9:16 PM, David Beck db...@starschema.net wrote:
Another point I liked in MySQL is the possibility to write info schema
plugins:
On Sat, Feb 15, 2014 at 2:06 PM, David Beck db...@starschema.net wrote:
- when the query arrives a smart rewrite would know 1) what tables are local
2) what tables need new catalog entries 3) what can be joined on the other
side
- the rewriter would potentially add SQL statements in the
Thanks for the reply. There are two points on which I think I've been misunderstood:
1. The point is to do the rewrite without, and before, any catalog access.
2. I do want to push the join to the source and, equally important, to push
the where conditions there as well.
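The push-down being asked for can be modeled outside Postgres. Here is a minimal Python sketch (all table, column, and function names are hypothetical illustrations) of combining a join and its filter conditions into a single statement for the legacy side, so that only the already-joined, already-filtered rows are transferred:

```python
def build_remote_query(left, right, join_col, filters=()):
    """Build one SQL statement that pushes both the join and the
    WHERE conditions down to the remote/legacy source, instead of
    shipping both tables over and joining locally."""
    sql = (f"SELECT * FROM {left} JOIN {right} "
           f"ON {left}.{join_col} = {right}.{join_col}")
    if filters:
        sql += " WHERE " + " AND ".join(filters)
    return sql
```

For example, `build_remote_query("tableA", "tableB", "id", ["tableA.x > 10"])` yields one remote statement doing both the join and the filter.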
Best regards, David
On 2014.02.13 at 21:22
Let me rephrase this:
Let’s remove my motivations and use cases from this conversation…
Why is that a bad idea of rewriting the query before it reaches
transform/analyze (without ever accessing the catalog)?
If that flexibility is acceptable to you, where would be the best place to put
it
On Fri, Feb 14, 2014 at 2:28 PM, David Beck db...@starschema.net wrote:
Why is that a bad idea of rewriting the query before it reaches
transform/analyze (without ever accessing the catalog)?
If that flexibility is acceptable to you, where would be the best place to
put it in?
Well if
I think I’m gonna need to dig into the planner to fully understand your points.
Thank you for the insights. I was more focused on putting the knowledge of the
legacy system into an extension and my own codebase. Now I see that making
better use of the planner would help. Thank you.
What inspired me is the
On Fri, Feb 14, 2014 at 9:16 PM, David Beck db...@starschema.net wrote:
Another point I liked in MySQL is the possibility to write info schema
plugins:
http://dev.mysql.com/doc/refman/5.1/en/writing-information-schema-plugins.html
Like a virtual catalog. Is there anything similar in
Greg Stark st...@mit.edu writes:
On Fri, Feb 14, 2014 at 9:16 PM, David Beck db...@starschema.net wrote:
Another point I liked in MySQL is the possibility to write info schema
plugins:
http://dev.mysql.com/doc/refman/5.1/en/writing-information-schema-plugins.html
Like a virtual catalog. Is
On Fri, Feb 14, 2014 at 9:16 PM, David Beck db...@starschema.net wrote:
What inspired me is the scriptable query rewrite in
http://dev.mysql.com/downloads/mysql-proxy/
The hook I proposed would be a lot nicer in Postgres because the raw parsing
is already done at this point while in
Hello Hackers,
I work on a foreign data wrapper for a legacy system. I generally find the hook
system a very useful and flexible way to extend Postgres.
The post_parse_analyze_hook almost fits what I need, but I have a few use cases
where I would need to tap right into the parsed queries but
See the discussion of Custom-Scan API.
https://commitfest.postgresql.org/action/patch_view?id=1282
I believe my third patch is what you really want to do...
This rewritten query would be handled by the FDW table that I previously
added to the catalog.
The reason I want this new hook is that
Thanks for the link.
I want flexibility. Here is a situation: my hook knows the size of tableA and
tableB on the legacy side. It should be able to decide whether to offload the
join/filter onto the legacy side or not. At the same time it can start
transferring the data to real Postgres tables
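A decision like that can be sketched as a simple size heuristic; the Python below is only an illustration (the threshold and row counts are made up, and nothing here is part of the FDW API):

```python
def should_offload(rows_a, rows_b, transfer_limit=1_000_000):
    """Offload the join/filter to the legacy side when either table is
    too big to transfer wholesale; otherwise pull both tables into
    real local Postgres tables and do the join there."""
    return rows_a > transfer_limit or rows_b > transfer_limit
```

In a real extension this decision would be driven by whatever statistics the legacy system exposes, and the threshold would account for transfer bandwidth, not just row counts.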
David Beck db...@starschema.net writes:
I have table-like data structures in the source system for the FDW I work on.
These tables are sometimes too big, and the source system can filter and join
them only with limitations, so it is not optimal to transfer the data to
Postgres.
At the