On 05/11/2017 10:52 PM, Andres Freund wrote:
> On 2017-05-11 22:48:26 +0300, Konstantin Knizhnik wrote:
>> On 05/11/2017 09:31 PM, Tom Lane wrote:
>>> Bruce Momjian <br...@momjian.us> writes:
>>>> Good point.  I think we need to do some measurements to see if the
>>>> parser-only stage is actually significant.  I have a hunch that
>>>> commercial databases have much heavier parsers than we do.
>>> FWIW, gram.y does show up as significant in many of the profiles I take.
>>> I speculate that this is not so much that it eats many CPU cycles, as that
>>> the constant tables are so large as to incur lots of cache misses.  scan.l
>>> is not quite as big a deal for some reason, even though it's also large.
>>>
>>>                         regards, tom lane
>> Yes, my results show that pg_parse_query does not add much overhead:
>> 206k TPS for my first variant with string literal substitution and the
>> modified query text used as the hash key, vs.
>> 181k TPS for the version that patches the raw parse tree constructed by
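For illustration, the "string literal substitution" variant described above could be sketched roughly as follows. This is a hypothetical Python sketch of the idea, not the actual patch: the `normalize` function and the regex are my own assumptions, and a real implementation would use the lexer (scan.l) rather than regular expressions to find literals reliably.

```python
# Hypothetical sketch of the "string literal substitution" idea:
# replace constants in the query text with placeholders so that
# queries differing only in literal values map to the same cache key.
import re

# Naive patterns for string and numeric literals; a real implementation
# would use the scanner to avoid false matches inside identifiers.
LITERAL = re.compile(r"'(?:[^']|'')*'|\b\d+(?:\.\d+)?\b")

def normalize(query: str) -> str:
    """Return the query text with literals replaced by a placeholder."""
    return LITERAL.sub("?", query)

key1 = normalize("SELECT * FROM t WHERE id = 42 AND name = 'bob'")
key2 = normalize("SELECT * FROM t WHERE id = 7 AND name = 'alice'")
assert key1 == key2  # both queries hash to the same cache entry
```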
> Those numbers and your statement seem to contradict each other?

Oops, my parse error :( I misread Tom's statement.
Actually, I was also afraid that the cost of parsing would be significant, which
is why my first attempt avoided parsing altogether.
But I then found that most of the time is spent in analyze and planning (see the
attached profile):

pg_parse_query: 4.23%
pg_analyze_and_rewrite: 8.45%
pg_plan_queries: 15.49%
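As a back-of-envelope reading of those percentages (my arithmetic, not from the thread): roughly 28% of profiled time is spent before execution, so by Amdahl's law a cache that eliminates parse, analyze, and plan entirely could speed up this workload by at most about 1.4x.

```python
# Back-of-envelope estimate from the profile percentages above:
# the fraction of time a plan cache could eliminate, and the
# resulting upper bound on speedup (Amdahl's law).
parse, analyze, plan = 0.0423, 0.0845, 0.1549

cacheable = parse + analyze + plan       # fraction avoidable, ~0.28
max_speedup = 1 / (1 - cacheable)        # upper bound, ~1.39x

print(f"cacheable: {cacheable:.1%}, max speedup: {max_speedup:.2f}x")
```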

> - Andres

Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)