Re: benchmarking Flex practices

2020-01-13 Thread John Naylor
On Tue, Jan 14, 2020 at 4:12 AM Tom Lane  wrote:
>
> John Naylor  writes:
> > [ v11 patch ]
>
> I pushed this with some small cosmetic adjustments.

Thanks for your help hacking on the token filter.

-- 
John Naylor  https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: benchmarking Flex practices

2020-01-13 Thread Tom Lane
John Naylor  writes:
> [ v11 patch ]

I pushed this with some small cosmetic adjustments.

One non-cosmetic adjustment I experimented with was to change
str_udeescape() to overwrite the source string in-place, since
we know that's modifiable storage and de-escaping can't make
the string longer.  I reasoned that saving a palloc() might help
reduce the extra cost of UESCAPE processing.  It didn't seem to
move the needle much though, so I didn't commit it that way.
A positive reason to keep the API as it stands is that if we
do something about the idea of allowing Unicode strings in
non-UTF8 backend encodings, that'd likely break the assumption
about how the string can't get longer.
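
To make the "can't get longer" reasoning concrete: a backslash plus
four hex digits occupies five input bytes, while a BMP code point
needs at most three bytes of UTF-8.  Here's a toy, self-contained
de-escaper in that style (simplified escape syntax, invented names;
not the code in the patch):

#include <ctype.h>
#include <stdio.h>

static unsigned int
hexdigit(unsigned char c)
{
    if (c >= '0' && c <= '9')
        return c - '0';
    if (c >= 'a' && c <= 'f')
        return c - 'a' + 10;
    return c - 'A' + 10;        /* caller has checked isxdigit() */
}

/* Append cp (BMP only, for brevity) as UTF-8 at *out, advancing it. */
static void
emit_utf8(unsigned char **out, unsigned int cp)
{
    unsigned char *q = *out;

    if (cp < 0x80)
        *q++ = (unsigned char) cp;
    else if (cp < 0x800)
    {
        *q++ = 0xC0 | (cp >> 6);
        *q++ = 0x80 | (cp & 0x3F);
    }
    else
    {
        *q++ = 0xE0 | (cp >> 12);
        *q++ = 0x80 | ((cp >> 6) & 0x3F);
        *q++ = 0x80 | (cp & 0x3F);
    }
    *out = q;
}

/* De-escape \XXXX sequences in place; surrogate pairs and error
 * recovery omitted.  Output is never longer than input, so "out"
 * cannot overtake "in". */
static char *
udeescape_inplace(char *str)
{
    unsigned char *in = (unsigned char *) str;
    unsigned char *out = (unsigned char *) str;

    while (*in)
    {
        if (in[0] == '\\' &&
            isxdigit(in[1]) && isxdigit(in[2]) &&
            isxdigit(in[3]) && isxdigit(in[4]))
        {
            unsigned int cp = (hexdigit(in[1]) << 12) |
                (hexdigit(in[2]) << 8) |
                (hexdigit(in[3]) << 4) |
                hexdigit(in[4]);

            in += 5;                /* consumed five bytes... */
            emit_utf8(&out, cp);    /* ...emitted at most three */
        }
        else
            *out++ = *in++;
    }
    *out = '\0';
    return str;
}

int
main(void)
{
    char buf[] = "d\\0061t\\0061";

    puts(udeescape_inplace(buf));   /* prints "data" */
    return 0;
}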

I'm about to go off and look at the non-UTF8 idea, btw.

regards, tom lane




Re: benchmarking Flex practices

2020-01-13 Thread John Naylor
On Mon, Jan 13, 2020 at 7:57 AM Tom Lane  wrote:
>
> Hmm ... after a bit of research I agree that these functions are not
> a portability hazard.  They are present at least as far back as flex
> 2.5.33 which is as old as we've got in the buildfarm.
>
> However, I'm less excited about them from a performance standpoint.
> The BEGIN() macro expands to (ordinarily)
>
> yyg->yy_start = integer-constant
>
> which is surely pretty cheap.  However, yy_push_state is substantially
> more expensive than that, not least because the first invocation in
> a parse cycle will involve a malloc() or palloc().  Likewise yy_pop_state
> is multiple times more expensive than plain BEGIN().
>
> Now, I agree that this is negligible for ECPG's usage, so if
> pushing/popping state is helpful there, let's go for it.  But I am
> not convinced it's negligible for the backend, and I also don't
> see that we actually need to track any nested scanner states there.
> So I'd rather stick to using BEGIN in the backend.  Not sure about
> psql.

Okay, removed in v11. The advantage of stack functions in ECPG was to
avoid having the two variables state_before_str_start and
state_before_str_stop. But if we don't use stack functions in the
backend, then consistency wins in my mind. Plus, it was easier for me
to revert the stack functions for all 3 scanners.

> BTW, while looking through the latest patch it struck me that
> "UCONST" is an underspecified and potentially confusing name.
> It doesn't indicate what kind of constant we're talking about,
> for instance a C programmer could be forgiven for thinking
> it means something like "123U".  What do you think of "USCONST",
> following UIDENT's lead of prefixing U onto whatever the
> underlying token type is?

Makes perfect sense. Grepping through the source tree, indeed it seems
the replication command scanner is using UCONST for digits.

Some other cosmetic adjustments in ECPG parser.c:
-Previously I had a WIP comment about the 2 functions that are copies
from elsewhere. In v11 I just noted that they are copied.
-I thought it'd be nicer if ECPG spelled UESCAPE in caps when
reconstructing the string.
-Corrected copy-paste-o in comment

Also:
-reverted some spurious whitespace changes
-revised scan.l comment about the performance benefits of no backtracking
-split the ECPG C-comment scanning cleanup into a separate patch, as I
did for v6. I include it here since it's related (merging scanner
states), but not relevant to making the core scanner smaller.
-wrote draft commit messages

--
John Naylor  https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


v11-0001-Reduce-size-of-backend-scanner-transition-array.patch
Description: Binary data


v11-0002-Merge-ECPG-scanner-states-regarding-C-comments.patch
Description: Binary data


Re: benchmarking Flex practices

2020-01-12 Thread Tom Lane
John Naylor  writes:
>> I no longer use state variables to track scanner state, and in fact I
>> removed the existing "state_before" variable in ECPG. Instead, I used
>> the Flex builtins yy_push_state(), yy_pop_state(), and yy_top_state().
>> These have been a feature for a long time, it seems, so I think we're
>> okay as far as portability. I think it's cleaner this way, and
>> possibly faster.

Hmm ... after a bit of research I agree that these functions are not
a portability hazard.  They are present at least as far back as flex
2.5.33 which is as old as we've got in the buildfarm.

However, I'm less excited about them from a performance standpoint.
The BEGIN() macro expands to (ordinarily)

yyg->yy_start = integer-constant

which is surely pretty cheap.  However, yy_push_state is substantially
more expensive than that, not least because the first invocation in
a parse cycle will involve a malloc() or palloc().  Likewise yy_pop_state
is multiple times more expensive than plain BEGIN().
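
For reference, this is approximately what the flex skeleton generates
for the two mechanisms (a condensed paraphrase of the reentrant
skeleton, not the verbatim source; real flex spells these slightly
differently and adds error checks):

#include <stdlib.h>

struct yyguts_t
{
    int     yy_start;           /* current start condition */
    int    *yy_start_stack;     /* lazily allocated */
    int     yy_start_stack_ptr;
    int     yy_start_stack_depth;
};

#define BEGIN(s)    (yyg->yy_start = 1 + 2 * (s))   /* a single store */
#define YY_START    ((yyg->yy_start - 1) / 2)

static void
yy_push_state(int new_state, struct yyguts_t *yyg)
{
    if (yyg->yy_start_stack_ptr >= yyg->yy_start_stack_depth)
    {
        /* the first push in a scan allocates; later growth reallocs */
        yyg->yy_start_stack_depth += 25;    /* YY_START_STACK_INCR */
        yyg->yy_start_stack = realloc(yyg->yy_start_stack,
                                      yyg->yy_start_stack_depth * sizeof(int));
    }
    yyg->yy_start_stack[yyg->yy_start_stack_ptr++] = YY_START;
    BEGIN(new_state);
}

static void
yy_pop_state(struct yyguts_t *yyg)
{
    BEGIN(yyg->yy_start_stack[--yyg->yy_start_stack_ptr]);
}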

Now, I agree that this is negligible for ECPG's usage, so if
pushing/popping state is helpful there, let's go for it.  But I am
not convinced it's negligible for the backend, and I also don't
see that we actually need to track any nested scanner states there.
So I'd rather stick to using BEGIN in the backend.  Not sure about
psql.

BTW, while looking through the latest patch it struck me that
"UCONST" is an underspecified and potentially confusing name.
It doesn't indicate what kind of constant we're talking about,
for instance a C programmer could be forgiven for thinking
it means something like "123U".  What do you think of "USCONST",
following UIDENT's lead of prefixing U onto whatever the
underlying token type is?

regards, tom lane




Re: benchmarking Flex practices

2020-01-02 Thread John Naylor
I wrote:

> I no longer use state variables to track scanner state, and in fact I
> removed the existing "state_before" variable in ECPG. Instead, I used
> the Flex builtins yy_push_state(), yy_pop_state(), and yy_top_state().
> These have been a feature for a long time, it seems, so I think we're
> okay as far as portability. I think it's cleaner this way, and
> possibly faster.

I thought I should get some actual numbers to test, and the results
are encouraging:

         master  v10
info     1.56s   1.51s
str      1.18s   1.14s
unicode  1.33s   1.34s
uescape  1.44s   1.58s


--
John Naylor  https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: benchmarking Flex practices

2019-12-03 Thread John Naylor
On Tue, Nov 26, 2019 at 10:32 PM Tom Lane  wrote:

> I haven't looked closely at what ecpg does with the processed
> identifiers.  If it just spits them out as-is, a possible solution
> is to not do anything about de-escaping, but pass the sequence
> U&"..." (plus UESCAPE ... if any), just like that, on to the grammar
> as the value of the IDENT token.

It does pass them along as-is, so I did it that way.

In the attached v10, I've synced both ECPG and psql.

> * I haven't convinced myself either way as to whether it'd be
> better to factor out the code duplicated between the UIDENT
> and UCONST cases in base_yylex.

I chose to factor it out, since we have 2 versions of parser.c, and
this way was much easier to work with.

Some notes:

I arranged for the ECPG grammar to only see SCONST and IDENT. With
UCONST and UIDENT out of the way, it was a small additional step to
put all string reconstruction into the lexer, which has the advantage
of allowing removal of the other special-case ECPG string tokens as
well. The fewer special cases involved in pasting the grammar
together, the better. In doing so, I've probably introduced memory
leaks, but I wanted to get your opinion on the overall approach before
investigating.

In ECPG's parser.c, I simply copied check_uescapechar() and
ecpg_isspace(), but we could find a common place if desired. During
development, I found that this file replicates the location-tracking
logic in the backend, but doesn't seem to make use of it. I also would
have had to replicate the backend's datatype for YYLTYPE. Fixing that
might be worthwhile some day, but to get this working, I just ripped
out the extra location tracking.

I no longer use state variables to track scanner state, and in fact I
removed the existing "state_before" variable in ECPG. Instead, I used
the Flex builtins yy_push_state(), yy_pop_state(), and yy_top_state().
These have been a feature for a long time, it seems, so I think we're
okay as far as portability. I think it's cleaner this way, and
possibly faster. I also used this to reunite the xcc and xcsql states.
This whole part could be split out into a separate refactoring patch
to be applied first, if desired.

-- 
John Naylor  https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


v10-handle-uescapes-in-parser.patch
Description: Binary data


Re: benchmarking Flex practices

2019-11-26 Thread Tom Lane
John Naylor  writes:
> It seems something is not quite right in v9 with the error position reporting:

>  SELECT U&'wrong: +0061' UESCAPE '+';
>  ERROR:  invalid Unicode escape character at or near "'+'"
>  LINE 1: SELECT U&'wrong: +0061' UESCAPE '+';
> -^
> +   ^

> The caret is not pointing to the third token, or the second for that
> matter.

Interesting.  For me it points at the third token with or without
your fix ... some flex version discrepancy maybe?  Anyway, I have
no objection to your fix; it's probably cleaner than what I had.

>> * I did not do more with ecpg than get it to compile, using the
>> same hacks as in your v7.  It still fails its regression tests,
>> but now the reason is that what we've done in parser/parser.c
>> needs to be transposed into the identical functionality in
>> ecpg/preproc/parser.c.  Or at least some kind of functionality
>> there.  A problem with this approach is that it presumes we can
>> reduce a UIDENT sequence to a plain IDENT, but to do so we need
>> assumptions about the target encoding, and I'm not sure that
>> ecpg should make any such assumptions.  Maybe ecpg should just
>> reject all cases that produce non-ASCII identifiers?  (Probably
>> it could be made to do something smarter with more work, but
>> it's not clear to me that it's worth the trouble.)

> Hmm, I thought we only allowed Unicode escapes in the first place if
> the server encoding was UTF-8. Or did you mean something else?

Well, yeah, but the problem here is that ecpg would have to assume
that the client encoding that its output program will be executed
with is UTF-8.  That seems pretty action-at-a-distance-y.

I haven't looked closely at what ecpg does with the processed
identifiers.  If it just spits them out as-is, a possible solution
is to not do anything about de-escaping, but pass the sequence
U&"..." (plus UESCAPE ... if any), just like that, on to the grammar
as the value of the IDENT token.

BTW, in the back of my mind here is Chapman's point that it'd be
a large step forward in usability if we allowed Unicode escapes
when the backend encoding is *not* UTF-8.  I think I see how to
get there once this patch is done, so I definitely would not like
to introduce some comparable restriction in ecpg.

regards, tom lane




Re: benchmarking Flex practices

2019-11-26 Thread John Naylor
On Tue, Nov 26, 2019 at 5:51 AM Tom Lane  wrote:
>
> [ My apologies for being so slow to get back to this ]

No worries -- it's a nice-to-have, not something our users are excited about.

> It struck me though that there's another solution we haven't discussed,
> and that's to make the token lookahead filter in parser.c do the work
> of converting UIDENT [UESCAPE SCONST] to IDENT, and similarly for the
> string case.

I recently tried again to get gram.y to handle it without precedence
hacks (or at least hacks with less mystery) and came to the conclusion
that maybe it just doesn't belong in the grammar after all. I hadn't
thought of any alternatives, so thanks for working on that!

It seems something is not quite right in v9 with the error position reporting:

 SELECT U&'wrong: +0061' UESCAPE '+';
 ERROR:  invalid Unicode escape character at or near "'+'"
 LINE 1: SELECT U&'wrong: +0061' UESCAPE '+';
-^
+   ^

The caret is not pointing to the third token, or the second for that
matter. What worked for me was un-truncating the current token before
calling yylex again. To see if I'm on the right track, I've included
this in the attached, which applies on top of your v9.

> Generally, I'm pretty happy with this approach: it touches gram.y
> hardly at all, and it removes just about all of the complexity from
> scan.l.  I'm happier about dropping the support code into parser.c
> than the other choices we've discussed.

Seems like the best of both worlds. If we ever wanted to ditch the
whole token filter and use Bison's %glr mode, we'd have extra work to
do, but there doesn't seem to be a rush to do so anyway.

> There's still undone work here, though:
>
> * I did not touch psql.  Probably your patch is fine for that.
>
> * I did not do more with ecpg than get it to compile, using the
> same hacks as in your v7.  It still fails its regression tests,
> but now the reason is that what we've done in parser/parser.c
> needs to be transposed into the identical functionality in
> ecpg/preproc/parser.c.  Or at least some kind of functionality
> there.  A problem with this approach is that it presumes we can
> reduce a UIDENT sequence to a plain IDENT, but to do so we need
> assumptions about the target encoding, and I'm not sure that
> ecpg should make any such assumptions.  Maybe ecpg should just
> reject all cases that produce non-ASCII identifiers?  (Probably
> it could be made to do something smarter with more work, but
> it's not clear to me that it's worth the trouble.)

Hmm, I thought we only allowed Unicode escapes in the first place if
the server encoding was UTF-8. Or did you mean something else?

> If this seems like a reasonable approach to you, please fill in
> the missing psql and ecpg bits.

Will do.

-- 
John Naylor  https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


v9-addendum-handle-uescapes-in-parser.patch
Description: Binary data


Re: benchmarking Flex practices

2019-11-25 Thread Tom Lane
[ My apologies for being so slow to get back to this ]

John Naylor  writes:
> Now that I think of it, the regression in v7 was largely due to the
> fact that the parser has to call the lexer 3 times per string in this
> case, and that's going to be slower no matter what we do.

Ah, of course.  I'm not too fussed about the performance of queries with
an explicit UESCAPE clause, as that seems like a very minority use-case.
What we do want to pay attention to is not regressing for plain
identifiers/strings, and to a lesser extent the U& cases without UESCAPE.

> Inlining hexval() and friends seems to have helped somewhat for
> unicode escapes, but I'd have to profile to improve that further.
> However, v8 has regressed from v7 enough with both simple strings and
> the information schema that it's a noticeable regression from HEAD.
> I'm guessing getting rid of the "Uescape" production is to blame, but
> I haven't tried reverting just that one piece. Since inlining the
> rules didn't seem to help with the precedence hacks, it seems like the
> separate production was a better way. Thoughts?

I have duplicated your performance tests here, and get more or less
the same results (see below).  I agree that the performance of the
v8 patch isn't really where we want to be --- and it also seems
rather invasive to gram.y, and hence error-prone.  (If we do it
like that, I bet my bottom dollar that somebody would soon commit
a patch that adds a production using IDENT not Ident, and it'd take
a long time to notice.)

It struck me though that there's another solution we haven't discussed,
and that's to make the token lookahead filter in parser.c do the work
of converting UIDENT [UESCAPE SCONST] to IDENT, and similarly for the
string case.  I pursued that to the extent of developing the attached
incomplete patch ("v9"), which looks reasonable from a performance
standpoint.  I get these results with tests using the drive_parser
function:

information_schema

HEAD  3447.674 ms, 3433.498 ms, 3422.407 ms
v6    3381.851 ms, 3442.478 ms, 3402.629 ms
v7    3525.865 ms, 3441.038 ms, 3473.488 ms
v8    3567.640 ms, 3488.417 ms, 3556.544 ms
v9    3456.360 ms, 3403.635 ms, 3418.787 ms

pgbench str

HEAD  4414.046 ms, 4376.222 ms, 4356.468 ms
v6    4304.582 ms, 4245.534 ms, 4263.562 ms
v7    4395.815 ms, 4398.381 ms, 4460.304 ms
v8    4475.706 ms, 4466.665 ms, 4471.048 ms
v9    4392.473 ms, 4316.549 ms, 4318.472 ms

pgbench unicode

HEAD  4959.000 ms, 4921.751 ms, 4945.069 ms
v6    4856.998 ms, 4802.996 ms, 4855.486 ms
v7    5057.199 ms, 4948.342 ms, 4956.614 ms
v8    5008.090 ms, 4963.641 ms, 4983.576 ms
v9    4809.227 ms, 4767.355 ms, 4741.641 ms

pgbench uesc

HEAD  5114.401 ms, 5235.764 ms, 5200.567 ms
v6    5030.156 ms, 5083.398 ms, 4986.974 ms
v7    5915.508 ms, 5953.135 ms, 5929.775 ms
v8    5678.810 ms, 5665.239 ms, 5645.696 ms
v9    5648.965 ms, 5601.592 ms, 5600.480 ms

(A note about what we're looking at: on my machine, after using cpupower
to lock down the CPU frequency, and taskset to bind everything to one
CPU socket, I can get numbers that are very repeatable, to 0.1% or so
... until I restart the postmaster, and then I get different but equally
repeatable numbers.  The difference can be several percent, which is a lot
of noise compared to what we're looking for.  I believe the explanation is
that kernel ASLR has loaded the backend executable at some different
addresses and so there are different cache-line-boundary effects.  While
I could lock that down too by disabling ASLR, the result would be to
overemphasize chance effects of a particular set of cache line boundaries.
So I prefer to run all the tests over again after restarting the
postmaster, a few times, and then look at the overall set of results to
see what things look like.  Each number quoted above is median-of-three
tests within a single postmaster run.)

Anyway, my conclusion is that the attached patch is at least as fast
as today's HEAD; it's not as fast as v6, but on the other hand it's
an even smaller postmaster executable, so there's something to be said
for that:

$ size postg*
   text    data     bss     dec     hex filename
7478138   57928  203360 7739426  761822 postgres.head
7271218   57928  203360 7532506  72efda postgres.v6
7275810   57928  203360 7537098  7301ca postgres.v7
7276978   57928  203360 7538266  73065a postgres.v8
7266274   57928  203360 7527562  72dc8a postgres.v9

I based this on your v7 not v8; not sure if there's anything you
want to salvage from v8.

Generally, I'm pretty happy with this approach: it touches gram.y
hardly at all, and it removes just about all of the complexity from
scan.l.  I'm happier about dropping the support code into parser.c
than the other choices we've discussed.
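
To illustrate the shape of that filter, here's a toy, self-contained
model (the token names aside, everything below is invented for
illustration; in the real patch the work happens in base_yylex()):

#include <stdio.h>

enum token
{
    T_EOF, IDENT, SCONST, UIDENT, USCONST, UESCAPE
};

/* canned "lexer" output for:  U&"foo" UESCAPE '!' */
static const enum token stream[] = {UIDENT, UESCAPE, SCONST, T_EOF};
static int  pos = 0;

static enum token
raw_lex(void)
{
    return stream[pos++];
}

/* one-token pushback, as in the existing lookahead filter */
static enum token held;
static int  have_held = 0;

static enum token
filtered_lex(void)
{
    enum token tok;

    if (have_held)
    {
        have_held = 0;
        return held;
    }
    tok = raw_lex();
    if (tok == UIDENT || tok == USCONST)
    {
        enum token next = raw_lex();

        if (next == UESCAPE)
        {
            /* the escape-char literal must follow */
            if (raw_lex() != SCONST)
                fprintf(stderr, "UESCAPE must be followed by a literal\n");
            /* ... de-escape the saved text using that escape char ... */
        }
        else
        {
            held = next;        /* not ours: push it back */
            have_held = 1;
            /* ... de-escape using the default escape char '\\' ... */
        }
        tok = (tok == UIDENT) ? IDENT : SCONST; /* grammar never sees U-tokens */
    }
    return tok;
}

int
main(void)
{
    enum token  t;

    while ((t = filtered_lex()) != T_EOF)
        printf("token %d\n", t);
    return 0;
}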

There's still undone work here, though:

* I did not touch psql.  Probably your patch is fine for that.

* I did not do more with ecpg than get it to compile, using the
same hacks as in your v7.  It still fails its regression tests,
but now the reason is that what we've done in parser/parser.c
needs to be transposed into the identical functionality in
ecpg/preproc/parser.c.  Or at least some kind of functionality
there.  A problem with this approach is that it presumes we can
reduce a UIDENT sequence to a plain IDENT, but to do so we need
assumptions about the target encoding, and I'm not sure that
ecpg should make any such assumptions.  Maybe ecpg should just
reject all cases that produce non-ASCII identifiers?  (Probably
it could be made to do something smarter with more work, but
it's not clear to me that it's worth the trouble.)

If this seems like a reasonable approach to you, please fill in
the missing psql and ecpg bits.

regards, tom lane

Re: benchmarking Flex practices

2019-09-25 Thread Tom Lane
Alvaro Herrera  writes:
> ... it seems this patch needs attention, but I'm not sure from whom.
> The tests don't pass whenever the server encoding is not UTF8, so I
> suppose we should either have an alternate expected output file to
> account for that, or the tests should be removed.  But anyway the code
> needs to be reviewed.

Yeah, I'm overdue to review it, but other things have taken precedence.

The unportable test is not a problem at this point, since the patch
isn't finished anyway.  I'm not sure yet whether it'd be worth
preserving that test case in the final version.

regards, tom lane




Re: benchmarking Flex practices

2019-09-25 Thread Alvaro Herrera
... it seems this patch needs attention, but I'm not sure from whom.
The tests don't pass whenever the server encoding is not UTF8, so I
suppose we should either have an alternate expected output file to
account for that, or the tests should be removed.  But anyway the code
needs to be reviewed.

-- 
Álvaro Herrera  https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: benchmarking Flex practices

2019-08-01 Thread Thomas Munro
On Thu, Aug 1, 2019 at 8:51 PM John Naylor  wrote:
>  select U&'\de04\d83d'; -- surrogates in wrong order
> -psql:test_unicode.sql:10: ERROR:  invalid Unicode surrogate pair at
> or near "U&'\de04\d83d'"
> +psql:test_unicode.sql:10: ERROR:  invalid Unicode surrogate pair
>  LINE 1: select U&'\de04\d83d';
> -   ^
> +  ^
>  select U&'\de04X'; -- orphan low surrogate
> -psql:test_unicode.sql:12: ERROR:  invalid Unicode surrogate pair at
> or near "U&'\de04X'"
> +psql:test_unicode.sql:12: ERROR:  invalid Unicode surrogate pair
>  LINE 1: select U&'\de04X';
> -   ^
> +  ^

While moving this to the September CF, I noticed this failure on Windows:

+ERROR: Unicode escape values cannot be used for code point values
above 007F when the server encoding is not UTF8
LINE 1: SELECT U&'\d83d\d83d';
^

https://ci.appveyor.com/project/postgresql-cfbot/postgresql/build/1.0.50382

-- 
Thomas Munro
https://enterprisedb.com




Re: benchmarking Flex practices

2019-08-01 Thread John Naylor
On Mon, Jul 29, 2019 at 10:40 PM Tom Lane  wrote:
>
> John Naylor  writes:
>
> > The lexer returns UCONST from xus and UIDENT from xui. The grammar has
> > rules that are effectively:
>
> > SCONST { do nothing}
> > | UCONST { esc char is backslash }
> > | UCONST UESCAPE SCONST { esc char is from $3 }
>
> > ...where UESCAPE is now an unreserved keyword. To prevent shift-reduce
> > conflicts, I added UIDENT to the %nonassoc precedence list to match
> > IDENT, and for UESCAPE I added a %left precedence declaration. Maybe
> > there's a more principled way. I also added an unsigned char type to
> > the %union, but it worked fine on my compiler without it.
>
> I think it might be better to drop the separate "Uescape" production and
> just inline that into the calling rules, exactly per your sketch above.
> You could avoid duplicating the escape-checking logic by moving that into
> the str_udeescape support function.  This would avoid the need for the
> "uchr" union variant, but more importantly it seems likely to be more
> future-proof: IME, any time you can avoid or postpone shift/reduce
> decisions, it's better to do so.
>
> I didn't try, but I think this might allow dropping the %left for
> UESCAPE.  That bothers me because I don't understand why it's
> needed or what precedence level it ought to have.

I tried this, and removing the %left still gives me a shift/reduce
conflict, so I put some effort in narrowing down what's happening. If
I remove the rules with UESCAPE individually, I find that precedence
is not needed for Sconst -- only for Ident. I tried reverting all the
rules to use the original "IDENT" token, then changed them one by one
to "Ident", and found 6 places where doing so caused a shift-reduce
conflict:

createdb_opt_name
xmltable_column_option_el
ColId
type_function_name
NonReservedWord
ColLabel

Due to the number of affected places, that didn't seem like a useful
avenue to pursue, so I tried the following:

-Making UESCAPE a reserved keyword or separate token type works, but
other keyword types don't work. Not acceptable, but maybe useful info.
-Giving UESCAPE an %nonassoc precedence above UIDENT works, even if
UIDENT is the lowest in the list. This seems the least intrusive, so I
went with that for v8. One possible downside is that UIDENT now no
longer has the same precedence as IDENT. Not sure if it matters, but
could we fix that contextually with "%prec IDENT"?

> > litbuf_udeescape() and check_uescapechar() were moved to gram.y. The
> > former had to be massaged to give error messages similar to HEAD. They're
> > not quite identical, but the position info is preserved. Some of the
> > functions I moved around don't seem to have any test coverage, so I
> > should eventually do some work in that regard.
>
> I don't terribly like the cross-calls you have between gram.y and scan.l
> in this formulation.  If we have to make these functions (hexval() etc)
> non-static anyway, maybe we should shove them all into scansup.c?

I ended up making them static inline in scansup.h since that seemed to
reduce the performance impact (results below). I cribbed some of the
surrogate pair queries from the jsonpath regression tests so we have
some coverage here. Diff'ing from HEAD to patch, the locations are
different for a couple cases (a side effect of the different error
handling style from scan.l). The patch seems to consistently point at
an escape sequence, so I think it's okay to use that. HEAD, on the
other hand, sometimes points at the start of the whole string:

 select U&'\de04\d83d'; -- surrogates in wrong order
-psql:test_unicode.sql:10: ERROR:  invalid Unicode surrogate pair at
or near "U&'\de04\d83d'"
+psql:test_unicode.sql:10: ERROR:  invalid Unicode surrogate pair
 LINE 1: select U&'\de04\d83d';
-   ^
+  ^
 select U&'\de04X'; -- orphan low surrogate
-psql:test_unicode.sql:12: ERROR:  invalid Unicode surrogate pair at
or near "U&'\de04X'"
+psql:test_unicode.sql:12: ERROR:  invalid Unicode surrogate pair
 LINE 1: select U&'\de04X';
-   ^
+  ^
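
For reference, the pairing rule these tests exercise is the standard
UTF-16 recombination: a high surrogate in [D800,DBFF] must precede a
low surrogate in [DC00,DFFF], which is why \de04\d83d above is
rejected.  As a short self-contained helper (not code from the patch):

#include <stdio.h>

static unsigned int
surrogate_pair_to_cp(unsigned int hi, unsigned int lo)
{
    /* hi in [0xD800,0xDBFF], lo in [0xDC00,0xDFFF] */
    return ((hi & 0x3FF) << 10) + (lo & 0x3FF) + 0x10000;
}

int
main(void)
{
    /* in the right order, \d83d\de04 denotes U+1F604 */
    printf("U+%X\n", surrogate_pair_to_cp(0xD83D, 0xDE04));
    return 0;
}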

> > -Performance is very close to v6 with the information_schema and
> > pgbench-like queries with standard strings, which is to say also very
> > close to HEAD. When the latter was changed to use Unicode escapes,
> > however, it was about 15% slower than HEAD. That's a big regression
> > and I haven't tried to pinpoint why.
>
> I don't quite follow what you changed to produce the slower test case?
> But that seems to be something we'd better run to ground before
> deciding whether to go this way.

So "pgbench str" below refers to driving the parser with this set of
queries repeated a couple hundred times in a string:

BEGIN;
UPDATE pgbench_accounts SET abalance = abalance + 'foobarbaz' WHERE
aid = 'foobarbaz';
SELECT abalance FROM pgbench_accounts WHERE aid = 'foobarbaz';
UPDATE pgbench_tellers SET tbalance = tbalance + 'foobarbaz' WHERE tid
= 'foobarbaz';
UPDATE pgbench_branches SET bbalance = bbalance + 'foobarbaz' WHERE
bid = 'foobarbaz';

Re: benchmarking Flex practices

2019-07-29 Thread Tom Lane
John Naylor  writes:
> On Sun, Jul 21, 2019 at 3:14 AM Tom Lane  wrote:
>> So I'm feeling like maybe we should experiment to see what that
>> solution looks like, before we commit to going in this direction.
>> What do you think?

> Given the above wrinkles, I thought it was worth trying. Attached is a
> rough patch (don't mind the #include mess yet :-) ) that works like
> this:

> The lexer returns UCONST from xus and UIDENT from xui. The grammar has
> rules that are effectively:

> SCONST { do nothing}
> | UCONST { esc char is backslash }
> | UCONST UESCAPE SCONST { esc char is from $3 }

> ...where UESCAPE is now an unreserved keyword. To prevent shift-reduce
> conflicts, I added UIDENT to the %nonassoc precedence list to match
> IDENT, and for UESCAPE I added a %left precedence declaration. Maybe
> there's a more principled way. I also added an unsigned char type to
> the %union, but it worked fine on my compiler without it.

I think it might be better to drop the separate "Uescape" production and
just inline that into the calling rules, exactly per your sketch above.
You could avoid duplicating the escape-checking logic by moving that into
the str_udeescape support function.  This would avoid the need for the
"uchr" union variant, but more importantly it seems likely to be more
future-proof: IME, any time you can avoid or postpone shift/reduce
decisions, it's better to do so.

I didn't try, but I think this might allow dropping the %left for
UESCAPE.  That bothers me because I don't understand why it's
needed or what precedence level it ought to have.

> litbuf_udeescape() and check_uescapechar() were moved to gram.y. The
> former had to be massaged to give error messages similar to HEAD. They're
> not quite identical, but the position info is preserved. Some of the
> functions I moved around don't seem to have any test coverage, so I
> should eventually do some work in that regard.

I don't terribly like the cross-calls you have between gram.y and scan.l
in this formulation.  If we have to make these functions (hexval() etc)
non-static anyway, maybe we should shove them all into scansup.c?
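
The helpers in question are small converters along these lines (a
sketch, not the actual source); another option would be to make them
static inline in a header so both files can use them without call
overhead:

static inline unsigned int
hexval(unsigned char c)
{
    if (c >= '0' && c <= '9')
        return c - '0';
    if (c >= 'a' && c <= 'f')
        return c - 'a' + 10;
    if (c >= 'A' && c <= 'F')
        return c - 'A' + 10;
    return 0;                   /* caller reports the bad digit */
}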

> -Binary size is very close to v6. That is to say the grammar tables
> grew by about the same amount the scanner table shrank, so the binary
> > is still about 200kB smaller than HEAD.

OK.

> -Performance is very close to v6 with the information_schema and
> pgbench-like queries with standard strings, which is to say also very
> close to HEAD. When the latter was changed to use Unicode escapes,
> however, it was about 15% slower than HEAD. That's a big regression
> and I haven't tried to pinpoint why.

I don't quite follow what you changed to produce the slower test case?
But that seems to be something we'd better run to ground before
deciding whether to go this way.

> -The ecpg changes here are only the bare minimum from HEAD to get it
> to compile, since I'm borrowing its additional token names (although
> they mean slightly different things). After a bit of experimentation,
> it's clear there's a bit more work needed to get it functional, and
> it's not easy to debug, so I'm putting that off until we decide
> whether this is the way forward.

On the whole I like this approach, modulo the performance question.
Let's try to work that out before worrying about ecpg.

regards, tom lane




Re: benchmarking Flex practices

2019-07-24 Thread Tom Lane
Chapman Flack  writes:
> On 07/24/19 03:45, John Naylor wrote:
>> On Sun, Jul 21, 2019 at 3:14 AM Tom Lane  wrote:
>>> However, my second reaction was that maybe you were on to something
>>> upthread when you speculated about postponing de-escaping of
>>> Unicode literals into the grammar.  If we did it like that then

> With the de-escaping postponed, I think we'd be able to move beyond the
> current odd situation where Unicode escapes can't describe non-ascii
> characters, in exactly and only the cases where you need them to.

How so?  The grammar doesn't really have any more context information
than the lexer does.  (In both cases, it would be ugly but not really
invalid for the transformation to depend on the database encoding,
I think.)

regards, tom lane




Re: benchmarking Flex practices

2019-07-24 Thread Chapman Flack
On 07/24/19 03:45, John Naylor wrote:
> On Sun, Jul 21, 2019 at 3:14 AM Tom Lane  wrote:
>> However, my second reaction was that maybe you were on to something
>> upthread when you speculated about postponing de-escaping of
>> Unicode literals into the grammar.  If we did it like that then

Wow, yay. I hadn't been following this thread, but I had just recently
looked over my own earlier musings [1] and started thinking "no, it would
be outlandish to ask the lexer to return utf-8 always ... but what about
postponing the de-escaping of Unicode literals into the grammar?" and
had started to think about when I might have a chance to try making a
patch.

With the de-escaping postponed, I think we'd be able to move beyond the
current odd situation where Unicode escapes can't describe non-ascii
characters, in exactly and only the cases where you need them to.

-Chap


[1]
https://www.postgresql.org/message-id/6688474e-7c28-b352-bcec-ea0ef59d7a1a%40anastigmatix.net




Re: benchmarking Flex practices

2019-07-24 Thread John Naylor
On Sun, Jul 21, 2019 at 3:14 AM Tom Lane  wrote:
>
> John Naylor  writes:
> > The pre-existing ecpg var "state_before" was a bit confusing when
> > combined with the new var "state_before_quote_stop", and the former is
> > also used with C-comments, so I decided to go with
> > "state_before_lit_start" and "state_before_lit_stop". Even though
> > comments aren't literals, it's less of a stretch than referring to
> > quotes. To keep things consistent, I went with the latter var in psql
> > and core.
>
> Hm, what do you think of "state_before_str_stop" instead?  It seems
> to me that both "quote" and "lit" are pretty specific terms, so
> maybe we need something a bit vaguer.

Sounds fine to me.

> While poking at that, I also came across this unhappiness:
>
> regression=# select u&'foo' uescape 'bogus';
> regression'#
>
> that is, psql thinks we're still in a literal at this point.  That's
> because the uesccharfail rule eats "'b" and then we go to INITIAL
> state, so that consuming the last "'" puts us back in a string state.
> The backend would have thrown an error before parsing as far as the
> incomplete literal, so it doesn't care (or probably not, anyway),
> but that's not an option for psql.
>
> My first reaction as to how to fix this was to rip the xuend and
> xuchar states out of psql, and let it just lex UESCAPE as an
> identifier and the escape-character literal like any other literal.
> psql doesn't need to account for the escape character's effect on
> the meaning of the Unicode literal, so it doesn't have any need to
> lex the sequence as one big token.  I think the same is true of ecpg
> though I've not looked really closely.
>
> However, my second reaction was that maybe you were on to something
> upthread when you speculated about postponing de-escaping of
> Unicode literals into the grammar.  If we did it like that then
> we would not need to have this difference between the backend and
> frontend lexers, and we'd not have to worry about what
> psql_scan_in_quote should do about the whitespace before and after
> UESCAPE, either.
>
> So I'm feeling like maybe we should experiment to see what that
> solution looks like, before we commit to going in this direction.
> What do you think?

Given the above wrinkles, I thought it was worth trying. Attached is a
rough patch (don't mind the #include mess yet :-) ) that works like
this:

The lexer returns UCONST from xus and UIDENT from xui. The grammar has
rules that are effectively:

SCONST { do nothing}
| UCONST { esc char is backslash }
| UCONST UESCAPE SCONST { esc char is from $3 }

...where UESCAPE is now an unreserved keyword. To prevent shift-reduce
conflicts, I added UIDENT to the %nonassoc precedence list to match
IDENT, and for UESCAPE I added a %left precedence declaration. Maybe
there's a more principled way. I also added an unsigned char type to
the %union, but it worked fine on my compiler without it.

litbuf_udeescape() and check_uescapechar() were moved to gram.y. The
former had to be massaged to give error messages similar to HEAD. They're
not quite identical, but the position info is preserved. Some of the
functions I moved around don't seem to have any test coverage, so I
should eventually do some work in that regard.

Notes:

-Binary size is very close to v6. That is to say the grammar tables
grew by about the same amount the scanner table shrank, so the binary
is still about 200kB smaller than HEAD.
-Performance is very close to v6 with the information_schema and
pgbench-like queries with standard strings, which is to say also very
close to HEAD. When the latter was changed to use Unicode escapes,
however, it was about 15% slower than HEAD. That's a big regression
and I haven't tried to pinpoint why.
-psql was changed to follow suit. It doesn't think it's inside a
string with your too-long escape char above, and it removes all blank
lines from this query output:

$ cat >> test-uesc-lit.sql
SELECT

u&'!0041'

uescape

'!'

as col
;


On HEAD and v6 I get this:

$ ./inst/bin/psql -a -f test-uesc-lit.sql

SELECT
u&'!0041'

uescape
'!'
as col
;
 col
-----
 A
(1 row)


-The ecpg changes here are only the bare minimum from HEAD to get it
to compile, since I'm borrowing its additional token names (although
they mean slightly different things). After a bit of experimentation,
it's clear there's a bit more work needed to get it functional, and
it's not easy to debug, so I'm putting that off until we decide
whether this is the way forward.

-- 
John Naylor  https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


v7-draft-handle-uescapes-in-parser.patch
Description: Binary data


Re: benchmarking Flex practices

2019-07-20 Thread Tom Lane
John Naylor  writes:
> The pre-existing ecpg var "state_before" was a bit confusing when
> combined with the new var "state_before_quote_stop", and the former is
> also used with C-comments, so I decided to go with
> "state_before_lit_start" and "state_before_lit_stop". Even though
> comments aren't literals, it's less of a stretch than referring to
> quotes. To keep things consistent, I went with the latter var in psql
> and core.

Hm, what do you think of "state_before_str_stop" instead?  It seems
to me that both "quote" and "lit" are pretty specific terms, so
maybe we need something a bit vaguer.

> To get the regression tests to pass, I had to add this:
>  psql_scan_in_quote(PsqlScanState state)
>  {
> - return state->start_state != INITIAL;
> + return state->start_state != INITIAL &&
> + state->start_state != xqs;
>  }
> ...otherwise with parens we sometimes don't get the right prompt and
> we get empty lines echoed. Adding xuend and xuchar here didn't seem to
> make a difference. There might be something subtle I'm missing, so I
> thought I'd mention it.

I think you would see a difference if the regression tests had any cases
with blank lines between a Unicode string/ident and the associated
UESCAPE and escape-character literal.

While poking at that, I also came across this unhappiness:

regression=# select u&'foo' uescape 'bogus';
regression'# 

that is, psql thinks we're still in a literal at this point.  That's
because the uesccharfail rule eats "'b" and then we go to INITIAL
state, so that consuming the last "'" puts us back in a string state.
The backend would have thrown an error before parsing as far as the
incomplete literal, so it doesn't care (or probably not, anyway),
but that's not an option for psql.

My first reaction as to how to fix this was to rip the xuend and
xuchar states out of psql, and let it just lex UESCAPE as an
identifier and the escape-character literal like any other literal.
psql doesn't need to account for the escape character's effect on
the meaning of the Unicode literal, so it doesn't have any need to
lex the sequence as one big token.  I think the same is true of ecpg
though I've not looked really closely.

However, my second reaction was that maybe you were on to something
upthread when you speculated about postponing de-escaping of
Unicode literals into the grammar.  If we did it like that then
we would not need to have this difference between the backend and
frontend lexers, and we'd not have to worry about what
psql_scan_in_quote should do about the whitespace before and after
UESCAPE, either.

So I'm feeling like maybe we should experiment to see what that
solution looks like, before we commit to going in this direction.
What do you think?


> With the unicode escape rules brought over, the diff to the ecpg
> scanner is much cleaner now. The diff for the C-comment rules was still
> pretty messy in comparison, so I made an attempt to clean that up in
> 0002. A bit off-topic, but I thought I should offer that while it was
> fresh in my head.

I didn't really review this, but it looked like a fairly plausible
change of the same ilk, ie combine rules by adding memory of the
previous start state.

regards, tom lane




Re: benchmarking Flex practices

2019-07-12 Thread John Naylor
On Wed, Jul 10, 2019 at 3:15 AM Tom Lane  wrote:
>
> John Naylor  writes:
> > [ v4 patches for trimming lexer table size ]
>
> I reviewed this and it looks pretty solid.  One gripe I have is
> that I think it's best to limit backup-prevention tokens such as
> quotecontinuefail so that they match only exact prefixes of their
> "success" tokens.  This seems clearer to me, and in at least some cases
> it can save a few flex states.  The attached v5 patch does it like that
> and gets us down to 22331 states (from 23696).  In some places it looks
> like you did that to avoid writing an explicit "{other}" match rule for
> an exclusive state, but I think it's better for readability and
> separation of concerns to go ahead and have those explicit rules
> (and it seems to make no difference table-size-wise).

Looks good to me.

> We still need to propagate these changes into the psql and ecpg lexers,
> but I assume you were waiting to agree on the core patch before touching
> those.  If you're good with the changes I made here, have at it.

I just made a couple additional cosmetic adjustments that made sense
when diff'ing with the other scanners. Make check-world passes. Some
notes:

The pre-existing ecpg var "state_before" was a bit confusing when
combined with the new var "state_before_quote_stop", and the former is
also used with C-comments, so I decided to go with
"state_before_lit_start" and "state_before_lit_stop". Even though
comments aren't literals, it's less of a stretch than referring to
quotes. To keep things consistent, I went with the latter var in psql
and core.

To get the regression tests to pass, I had to add this:

 psql_scan_in_quote(PsqlScanState state)
 {
- return state->start_state != INITIAL;
+ return state->start_state != INITIAL &&
+ state->start_state != xqs;
 }

...otherwise with parens we sometimes don't get the right prompt and
we get empty lines echoed. Adding xuend and xuchar here didn't seem to
make a difference. There might be something subtle I'm missing, so I
thought I'd mention it.

With the unicode escape rules brought over, the diff to the ecpg
scanner is much cleaner now. The diff for the C-comment rules was still
pretty messy in comparison, so I made an attempt to clean that up in
0002. A bit off-topic, but I thought I should offer that while it was
fresh in my head.

-- 
John Naylor  https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


v6-0001-Reduce-the-number-of-states-in-the-core-scanner-t.patch
Description: Binary data


v6-0002-Merge-ECPG-scanner-states-for-C-style-comments.patch
Description: Binary data


Re: benchmarking Flex practices

2019-07-09 Thread Tom Lane
John Naylor  writes:
> [ v4 patches for trimming lexer table size ]

I reviewed this and it looks pretty solid.  One gripe I have is
that I think it's best to limit backup-prevention tokens such as
quotecontinuefail so that they match only exact prefixes of their
"success" tokens.  This seems clearer to me, and in at least some cases
it can save a few flex states.  The attached v5 patch does it like that
and gets us down to 22331 states (from 23696).  In some places it looks
like you did that to avoid writing an explicit "{other}" match rule for
an exclusive state, but I think it's better for readability and
separation of concerns to go ahead and have those explicit rules
(and it seems to make no difference table-size-wise).

I also made some cosmetic changes (mostly improving comments) and
smashed the patch series down to 1 patch, because I preferred to
review it that way and we're not really going to commit these
separately.

I did a little bit of portability testing, to the extent of verifying
that the oldest and newest Flex versions I have handy (2.5.33 and 2.6.4)
agree on the table size change and get through regression tests.  So
I think we should be good from that end.

We still need to propagate these changes into the psql and ecpg lexers,
but I assume you were waiting to agree on the core patch before touching
those.  If you're good with the changes I made here, have at it.

regards, tom lane

diff --git a/src/backend/parser/scan.l b/src/backend/parser/scan.l
index e1cae85..899da09 100644
--- a/src/backend/parser/scan.l
+++ b/src/backend/parser/scan.l
@@ -168,12 +168,14 @@ extern void core_yyset_column(int column_no, yyscan_t yyscanner);
  *   delimited identifiers (double-quoted identifiers)
  *   hexadecimal numeric string
  *   standard quoted strings
+ *   quote stop (detect continued strings)
  *   extended quoted strings (support backslash escape sequences)
  *   $foo$ quoted strings
  *   quoted identifier with Unicode escapes
- *   end of a quoted identifier with Unicode escapes, UESCAPE can follow
  *   quoted string with Unicode escapes
- *   end of a quoted string with Unicode escapes, UESCAPE can follow
+ *   end of a quoted string or identifier with Unicode escapes,
+ *UESCAPE can follow
+ *   expecting escape character literal after UESCAPE
  *   Unicode surrogate pair in extended quoted string
  *
  * Remember to add an <> case whenever you add a new exclusive state!
@@ -185,12 +187,13 @@ extern void core_yyset_column(int column_no, yyscan_t yyscanner);
 %x xd
 %x xh
 %x xq
+%x xqs
 %x xe
 %x xdolq
 %x xui
-%x xuiend
 %x xus
-%x xusend
+%x xuend
+%x xuchar
 %x xeu
 
 /*
@@ -231,19 +234,18 @@ special_whitespace		({space}+|{comment}{newline})
 horiz_whitespace		({horiz_space}|{comment})
 whitespace_with_newline	({horiz_whitespace}*{newline}{special_whitespace}*)
 
+quote			'
+/* If we see {quote} then {quotecontinue}, the quoted string continues */
+quotecontinue	{whitespace_with_newline}{quote}
+
 /*
- * To ensure that {quotecontinue} can be scanned without having to back up
- * if the full pattern isn't matched, we include trailing whitespace in
- * {quotestop}.  This matches all cases where {quotecontinue} fails to match,
- * except for {quote} followed by whitespace and just one "-" (not two,
- * which would start a {comment}).  To cover that we have {quotefail}.
- * The actions for {quotestop} and {quotefail} must throw back characters
- * beyond the quote proper.
+ * {quotecontinuefail} is needed to avoid lexer backup when we fail to match
+ * {quotecontinue}.  It might seem that this could just be {whitespace}*,
+ * but if there's a dash after {whitespace_with_newline}, it must be consumed
+ * to see if there's another dash --- which would start a {comment} and thus
+ * allow continuation of the {quotecontinue} token.
  */
-quote			'
-quotestop		{quote}{whitespace}*
-quotecontinue	{quote}{whitespace_with_newline}{quote}
-quotefail		{quote}{whitespace}*"-"
+quotecontinuefail	{whitespace}*"-"?
 
 /* Bit string
  * It is tempting to scan the string for only those characters
@@ -304,10 +306,15 @@ xdstop			{dquote}
 xddouble		{dquote}{dquote}
 xdinside		[^"]+
 
-/* Unicode escapes */
-uescape			[uU][eE][sS][cC][aA][pP][eE]{whitespace}*{quote}[^']{quote}
+/* Optional UESCAPE after a quoted string or identifier with Unicode escapes */
+uescape			[uU][eE][sS][cC][aA][pP][eE]
+/* error rule to avoid backup */
+uescapefail		[uU][eE][sS][cC][aA][pP]|[uU][eE][sS][cC][aA]|[uU][eE][sS][cC]|[uU][eE][sS]|[uU][eE]|[uU]
+
+/* escape character literal */
+uescchar		{quote}[^']{quote}
 /* error rule to avoid backup */
-uescapefail		[uU][eE][sS][cC][aA][pP][eE]{whitespace}*"-"|[uU][eE][sS][cC][aA][pP][eE]{whitespace}*{quote}[^']|[uU][eE][sS][cC][aA][pP][eE]{whitespace}*{quote}|[uU][eE][sS][cC][aA][pP][eE]{whitespace}*|[uU][eE][sS][cC][aA][pP]|[uU][eE][sS][cC][aA]|[uU][eE][sS][cC]|[uU][eE][sS]|[uU][eE]|[uU]
+uesccharfail	{quote}[^']|{quote}
 
 /* Quoted identifier with 

Re: benchmarking Flex practices

2019-07-05 Thread John Naylor
On Wed, Jul 3, 2019 at 5:35 AM Tom Lane  wrote:
>
> As far as I can see, the point of 0002 is to have just one set of
> flex rules for the various variants of quotecontinue processing.
> That sounds OK, though I'm a bit surprised it makes this much difference
> in the table size. I would suggest that "state_before" needs a less
> generic name (maybe "state_before_xqs"?) and more than no comment.
> Possibly more to the point, it's not okay to have static state variables
> in the core scanner, so that variable needs to be kept in yyextra.

v4-0001 is basically the same as v3-0002, with the state variable in
yyextra. Since follow-on patches use it as well, I've named it
state_before_quote_stop. I failed to come up with a nicer short name.
With this applied, the transition table is reduced from 37045 to
30367. Since that's uncomfortably close to the 32k limit for 16 bit
members, I hacked away further at UESCAPE bloat.

0002 unifies xusend and xuiend by saving the state of xui as well.
This actually causes a performance regression, but it's more of a
refactoring patch to prevent from having to create two additional
start conditions in 0003 (of course it could be done that way if
desired, but the savings won't be as great). In any case, the table is
now down to 26074.

0003 creates a separate start condition so that UESCAPE and the
expected quoted character after it are detected in separate states.
This allows us to use standard whitespace skipping techniques and also
to greatly simplify the uescapefail rule. The final size of the table
is 23696. Removing UESCAPE entirely results in 21860, so this is likely
the most compact size for this feature.

Performance is very similar to HEAD. Parsing the information schema
might be a hair faster and pgbench-like queries with simple strings a
hair slower, but the difference seems within the noise of variation.
Parsing strings with UESCAPE likewise seems about the same.

-- 
John Naylor  https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


v4-0001-Replace-the-Flex-quotestop-rules-with-a-new-exclu.patch
Description: Binary data


v4-0002-Unify-xuiend-and-xusend-into-a-single-start-condi.patch
Description: Binary data


v4-0003-Use-separate-start-conditions-for-both-UESCAPE-an.patch
Description: Binary data


Re: benchmarking Flex practices

2019-07-03 Thread John Naylor
On Wed, Jul 3, 2019 at 5:35 AM Tom Lane  wrote:
>
> John Naylor  writes:
> > 0001 is a small patch to remove some unneeded generality from the
> > current rules. This lowers the number of elements in the yy_transition
> > array from 37045 to 36201.
>
> I don't particularly like 0001.  The two bits like this
>
> -whitespace ({space}+|{comment})
> +whitespace ({space}|{comment})
>
> seem likely to create performance problems for runs of whitespace, in that
> the lexer will now have to execute the associated action once per space
> character not just once for the whole run.

Okay.

> There are a bunch of higher-order productions that use "{whitespace}*",
> which is surely a bit redundant given the contents of {whitespace}.
> But maybe we could address that by replacing "{whitespace}*" with
> "{opt_whitespace}" defined as
>
> opt_whitespace  ({space}*|{comment})
>
> Not sure what impact if any that'd have on table size, but I'm quite sure
> that {whitespace} was defined with an eye to avoiding unnecessary
> lexer action cycles.

It turns out that {opt_whitespace} as defined above is not equivalent
to {whitespace}*, since the former is either a single comment or a
single run of 0 or more whitespace chars (if I understand correctly).
Using {opt_whitespace} for the UESCAPE rules on top of v3-0002, the
regression tests pass, but queries like this fail with a syntax error:

# select U&'d!0061t!+000061' uescape --comment
'!';

There was in fact a substantial size reduction, though, so for
curiosity's sake I tried just replacing {whitespace}* with {space}* in
the UESCAPE rules, and the table shrank from 30367 (that's with 0002
only) to 24661.

> As for the other two bits that are like
>
> -.  {
> -   /* This is only needed for \ just 
> before EOF */
> +\\ {
>
> my recollection is that those productions are defined that way to avoid a
> flex warning about not all possible input characters being accounted for
> in the  (resp. ) state.  Maybe that warning is
> flex-version-dependent, or maybe this was just a worry and not something
> that actually produced a warning ... but I'm hesitant to change it.
> If we ever did get to flex's default action, that action is to echo the
> current input character to stdout, which would be Very Bad.

FWIW, I tried Flex 2.5.35 and 2.6.4 with no warnings, and I did get a
warning when I deleted any of those two rules. I'll leave them out for
now, since this change was only good for ~500 fewer elements in the
transition array.

> As far as I can see, the point of 0002 is to have just one set of
> flex rules for the various variants of quotecontinue processing.
> That sounds OK, though I'm a bit surprised it makes this much difference
> in the table size. I would suggest that "state_before" needs a less
> generic name (maybe "state_before_xqs"?) and more than no comment.
> Possibly more to the point, it's not okay to have static state variables
> in the core scanner, so that variable needs to be kept in yyextra.
> (Don't remember offhand whether it's any more acceptable in the other
> scanners.)

Ah yes, I got this idea from the ECPG scanner, which is not reentrant. Will fix.

-- 
John Naylor  https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: benchmarking Flex practices

2019-07-02 Thread Tom Lane
John Naylor  writes:
> 0001 is a small patch to remove some unneeded generality from the
> current rules. This lowers the number of elements in the yy_transition
> array from 37045 to 36201.

I don't particularly like 0001.  The two bits like this

-whitespace ({space}+|{comment})
+whitespace ({space}|{comment})

seem likely to create performance problems for runs of whitespace, in that
the lexer will now have to execute the associated action once per space
character not just once for the whole run.  Those actions are empty, but
I don't think flex optimizes for that, and it's really flex's per-action
overhead that I'm worried about.  Note the comment in the "Performance"
section of the flex manual:

Another area where the user can increase a scanner's performance (and
one that's easier to implement) arises from the fact that the longer
the tokens matched, the faster the scanner will run.  This is because
with long tokens the processing of most input characters takes place
in the (short) inner scanning loop, and does not often have to go
through the additional work of setting up the scanning environment
(e.g., `yytext') for the action.

There are a bunch of higher-order productions that use "{whitespace}*",
which is surely a bit redundant given the contents of {whitespace}.
But maybe we could address that by replacing "{whitespace}*" with
"{opt_whitespace}" defined as

opt_whitespace  ({space}*|{comment})

Not sure what impact if any that'd have on table size, but I'm quite sure
that {whitespace} was defined with an eye to avoiding unnecessary
lexer action cycles.

As for the other two bits that are like

-.  {
-   /* This is only needed for \ just 
before EOF */
+\\ {

my recollection is that those productions are defined that way to avoid a
flex warning about not all possible input characters being accounted for
in the  (resp. ) state.  Maybe that warning is
flex-version-dependent, or maybe this was just a worry and not something
that actually produced a warning ... but I'm hesitant to change it.
If we ever did get to flex's default action, that action is to echo the
current input character to stdout, which would be Very Bad.

As far as I can see, the point of 0002 is to have just one set of
flex rules for the various variants of quotecontinue processing.
That sounds OK, though I'm a bit surprised it makes this much difference
in the table size. I would suggest that "state_before" needs a less
generic name (maybe "state_before_xqs"?) and more than no comment.
Possibly more to the point, it's not okay to have static state variables
in the core scanner, so that variable needs to be kept in yyextra.
(Don't remember offhand whether it's any more acceptable in the other
scanners.)

regards, tom lane




Re: benchmarking Flex practices

2019-06-27 Thread John Naylor
I wrote:

> > I found a possible other way to bring the size of the transition table
> > under 32k entries while keeping the existing no-backup rules in place:
> > Replace the "quotecontinue" rule with a new state. In the attached
> > draft patch, when Flex encounters a quote while inside any kind of
> > quoted string, it saves the current state and enters %xqs (think
> > 'quotestop'). If it then sees {whitespace_with_newline}{quote}, it
> > reenters the previous state and continues to slurp the string,
> > otherwise, it throws back everything and returns the string it just
> > exited. Doing it this way is a bit uglier, but with some extra
> > commentary it might not be too bad.
>
> I had an epiphany and managed to get rid of the backup states.
> Regression tests pass. The array is down to 30367 entries and the
> binary is smaller by 172kB on Linux x86-64. Performance is identical
> to master on both tests mentioned upthread. I'll clean this up and add
> it to the commitfest.

For the commitfest:

0001 is a small patch to remove some unneeded generality from the
current rules. This lowers the number of elements in the yy_transition
array from 37045 to 36201.

0002 is a cleaned-up version of the above, bringing the size down to 29521.

I haven't changed psqlscan.l or pgc.l, in case this approach is
changed or rejected.

With the two together, the binary is about 175kB smaller than on HEAD.

I also couldn't resist playing around with the idea upthread of
handling unicode escapes in parser.c, which further reduces the number
of states to 21068 and leaves some headroom for future additions
without going back to 32-bit types in the transition array. It mostly
works, but it's quite ugly and breaks the token position handling for
unicode escape syntax errors, so it's not in a state to share.
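
For the archives, here's a toy standalone illustration of the kind of
post-processing that idea implies (invented names; it assumes a UTF-8
server encoding and the default backslash escape character, and it
ignores UESCAPE, surrogate pairs, and error-position bookkeeping
entirely -- which is exactly the hard part):

#include <stdio.h>
#include <stdlib.h>

static int
hexval(char c)
{
	if (c >= '0' && c <= '9') return c - '0';
	if (c >= 'a' && c <= 'f') return c - 'a' + 10;
	if (c >= 'A' && c <= 'F') return c - 'A' + 10;
	return -1;					/* not a hex digit */
}

/* emit one code point as UTF-8, returning the number of bytes written */
static int
to_utf8(unsigned int cp, char *out)
{
	if (cp < 0x80)
	{
		out[0] = (char) cp;
		return 1;
	}
	else if (cp < 0x800)
	{
		out[0] = (char) (0xC0 | (cp >> 6));
		out[1] = (char) (0x80 | (cp & 0x3F));
		return 2;
	}
	else if (cp < 0x10000)
	{
		out[0] = (char) (0xE0 | (cp >> 12));
		out[1] = (char) (0x80 | ((cp >> 6) & 0x3F));
		out[2] = (char) (0x80 | (cp & 0x3F));
		return 3;
	}
	else
	{
		out[0] = (char) (0xF0 | (cp >> 18));
		out[1] = (char) (0x80 | ((cp >> 12) & 0x3F));
		out[2] = (char) (0x80 | ((cp >> 6) & 0x3F));
		out[3] = (char) (0x80 | (cp & 0x3F));
		return 4;
	}
}

/*
 * Decode \XXXX and \+XXXXXX escapes in place.  The result can't grow:
 * the shortest escape is 5 input bytes decoding to at most 3 bytes of
 * UTF-8, and \+XXXXXX is 8 bytes decoding to at most 4.
 */
static void
udeescape(char *str)
{
	char	   *in = str;
	char	   *out = str;

	while (*in)
	{
		if (in[0] == '\\' && in[1] == '\\')
		{
			*out++ = '\\';		/* escaped backslash */
			in += 2;
		}
		else if (in[0] == '\\')
		{
			int			ndigits = (in[1] == '+') ? 6 : 4;
			char	   *digits = in + ((in[1] == '+') ? 2 : 1);
			unsigned int cp = 0;
			int			i;

			for (i = 0; i < ndigits; i++)
			{
				int			h = hexval(digits[i]);

				if (h < 0)
				{
					fprintf(stderr, "invalid Unicode escape\n");
					exit(1);
				}
				cp = (cp << 4) | h;
			}
			if (cp > 0x10FFFF)
			{
				fprintf(stderr, "invalid Unicode code point\n");
				exit(1);
			}
			out += to_utf8(cp, out);
			in = digits + ndigits;
		}
		else
			*out++ = *in++;		/* ordinary character */
	}
	*out = '\0';
}

Fed a buffer containing d\0061ta, this rewrites it in place to data;
the real version also has to report bad escapes with a sane cursor
position, which is the part that breaks down at the moment.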

-- 
John Naylor  https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


v3-0001-Remove-some-unneeded-generality-from-the-core-Fle.patch
Description: Binary data


v3-0002-Replace-the-Flex-quotestop-rules-with-a-new-exclu.patch
Description: Binary data


Re: benchmarking Flex practices

2019-06-24 Thread John Naylor
I wrote:

> > I'll look for other rules that could be more
> > easily optimized, but I'm not terribly optimistic.
>
> I found a possible other way to bring the size of the transition table
> under 32k entries while keeping the existing no-backup rules in place:
> Replace the "quotecontinue" rule with a new state. In the attached
> draft patch, when Flex encounters a quote while inside any kind of
> quoted string, it saves the current state and enters %xqs (think
> 'quotestop'). If it then sees {whitespace_with_newline}{quote}, it
> reenters the previous state and continues to slurp the string,
> otherwise, it throws back everything and returns the string it just
> exited. Doing it this way is a bit uglier, but with some extra
> commentary it might not be too bad.

I had an epiphany and managed to get rid of the backup states.
Regression tests pass. The array is down to 30367 entries and the
binary is smaller by 172kB on Linux x86-64. Performance is identical
to master on both tests mentioned upthread. I'll clean this up and add
it to the commitfest.

--
John Naylor  https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
diff --git a/src/backend/parser/scan.l b/src/backend/parser/scan.l
index e1cae859e8..67ad06da4f 100644
--- a/src/backend/parser/scan.l
+++ b/src/backend/parser/scan.l
@@ -56,6 +56,8 @@ fprintf_to_ereport(const char *fmt, const char *msg)
 	ereport(ERROR, (errmsg_internal("%s", msg)));
 }
 
+static int state_before;
+
 /*
  * GUC variables.  This is a DIRECT violation of the warning given at the
  * head of gram.y, ie flex/bison code must not depend on any GUC variables;
@@ -168,6 +170,7 @@ extern void core_yyset_column(int column_no, yyscan_t yyscanner);
 *  <xd> delimited identifiers (double-quoted identifiers)
 *  <xh> hexadecimal numeric string
 *  <xq> standard quoted strings
+ *  <xqs> quote stop (detect continued strings)
 *  <xe> extended quoted strings (support backslash escape sequences)
 *  <xdolq> $foo$ quoted strings
 *  <xui> quoted identifier with Unicode escapes
@@ -185,6 +188,7 @@ extern void core_yyset_column(int column_no, yyscan_t yyscanner);
 %x xd
 %x xh
 %x xq
+%x xqs
 %x xe
 %x xdolq
 %x xui
@@ -231,19 +235,7 @@ special_whitespace		({space}+|{comment}{newline})
 horiz_whitespace		({horiz_space}|{comment})
 whitespace_with_newline	({horiz_whitespace}*{newline}{special_whitespace}*)
 
-/*
- * To ensure that {quotecontinue} can be scanned without having to back up
- * if the full pattern isn't matched, we include trailing whitespace in
- * {quotestop}.  This matches all cases where {quotecontinue} fails to match,
- * except for {quote} followed by whitespace and just one "-" (not two,
- * which would start a {comment}).  To cover that we have {quotefail}.
- * The actions for {quotestop} and {quotefail} must throw back characters
- * beyond the quote proper.
- */
 quote			'
-quotestop		{quote}{whitespace}*
-quotecontinue	{quote}{whitespace_with_newline}{quote}
-quotefail		{quote}{whitespace}*"-"
 
 /* Bit string
  * It is tempting to scan the string for only those characters
@@ -476,21 +468,10 @@ other			.
 	startlit();
 	addlitchar('b', yyscanner);
 }
-<xb>{quotestop}	|
-<xb>{quotefail} {
-	yyless(1);
-	BEGIN(INITIAL);
-	yylval->str = litbufdup(yyscanner);
-	return BCONST;
-}
 <xh>{xhinside}	|
 <xb>{xbinside}	{
 	addlit(yytext, yyleng, yyscanner);
 }
-<xh>{quotecontinue}	|
-<xb>{quotecontinue}	{
-	/* ignore */
-}
 <xb><<EOF>>		{ yyerror("unterminated bit string literal"); }
 
 {xhstart}		{
@@ -505,13 +486,6 @@ other			.
 	startlit();
 	addlitchar('x', yyscanner);
 }
-<xh>{quotestop}	|
-<xh>{quotefail} {
-	yyless(1);
-	BEGIN(INITIAL);
-	yylval->str = litbufdup(yyscanner);
-	return XCONST;
-}
 <xh><<EOF>>		{ yyerror("unterminated hexadecimal string literal"); }
 
 {xnstart}		{
@@ -568,28 +542,65 @@ other			.
 	BEGIN(xus);
 	startlit();
 }
-<xq,xe>{quotestop}	|
-<xq,xe>{quotefail} {
-	yyless(1);
-	BEGIN(INITIAL);
+
+<xb,xh,xq,xe,xus>{quote} {
+	state_before = YYSTATE;
+	BEGIN(xqs);
+}
+<xqs>{whitespace_with_newline}{quote} {
+	/* resume scanning string that started on a previous line */
+	BEGIN(state_before);
+}
+<xqs>{quote} {
 	/*
-	 * check that the data remains valid if it might have been
-	 * made invalid by unescaping any chars.
+	 * SQL requires at least one newline in the whitespace separating
+	 * string literals that are to be concatenated, so throw an error
+	 * if we see the start of a new string on the same line.
 	 */
-	if (yyextra->saw_non_ascii)
-		pg_verifymbstr(yyextra->literalbuf,
-	   yyextra->literallen,
-	   false);
-	yylval->str = litbufdup(yyscanner);
-	return SCONST;
+	SET_YYLLOC();
+	ADVANCE_YYLLOC(yyleng - 1);
+	yyerror("syntax error");
 }
-<xus>{quotestop} |
-<xus>{quotefail} {
-	/* throw back all but the quote */
-	yyless(1);
-	/* xusend state looks for possible UESCAPE */
-	BEGIN(xusend);

Re: benchmarking Flex practices

2019-06-24 Thread John Naylor
I wrote:

> I'll look for other rules that could be more
> easily optimized, but I'm not terribly optimistic.

I found a possible other way to bring the size of the transition table
under 32k entries while keeping the existing no-backup rules in place:
Replace the "quotecontinue" rule with a new state. In the attached
draft patch, when Flex encounters a quote while inside any kind of
quoted string, it saves the current state and enters %xqs (think
'quotestop'). If it then sees {whitespace_with_newline}{quote}, it
reenters the previous state and continues to slurp the string,
otherwise, it throws back everything and returns the string it just
exited. Doing it this way is a bit uglier, but with some extra
commentary it might not be too bad.

The array is now 30883 entries. That's still a bit too close for
comfort, but it shrinks the binary by 171kB on Linux x86-64 with Flex
2.6.4. The
bad news is I have these baffling backup states in my new rules:

State #133 is non-accepting -
 associated rule line numbers:
551 554 564
 out-transitions: [ \000-\377 ]
 jam-transitions: EOF []

State #162 is non-accepting -
 associated rule line numbers:
551 554 564
 out-transitions: [ \000-\377 ]
 jam-transitions: EOF []

2 backing up (non-accepting) states.

I already explicitly handle EOF, so I don't know what it's trying to
tell me. If it can be fixed while keeping the array size, I'll do
performance tests.
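
(For anyone else puzzling over flex's -b output: a backing-up state is
one the scanner can reach partway through a potential match without
having passed an accepting position for the text consumed so far.  A
stripped-down example, unrelated to the patch:

%option noyywrap
%%
"foo"	{ return 1; }
.	{ return 2; }

Given the input "fox", the scanner consumes "fo" hoping to complete
"foo", jams on 'x', and must back up to report just 'f' via the
catch-all rule before rescanning "ox"; the state reached after "fo"
accepts nothing by itself, so flex flags it and generates backup
bookkeeping.  The usual cure is extra rules that accept the prefixes,
which is what the quotefail-style rules did for quotes.)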

--
John Naylor  https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


v1-lexer-redo-quote-continuation.patch
Description: Binary data


Re: benchmarking Flex practices

2019-06-20 Thread Andres Freund
Hi,

On 2019-06-20 10:52:54 -0400, Tom Lane wrote:
> John Naylor  writes:
> > It would be nice to have confirmation to make sure I didn't err
> > somewhere, and to try a more real-world benchmark.
> 
> I don't see much wrong with using information_schema.sql as a parser/lexer
> benchmark case.  We should try to confirm the results on other platforms
> though.

Might be worth also testing with a more repetitive testcase to measure
both cache locality and branch prediction. I assume that with
information_schema there's enough variability that these effects play a
smaller role. And there are plenty of real-world cases where a *lot*
of very similar statements get parsed over and over. I'd probably just
measure the statements pgbench generates, or some such.

Greetings,

Andres Freund




Re: benchmarking Flex practices

2019-06-20 Thread Tom Lane
John Naylor  writes:
> I decided to do some experiments with how we use Flex. The main
> takeaway is that backtracking, which we removed in 2005, doesn't seem
> to matter anymore for the core scanner. Also, state table size is of
> marginal importance.

Huh.  That's really interesting, because removing backtracking was a
demonstrable, significant win when we did it [1].  I wonder what has
changed?  I'd be prepared to believe that today's machines are more
sensitive to the amount of cache space eaten by the tables --- but that
idea seems contradicted by your result that the table size isn't
important.  (I'm wishing I'd documented the test case I used in 2005...)

> The size difference is because the size of the elements of the
> yy_transition array is constrained by the number of elements in the
> array. Since there are now fewer than INT16_MAX state transitions, the
> struct members go from 32 bit:
> static yyconst struct yy_trans_info yy_transition[37045] = ...
> to 16 bit:
> static yyconst struct yy_trans_info yy_transition[31763] = ...

Hm.  Smaller binary is definitely nice, but 31763 is close enough to
32768 that I'd have little faith in the optimization surviving for long.
Is there any way we could buy back some more transitions?

> It would be nice to have confirmation to make sure I didn't err
> somewhere, and to try a more real-world benchmark.

I don't see much wrong with using information_schema.sql as a parser/lexer
benchmark case.  We should try to confirm the results on other platforms
though.

regards, tom lane

[1] https://www.postgresql.org/message-id/8652.1116865...@sss.pgh.pa.us




benchmarking Flex practices

2019-06-20 Thread John Naylor
I decided to do some experiments with how we use Flex. The main
takeaway is that backtracking, which we removed in 2005, doesn't seem
to matter anymore for the core scanner. Also, state table size is of
marginal importance.

Using the information_schema Flex+Bison microbenchmark from Tom [1], I
tested removing most of the "fail" rules designed to avoid
backtracking ("decimalfail" is needed by PL/pgSQL). Below are the best
times (most runs within 1%), followed by postgres binary size. The
numbers are with Flex 2.5.35 on MacOS, no asserts or debugging
symbols.

HEAD:
1.53s
7139132 bytes

HEAD minus "fail" rules (patch attached):
1.53s
6971204 bytes

Surprisingly, it has the same performance and a much smaller binary.
The size difference is because the size of the elements of the
yy_transition array is constrained by the number of elements in the
array. Since there are now fewer than INT16_MAX state transitions, the
struct members go from 32 bit:

struct yy_trans_info
{
flex_int32_t yy_verify;
flex_int32_t yy_nxt;
};
static yyconst struct yy_trans_info yy_transition[37045] = ...

to 16 bit:

struct yy_trans_info
{
flex_int16_t yy_verify;
flex_int16_t yy_nxt;
};
static yyconst struct yy_trans_info yy_transition[31763] = ...
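
(For scale: 37045 entries at 8 bytes each is 296,360 bytes of table,
versus 31763 entries at 4 bytes = 127,052 bytes.  That 169,308-byte
difference accounts for essentially all of the ~168kB delta in the
binary sizes above.)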

To test if array size was the deciding factor, I tried bloating it by
essentially undoing commit a5ff502fcea. Doing so produced an array
with 62583 elements and 32-bit members, so nearly quadruple in size,
and it was still not much slower than HEAD:

HEAD minus "fail" rules, minus %xusend/%xuiend:
1.56s
7343932 bytes

While at it, I repeated the benchmark with different Flex flags:

HEAD, plus -Cf:
1.60s
6995788 bytes

HEAD, minus "fail" rules, plus -Cf:
1.59s
6979396 bytes

HEAD, plus -Cfe:
1.65s
6868804 bytes

So this recommendation of the Flex manual (-CF) still holds true. It's
worth noting that using perfect hashing for keyword lookup (20%
faster) had a much bigger effect than switching from -Cfe to -CF (7%
faster).
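
(For anyone reproducing this: the comparison amounts to changing the
table-compression flag passed to flex when scan.c is built, e.g.
something like

flex -CF -p -p -o scan.c scan.l

with -Cf or -Cfe substituted for -CF.  The -p flags just ask flex for
a performance report; the exact flags we use live in the parser
Makefile.)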

It would be nice to have confirmation to make sure I didn't err
somewhere, and to try a more real-world benchmark. (Also for the
moment I only have Linux on a virtual machine.) The regression tests
pass, but some comments are now wrong. If it's confirmed that
backtracking doesn't matter for recent Flex/hardware, disregarding it
would make maintenance of our scanners a bit easier.

[1] https://www.postgresql.org/message-id/14616.1558560331%40sss.pgh.pa.us

-- 
John Naylor  https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


remove-scanner-fail-rules.patch
Description: Binary data