Peter Eisentraut <[EMAIL PROTECTED]> writes:
> My profiles show that the work spent in the scanner is really minuscule
> compared to everything else.

Under ordinary circumstances I think that's true ...

> (The profile data is from a run of all the regression test files in order
> in one session.)

The regression tests contain no very-long literals.  The results I was
referring to concerned cases with string (BLOB) literals in the
hundreds-of-K range; it seems that the per-character loop in the flex
lexer starts to look like a bottleneck when you have tokens that much
larger than the rest of the query.
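
To make the effect concrete, here is a minimal flex sketch -- illustrative
only, not the actual PostgreSQL scanner; startlit()/addlitchar() are
hypothetical helpers, and error checking is omitted -- in which the rule
action fires once per byte of the literal:

    %{
    /* Illustrative sketch only, not the real scanner.  A quoted literal
     * is accumulated one character per rule firing. */
    #include <stdio.h>
    #include <stdlib.h>

    static char   *litbuf;
    static size_t  litlen, litcap;

    static void startlit(void)
    {
        if (litcap == 0)
        {
            litcap = 1024;
            litbuf = malloc(litcap);
        }
        litlen = 0;
    }

    static void addlitchar(char c)
    {
        if (litlen + 2 > litcap)        /* keep room for a final '\0' */
        {
            litcap *= 2;
            litbuf = realloc(litbuf, litcap);
        }
        litbuf[litlen++] = c;
    }
    %}
    %option noyywrap
    %x xq

    %%
    '           { startlit(); BEGIN(xq); }
    <xq>''      { addlitchar('\''); }      /* doubled quote inside literal */
    <xq>'       { litbuf[litlen] = '\0';
                  printf("literal: %lu bytes\n", (unsigned long) litlen);
                  BEGIN(INITIAL); }
    <xq>.|\n    { addlitchar(yytext[0]); } /* the hot loop: one action per byte */
    .|\n        { /* ignore everything outside literals */ }
    %%

    int main(void)
    {
        return yylex();
    }

With a rule like <xq>.|\n, a 500K literal means 500K trips through the
full match-and-dispatch cycle, which is where the per-character cost
shows up.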

Solutions seem to be either (a) make that loop quicker, or (b) find a
way to avoid passing BLOBs through the lexer.  I was merely suggesting
that (a) should be investigated before we invest the work implied
by (b).
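
For the record, the usual flex remedy under (a) is to let the DFA swallow
a maximal run of ordinary characters per rule firing, so the action
overhead is paid once per run rather than once per byte.  Relative to the
sketch above, only the per-character rule changes, plus a hypothetical
bulk addlit() helper (again a sketch, reusing litbuf/litlen/litcap from
before):

    %{
    #include <string.h>

    /* addlit(): bulk counterpart of addlitchar(), copies a whole chunk */
    static void addlit(const char *s, size_t n)
    {
        while (litlen + n + 1 > litcap)
        {
            litcap *= 2;
            litbuf = realloc(litbuf, litcap);
        }
        memcpy(litbuf + litlen, s, n);
        litlen += n;
    }
    %}

    %%
    <xq>[^']+   { addlit(yytext, yyleng); } /* one action per run, not per byte */
    %%

The '' and closing ' rules stay as they were.  The DFA still looks at
every byte, but it does so inside flex's tight matching loop instead of
paying a rule-action dispatch per character, which is exactly the kind
of speedup (a) is after.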

                        regards, tom lane
