On 21.06.2018 17:56, Robbie Harwood wrote:
Konstantin Knizhnik <k.knizh...@postgrespro.ru> writes:

On 20.06.2018 23:34, Robbie Harwood wrote:
Konstantin Knizhnik <k.knizh...@postgrespro.ru> writes:


My idea was the following: the client wants to use compression, but the
server may reject this attempt (for any reason: it doesn't support it, has
no proper compression library, does not want to spend CPU on
decompression, ...). Right now the compression algorithm is hardcoded, but
in the future the client and server may negotiate to choose a proper
compression protocol.  This is why I prefer to perform negotiation between
client and server to enable compression.
Well, for negotiation you could put the name of the algorithm you want
in the startup packet.  It doesn't have to be a boolean for compression,
and then you don't need an additional round-trip.
Sorry, I can only repeat the arguments I already mentioned:
- in the future it may be possible to specify the compression algorithm
- even with a boolean compression option, the server may have reasons to
reject the client's request to use compression

Extra flexibility is always a good thing if it doesn't cost too much, and
an extra round of negotiation when enabling compression seems to me a low
price to pay for it.
You already have this flexibility even without negotiation.  I don't
want you to lose your flexibility.  The protocol looks like this:

- Client sends connection option "compression" with a list of algorithms
   it wants to use (comma-separated, or something).

- The first packet the server sends back indicates which of those
   algorithms it will compress with (or none, if it doesn't want to turn
   on compression).

No additional round-trips needed.
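
For illustration, a minimal sketch in C of the server side of such a
negotiation; the helper name and the supported[] table are hypothetical,
not code from the patch:

    #include <string.h>

    static const char *supported[] = {"zstd", "zlib", NULL};

    /*
     * Return the first client-requested algorithm the server also
     * supports, or NULL to leave compression off.  Modifies
     * client_list in place (strtok).
     */
    static const char *
    choose_compression_algorithm(char *client_list)
    {
        char       *alg;

        for (alg = strtok(client_list, ","); alg; alg = strtok(NULL, ","))
        {
            int         i;

            for (i = 0; supported[i] != NULL; i++)
                if (strcmp(alg, supported[i]) == 0)
                    return supported[i];
        }
        return NULL;        /* no common algorithm: compression stays off */
    }

With this shape the server's choice rides along in its first reply, so no
extra round-trip is spent on it.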

This is exactly how it works now...
The client includes the compression option in the connection string, and the server replies with a special message ('Z') if it accepts the request to compress traffic between this client and server. I do not know whether sending such a message counts as a "round-trip" or not, but I do not want to lose the ability for the server to decide whether to use compression.
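
A sketch of the client side of this exchange, under the assumption that a
'Z' message signals acceptance as described above; the connection type and
the read_message_type() helper are illustrative, not real libpq internals:

    #include <stdbool.h>

    typedef struct
    {
        bool        use_compression;    /* set once the server agrees */
        /* ... the real connection state lives in libpq's PGconn ... */
    } ConnSketch;

    extern char read_message_type(ConnSketch *conn);   /* hypothetical */

    static bool
    compression_accepted(ConnSketch *conn)
    {
        if (read_message_type(conn) == 'Z')    /* compression accepted */
        {
            conn->use_compression = true;
            return true;
        }
        return false;       /* server declined: keep talking uncompressed */
    }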


Well, that's a design decision you've made.  You could put lengths on
chunks that are sent out - then you'd know exactly how much is needed.
(For instance, 4 bytes of network-order length followed by a complete
payload.)  Then you'd absolutely know whether you have enough to
decompress or not.
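
A sketch of that framing; send_all() is a hypothetical stand-in for
whatever writes all bytes to the socket:

    #include <stdint.h>
    #include <string.h>
    #include <arpa/inet.h>

    extern int send_all(const char *buf, size_t len);  /* hypothetical */

    static int
    write_compressed_frame(const char *payload, uint32_t len)
    {
        uint32_t    netlen = htonl(len);    /* 4 bytes, network order */
        char        header[4];

        memcpy(header, &netlen, sizeof(header));

        if (send_all(header, sizeof(header)) < 0)
            return -1;
        return send_all(payload, len);      /* complete payload follows */
    }
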
Do you really suggest to send extra header for each chunk of data?
Please notice that chunk can be as small as one message: dozen of bytes
because libpq is used for client-server communication with request-reply
pattern.
I want you to think critically about your design.  I *really* don't want
to design it for you - I have enough stuff to be doing.  But again, the
design I gave you doesn't necessarily need that - you just need to
properly buffer incomplete data.
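
A sketch of such buffering, assuming the length-prefixed framing above;
the names and the fixed buffer size are illustrative, and overflow checks
are omitted for brevity:

    #include <stdint.h>
    #include <string.h>
    #include <arpa/inet.h>

    typedef struct
    {
        char        data[65536];    /* received, not yet decompressed */
        uint32_t    used;           /* number of valid bytes in data[] */
    } FrameBuf;

    /*
     * Append newly received bytes; return 1 if a whole frame is now
     * buffered, 0 if we simply have to wait for the next read.
     */
    static int
    frame_ready(FrameBuf *fb, const char *in, uint32_t inlen)
    {
        uint32_t    framelen;

        memcpy(fb->data + fb->used, in, inlen);
        fb->used += inlen;

        if (fb->used < 4)
            return 0;               /* length header itself incomplete */
        memcpy(&framelen, fb->data, 4);
        return fb->used >= 4 + ntohl(framelen);
    }

The caller keeps whatever secure_read returned and simply reports "not
ready yet" until a full frame has accumulated, instead of issuing more
reads itself.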

Right now secure_read may return any number of available bytes. But when streaming compression is used, it can happen that the number of available bytes is not enough to perform decompression. This is why we may need to try to fetch an additional portion of data. This is how zpq_stream works now.
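
A sketch of that behavior (not the actual zpq_stream code); the stream
type and the zs_*() helpers are illustrative stand-ins for the real
zstd/zlib streaming calls:

    #include <stddef.h>
    #include <sys/types.h>

    typedef struct Port Port;       /* opaque: PostgreSQL connection */

    typedef struct
    {
        Port       *port;           /* underlying connection */
        char        in_buf[8192];   /* compressed bytes off the wire */
    } StreamSketch;

    extern ssize_t secure_read(Port *port, void *ptr, size_t len);
    extern size_t zs_decompress(StreamSketch *zs, void *out, size_t size);
    extern void zs_feed(StreamSketch *zs, const char *in, size_t len);

    static ssize_t
    zpq_read_sketch(StreamSketch *zs, void *buf, size_t size)
    {
        for (;;)
        {
            size_t      produced = zs_decompress(zs, buf, size);
            ssize_t     rc;

            if (produced > 0)
                return produced;    /* got decompressed bytes */

            /* not enough input to decode anything: fetch more */
            rc = secure_read(zs->port, zs->in_buf, sizeof(zs->in_buf));
            if (rc <= 0)
                return rc;          /* error or EOF from the transport */
            zs_feed(zs, zs->in_buf, (size_t) rc);
        }
    }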

I do not understand how it could be implemented differently, or what is wrong with the current implementation.

Frankly speaking, I do not completely understand the source of your
concern.  My primary idea was to preserve the behavior of the libpq
functions as much as possible, so there is no need to rewrite all the
places in the Postgres code where they are used.  It seems to me that I
succeeded in reaching this goal.  When compression is enabled, the
zpq_stream functions (zpq_read/write) are used instead of
(pq)secure_read/write and in turn use them to fetch/send the raw data.
I do not see any flaws, encapsulation violations, or other problems in
such a solution.
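
A sketch of that substitution; the PortSketch type, the zpq field, and
the exact signatures are illustrative, not the patch's actual code:

    #include <stddef.h>
    #include <sys/types.h>

    typedef struct ZpqStream ZpqStream;    /* opaque stream state */

    typedef struct
    {
        ZpqStream  *zpq;    /* non-NULL once compression is negotiated
                             * (illustrative; the real Port has more) */
    } PortSketch;

    extern ssize_t secure_read_raw(PortSketch *port, void *buf, size_t len);
    extern ssize_t zpq_read(ZpqStream *zs, void *buf, size_t len);

    static ssize_t
    transport_read(PortSketch *port, void *buf, size_t len)
    {
        if (port->zpq != NULL)                  /* compression is on */
            return zpq_read(port->zpq, buf, len);
        return secure_read_raw(port, buf, len); /* unchanged path */
    }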

So before discussing alternative ways of embedding compression in
libpq, I want to understand what's wrong with this approach.
You're destroying the existing model for no reason.

Why? Sorry, I really do not understand why adding compression in this way breaks the existing model.
Can you please explain it to me once again?

If you needed to, I
could understand some argument for the way you've done it, but what I've
tried to tell you is that you don't need to do so.  It's longer this
way, and it *significantly* complicates the (already difficult to reason
about) connection state machine.

I get that rewriting code can be obnoxious, and it feels like a waste of
time when we have to do so.  (I've been there; I'm on version 19 of my
postgres patchset.)

I am not against rewriting code many times, but first I need to understand the problem that has to be solved.

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

