On 2020-10-20 12:24, Dave Cramer wrote:
    Finally, we could do it on a best-effort basis.  We use binary format
    for registered types, until there is some invalidation event for the
    type, at which point we revert to default/text format until the end of
    a session (or until another protocol message arrives re-registering
    the type).
Does the driver tell the server what registered types it wants in binary?

Yes, the driver tells the server, "whenever you send these types, send them in binary" (all other types continue to be sent in text).
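
For comparison: with the current protocol a driver can only express this per statement, because the Bind message carries result-format codes per column rather than per type, so the driver first has to learn the result column types (e.g. from a prior Describe) and then map its own set of "binary-wanted" type OIDs onto per-column codes.  A rough sketch of that mapping, purely for illustration (not pgjdbc's actual code):

import java.util.Set;

// Illustrative only: derive per-column result-format codes for a Bind
// message from the type OIDs reported by a prior Describe/RowDescription.
// Format code 0 = text, 1 = binary.
final class ResultFormats {

    static short[] forColumns(int[] columnTypeOids, Set<Integer> binaryOids) {
        short[] codes = new short[columnTypeOids.length];
        for (int i = 0; i < columnTypeOids.length; i++) {
            codes[i] = (short) (binaryOids.contains(columnTypeOids[i]) ? 1 : 0);
        }
        return codes;
    }
}

A one-time, session-level registration like the one discussed below would let the server make the same OID-to-format decision itself, so the driver no longer needs to know the result types up front.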

    This should work, because the result row descriptor contains the
    actual format type, and there is no guarantee that it's the same one
    that was requested.

    So how about that last option?  I imagine a new protocol message, say,
    TypeFormats, that contains a number of type/format pairs.  The message
    would typically be sent right after the first ReadyForQuery, and gets
    no response.
This seems a bit hard to control. How long do you wait for no response?

In this design, you don't need a response.
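
Part of why no response is needed: the RowDescription message already carries a per-field format code (0 = text, 1 = binary), so the driver can always decode each column as whatever format the server actually used, regardless of whether the registration took effect.  A minimal, illustrative sketch of pulling those codes out of a RowDescription body (field order as in the protocol documentation, not pgjdbc's actual code):

import java.nio.ByteBuffer;

// Illustrative only: read the per-field format codes out of a
// RowDescription ('T') message body, so each column can be decoded as
// whatever the server actually chose, not as whatever was requested.
final class RowDescriptionFormats {

    static short[] formatCodes(ByteBuffer body) {
        int fieldCount = Short.toUnsignedInt(body.getShort());
        short[] formats = new short[fieldCount];
        for (int i = 0; i < fieldCount; i++) {
            skipCString(body);             // field name (null-terminated)
            body.getInt();                 // table OID
            body.getShort();               // column attribute number
            body.getInt();                 // data type OID
            body.getShort();               // data type size
            body.getInt();                 // type modifier
            formats[i] = body.getShort();  // format code: 0 = text, 1 = binary
        }
        return formats;
    }

    private static void skipCString(ByteBuffer buf) {
        while (buf.get() != 0) {
            // advance past the null-terminated field name
        }
    }
}

If the server reverted to text after an invalidation event, the format codes simply come back as 0 and decoding falls back accordingly.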

    It could also be sent at any other time, but I expect that to
    be less used in practice.  Binary format is used for registered
    types if
    they have binary format support functions, otherwise text continues to
    be used.  There is no error response for types without binary support.
    (There should probably be an error response for registering a type that
    does not exist.)

I'm not sure we (pgjdbc) want all types with binary support functions sent automatically.  It turns out that decoding binary is sometimes slower than decoding text, and the on-wire overhead isn't significant.  Timestamps/dates with time zone are also interesting, as the binary output does not include the time zone.

In this design, you pick the types you want.
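
To make "pick the types you want" concrete, here is a purely hypothetical sketch of building such a TypeFormats message.  The message does not exist, and the layout (message type byte, Int32 length, Int16 pair count, then Int32 type OID / Int16 format code per pair) is only an assumption modeled on the existing message framing:

import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Hypothetical sketch only: TypeFormats is a proposed message, not part of
// the protocol.  This shows one plausible encoding of "a number of
// type/format pairs" using the usual framing (type byte, Int32 length
// including itself, then the payload).
final class TypeFormatsSketch {

    // Placeholder message type byte; an actual proposal would have to
    // pick an unused one.
    private static final byte TYPE_FORMATS = '?';

    static byte[] encode(int[] typeOids, short format) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        DataOutputStream payload = new DataOutputStream(buf);
        payload.writeShort(typeOids.length);   // number of pairs
        for (int oid : typeOids) {
            payload.writeInt(oid);             // type OID
            payload.writeShort(format);        // 0 = text, 1 = binary
        }
        byte[] body = buf.toByteArray();

        ByteArrayOutputStream msg = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(msg);
        out.writeByte(TYPE_FORMATS);
        out.writeInt(4 + body.length);         // length includes itself
        out.write(body);
        return msg.toByteArray();
    }
}

A driver could then register only the OIDs whose binary form it actually wants (say bytea, int8, float8) and keep timestamps in text until the time zone concern above is resolved.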

--
Peter Eisentraut              http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

