Tom Lane wrote:
Barry Lind <[EMAIL PROTECTED]> writes:

One addition I would personally like to see (it comes up in my application code) is the ability to detect whether the server is big endian or little endian. When using binary cursors this is necessary in order to read int data.


Actually, my hope is to eliminate that business entirely by
standardizing the on-the-wire representation for binary data; note the
reference to send/receive routines in the original message.  For integer
data this is simple enough: network byte order will be it.  I'm not sure
yet what to do about float data.


Great.
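For what it is worth, once integer fields arrive in network byte order the client side becomes trivial on any platform. Something along these lines is all the driver should need for an int4 field (a rough sketch only; the byte array is just an illustration, not real protocol framing):

import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class BinaryIntDecoder {
    // Decode a 4-byte int4 value transmitted in network byte order
    // (big-endian), independent of the server's native endianness.
    public static int decodeInt4(byte[] fieldBytes) {
        return ByteBuffer.wrap(fieldBytes)
                         .order(ByteOrder.BIG_ENDIAN) // network byte order
                         .getInt();
    }

    public static void main(String[] args) {
        // 0x000004D2 == 1234 in network byte order
        byte[] wire = { 0x00, 0x00, 0x04, (byte) 0xD2 };
        System.out.println(decodeInt4(wire)); // prints 1234
    }
}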



2) Better support for domains. Currently the jdbc driver is broken with regard to domains (although no one has reported this yet). The driver will treat a datatype that is a domain as an unknown/unsupported datatype. It would be great if the T response included the 'base' datatype for a domain attribute, so that the driver would know which parsing routines to call to convert to/from the text representation the backend expects.


I'm unconvinced that we need do this in the protocol, as opposed to
letting the client figure it out with metadata inquiries.  If we should,
I'd be inclined to just replace the typeid field with the base typeid,
and not mention the domain to the frontend at all.  Comments?


I don't have a strong opinion on this one. I can live with current functionality. It isn't too much work to look up the base type.
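For anyone curious, the lookup the driver can do today is roughly the following (a sketch only; the class and method names are made up):

import java.sql.*;

public class DomainBaseType {
    // Look up the base type of a domain from pg_type; returns null if the
    // given name is not a domain (typtype = 'd').
    public static String baseTypeName(Connection conn, String domainName)
            throws SQLException {
        String sql =
            "SELECT bt.typname " +
            "  FROM pg_catalog.pg_type d " +
            "  JOIN pg_catalog.pg_type bt ON bt.oid = d.typbasetype " +
            " WHERE d.typname = ? AND d.typtype = 'd'";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setString(1, domainName);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getString(1) : null;
            }
        }
    }
}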



So I would request the ability of the client to set a max rows parameter for query results. If a query were to return more than the max number of rows, the client would be given a handle (essentially a cursor name) that it could use to fetch additional sets of rows.


How about simply erroring out if the query returns more than X rows?

This shouldn't be an error condition. I want to fetch all of the rows; I just don't want to have to buffer them all in memory. Consider the following example. Select statement #1 is 'select id from foo', statement #2 is 'update bar set x = y where foo_id = ?'. The program logic issues statement #1, iterates through the results, and issues statement #2 for some of those results. If statement #1 returns a large number of rows, the program can run out of memory because all of the rows from #1 need to be buffered. What would be nice is if the protocol allowed getting some rows from #1, but not all, so that the connection could be used to issue some #2 statements.
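Today the closest approximation is an explicit cursor inside a transaction, roughly like the sketch below (the table names and batch size are just the ones from my example); protocol-level support would avoid the extra FETCH round trips:

import java.sql.*;

public class BatchedFetch {
    // Fetch 'select id from foo' in batches so the client never has to
    // buffer the whole result set; requires an open transaction.
    public static void process(Connection conn) throws SQLException {
        conn.setAutoCommit(false); // cursors only live inside a transaction
        try (Statement st = conn.createStatement()) {
            st.execute("DECLARE foo_cur CURSOR FOR SELECT id FROM foo");
            try (PreparedStatement upd = conn.prepareStatement(
                    "UPDATE bar SET x = y WHERE foo_id = ?")) {
                boolean more = true;
                while (more) {
                    more = false;
                    try (ResultSet rs = st.executeQuery("FETCH 100 FROM foo_cur")) {
                        while (rs.next()) {
                            more = true;
                            upd.setInt(1, rs.getInt(1));
                            upd.executeUpdate(); // statement #2 on the same connection
                        }
                    }
                }
            }
            st.execute("CLOSE foo_cur");
        }
        conn.commit();
    }
}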


4) Protocol-level support for PREPARE. In jdbc and most other interfaces there is support for parameterized SQL. If you want to take advantage of the performance benefits of reusing parsed plans, you have to use the PREPARE SQL statement.


This argument seems self-contradictory to me.  There is no such benefit
unless you're going to re-use the statement many times.  Nor do I see
how pushing PREPARE down to the protocol level will create any
improvement in its performance.

There is a benefit if you do reuse the statement multiple times. The performance problem is the minimum of two round trips to the server that are required. A protocol solution would be to allow the client to send multiple requests to the server at one time. But as I type that, I realize it can already be done by sending multiple semicolon-separated SQL commands at once. So I probably have everything I need for this already: I can just queue up the DEALLOCATE calls and piggyback them onto the next real call to the server.
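Roughly what I have in mind for the driver is a small queue like this (a sketch; the class and method names are made up). A single execute of the combined string should then carry the DEALLOCATEs and the real query in one round trip:

import java.util.*;

public class DeallocateQueue {
    // Plan names whose DEALLOCATE has been deferred rather than sent
    // immediately (which would cost an extra round trip).
    private final List<String> pending = new ArrayList<>();

    public void queueDeallocate(String planName) {
        pending.add(planName);
    }

    // Prefix any queued DEALLOCATEs onto the next real SQL statement so
    // they travel to the server in the same round trip.
    public String piggyback(String sql) {
        if (pending.isEmpty()) {
            return sql;
        }
        StringBuilder sb = new StringBuilder();
        for (String name : pending) {
            sb.append("DEALLOCATE ").append(name).append("; ");
        }
        pending.clear();
        return sb.append(sql).toString();
    }
}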


So what I would like to see is the ability for the client to set a MAX VALUE size parameter. The server would send up to this amount of data for any column. If the value was longer than MAX VALUE, the server would respond with a handle that the client could use to get the rest of the value (in chunks of MAX VALUE) if it wanted to.


I don't think I want to embed this in the protocol, either; especially
not when we don't have even the beginnings of backend support for it.
I think such a feature should be implemented and proven as callable
functions first, and then we could think about pushing it down into the
protocol.


That is fine.
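In the meantime the driver (or the application) can chunk large values itself with substring(), along these lines (a sketch; the table, column, and chunk size are illustrative):

import java.io.ByteArrayOutputStream;
import java.sql.*;

public class ChunkedFetch {
    // Pull one large bytea column in fixed-size pieces instead of reading
    // the whole value in a single result row.
    public static byte[] fetchInChunks(Connection conn, int rowId, int chunkSize)
            throws SQLException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        String sql = "SELECT substring(payload from ? for ?) FROM blobs WHERE id = ?";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            for (int offset = 1; ; offset += chunkSize) { // substring() is 1-based
                ps.setInt(1, offset);
                ps.setInt(2, chunkSize);
                ps.setInt(3, rowId);
                try (ResultSet rs = ps.executeQuery()) {
                    if (!rs.next()) break;                 // row not found
                    byte[] chunk = rs.getBytes(1);
                    if (chunk == null || chunk.length == 0) break;
                    out.write(chunk, 0, chunk.length);
                    if (chunk.length < chunkSize) break;   // last piece
                }
            }
        }
        return out.toByteArray();
    }
}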



6) Better over-the-wire support for bytea. The current escaping of binary data (e.g. \000) results in a significant expansion in the size of the data transmitted. It would be nice if bytea data didn't result in a two-to-three-times expansion.


AFAICS the only context where this could make sense is binary
transmission of parameters for a previously-prepared statement.  We do
have all the pieces for that on the roadmap.

Actually, it is the select of binary data that I was referring to. Are you suggesting that the over-the-wire format for bytea in a query result will be binary (instead of the ASCII-encoded text format that exists today)?
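For context, this is roughly the decoding the driver has to do today on the escaped text form of bytea, which is where the expansion comes from (simplified sketch):

import java.io.ByteArrayOutputStream;

public class ByteaTextDecoder {
    // Decode the escaped text representation of bytea, where a
    // non-printable byte arrives as \nnn (four characters) and a
    // backslash as \\ (two characters) -- hence the expansion.
    public static byte[] decode(String wire) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        for (int i = 0; i < wire.length(); ) {
            char c = wire.charAt(i);
            if (c != '\\') {
                out.write(c);      // printable byte, sent as-is
                i++;
            } else if (wire.charAt(i + 1) == '\\') {
                out.write('\\');   // escaped backslash
                i += 2;
            } else {
                // three octal digits, e.g. \000
                out.write(Integer.parseInt(wire.substring(i + 1, i + 4), 8));
                i += 4;
            }
        }
        return out.toByteArray();
    }
}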

regards, tom lane


I am looking forward to all of the protocol changes.


thanks,
--Barry



