Tom Lane wrote on Wed, 12.03.2003 at 18:19:
> Barry Lind <[EMAIL PROTECTED]> writes:
> > One addition I would personally like to see (it comes up in my apps 
> > code) is the ability to detect whether the server is big endian or 
> > little endian.  When using binary cursors this is necessary in order to 
> > read int data.
> Actually, my hope is to eliminate that business entirely by
> standardizing the on-the-wire representation for binary data; note the
> reference to send/receive routines in the original message.  For integer
> data this is simple enough: network byte order will be it.  I'm not sure
> yet what to do about float data.

Use IEEE floats, or just report the representation in the startup packet.

The X11 protocol does this for all data, even integers - the client
expresses a wish for what it wants and the server tells it what it gets
(so two Intel boxes need not convert to "network byte order" at both
ends).
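For integer data the network-byte-order convention is simple to work with, and IEEE 754 floats can travel the same way. A minimal sketch in Python (the value 0x12345678 is just an arbitrary example, not anything from the protocol):

```python
import struct

# Pack an int4 in network (big-endian) byte order, as it would appear
# on the wire; '!' selects network byte order in the struct module.
wire_int = struct.pack('!i', 0x12345678)
assert wire_int == b'\x12\x34\x56\x78'

# An IEEE 754 float8 can be shipped the same way: pack in network
# order, unpack on the other end, and the value round-trips exactly
# for representable values like 1.5.
wire_float = struct.pack('!d', 1.5)
value, = struct.unpack('!d', wire_float)
assert value == 1.5
```

A big-endian and a little-endian host both produce the same bytes here; only hosts whose native order differs from network order pay for a byte swap.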

> > 2) Better support for domains.  Currently the jdbc driver is broken with 
> > regards to domains (although no one has reported this yet).  The driver 
> > will treat a datatype that is a domain as an unknown/unsupported 
> > datatype.  It would be great if the T response included the 'base' 
> > datatype for a domain attribute so that the driver would know what 
> > parsing routines to call to convert to/from the text representation the 
> > backend expects.
> I'm unconvinced that we need do this in the protocol, as opposed to
> letting the client figure it out with metadata inquiries.  If we should,
> I'd be inclined to just replace the typeid field with the base typeid,
> and not mention the domain to the frontend at all.  Comments?
> > So I would request the ability of the client to set a max rows parameter 
> >    for query results.  If a query were to return more than the max 
> > number of rows, the client would be given a handle (essentially a cursor 
> > name) that it could use to fetch additional sets of rows.
> How about simply erroring out if the query returns more than X rows?

Or just using prepare/execute - fetch - fetch - fetch ...
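The execute-then-fetch-in-batches pattern above can be modelled without any server at all; this illustrative Python generator yields what repeated `FETCH max_rows FROM <cursor>` calls would return (the function name and data are made up for the example):

```python
def fetch_in_chunks(rows, max_rows):
    """Yield successive batches of at most max_rows rows, the way
    repeated FETCH max_rows FROM <cursor> calls would return them."""
    for start in range(0, len(rows), max_rows):
        yield rows[start:start + max_rows]

# Client loop: execute once, then fetch until a short/empty batch.
batches = list(fetch_in_chunks(list(range(10)), 4))
assert batches == [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```

The client knows it is done when a batch comes back with fewer than max_rows rows, which avoids needing a separate "more rows exist" handle in the protocol.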

> > 4) Protocol level support of PREPARE.  In jdbc and most other 
> > interfaces, there is support for parameterized SQL.  If you want to take 
> > advantage of the performance benefits of reusing parsed plans you have 
> > to use the PREPARE SQL statement.
> This argument seems self-contradictory to me.  There is no such benefit
> unless you're going to re-use the statement many times.  Nor do I see
> how pushing PREPARE down to the protocol level will create any
> improvement in its performance.

I suspect that he actually means support for binary transmission of
parameters for a previously-prepared statement here.

> > So what I would like to see is the ability for the client to set a MAX 
> > VALUE size parameter.  The server would send up to this amount of data 
> > for any column.  If the value was longer than MAX VALUE, the server 
> > would respond with a handle that the client could use to get the rest of 
> > the value (in chunks of MAX VALUE) if it wanted to.
> I don't think I want to embed this in the protocol, either; especially
> not when we don't have even the beginnings of backend support for it.
> I think such a feature should be implemented and proven as callable
> functions first, and then we could think about pushing it down into the
> protocol.

IIRC, Oracle has such a feature in its support for Large Objects (the
LONG datatype). If the object data is longer than xxx bytes you need
specialized access to it.

Also, when stepping through with single-row fetches you will always get
handles for LONG objects; when fetching more than one row at a time
you get the raw data.

BTW, I'm not advocating such behaviour.
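The truncate-plus-handle scheme Barry describes could be sketched like this; purely illustrative Python, where the "handle" just carries the remainder (a real handle would name server-side state, not the data itself):

```python
def truncate_value(value, max_len):
    """Return at most max_len bytes of a column value, plus a handle
    for the rest when the value was longer than max_len; the handle
    is None when the whole value fit in the initial response."""
    if len(value) <= max_len:
        return value, None
    return value[:max_len], value[max_len:]

first, rest = truncate_value(b'x' * 10, 4)
assert first == b'xxxx'
assert rest == b'x' * 6

whole, handle = truncate_value(b'ok', 4)
assert whole == b'ok' and handle is None
```

The client would then fetch the remainder through the handle in further max_len-sized chunks, only when it actually wants the full value.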

