Tom Lane wrote:
> Shachar Shemesh <[EMAIL PROTECTED]> writes:
>> I'll reiterate - the problem is not that PG is exporting the internal
>> ARM FP format. The problem is that the server is exporting the internal
>> ARM FP format when the server is ARM, and the IEEE format when the
>> server is Intel. It's not the format, it's the inconsistency.
>
> This is not a bug, it's intentional. While IEEE-spec floats are
> reasonably interchangeable these days (modulo the endianness issue),
> other FP formats tend to have different ranges, special cases, etc.
> If we try to force them to IEEE spec we may have problems with overflow,
> loss of precision, who knows what.

Yes, but if we do not, then we have a data interchange library that is
useless for data interchange. I think overflow and precision loss are
preferable.
Please remember that I'm only trying to help PostgreSQL here. I have a
spec to work with on the outside, and I'm more than willing to do
whatever is necessary (see the STRRRANGE date conversion code) to adapt
whatever PG throws my way to the no-less-strange representation expected
of me. That's what I do as a driver hacker.

Sometimes the specs don't help me. Windows' notion of "timezone-free
timestamps" is nothing short of a disgrace, and some of the hacks needed
around that issue are, well, hacks. I don't come complaining here,
because that has nothing to do with PG; it's bad design on the other of
the two ends that a driver has to make meet.

But sometimes, like now, PG puts me in an impossible position. You are
essentially telling me: "you will get the numbers in an unknown format,
you will have no way of knowing whether you got them in a strange format
or not, nor will you have any docs on what that format is going to be."
That is no way to treat your driver developers.

>> Like I said elsewhere, I'm willing to write a patch to "pq_sendfloat8"
>> (and probably "pq_getmsgfloat8" too) to make sure it does the conversion
>> on ARM platforms. Hell, I think I can even write it portable enough so
>> that it will work on all non-IEEE platforms
>
> Really? Will it be faster

Absolutely. Do you honestly believe that turning a 64-bit binary number
into a 40-something-byte decimal string will be quicker than turning a
64-bit binary number into another 64-bit number? For one thing, I really
doubt that my technique will require division, modulo or, in fact, any
math operations at all. It will likely be done with a few bit shifts,
and that's it.

I also find it strange, though, that you berate me for using the binary
rather than the text format, and then complain about speed. The binary
interface is part of what makes OLE DB faster than ODBC.

> and more reliable than conversion to text?

Well, that depends on how you define "more reliable".
If you define it to mean "exactly represents what happens in the server
internals", then the answer is "no". If you define it to mean "makes
more sense to the client, and has a better chance of producing results
that more closely approximate the right number than the current code
does", then the answer is a definite "yes".

> (In this context "reliable" means "can reproduce the original datum
> exactly when transmitted back".)

Who cares? If you are using the same function for binary communication
inside the server and for communication with clients (or, for that
matter, with another server), then there is something wrong with your
design. What are the "send" functions used for, besides server-to-client
communication, anyway?

You are asking me to treat the binary data as opaque. Well, I'll counter
with a question: what good is that to me? Please note that the current
code is useless for communicating binary data between two servers, even
if they are guaranteed to be of the same version! How much less reliable
can you get?

Please, give your own interface designers something to work with. Your
attitude essentially leaves me out in the cold.

> regards, tom lane

Shachar
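P.S. To make the "few bit operations, no math" claim concrete, here is a
minimal sketch of the kind of conversion I have in mind. It assumes the
old ARM FPA "mixed-endian" double layout, where the two 32-bit words of
a double are stored in the opposite order from a standard little-endian
IEEE 754 double, while bytes within each word stay little-endian. The
helper name is mine, not anything in the PG tree:

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical helper (not in the PG tree): convert between the old ARM
 * FPA "mixed-endian" double layout and standard little-endian IEEE 754
 * by swapping the two 32-bit words of the 8-byte buffer in place.
 * Bytes within each word are already in the right order, so no
 * arithmetic on the value is needed at all, and the conversion is its
 * own inverse - the same call works in pq_sendfloat8 and
 * pq_getmsgfloat8 directions. */
static void arm_fpa_word_swap(unsigned char buf[8])
{
    uint32_t lo, hi;

    memcpy(&lo, buf, 4);      /* first 32-bit word of the buffer  */
    memcpy(&hi, buf + 4, 4);  /* second 32-bit word               */
    memcpy(buf, &hi, 4);      /* write them back in swapped order */
    memcpy(buf + 4, &lo, 4);
}
```

Note that this is a sketch of the word-order half of the problem only;
byte-order within each word would still go through the usual endianness
handling on the wire.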