Neil Conway wrote:

My point is that since they are different types, the language itself will need to provide some mechanism for doing this type conversion _anyway_. 'int' and 'long' are used throughout the backend APIs, so I don't see the gain in only converting the SPI functions over to using int32/int64.

Mainly because the mapping is easier to do when the semantics of the types involved never change. There's also a performance issue: I must map a C/C++ long to a 64-bit value at all times, which yields a suboptimal solution on platforms where long is only 32 bits.


An API should ideally hide the internals of the underlying code so I'm not sure this is a valid reason.


Well, the executor allows you to specify a 64-bit count on platforms where "long" is 64-bit, and a 32-bit count otherwise.

Exactly. Why should a user of the SPI API be exposed to, or even concerned with, this at all? As an application programmer you couldn't care less. You want your app to perform equally well on all platforms, without surprises. IMHO, PostgreSQL should make a decision on whether the SPI functions support 32-bit or 64-bit sizes for result sets, and the API should reflect that choice. Having the maximum number of rows depend on the platform port is bad design.


Regards,
Thomas Hallgren

