Re: [HACKERS] postgres 8.3.8 and Solaris 10_x86 64 bit problems?
> # ./pg_ctl
> ld.so.1: pg_ctl: fatal: relocation error: R_AMD64_32: file /usr/local/postgres64/lib/libpq.so.5: symbol (unknown): value 0xfd7fff1cf210 does not fit
> Killed

symbol (unknown). Can you turn on debugging symbols? Knowing the symbol may point to a library that was not compiled properly.

> So I ran 'ldd pg_ctl' to see if everything is linking OK, and I'm wondering if there is a problem with libpq.so.5 as mentioned in the original error:
>
> # file /usr/local/postgres64/lib/libpq.so.5
> /usr/local/postgres64/lib/libpq.so.5: ELF 64-bit LSB dynamic lib AMD64 Version 1 [SSE CMOV], dynamically linked, not stripped
>
> OK, so that looks good. Maybe there is a library or header libpq needs that I'm missing in 64-bit?
>
> # ldd /usr/local/postgres64/lib/libpq.so.5

Are you sure that all libraries referenced by pg_ctl, and all libraries referenced by libpq.so, were built as 64-bit using PIC? Are you linking with any static library that may contain 32-bit objects? That error is most commonly a PIC or architecture mismatch.

--
Andrew Chernow
eSilo, LLC
every bit counts
http://www.esilo.com/

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
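One way to chase down the mismatched object is to check the ELF class of everything pg_ctl links against. A sketch, assuming the install path from the report above and the usual "lib => /path" ldd output format (on stock Solaris 10 you may need /usr/xpg4/bin/grep for grep -q):

```shell
# Walk every library pg_ctl links against and flag anything that is
# not an ELF 64-bit object; R_AMD64_32 relocation errors usually
# point at one of these, or at a 32-bit object inside a static lib.
ldd /usr/local/postgres64/bin/pg_ctl \
  | awk '$3 ~ /^\// {print $3}' \
  | while read -r lib; do
      file "$lib" | grep -q 'ELF 64-bit' || echo "suspect: $lib"
    done
```

Anything printed as "suspect" is a candidate for the library that was not rebuilt as 64-bit.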
Re: [HACKERS] postgres 8.3.8 and Solaris 10_x86 64 bit problems?
> I'll hack the makefile and give it a shot.

No need to hack it; set CFLAGS when you run configure:

  # CFLAGS=-m64 ./configure (options)
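Spelled out for a 64-bit Solaris 10/x86 build, the same idea looks like this. The compiler flag assumes gcc (Sun Studio would use -xarch=amd64 instead), and the prefix is the one used elsewhere in this thread:

```shell
# Configure a 64-bit build without editing any makefiles.
# Guarded so this sketch only runs from a PostgreSQL source tree.
if [ -x ./configure ]; then
    CFLAGS=-m64 LDFLAGS=-m64 ./configure --prefix=/usr/local/postgres64
else
    echo "run this from the top of the PostgreSQL source tree"
fi
```

Passing LDFLAGS as well keeps the link step consistent with the compile step, which matters for exactly the relocation errors discussed above.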
Re: [HACKERS] libpq - extending PQexecParams/PQexecPrepared to specify resultFormat for individual result columns
Ivo Raisr wrote:

> Hi guys, I hacked PostgreSQL 8.4.1 libpq and added a new function to specify resultFormat for individual result columns. Are you interested in a patch? (http://www.postgresql.org/docs/8.4/interactive/libpq-exec.html says: "There is not currently a provision to obtain different result columns in different formats, although that is possible in the underlying protocol." So I made this possible.)

Can you explain the use case for this? Have you investigated libpqtypes?
Re: [HACKERS] Listen / Notify rewrite
> 2. adds the possibility to specify a payload parameter, i.e. executing in SQL NOTIFY foo 'payload'; and 'payload' will be delivered to any listening backend.
>
> Thank you for implementing this; LISTEN/NOTIFY without a payload has been a major problem to work around for me.

+1
Re: [HACKERS] Listen / Notify rewrite
> + /*
> +  * This function is executed for every notification found in the queue in order
> +  * to check if the current backend is listening on that channel. Not sure if we
> +  * should further optimize this, for example convert to a sorted array and
> +  * allow binary search on it...
> +  */
> + static bool
> + IsListeningOn(const char *channel)

I think a bsearch would be needed. Very busy servers that make heavy use of notifies would pay quite a penalty otherwise.
Re: [HACKERS] Listen / Notify rewrite
Joachim Wieland wrote:
> On Thu, Nov 12, 2009 at 1:04 AM, Andrew Chernow a...@esilo.com wrote:
>> I think a bsearch would be needed. Very busy servers that make heavy use of notifies would pay quite a penalty.
> In such an environment, how many relations/channels is a backend typically listening to? Do you have average / maximal numbers?

We have a system where many libpq clients, ~2000 - 4000 per server (depending on hardware), maintain a persistent backend connection. Each connection listens for notifies: LISTEN 'client_xxx'. There are maybe 10 different reasons why a NOTIFY 'client_xxx' is fired. Sometimes notifies are broadcast to all client connections, or to just a portion of them. The goal is real-time messaging to large groups of computers/devices. Past 4000, the problem is distributed to a second, third, etc. server.
Re: [HACKERS] Listen / Notify rewrite
> Premature optimization is the root of all evil ;-). Unless you've done some profiling and can show that this is a hot spot, making it more complicated isn't the thing to be doing now.

I'm thinking of how our system uses/abuses notifies, and began wondering whether several thousand backends listening with a large queue would perform decently behind a linear search. At this point I have no data either way; only an assumption based on being burnt by sequential scans in the past ;)
Re: [HACKERS] Listen / Notify rewrite
> I thought of a compromise: add the number of times a notification was generated (coalesced count+1) to the callback data. That would satisfy any backwards compatibility concerns and my use case too!

If you are suggesting that the server poke data into the notifier's opaque payload, I vote no. Maybe the NOTIFY command could include a switch to enable this behavior. No syntax suggestions at this point.
Re: [HACKERS] Listen / Notify rewrite
>> 2. The payload parameter is optional. A notifying client can either call NOTIFY foo; or NOTIFY foo 'payload';. The length of the payload is currently limited to 128 characters... Not sure if we should allow longer payload strings...
>
> Might be a good idea to make the max the same as the max length for prepared transaction GUIDs? Not sure anyone would be shipping those around, but it's a pre-existing limit of about the same size.

I don't see any reason to impose such a small limit on payload size. Surely some limit must exist, but 128 characters seems awfully small. I already have a few places in mind that would require more bytes. Why not 8K, 64K, 256K, 1M or even more? Is there some other factor in play forcing this limitation?
Re: [HACKERS] Listen / Notify rewrite
>> 2. The payload parameter is optional. A notifying client can either call NOTIFY foo; or NOTIFY foo 'payload';. The length of the payload is currently limited to 128 characters... Not sure if we should allow longer payload strings...
>
>> Might be a good idea to make the max the same as the max length for prepared transaction GUIDs? Not sure anyone would be shipping those around, but it's a pre-existing limit of about the same size.
>
> Yes, sounds reasonable to have the same limit for user-defined identifiers...

[..begging..] Can this be increased significantly? I don't get it; is there any technical reason to make the limit so small? This drastically reduces the usefulness of the payload. I've wanted this feature for quite some time, and it is quite disappointing that I could not even use it because it is unjustifiably limited. One use case I need is making the payload an absolute path, which saves us a round trip (commonly internet latency) and a query in a section of the system that is extremely performance sensitive. That sure isn't going to fit in 128 bytes. I'm sure I'm not the only one who finds this limit too small; I can almost guarantee complaints would come in if it were released that way.
Re: [HACKERS] Listen / Notify rewrite
> What advantage is there in limiting it to a tiny size? This is a 'payload' after all... an arbitrary data block.

Looking at the patch, I noticed the payload structure (AsyncQueueEntry) is fixed length and designed to lay into QUEUE_PAGESIZE (set to BLCKSZ) sized pages. Hmm. Looks like the limitation comes from slru. The true payload limit is (8K - struct members) the way this is implemented.
Re: [HACKERS] Listen / Notify rewrite
> Now you might say that yeah, that's the point, we're trying to enable using NOTIFY in a different style. The problem is that if you are trying to use NOTIFY as a queue, you will soon realize that it has the wrong semantics for that --- in particular, losing notifies across a system crash or client crash is OK for a condition notification, not so OK for a message queue. The difference is that the former style assumes that the authoritative data is in a table somewhere, so you can still find out what you need to know after reconnecting. If you are doing messaging you are likely to think that you don't need any backing store for the system state.

I simply don't agree that the semantics have to change. You call it a queue; I call it session data. There is no reason why the documentation can't state that notifies may not be delivered due to crashes, so make sure to use persistent storage for any payload worth keeping post-session.
Re: [HACKERS] Listen / Notify rewrite
> However I share Greg's concerns that people are trying to use NOTIFY as a message queue which it is not designed to be.

When you have an established libpq connection waiting for notifies, it is not unreasonable to expect/desire a payload. ISTM the problem is that the initial design was half-baked. NOTIFY is event-driven, i.e. no polling!
Re: [HACKERS] Listen / Notify rewrite
> ...and we should stop. The world contains infinite amounts of lameness, but that's the world's problem, not ours. There is zero evidence that this feature is only useful for stupid purposes, and some evidence (namely, the opinions of esteemed community members) that it is useful for at least some non-stupid purposes.

+1. The unexpected application of a feature can be creative or innovative, which I firmly believe is something this community embraces. How many ways can a screwdriver be used... think MacGyver :) Determining whether it's creative vs. stupid would require an understanding of the context in which it was applied. For example, using our screwdriver to remove a splinter would be rather stupid, IMHO ;)
Re: [HACKERS] Listen / Notify rewrite
> ...spill to disk and need an efficient storage mechanism. The natural implementation of this in Postgres would be a table, not the slru. If [...]

This is what I think people's real problem is: the implementation becomes more complex when including payloads (larger ones even more so). I think it's a side-track to discuss queue vs. condition variables. Whether a notify is 20 bytes through the network or 8192 bytes doesn't change its design or purpose, only its size. Calling this a creeping feature is quite a leap.
Re: [HACKERS] Listen / Notify rewrite
>> Calling this a creeping feature is quite a leap.
>
> It's true that the real creep is having the payload at all rather than not having it.

Not having the payload at all is like Santa showing up without his bag of toys. Instead, you have to drive/fly to the North Pole, where he just came from, to get them.

> One person described stuffing the payload with the primary key of the record being invalidated. This means the requirements have just gone from holding at most some small fixed number of records bounded by the number of tables or other shared data structures to holding a large number of records bounded only by the number of records in their tables, which is usually much much larger. Now you're talking about making the payloads variable size, which means you need to do free space management within shared pages to keep track of how much space is free and available for reuse. So we've gone from a simple hash table of fixed size entries containing an oid or name datum, where we expect the hash table to fit in memory and a simple lru can handle old pages that aren't part of the working set, to something that's going to look a lot like a database table -- it has to handle reusing space in collections of variable size data and scale up to millions of entries. And I note someone else in the thread was suggesting it needed ACID properties, which makes space reuse even more complex and will need something like vacuum to implement it.

I think the original OP was close. The structure can still be fixed length, but maybe we can bump it to 8K (BLCKSZ)?
Re: [HACKERS] Listen / Notify rewrite
Tom Lane wrote:
> Greg Sabino Mullane g...@turnstep.com writes:
>> Talk of efficiency also seems silly here - using shared memory is already way more efficient than our current listen/notify system.
> Except that the proposed implementation spills to disk. Particularly if it has to have support for large payloads, it could very well end up being a lot SLOWER than what we have now.

True, but do you really consider it a common case that the notify system gets so bogged down that it starts to crawl? The problem would be the collective size of notify structures + payloads and whether that would fit in memory or not. This leads me to believe that the only safety in smaller payloads is *possibly* a smaller chance of bogging it down, but that all depends on the usage pattern of smaller vs. larger payloads, which is system specific.
Re: [HACKERS] Listen / Notify rewrite
> My original intention was to have the queue as a circular buffer where the size of the entries was variable, but possibly bounded. Certainly using fixed length records of large size seems somewhat wasteful.

Maybe we should do away with 'spill to disk' altogether and either hard-code an overflow behavior or make it a knob. Possible overflow behaviors could be: block until space is available, throw an error, or silently drop the notify. Could the size of the shared memory segment for notifications be made configurable? That would allow those with large payloads or a huge number of notifications to bump memory to avoid overflow cases. By removing the disk and making shmem usage configurable, I think the notify system would be flexible and could scale nicely. Another added benefit is that the payload limit could be much higher than previously considered, because memory/performance concerns are now in the hands of the DBA.

> Incidentally, I'd like to thank Joachim personally for having done this work,

+1
Re: [HACKERS] [PATCHES] libpq type system 0.9a
Greg Sabino Mullane wrote:
>> I think you should conduct a wider survey before you make that decision. In particular, I'd like to hear from driver writers like Greg Sabino Mullane and Jeff Davis, as well as regular libpq users.
>
> I can state that there would be almost zero chance this would ever be used by DBD::Pg, as it would seem to add overhead with no additional functionality over what we already have. Unless I'm misreading what it does and someone can make the case why I should use it.

This idea is for the libpq user, although driver writers could find it handy as well. Really, anyone who uses libpq directly; that's the real audience. I don't know what overhead Greg is referring to. How is DBD::Pg handling arrays of composites? Are you parsing text output? Wouldn't it be less overhead to avoid text parsing and transmit binary data?

> no additional functionality over what we already have

What about user-defined type registration, sub-classing user or built-in type handlers, and handling of all data types in binary? There is a lot of new functionality added by this patch to the existing libpq. I don't think the appropriate audience got a look at this; maybe it should be posted on the general or libpq lists. From my perspective as a long-time C coder, this made my application code cleaner, easier to maintain, and faster in many cases. It removed a lot of code that is now handled by this patch. I am not sure why Tom is worried about source code size; normally the concern is linked size. Code comments were never finished, as the library was changing so much to meet requests. Instead, we focused on providing API documentation and the overall idea (over 1000 lines), which changed much less than the implementation. I think the real issue is simply the wrong audience. It's the coder in the field making heavy use of libpq who would find this appealing, not really backend hackers. It is disappointing because I was excited to hear ideas from others, which never happened.
Re: [HACKERS] [PATCHES] libpq type system 0.9a
Tom Lane wrote:
> Better support for arrays and composites is certainly something that people might want, but the problem with this design is that it forces them to buy into a number of other decisions that they don't necessarily want.

What decisions are we forcing upon the libpq user? Most of the functionality is handled by about 3 functions (putf, getf, and paramexec). The difference is that our patch is not limited to handling only text arrays and composites. It can do it all, which we thought would have been a requirement to get approved. There is a performance boost to handling arrays and composites in binary, which we use a lot because there are no stored procedures (note: not trying to take a jab at stored procedures, just giving an example of how we use and abuse arrays and composites).
Re: [HACKERS] [PATCHES] libpq type system 0.9a
Martijn van Oosterhout wrote:
> How tight is the link to libpq? Could it exist as a separate library: libpqbin or something? Still in core, just only used by the people who want it.

I gave this a lot of thought and I do think we could abstract this. The idea is to compile it in or out. Add a --with-typesys option to configure, which could enable #ifdef LIBPQ_ENABLE_TYPESYS everywhere. If you don't specify --with-typesys, the API calls would still be there but would return ENOSYS, assign an error string or something. This preserves link compatibility. This would trim out the 50k everyone is worried about :) I'm sure there are other ways to accomplish this, but this one seems easiest and keeps it all centralized. Just like --with-openssl, except that loads libcrypto.so.
Re: [HACKERS] [PATCHES] libpq type system 0.9a
Andrew Chernow wrote:
> [proposal to compile the type system in or out via a --with-typesys configure option]

Forgot to say: there is stuff in PGconn, PGresult, PQclear, and PQfinish (maybe a couple of other places). A separate library would remove the ability to call PQexec followed by PQgetf, because the result object would no longer be aware of the typesys. You would need the separate library to wrap the result object or something: typesysResult = TypeSysGetResult(PQexec()); or you would need to wrap the libpq API calls: typesysResult = TypeSysExec();. Both are doable, but not nearly as slick as: res = PQexec(...); PQgetf(res, ...); PQclear(res);
Re: [HACKERS] [PATCHES] libpq type system 0.9a
Tom Lane wrote:
> Andrew Chernow [EMAIL PROTECTED] writes:
>> I gave this a lot of thought and I do think we could abstract this. The idea is to compile it in or out.
> [shrug...] So the packagers will compile it out, and you're still hosed, or at least any users who'd like to use it are.
>> Forgot to say: There is stuff in PGconn, PGresult, PQclear, PQfinish (maybe a couple other places).
> I didn't see anything there that seemed that it *had* to be inside libpq, and certainly not anything that couldn't be handled with a small hook or two.

How about using dlopen and dlsym? Separate the library, as many are suggesting, but leave the functions and the tid-bits in PGconn, PGresult, etc. in there (2 or 3 typedefs would be needed, but all data-type typedefs like PGtimestamp can be removed). You need something inside the PGconn and PGresult, even if just a function pointer that gets called if not NULL. Yank the pqtypes function bodies, and related helper functions, and replace them with something like below:

  int PQputf(...)
  {
  #ifdef HAVE_DLOPEN
      if (pqtypes->putf)
          return pqtypes->putf(...);
      return 1; /* failed, PGparam error = not enabled */
  #else
      return 1; /* failed, PGparam error = cannot load dynamically */
  #endif
  }

Then add a PQtypesEnable(bool), which would dlopen(libpqtypes) and dlsym the 10 or so functions we need. Any typedefs you need would be in libpqtypes.h rather than libpq-fe.h. You could disable it as well, which would unload libpqtypes. The default would obviously be disabled. The entire patch would be one small file with a couple of 1-line changes to PGconn and PGresult. This would remove all overhead, or at least 95% of it.

> couldn't be handled with a small hook or two.

Maybe; I have not thought about that. The problem is that I am trying to avoid having to keep track of two different APIs. The goal is that the libpq user is coding to the libpq API, not some other API like PGTypesExec.
Re: [HACKERS] [PATCHES] libpq type system 0.9a
Alvaro Herrera wrote:
> Andrew Chernow wrote:
>> Forgot to say: There is stuff in PGconn, PGresult, PQclear, PQfinish (maybe a couple other places).
> Maybe there's a way we can have libpqtypes adding calls into some hypothetical libpq hooks. So libpqtypes registers its hooks in _init() or some such, and it gets picked up automatically by any app that links to it.

Kinda what my last suggestion was. Some tid-bits need to reside in libpq, but very little. I was thinking PQtypesEnable(bool), which would dlopen libpqtypes and map all the functions needed. This would leave the function bodies of PQputf, PQgetf, PQparamExec, etc. as simple proxy functions to the dynamically loaded functions. This removes the bloat that people don't like right now, but still allows one to use libpq as the primary interface, rather than having to fiddle with libpq and some other API.
Re: [HACKERS] [PATCHES] libpq type system 0.9a
Andrew Dunstan wrote:
> Please make sure that any scheme you have along these lines will work on Windows DLLs too.

Of course: LoadLibrary(), GetProcAddress(), __declspec(dllexport).
Re: [HACKERS] [PATCHES] libpq type system 0.9a
Tom Lane wrote:
> Andrew Chernow [EMAIL PROTECTED] writes:
>> Kinda what my last suggestion was. Some tid-bits need to reside in libpq, but very little. I was thinking PQtypesEnable(bool), which would dlopen libpqtypes and map all functions needed. This would leave the function bodies of PQputf, PQgetf, PQparamExec, etc. as simple proxy functions to the dynamically loaded functions.
> This is still 100% backwards. My idea of a libpq hook is something that could be used by libpqtypes *and other things*. What you are proposing is something where the entire API of the supposed add-on is hard-wired into libpq. That's just bad design, especially when the adequacy of the proposed API is itself in question. When I say I'd accept some hooks into libpq, I mean some hooks that could be used by either libpqtypes or something that would like to do something roughly similar but with a different API offered to clients. The particular hook that you seem to mostly need is the ability to allocate some private memory associated with a particular PGconn object, and maybe also with individual PGresults, and then to be able to free that at PQclear or PQfinish. Try designing it like that.

My idea was not a response to your hook idea; it was different altogether. My idea is trying to create one interface where some parts need to be enabled (nothing wrong with that design; it is a plugin-like model). Your idea creates two interfaces where one of them can hook into the other. These are two different concepts. The question is: are you asking for two different interfaces, or are you just looking to minimize overhead? I thought it was the latter, which is why I proposed a plugin-like model; it removes bloat as well as maintaining a single interface. Nothing wrong with the design unless it doesn't meet your requirements.

Andrew
Re: [HACKERS] [PATCHES] libpq type system 0.9a
Andrew Chernow wrote:
> [Tom Lane's suggestion to design the hook as private memory attached to a PGconn or PGresult, freed at PQclear or PQfinish]

Your method would work as well. The only remaining issue is binary-distributed libpqs. Would Red Hat distribute a binary linked with libpqtypes? If not, you have the same issue of the end-user having to compile libpq, passing -lpqtypes to the linker. If Red Hat did link it, you run into the disk space complaint all over again. My suggestion was trying to work around this by dynamically loading the library: PQtypesEnable(TRUE). In this model, Red Hat doesn't even have to distribute libpqtypes.so (considering the disk space issue); it could easily be an additional download. All you need are some proxy functions inside libpq, with PQputf calling a dynamically loaded function. This addresses the disk space complaints and doesn't require a re-compile if an end-user wants to use it.

Andrew
Re: [HACKERS] [PATCHES] libpq type system 0.9a
Bruce Momjian wrote:
> I don't see requiring users to add -lpqtypes to use these functions as a problem. The idea is that the default libpq would have enough hooks that you could use it without modification.

You are correct; brain fart on my part. Not sure where my mind was at, but I retract those comments about Red Hat linking in libpqtypes. Sorry about that. I think the hook idea would work. I have to look at composites and arrays; they create their own result objects from scratch.

Andrew
Re: [HACKERS] [PATCHES] libpq type system 0.9a
I think a wise thing would be for the patch submitters to give a _basic_ outline of what PQparam is trying to accomplish --- I mean like 10-20 lines with a code snippet, or a code snippet with/without PQparam. I found this posting from August, 2007 but it isn't short/clear enough:

That is very old. There are tons of examples if you download the patch and look at some of the documentation .txt files.

FYI, it might be interesting to extend it to cover what ecpg wants --- we have been looking for a way to get the database-dependent parts of ecpg out of the ecpg directory and this might be a solution, _even_ if it makes your library larger.

I am not all too familiar with ecpg. Our idea is nothing more than data type converters; there are no type operators. Our goal is to leverage the binary parameterized API and offer the ability to get C types from an enhanced PQgetvalue, PQgetf. PQgetf understands the external binary format of the backend, so it can convert that to C types. It also understands how to convert text output from the backend. This allows existing applications using text results to drop in PQgetf here and there. You don't have to use PGparam or PQputf at all to use PQgetf.

PQputf allows one to pack parameters into a PGparam object that can be executed at another time. It knows how to take C types and convert them to the backend's external binary format. The patch always sends data in binary format, but can get results in text or binary.

Along the way, we devised a way for user-defined types to be used. This introduced PQregisterTypeHandler. This function allows one to sub-class existing type handlers, like extending %timestamp and making %epoch. It allows registering user-defined types; the type will be looked up in the backend and resolved properly. It also allows one to create aliases .. simple domains. Let's say you have a C type, typedef int user_id;. You could register an alias where user_id=int4. Now you can putf and getf %user_id. This abstracts code from possible integer width changes; you just change your typedef and registration code to user_id=int8.

Take a peek at the documentation files in the patch, root of tar *.txt files. They are very verbose and have loads of examples. The regression-test.c file has lots of use cases. latest: http://www.esilo.com/projects/postgresql/libpq/typesys-0.9b.tar.gz

To sum it up, the main concepts are PQputf, PQgetf and PQregisterTypeHandler. The other functions maintain a PGparam or exec/send one.

-- Andrew Chernow eSilo, LLC http://www.esilo.com/

-- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
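[Editor's note: the "user_id=int4" alias registration described above can be modeled with a minimal sketch in plain C. This is a hypothetical illustration of the concept, not code from the patch; every name here (register_alias, resolve_type, type_alias) is invented for the example.]

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical sketch: a tiny alias table modeling the patch's
 * "user_id=int4" registration idea, where a user-defined format
 * specifier resolves to a built-in handler name. */
#define MAX_ALIASES 16

struct type_alias
{
	char alias[32];		/* e.g. "user_id" */
	char base[32];		/* e.g. "int4" */
};

static struct type_alias aliases[MAX_ALIASES];
static int nalias = 0;

/* Register an alias; returns 0 on success, -1 if the table is full. */
int register_alias(const char *alias, const char *base)
{
	if (nalias >= MAX_ALIASES)
		return -1;
	snprintf(aliases[nalias].alias, sizeof aliases[0].alias, "%s", alias);
	snprintf(aliases[nalias].base, sizeof aliases[0].base, "%s", base);
	nalias++;
	return 0;
}

/* Resolve a specifier: follow one alias hop, else return it unchanged. */
const char *resolve_type(const char *spec)
{
	for (int i = 0; i < nalias; i++)
		if (strcmp(aliases[i].alias, spec) == 0)
			return aliases[i].base;
	return spec;
}
```

Changing the registration to user_id=int8 would retarget every %user_id use without touching call sites, which is the abstraction benefit described in the message.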
Re: [HACKERS] [PATCHES] libpq type system 0.9a
This patch has an identity crisis. We initially called it PGparam (possibly misspelled several times as PQparam) and then changed it to libpq type system (typesys). Several posts on -patches started to call it libpqtypes, even I did. Any objections to fixing the name to libpqtypes?

Any thoughts on the hooking suggested by Tom? It sounds like it should be generic enough so more than just libpqtypes can make use of it. I think something of this nature should have input before I do anything.

Possible hook points (at least the ones needed by libpqtypes):

conn_create
conn_reset
conn_destroy
result_create
result_destroy

I guess libpqtypes would have to maintain a map of conns and results? Right now it can associate type info because we added members to conn and result. When conn_create(conn) is called, libpqtypes would need to map this by pointer address (as it is all it has as an identifier). Still feels like maybe there should be a void* in a conn and result used for per-connection/result based info (libpqtypes or not).

-- Andrew Chernow eSilo, LLC every bit counts http://www.esilo.com/
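[Editor's note: the "map by pointer address" workaround mentioned above, for an external library that cannot store a void* inside a PGconn it does not own, can be sketched in self-contained C. This is an illustration of the idea only; the map functions and the fixed-size table are invented for the example and appear nowhere in the patch.]

```c
#include <stddef.h>

/* Hypothetical sketch: associate private data with an object we do not
 * own, keyed by its pointer address. A fixed-size linear table keeps the
 * example short; a real library would use a hash table. */
#define MAP_SLOTS 64

struct ptr_entry
{
	const void *key;	/* the PGconn/PGresult address */
	void *data;		/* the library's private per-object data */
};

static struct ptr_entry map[MAP_SLOTS];

/* Store (or replace) data for key; returns 0 on success, -1 if full. */
int map_put(const void *key, void *data)
{
	for (int i = 0; i < MAP_SLOTS; i++)
		if (map[i].key == NULL || map[i].key == key)
		{
			map[i].key = key;
			map[i].data = data;
			return 0;
		}
	return -1;
}

/* Look up data for key; NULL if absent. */
void *map_get(const void *key)
{
	for (int i = 0; i < MAP_SLOTS; i++)
		if (map[i].key == key)
			return map[i].data;
	return NULL;
}

/* Forget key, e.g. from a conn_destroy hook. */
void map_del(const void *key)
{
	for (int i = 0; i < MAP_SLOTS; i++)
		if (map[i].key == key)
		{
			map[i].key = NULL;
			map[i].data = NULL;
		}
}
```

The drawback raised in the message is visible here: every lookup costs a search, which is why a void* slot inside the conn/result itself would be simpler.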
Re: [HACKERS] [PATCHES] libpq type system 0.9a
Jeff Davis wrote: On Tue, 2008-04-08 at 15:22 -0400, Tom Lane wrote: Well, for starters, using binary format. It is undeniable that that creates more portability risks (cross-architecture and cross-PG-version issues) than text format. Not everyone wants to take those risks for benefits that may not be meaningful for their apps. What are the cross-architecture risks involved? Regards, Jeff Davis

What are the cross-architecture risks involved?

We didn't run into any issues here. Close attention was paid to byte ordering; the rest was handled by libpq's cross-platform handling.

Cross-PG-version is a different story. The patch already uses the server version to toggle behavior (like using int4 for money pre-8.3). If we make this a separate library, it would be easy to plug in a newer or older one. Merlin had some ideas here ... Merlin?

Andrew
Re: [HACKERS] [PATCHES] libpq type system 0.9a
Andrew Chernow wrote: Any thoughts on the hooking suggested by Tom? It sounds like it should be generic enough so more than just libpqtypes can make use of it. I think something of this nature should have input before I do anything. Possible hook points (at least the ones needed by libpqtypes): conn_create conn_reset conn_destroy result_create result_destroy I guess libpqtypes would have to maintain a map of conns and results? Right now it can associate type info because we added members to conn and result. When conn_create(conn) is called, libpqtypes would need to map this by pointer address (as it is all it has as an identifier). Still feels like maybe there should be a void* in a conn and result used for per-connection/result based info (libpqtypes or not).

Well, I can get it working with a very small patch. We actually don't need very much in libpq. Although, making it somehow generic enough to be useful to other extensions is a bit tricky. Please, suggestions would be helpful. Below is a raw shell of an idea that will work for libpqtypes. Start by removing our entire patch and then add the below:

// libpqtypes only needs the below. could add op_reset,
// op_linkerror, etc...
enum { HOOK_OP_CREATE, HOOK_OP_DESTROY };

struct pg_conn {
    // everything currently in a pg_conn
    // ...
    // libpqtypes needs HOOK_OP_DESTROY. A ptr to hookData
    // is always used in case the hook lib needs to allocate
    // or reallocate the hookData.
    void *hookData;
    int (*connHook)(PGconn *conn, int op, void **hookData);
};

struct pg_result {
    // everything currently in a pg_result ...
    // libpqtypes needs create/destroy.
    // conn is NULL for destroy
    void *hookData;
    int (*resultHook)(PGconn *conn, PGresult *result, int op, void **hookData);
};

freePGconn(PGconn *conn)
{
    // ...
    if (conn->connHook)
        conn->connHook(conn, HOOK_OP_DESTROY, &conn->hookData);
    // ...
}

PQmakeEmptyPGresult(PGconn *conn, ExecStatusType status)
{
    // ... (result allocated here)
    if (result->resultHook)
        result->resultHook(conn, result, HOOK_OP_CREATE, &result->hookData);
    // ...
}

PQclear(PGresult *result)
{
    // ...
    if (result->resultHook)
        result->resultHook(NULL, result, HOOK_OP_DESTROY, &result->hookData);
    // ...
}

// library wide
int PQregisterHooks(connHook, resultHook);

-- Andrew Chernow eSilo, LLC every bit counts http://www.esilo.com/
Re: [HACKERS] [PATCHES] libpq type system 0.9a
Well, I can get it working with a very small patch. We actually don't need very much in libpq. Although, making it somehow generic enough to be useful to other extensions is a bit tricky. Please, suggestions would be helpful. Below is a raw shell of an idea that will work for libpqtypes. Start by removing our entire patch and then add the below:

// libpqtypes only needs the below. could add op_reset,
// op_linkerror, etc...
enum { HOOK_OP_CREATE, HOOK_OP_DESTROY };

struct pg_conn {
    // everything currently in a pg_conn
    // ...
    // libpqtypes needs HOOK_OP_DESTROY. A ptr to hookData
    // is always used in case the hook lib needs to allocate
    // or reallocate the hookData.
    void *hookData;
    int (*connHook)(PGconn *conn, int op, void **hookData);
};

struct pg_result {
    // everything currently in a pg_result ...
    // libpqtypes needs create/destroy.
    // conn is NULL for destroy
    void *hookData;
    int (*resultHook)(PGconn *conn, PGresult *result, int op, void **hookData);
};

There is no need to pass hookData to the hook function. libpqtypes already accesses PGconn and PGresult directly, so it can just access the hookData member.

int (*connHook)(PGconn *conn, int op);
int (*resultHook)(PGconn *conn, PGresult *result, int op);

-- Andrew Chernow eSilo, LLC every bit counts http://www.esilo.com/
Re: [HACKERS] [PATCHES] libpq type system 0.9a
There are many cases that are fairly hard to get a perfect mapping. If you use PGtime, for instance, there are no C operators for it

Yeah, our patch is designed to allow one to interface with libpq using C data types, rather than strings (most common) or, for the brave, external binary format.

If you try to use the C time type instead, you will have many more operators available, but will lose precision.

That is why we didn't use something like struct tm. Instead, we have PGtime, PGdate and PGtimestamp. PGtimestamp has a PGtime and PGdate in it. See libpq-fe.h inside our patch.

/* Below is a PGtime. */
typedef struct
{
    /* The number of hours past midnight, in the range 0 to 23. */
    int hour;

    /* The number of minutes after the hour, in the range 0 to 59. */
    int min;

    /* The number of seconds after the minute, in the range 0 to 59. */
    int sec;

    /* The number of microseconds after the second, in the
     * range of 0 to 999999. */
    int usec;

    /* When non-zero, this is a TIME WITH TIME ZONE. Otherwise,
     * it is a TIME WITHOUT TIME ZONE. */
    int withtz;

    /* A value of 1 indicates daylight savings time. A value of 0 indicates
     * standard time. A value of -1 means unknown or could not determine. */
    int isdst;

    /* Seconds east of UTC. This value is not always available. It is
     * set to 0 if it cannot be determined. */
    int gmtoff;

    /* Timezone abbreviation: such as EST, GMT, PDT, etc. This value is
     * not always available. It is set to an empty string if it cannot be
     * determined. It can also be in ISO 8601 GMT+/-hhmmss format. */
    char tzname[16];
} PGtime;

Our patch was not designed to operate on these data types, although it could with a little work. It sounds like people would like this functionality, some referring to ecpg.

Numeric is the odd ball in our patch. We saw no benefit to supplying a struct for it, so we always expose numerics as strings (put or get). Internally, we convert the string to binary format for puts. On binary gets, we convert the external format to a numeric string. We left it to the libpq user to strtod or use a Big Number library.

And I think NULL is still particularly tricky

PQgetisnull is used by our code. It doesn't error out; after zeroing the provided user memory, it just returns. The libpq user should check isnull where they care about it.

Almost all of the types that required a PG struct are very solid mappings. For instance, geo types are straightforward; there is not much to a PGpoint. Geo type structs were kept very close to the backend.

-- Andrew Chernow eSilo, LLC every bit counts http://www.esilo.com/
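[Editor's note: since PGtime deliberately diverges from struct tm, a reader may wonder what interoperating with standard C time code looks like. Below is a hedged sketch: the PGtime layout is copied from the message above, but the conversion helper is invented for illustration and is not part of the patch; note it discards usec, gmtoff and tzname, which have no struct tm slots.]

```c
#include <time.h>

/* Field layout as posted in the message; comments trimmed. */
typedef struct
{
	int hour;		/* 0 to 23 */
	int min;		/* 0 to 59 */
	int sec;		/* 0 to 59 */
	int usec;		/* microseconds after the second */
	int withtz;		/* non-zero for TIME WITH TIME ZONE */
	int isdst;		/* 1 DST, 0 standard, -1 unknown */
	int gmtoff;		/* seconds east of UTC, 0 if unknown */
	char tzname[16];	/* abbreviation, "" if unknown */
} PGtime;

/* Hypothetical helper: map a PGtime onto a struct tm, losing the
 * sub-second and timezone detail that motivated PGtime's existence. */
struct tm pgtime_to_tm(const PGtime *t)
{
	struct tm tm = {0};

	tm.tm_hour = t->hour;
	tm.tm_min = t->min;
	tm.tm_sec = t->sec;
	tm.tm_isdst = t->isdst;	/* both conventions use -1 for "unknown" */
	return tm;		/* usec/gmtoff/tzname have nowhere to go */
}
```

The lossiness of this one-way mapping is exactly the precision argument the message makes against using the C time types directly.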
Re: [HACKERS] [PATCHES] libpq type system 0.9a
Andrew Dunstan wrote: Merlin Moncure wrote: However, due to libpq limitations, if any datatype must return text the entire result must be text (resultFormat)...this is I'm surprised you didn't try to address that limitation.

That would change the existing behavior of resultFormat, although not terribly. Currently, the server will spit back an error if you use binary results but some type hasn't implemented a send/recv. Instead of an error, the server could fall back to the type's in/out routines and mark the column as text format. I think the fallback approach is more intelligent behavior, but such a change could break libpq clients. They might be blindly ASSuming if the exec worked with resultFormat=1, that everything returned by PQgetvalue will be binary (I'm guilty of this one, prior to libpqtypes). Our patch would work with no changes because it supports text and binary results. So, each type handler already toggles itself based on PQfformat.

-- Andrew Chernow eSilo, LLC every bit counts http://www.esilo.com/
Re: [HACKERS] [PATCHES] libpq type system 0.9a
Merlin Moncure wrote: On Wed, Apr 9, 2008 at 8:04 AM, Andrew Dunstan [EMAIL PROTECTED] wrote: Merlin Moncure wrote: However, due to libpq limitations, if any datatype must return text the entire result must be text (resultFormat)...this is also interestingly true for functions that return 'void'. So, at present, to use PQgetf, your result set must be binary. I'm surprised you didn't try to address that limitation. whoops! we did...thinko on my part. Text results are fully supported save for composites and arrays. merlin

Yeah, currently composites and arrays only support binary results in libpqtypes. This forces any array elementType or any member of a composite to have a send/recv routine. Using the fallback-to-text-output approach, this limitation on array elements and composite members would be removed.

That makes it sound more like a protocol limitation, rather than a libpq limitation. Or am I misunderstanding?

It looks like libpq, in its message 'T' handling in getRowDescriptions, reads a separate format for each column off the wire. The protocol already has this ability. The backend needs a 'minor' adjustment to make use of the existing capability.

-- Andrew Chernow eSilo, LLC every bit counts http://www.esilo.com/
Re: [HACKERS] [PATCHES] libpq type system 0.9a
Andrew Chernow wrote: Yeah, currently composites and arrays only support binary results in libpqtypes. This forces any array elementType or any member of a composite to have a send/recv routine. Using the fallback to text output approach, this limitation on array elements and composite members would be removed.

Actually, I am confusing the way the protocol handles arrays and composites (as a single column value) with the way libpqtypes handles these (as a separate result object). For instance, the members of a composite inherit the format of the column within the protocol. To allow one member of a composite to be text formatted and another to be binary would require a change to the protocol: an additional format value per member header.

-- Andrew Chernow eSilo, LLC every bit counts http://www.esilo.com/
Re: [HACKERS] [PATCHES] libpq type system 0.9a
Andrew Chernow wrote: Well, I can get it working with a very small patch. We actually don't need very much in libpq. Although, making it somehow generic enough to be useful to other extensions is a bit tricky. Please, suggestions would be helpful.

Quick question on the hook concept before I try to supply a new patch. From my experience, redhat normally compiles everything into their packages, like apache modules. Why would libpq be any different in regards to libpqtypes? If they don't distribute libpqtypes, how does a libpq user link with libpqtypes? They don't have the library. Where would they get a libpqtypes.so that is compatible with redhat's supplied libpq.so?

The core of what I am trying to ask is, there doesn't appear to be an advantage to separating libpqtypes from libpq in terms of space. If redhat follows their normal policy of including everything (probably to make their distro as feature-rich out-of-the-box as possible), then they would distribute libpqtypes.so, which would use the same amount of space as if it were part of libpq.

-- Andrew Chernow eSilo, LLC every bit counts http://www.esilo.com/
Re: [HACKERS] [PATCHES] libpq type system 0.9a
Andrew Chernow wrote: Quick question on the hook concept before I try to supply a new patch. From my experience, redhat normally compiles everything into their packages [...] then they would distribute libpqtypes.so which would use the same amount of space as if it were part of libpq.

By the way, I offered up the idea of compiling our patch in or out, ./configure --with-pqtypes. But Tom had said: [shrug...] So the packagers will compile it out, and you're still hosed, or at least any users who'd like to use it are.

If redhat would compile out our patch, no questions asked, why would they distribute libpqtypes.so? Meaning, separating it out doesn't seem to unhose us :)

-- Andrew Chernow eSilo, LLC every bit counts http://www.esilo.com/
Re: [HACKERS] [PATCHES] libpq type system 0.9a
Andrew Dunstan wrote: I don't get what you're not seeing about this.

All I am trying to say is, redhat's core packages are normally very inclusive. Like apache, which includes many/most modules in the core package. I am still not convinced there is merit to a separate library. But honestly, I'm indifferent at this point. If the community wants it this way, whether or not I agree with the reasons, then consider it done.

It would be helpful to get some feedback about what the requirements are for a hooking mechanism for libpqtypes:

1. Do we make the hooking system generic, for other libraries to use?

2. If #1 is YES, then does this hooking system need to allow registering multiple hooks per app instance? Meaning, should we implement a table of callbacks/hooks, like a linked list of them?

3. What design is preferred?
*) A single msg proc that uses a msg object which is either a big union or something that gets upcast.
*) A separate function pointer per hook, like conn_create, conn_destroy.
*) A combo, where conn hooks are handled by one funcptr and result hooks by another, but each only has one function.

Please think carefully about question #1, as this could lead to over-design or guess-design.

-- Andrew Chernow eSilo, LLC every bit counts http://www.esilo.com/
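[Editor's note: design option 3's "single msg proc" variant can be sketched in a few lines of self-contained C. This is a hypothetical model of the dispatch shape under discussion, not any agreed libpq API; demo_hook and the calls counter exist only for the example.]

```c
/* One callback per object class, dispatching on an op code. Adding a new
 * event (e.g. a reset op) extends the enum without changing signatures. */
enum hook_op { HOOK_OP_CREATE, HOOK_OP_DESTROY };

typedef int (*obj_hook)(void *obj, enum hook_op op, void **hook_data);

static int calls[2];	/* per-op invocation counts, for the demo */

/* A trivial hook implementation that just records which ops fired. */
static int demo_hook(void *obj, enum hook_op op, void **hook_data)
{
	(void) obj;
	(void) hook_data;
	calls[op]++;
	return 0;
}

/* What a library core would do at each lifecycle point. */
void fire(obj_hook h, void *obj, enum hook_op op)
{
	if (h)
		h(obj, op, (void **) 0);
}
```

The trade-off versus one function pointer per hook point is visible here: a single dispatcher keeps the registration surface to one pointer per object class, at the cost of a switch/op argument inside each hook.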
Re: [HACKERS] [PATCHES] libpq type system 0.9a
Bruce Momjian wrote: I think Andrew Chernow is fundamentally confused about dynamic linking, which apache has to use because it doesn't know what type of file it has to handle, with linking, which is bound to the application code. pgtypes is bound to the application code so it is not like apache --- an application isn't going to be fed arbitrary pgtypes function calls it has to handle.

This discussion is now completely pointless as we have conceded to a separate library. The community has spoken. We are trying to move on and open a discussion on the hook design. There are numbered questions: http://archives.postgresql.org/pgsql-hackers/2008-04/msg00565.php

-- Andrew Chernow eSilo, LLC every bit counts http://www.esilo.com/
Re: [HACKERS] [PATCHES] libpq type system 0.9a
Tom Lane wrote: Andrew Chernow [EMAIL PROTECTED] writes: There is no need to pass hookData to the hook function. libpqtypes already accesses PGconn and PGresult directly so it can just access the hookData member. That's a habit you'd really be best advised to stop, if you're going to be a separate library. Otherwise, any time we make an internal change in the PGconn struct, it'll break your library --- and we have and will feel free to do that without an external soname change. What parts of PGconn/PGresult do you need to touch that aren't exposed already? regards, tom lane

Well, we manually create a result for arrays and composites. We also use pqResultAlloc in several places. I will have to look at the code again, but I'm not sure we can be completely abstracted from the result struct.

If you are suggesting that libpqtypes should not access internal libpq, then this idea won't work. We can pull all the code out and hook in, as you suggested, but we had no plans of abstracting from internal libpq.

-- Andrew Chernow eSilo, LLC every bit counts http://www.esilo.com/
Re: [HACKERS] [PATCHES] libpq type system 0.9a
Andrew Chernow wrote: What parts of PGconn/PGresult do you need to touch that aren't exposed already?

Don't need direct access to PGconn at all.

result:
null_field
tupArrSize
client_encoding (need a PGconn for this, which might be dead)
pqSetResultError
pqResultAlloc
pqResultStrdup

Also, we basically need write access to every member inside a result object ... since we create our own for arrays and composites by copying the standard result members over.

/* taken from dupresult inside handlers/utils.c */
PGresult *r = (PGresult *) malloc(sizeof(PGresult));
memset(r, 0, sizeof(PGresult));

/* copy some data from source result */
r->binary          = args->get.result->binary;
r->resultStatus    = args->get.result->resultStatus;
r->noticeHooks     = args->get.result->noticeHooks;
r->client_encoding = args->get.result->client_encoding;
strcpy(r->cmdStatus, args->get.result->cmdStatus);
[snip...]

We can read args->get.result properties using PQfuncs with no problem. But we have no way of assigning these values to our result 'r'.

-- Andrew Chernow eSilo, LLC every bit counts http://www.esilo.com/
Re: [HACKERS] [PATCHES] libpq type system 0.9a
Tom Lane wrote: The key phrase in that being some way or another. Red Hat works with a concept of core vs extras (or another way to look at it being what comes on the CDs vs what you have to download from someplace). I think it's highly likely that libpgtypes would end up in extras. If you do not make it possible to package it that way (ie, separately from libpq), it's more likely that it won't get packaged at all than that it will be put in core. regards, tom lane

I'll buy this. Better to be safe than sorry. This patch becomes more flexible and distributable when separated.

-- Andrew Chernow eSilo, LLC every bit counts http://www.esilo.com/
Re: [HACKERS] [PATCHES] libpq type system 0.9a
Tom Lane wrote: That's exactly what I'm strongly suggesting. If you need to include libpq-int.h at all, then your library will be forever fragile, and could very well end up classified as don't ship this at all, it's too likely to break. regards, tom lane

I see your point, and it has a lot of merit. We are completely open to hearing how this can be solved. How do we duplicate a result object and customize many member values after the dup? Do we create a PGresultInfo struct containing all members of a result object that gets passed to PGresult *PQresultDup(PGresult *source, PGresultInfo *info);? Maybe it has a flags member that indicates which PGresultInfo members contain values that should override the source result.

Any suggestions? This is where we are stumped. Everything else has several solutions. We are not debating this anymore, we are trying to implement it.

-- Andrew Chernow eSilo, LLC every bit counts http://www.esilo.com/
Re: [HACKERS] [PATCHES] libpq type system 0.9a
Tom Lane wrote: Hmm. I guess it wouldn't be completely out of the question to expose the contents of PGresult as part of libpq's API. We haven't changed it often, and it's hard to imagine a change that wouldn't be associated with a major-version change anyhow. We could do some things to make it a bit more bulletproof too, like change cmdStatus from fixed-size array to pointer to allocated space so that CMDSTATUS_LEN doesn't become part of the API. Alternatively, we could keep it opaque and expose a bunch of manipulator routines, but that might be a lot more cumbersome than it's worth. regards, tom lane

How about a proxy header (if such an animal exists)? Maybe it is possible to take pg_result, and all structs it references, and put it in result-int.h. libpq-int.h would obviously include this. Also, any external library, like libpqtypes, that needs direct access to a result can include result-int.h. This keeps pg_result and referenced structs out of libpq-fe.h. From a libpq user's viewpoint, nothing has changed.

-- Andrew Chernow eSilo, LLC every bit counts http://www.esilo.com/
Re: [HACKERS] [PATCHES] libpq type system 0.9a
Tom Lane wrote: Perhaps we could do a partial exposure, where the exported struct declaration contains public fields and there are some private ones after that.

I have another idea. It would remove a boat load of members that would need to be exposed (may remove them all). Can we make a variant of PQmakeEmptyPGresult? Maybe something like this:

PGresult *PQdupPGresult(  /* maybe not the best name? */
    PGconn *conn,
    PGresult *source,
    int numAttributes,
    int ntups);

This starts off by calling PQmakeEmptyPGresult and then copying the below members from the provided 'source' result argument:

- ExecStatusType resultStatus;
- char cmdStatus[CMDSTATUS_LEN];
- int binary;
- PGNoticeHooks noticeHooks;
- int client_encoding;

It would then preallocate attDescs and tuples, which would also set numAttributes, ntups and tupArrSize. attDescs and tuples are not duplicated, only allocated based on the 'numAttributes' and 'ntups' arguments. Now libpqtypes only needs direct access to attDescs and tuples, for when it loops over array elements or composite fields and copies stuff from the source result.

Any interest in this direction? I am still thinking about how to abstract attDescs and tuples; I think it would require more API calls as well.

curBlock, curOffset and spaceLeft are never copied (initialized to zero). We don't touch these.

-- Andrew Chernow eSilo, LLC every bit counts http://www.esilo.com/
Re: [HACKERS] [PATCHES] libpq type system 0.9a
Andrew Chernow wrote: Tom Lane wrote: Perhaps we could do a partial exposure, where the exported struct declaration contains public fields and there are some private ones after that. I have another idea. It would remove a boat load of members that would need to be exposed (may remove them all). Can we make a variant of PQmakeEmptyPGresult? Maybe something like this:

Here is a quick implementation demonstrating the idea. It is very similar to the patch's internal dupresult function (handlers/utils.c).

/* numParameters, paramDescs, errFields, curBlock, curOffset and spaceLeft
 * are not assigned at all, initialized to zero. errMsg is handled by
 * PQmakeEmptyPGresult. */
PGresult *PQdupPGresult(
    PGconn *conn,
    PGresult *source,
    int numAttributes,
    int ntups)
{
    PGresult *r;

    if (!source || numAttributes < 0 || ntups < 0)
        return NULL;

    r = PQmakeEmptyPGresult(conn, source->resultStatus);
    if (!r)
        return NULL;

    r->binary = source->binary;
    strcpy(r->cmdStatus, source->cmdStatus);

    /* assigned by PQmakeEmptyPGresult when conn is not NULL */
    if (!conn)
    {
        r->noticeHooks = source->noticeHooks;
        r->client_encoding = source->client_encoding;
    }

    r->attDescs = (PGresAttDesc *) pqResultAlloc(r,
        numAttributes * sizeof(PGresAttDesc), TRUE);
    if (!r->attDescs)
    {
        PQclear(r);
        return NULL;
    }
    r->numAttributes = numAttributes;

    r->tuples = (PGresAttValue **) malloc(ntups * sizeof(PGresAttValue *));
    if (!r->tuples)
    {
        PQclear(r);
        return NULL;
    }
    r->ntups = ntups;
    r->tupArrSize = ntups;

    return r;
}

-- Andrew Chernow eSilo, LLC every bit counts http://www.esilo.com/
Re: [HACKERS] [PATCHES] libpq type system 0.9a
Tom Lane wrote: Andrew Chernow [EMAIL PROTECTED] writes: What parts of PGconn/PGresult do you need to touch that aren't exposed already? Don't need direct access to PGconn at all. Oh, good, that makes things much easier.

Shoot! Feels like you always miss something. The patch uses PGconn's PQExpBuffer to set errors on a conn. Currently, there is no access to this buffer other than PQerrorMessage. Is the below okay?

extern PQExpBuffer PQgetErrorBuffer(PGconn *conn);

// OR
PQsetErrorMessage(conn, errstr) -- this felt strange to me

The expbuffer API is public, so managing the returned PQExpBuffer would not require any additional API calls. While I am on the subject of things I missed, the patch also uses pqSetResultError (mostly during PQgetf).

We no longer need direct access to anything inside pg_result. However, we would need 3 new API calls that would dup a result, set field descs and add tuples to a result. Below are prototypes of what I have so far (small footprint for all 3, maybe 100-150 lines).

/* numParameters, paramDescs, errFields, curBlock,
 * curOffset and spaceLeft are not assigned at all,
 * initialized to zero. errMsg is handled by
 * PQmakeEmptyPGresult. attDescs and tuples are not
 * duplicated, only allocated based on 'ntups' and
 * 'numAttributes'. The idea is to dup the result
 * but customize attDescs and tuples. */
PGresult *PQresultDup(
    PGconn *conn,
    PGresult *source,
    int ntups,
    int numAttributes);

/* Only for results returned by PQresultDup. This
 * will set the field descs for 'field_num'. The
 * PGresAttDesc struct was not used to avoid
 * exposing it. */
int PQresultSetFieldDesc(
    PGresult *res,
    int field_num,
    const char *name,
    Oid tableid,
    int columnid,
    int format,
    Oid typid,
    int typlen,
    int typmod);

/* Only for results returned by PQresultDup. This
 * will append a new tuple to res. A PGresAttValue
 * is allocated and put at index 'res->ntups'. This
 * is similar to pqAddTuple except that the tuples
 * table has been pre-allocated. In our case, ntups
 * and numAttributes are known when calling resultDup. */
int PQresultAddTuple(
    PGresult *res,
    char *value,
    int len);

The above would allow an external app to dup a result and customize its rows and columns. Our patch uses this for arrays and composites.

-- Andrew Chernow eSilo, LLC every bit counts http://www.esilo.com/
Re: [HACKERS] [PATCHES] libpq type system 0.9a
Gregory Stark wrote: Andrew Chernow [EMAIL PROTECTED] writes: Shoot! Feels like you always miss something. The patch uses PGconn's PQExpBuffer to set errors on a conn. Currently, there is no access to this buffer other than PQerrorMessage. Is the below okay? Surely you would just provide a function to get pqtypes errors separate from libpq errors?

That's extremely awkward. Consider the below.

int getvalues(PGresult *res, int *x, char **t)
{
    return PQgetf(res, "... get the int and text ...");
}

if (getvalues(res, x, t))
    printf("%s\n", PQresultErrorMessage(res));

How would the caller of getvalues know whether the error was generated by a libpqtypes API call or by a libpq API call (like PQgetvalue)? PQgetf should behave exactly as PQgetvalue does, in regards to errors.

Same holds true for PGconn. Normally, you are using PQputf, which sets the error in PQparamErrorMessage. Then there is PQparamCreate(conn) or PQparamExec(conn, param, ...) and friends? PQparamExec calls PQexecParams internally, which can return NULL, setting an error message inside PGconn. We felt our light-weight wrappers should be consistent in behavior. If you get a NULL PGresult for a return value, PQerrorMessage should be checked.

-- Andrew Chernow eSilo, LLC every bit counts http://www.esilo.com/
Re: [HACKERS] [PATCHES] libpq type system 0.9a
Andrew Chernow wrote:
>> /* Only for results returned by PQresultDup.  This
>>  * will append a new tuple to res.  A PGresAttValue
>>  * is allocated and put at index 'res->ntups'.  This
>>  * is similar to pqAddTuple except that the tuples
>>  * table has been pre-allocated.  In our case, ntups
>>  * and numAttributes are known when calling resultDup.
>>  */
>> int PQresultAddTuple(PGresult *res, char *value, int len);

PQresultAddTuple is not quite correct. The below is its replacement:

int PQresultSetFieldValue(
  PGresult *res,
  int tup_num,
  int field_num,
  char *value,
  int len);

Recap: PQresultDup, PQresultSetFieldDesc and PQresultSetFieldValue. We feel this approach, which we initially thought wouldn't work, is better than making pg_result public. These functions could be useful outside of pqtypes, providing a way of filtering/modifying a result object ... consider:

PGresult *execit(PGconn *conn, const char *stmt)
{
  PGresult *res = PQexec(conn, stmt);

  if (some_app_cond_is_true)
  {
    PGresult *newres = PQresultDup(...);

    /* ... customize tups and fields with
     * PQresultSetFieldDesc and PQresultSetFieldValue ... */

    PQclear(res);
    res = newres;
  }

  return res;
}

/* res could be custom built */
res = execit(conn, stmt);
PQclear(res);

This is not an everyday need, but I'm sure it could provide some niche app functionality currently unavailable. Possibly useful to libpq wrapper APIs. Either way, it works great for pqtypes and avoids exposing pg_result.

--
Andrew Chernow
eSilo, LLC
every bit counts
http://www.esilo.com/
Re: [HACKERS] [PATCHES] libpq type system 0.9a
Tom Lane wrote:
> Andrew Chernow [EMAIL PROTECTED] writes:
>> Recap: PQresultDup, PQresultSetFieldDesc and PQresultSetFieldValue. We
>> feel this approach, which we initially thought wouldn't work, is better
>> than making pg_result public.
> That doesn't seem to me to fit very well with libpq's internals --- in
> particular the PGresult struct is not designed to support dynamically
> adding columns, which retail PQresultSetFieldDesc() calls would seem to
> require, and it's definitely not designed to allow that to happen after
> you've begun to store data, which the apparent freedom to intermix
> PQresultSetFieldDesc and PQresultSetFieldValue calls would seem to imply.
> Instead of PQresultSetFieldDesc, I think it might be better to provide a
> call that creates an empty (of data) PGresult with a specified tuple
> descriptor in one go.
>
> regards, tom lane

Are you against exposing PGresAttDesc?

PGresult *PQresultDup(
  PGconn *conn,
  PGresult *res,
  int ntups,
  int numAttributes,
  PGresAttDesc *attDescs);

If you don't want to expose PGresAttDesc, then the only other solution would be to pass parallel arrays of attname[], tableid[], columnid[], etc... Not the most friendly solution, but it would do the trick.

Recap: use the new PQresultDup, throw out setfielddesc and keep setfieldvalue the same.

--
Andrew Chernow
eSilo, LLC
every bit counts
http://www.esilo.com/
Re: [HACKERS] [PATCHES] libpq type system 0.9a
Tom Lane wrote:
> ... PQresultErrorMessage at this point --- if you haven't already
> checked the PGresult to be okay, you should certainly not be trying to
> extract fields from it. So I don't see that you are saving any steps here.

Correct, I agree. Our thinking was flawed. The issue we face is that, unlike getvalue, getf can fail in many ways ... like a bad format string or a type mismatch. Would you rather we introduce a pqtypes-specific error for getf? putf doesn't suffer from this issue because it uses PQparamErrorMessage.

>> Same holds true for PGconn.
> I'm not convinced about that side either. It doesn't fail the basic
> const-ness test, perhaps, but it still looks mostly like you are trying
> to avoid the necessity to think hard about how you're going to report

The issue is not a matter of know-how; it is a matter of creating ambiguous situations in regards to where the error is (I'm thinking from a libpq user's perspective). This only applies to PQparamExec and friends, NOT PQputf. All existing exec/send functions put an error in the conn (if the result is NULL, check the conn). paramexec can fail prior to internally calling PQexecParams, in which case it returns NULL because no result has been constructed yet. The question is, where does the error go?

res = PQparamExec(conn, param, ...);
if (!res)
  /* check PGconn or PGparam?
   * can conn have an old error? (false positive) */

Not using the conn's error msg means that one must check both the param and the conn if the return value is NULL. I think the best behavior here is to check PQerrorMessage when any exec function returns a NULL result, including pqtypes. If not, I think it could get confusing to the API user.

--
Andrew Chernow
eSilo, LLC
every bit counts
http://www.esilo.com/
Re: [HACKERS] [PATCHES] libpq type system 0.9a
Tom Lane wrote:
> Andrew Chernow [EMAIL PROTECTED] writes:
>> PGresult *PQresultDup(
>>   PGconn *conn, PGresult *res, int ntups,
>>   int numAttributes, PGresAttDesc *attDescs);
> I don't understand why this is a dup operation. How can you dup if you
> are specifying a new tuple descriptor? I'd have expected something like
>
> PGresult *PQmakeResult(PGconn *conn, int numAttributes, PGresAttDesc *attDescs)
>
> producing a zero-row PGRES_TUPLES_OK result that you can then load with
> data via PQresultSetFieldValue calls. (Even the conn argument is a bit
> of a wart, but I think we probably need it so we can copy some of its
> private fields.) Copying an existing PGresult might have some use too,
> but surely that can't change its tuple descriptor.
>
> regards, tom lane

Yeah, dup wasn't the best name. You need more arguments to makeresult though, since you removed the 'source' result. You need binary, cmdStatus, noticeHooks and client_encoding.

PQmakeResult(
  conn,
  PQcmdStatus(res),
  PQbinaryTuples(res),
  ?client_encoding?,
  ?noticeHooks?,
  ntups, /* this interacts with setfieldvalue */
  numAttributes,
  attDescs);

For client_encoding and noticeHooks, the conn can be NULL. Previously, we copied this info from the source result when conn was NULL.

> producing a zero-row PGRES_TUPLES_OK result that you can then load with
> data via PQresultSetFieldValue calls.

I like this idea, but you removed the 'ntups' argument. There is no way to adjust ntups after the makeresult call; it's a private member, and setfieldvalue has no concept of incrementing ntups. Since you are not appending a tuple like pqAddTuple, or inserting one, you can't increment ntups in setfieldvalue. But you could have a function like PQresultAddEmptyTuple(res), which would have to be called before you can set field values on a tup_num. The empty tuple would grow the tuple table when needed and increment ntups. The new tuple would be zeroed (all NULL values).

Something like below:

res = PQmakeResult(...);
for (ntups)
{
  PQresultAddEmptyTuple(res); /* or PQresultMakeEmptyTuple? */
  for (nfields)
    PQresultSetFieldValue(res, tup_num, field_num, ...);
}

--
Andrew Chernow
eSilo, LLC
every bit counts
http://www.esilo.com/
Re: [HACKERS] [PATCHES] libpq type system 0.9a
Andrew Chernow wrote:
> For client_encoding and noticeHooks, the conn can be NULL. Previously,
> we copied this info from the source result when conn was NULL.

Looks like we don't need client_encoding. My guess is, in our use case noticeHooks can probably be NULL for the created result when makeresult is not provided a conn.

> something like below
> res = PQmakeResult(...);
> for (ntups)
> {
>   PQresultAddEmptyTuple(res); /* or PQresultMakeEmptyTuple? */
>   for (nfields)
>     PQresultSetFieldValue(res, tup_num, field_num, ...);
> }

I think a better idea would be to make setfieldvalue more dynamic, removing the need for PQresultAddEmptyTuple. Only allow tup_num to be <= res->ntups. This will only allow adding one tuple at a time. The other option would allow arbitrary tup_nums to be passed in, like ntups is 1 but the provided tup_num is 67. In this case, we would have to back-fill. It seems better to only grow by one tuple at a time. We can get it working either way; we just prefer one-tuple-at-a-time allocation.

int PQresultSetFieldValue(
  PGresult *res,
  int tup_num,
  int field_num,
  char *value,
  int len)
{
  if (!check_field_value(res, field_num))
    return FALSE;

  if (tup_num > res->ntups)
    ; /* error, tup_num must be <= res->ntups */

  if (res->ntups >= res->tupArrSize)
    ; /* grow res->tuples */

  if (tup_num == res->ntups && !res->tuples[tup_num])
    ; /* need to allocate tuples[tup_num] */

  /* blah ... blah ... */
}

New PQmakeResult prototype:

PQmakeResult(
  PGconn *conn,
  const char *cmdStatus,
  int binaryTuples,
  int numAttributes,
  PGresAttDesc *attDescs);

--
Andrew Chernow
eSilo, LLC
every bit counts
http://www.esilo.com/
Re: [HACKERS] [PATCHES] libpq type system 0.9a
Andrew Chernow wrote:
> res = PQparamExec(conn, param, ...);
> if (!res)
>   /* check PGconn or PGparam?
>    * can conn have an old error? (false positive) */

We will just always dump the error message into the param. If PQexecParams fails with a NULL result, we will copy PQerrorMessage over to param->errMsg.

--
Andrew Chernow
eSilo, LLC
every bit counts
http://www.esilo.com/
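[Editor's illustration] The rule above (on a NULL result, copy the connection's message into the param so callers have a single place to look) is easy to mock up. FakeConn, FakeParam and fake_param_exec are hypothetical stand-ins, not libpq or libpqtypes types:

```c
#include <stdio.h>
#include <string.h>

typedef struct { char errMsg[256]; } FakeConn;
typedef struct { char errMsg[256]; } FakeParam;

/* Hypothetical stand-in for PQparamExec: on failure the conn's error
 * is copied into the param, so callers only ever check the param when
 * the result comes back NULL. */
static void *
fake_param_exec(FakeConn *conn, FakeParam *param, int fail)
{
    if (fail)
    {
        snprintf(conn->errMsg, sizeof(conn->errMsg), "exec failed");
        /* the rule from the message above: NULL result => copy over */
        snprintf(param->errMsg, sizeof(param->errMsg), "%s", conn->errMsg);
        return NULL;
    }
    return (void *) "a result";
}
```

The caller then checks one place: if the result is NULL, read param->errMsg, with no ambiguity about stale conn errors.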
[HACKERS] pulling libpqtypes from queue
This patch has been lingering around since Aug 2007. It has matured a lot and now calls libpq home. Unfortunately, ISTM that there is limited support for our proposal. We either pitched to the wrong crowd or pqtypes doesn't have the mass appeal we expected. With that said, we are considering shopping this elsewhere ... i.e. pgfoundry.

At this point, we feel this has gone in the wrong direction. Our goal was to add some pqtype api calls to libpq (directly or as stubs). We now find ourselves implementing a hook API, which is the opposite of our proposal. We did try to make it work, but there are many dirty details that make it unworkable and beneath the quality of the rest of the postgresql project. Even though the library size issue was solved by using stub funcs and dlopen, it appears there are other reasons for rejecting pqtypes. BTW, all we have heard in regards to stub funcs are crickets.

pqtypes is bowing out for now :( If the community wants to run with it, we will help in any way we can. We will continue to manage this ourselves, most likely as stub funcs. Keep in mind, we were using pqtypes (in a raw form) for 8 months before submitting it to the community. We thought it would be useful to others and we wanted to give back.

We appreciate everyone's willingness to get this working. We respect the fact that with little interest (or good old distaste), everyone was still willing to add api calls to make this work. We appreciate this and think it demonstrates one of the community's strengths. Thanks again for your time and suggestions.

--
Andrew Chernow
eSilo, LLC
every bit counts
http://www.esilo.com/
Re: [HACKERS] pulling libpqtypes from queue
Merlin Moncure wrote:
> On Tue, Apr 15, 2008 at 10:48 AM, [EMAIL PROTECTED] wrote:
>> On Tue, Apr 15, 2008 at 09:48:37AM -0400, Merlin Moncure wrote:
>>> On Tue, Apr 15, 2008 at 9:36 AM, Alvaro Herrera [EMAIL PROTECTED] wrote:
>>>> I expect you intend to get at least the hooks in, right?
>>> not likely. Keep in mind, this is not how we really wanted to do
>>> things in the first place. We don't think this is the right strategy
>>> for integrating libpqtypes with libpq. It over-complicates things and
>>> we don't really see a use case outside of libpqtypes. If a reasonable
>>> case can be made for putting the hooks in, we will consider it. Can
>>> you think of any good reasons for hooking libpq outside of our
>>> intentions?
>> Yes, this one comes to mind:
>> From: sanjay sharma
>> Subject: Submission of Feature Request : RFC- for Implementing
>> Transparent Data Encryption in Postgres
>> http://archives.postgresql.org/pgsql-hackers/2008-03/msg01231.php
>> I know that the original poster wanted to encrypt and decrypt things
>> server-side, but as was noted in the thread this doesn't make that much
>> sense because the decryption keys must be somehow kept around there.
>> But for doing it transparently client-side such libpq hooks might come
>> in handy...
> libpqtypes was designed to handle this with or without hooking. (The
> 'hooking' debate was mainly about exactly how libpq and libpqtypes were
> going to be separated.) libpqtypes had a superclassing concept (not
> much discussed on the lists) where you could introduce new type
> handlers that wrapped existing ones and was designed exactly for things
> like this. keep an eye on our upcoming pgfoundry project.
>
> merlin

libpqtypes should be up on pgfoundry shortly.

/* Register a new type named secure_text.  Make it a
 * sub-class of bytea.  Subclass means you can extend
 * the behavior of another type.
 */
PQregisterTypeHandler(conn, "secure_text=bytea",
  put_secure_text, get_secure_text);

int put_secure_text(PGtypeArgs *args)
{
  char secure_buf[1024];
  char *text = va_arg(args->ap, char *);
  size_t len = encrypt(secure_buf, text, strlen(text));

  /* tell the bytea super class to put the resulting bytea value */
  return args->super(args, len, secure_buf);
}

int get_secure_text(PGtypeArgs *args)
{
  size_t len;
  char *secure_text;
  size_t user_buflen = va_arg(args->ap, size_t);
  char *user_buf = va_arg(args->ap, char *);

  /* ask the super class, bytea, to get the bytea value */
  if (args->super(args, &len, &secure_text) == -1)
    return -1;

  decrypt(user_buf, user_buflen, secure_text, len);
  return 0;
}

/* Put a secure_text into a PGparam, which will encrypt it
 * and send it to the server as a bytea.  The * is a
 * pointer flag telling pqtypes to not make an internal copy
 * of the string (keep a direct pointer).
 */
PQputf(param, "%secure_text*", "encrypt me");

/* put it in the db; the last argument is resultFormat */
PQparamExec(conn, param, "INSERT INTO t VALUES ($1)", 1);

/* Get a secure_text from a result, which will decrypt
 * the bytea field and return text to the user.
 */
char text[1024];
PQgetf(res, tup_num, "%secure_text", field_num, sizeof(text), text);

You could make the variable arguments for %secure_text include an encryption key if you want.

--
Andrew Chernow
eSilo, LLC
every bit counts
http://www.esilo.com/
Re: [HACKERS] pulling libpqtypes from queue
Martijn van Oosterhout wrote:
> On Tue, Apr 15, 2008 at 11:24:43AM -0400, Andrew Chernow wrote:
>> libpqtypes had a superclassing concept (not much discussed on the
>> lists) where you could introduce new type handlers that wrapped
>> existing ones and was designed exactly for things like this. keep an
>> eye on our upcoming pgfoundry project.
>> /* register a new type named secure_text. Make it a
>>  * sub-class of bytea. Subclass means you can extend
>>  * the behavior of another type. */
> All I can say is, way cool. Pity you couldn't get the necessary support
> but if you can put it up in a way so it can be easily packaged by
> distributors that'd be cool. I can think of some projects that could
> really benefit from a feature like this.
>
> Have a nice day,

We are working on a version to put on pgfoundry. This version will add stub functions to libpq and put the meat of pqtypes in a separate shared library, loadable via:

PQtypesLoad()
{
  dlopen("libpqtypes.so");
  dlsym("pqt_paramExec");
  dlsym("pqt_paramCreate");
  /* etc... */
}

Maybe we'll get enough support in the future to get our stubs into core. Until then, you'll have to patch libpq. We plan to provide binary patches for the major platforms. You won't have to patch the guts of pqtypes though, because that is dynamically loaded behind PQtypesLoad ... you just need libpqtypes.[so|dll] in your libpath.

--
Andrew Chernow
eSilo, LLC
every bit counts
http://www.esilo.com/
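[Editor's illustration] The stub idea above can be shown without a real dlopen. In this hypothetical sketch, fake_PQtypesLoad plays the role of PQtypesLoad and the function-pointer slot stands in for a dlsym'd symbol; the stub is always linked in but does nothing useful until the implementation is "loaded":

```c
#include <stddef.h>

/* Pointer slot that a real PQtypesLoad() would fill via dlsym();
 * all names here are hypothetical. */
static int (*pqt_paramExec_impl)(const char *stmt) = NULL;

/* Stub that would always be linked into libpq: forwards when the
 * implementation is loaded, fails cleanly when it is absent. */
static int
stub_PQparamExec(const char *stmt)
{
    if (pqt_paramExec_impl == NULL)
        return -1;              /* libpqtypes.so not loaded */
    return pqt_paramExec_impl(stmt);
}

/* The implementation that dlopen()/dlsym() would locate in
 * libpqtypes.so; here it is just a local function. */
static int
real_paramExec(const char *stmt)
{
    (void) stmt;
    return 0;                   /* pretend the exec succeeded */
}

/* Stand-in for PQtypesLoad(): wire the stub to the implementation. */
static void
fake_PQtypesLoad(void)
{
    pqt_paramExec_impl = real_paramExec;
}
```

This keeps libpq's footprint to a handful of tiny stubs while the heavy type-conversion code lives in the dynamically loaded library, which is the trade-off the message describes.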
Re: [HACKERS] pulling libpqtypes from queue
Merlin Moncure wrote:
> For posterity, here are our objections to hooking libpq:
> *) Is there any other plausible scenario of another use for hooking into
> libpq events? (This is rhetorical... we don't think there is.) We think
> that there is a better way to integrate libpqtypes with libpq, so maybe
> the abstraction is unnecessary.
> *) Keeping PQparamExec and friends outside of libpq makes error handling
> a little awkward... we expect to use TLS errors in libpqtypes (which,
> quite frankly, I wish libpq used) but think it would be cleaner to
> handle errors in a fashion consistent with libpq... libpqtypes adds
> PQseterror, PQgeterror.
> *) We especially don't like having to explicitly install into every
> PGconn (PQaddObjectHooks). So, an app that wants to be ported to using
> PQgetf, for example, needs to locate and inject code into all places
> connections are made, rather than just inject the call. We would rather
> have things just 'work'.
> *) In the event pqtypes becomes popular, will it remain a hooked library
> forever? If not (a tighter integration that we are advocating takes
> place), then we are stuck with the 'hook' api functions forever, unless
> this happens before 8.4 gets out.
>
> merlin

Installing it per-conn doesn't get you anything. pqtypes has already been linked in. If you use PQexec and PQgetvalue, the pqtypes code pretty much does nothing, so a per-conn install seems redundant. You are installing the same function pointers under the same name over and over. If you link with it, it should just be available.

#include "libpqtypes.h"

/* library-wide init */
PQtypesInit();

/* libpqtypes is ready for use on any conn */

That is what we would prefer. We tried to do it with a global array and a lock, but that has its own problems (namely, all the locking).

--
Andrew Chernow
eSilo, LLC
every bit counts
http://www.esilo.com/
Re: [HACKERS] pulling libpqtypes from queue
Tom Lane wrote:
> Andrew Chernow [EMAIL PROTECTED] writes:
>> Installing it per-conn doesn't get you anything. pqtypes has already
>> been linked in. If you use PQexec and PQgetvalue, the pqtypes code
>> pretty much does nothing. So, a per-conn install seems redundant. You
>> are installing the same function pointers under the same name over and
>> over. If you link with it, it should just be available.
> I don't really agree with that argument; it's not impossible that you'd
> want it on some connections and not others. The server version for
> instance could affect the choice. Per-conn install also gets you out of
> a bunch of thread safety issues (because we've already decreed that it's
> the app's problem to ensure that only one thread touches a PGconn at a
> time).
>
> regards, tom lane

AFAICS, thread-safety is the big problem. I didn't really like the locking code either -- I was grimacing while typing it.

Server version is handled by the fact that state is not global; only the hook callbacks are. If you don't use a pqtypes function, there is no memory or performance overhead (the real overhead was added at link time). An application would have to toggle based on PQserverVersion to install or not install the pqtypes objhooks, although more toggling is needed anyway to choose PQgetvalue over PQgetf, for instance.

But the community has held its ground on using hooks. We tried to sell our stubs idea but no one was buying. So, per-conn hooks it is.

--
Andrew Chernow
eSilo, LLC
every bit counts
http://www.esilo.com/
Re: [HACKERS] pulling libpqtypes from queue
Merlin Moncure wrote:
> Maybe if there was PQinitGlobalHooks so that all PGconn structs created
> would inherit the hooks automatically... this allows per-conn
> initialization (if desired) and global initialization, which is often
> easier. As I understand this, no locking is required, except the init
> function needs to be called before any real libpq code takes place (in
> threaded environments).
>
> merlin

That's interesting. I don't see much need for per-conn hooks in regards to pqtypes, but I can definitely see a reason for other object hook implementations.

--
Andrew Chernow
eSilo, LLC
every bit counts
http://www.esilo.com/
Re: [HACKERS] pulling libpqtypes from queue
[EMAIL PROTECTED] wrote:
>>> I expect you intend to get at least the hooks in, right?
>> [...] libpqtypes was designed to handle this with or without hooking.
>> (The 'hooking' debate was mainly about exactly how libpq and
>> libpqtypes were going to be separated.) libpqtypes had a superclassing
>> concept (not much discussed on the lists) where you could introduce
>> new type handlers that wrapped existing ones and was designed exactly
>> for things like this.
> That sounds cool. So in a way you do have the hooks.

A patch has been submitted for supporting libpq object hooks, which allows one to associate private storage with a PGconn or PGresult object. The hook is called when a conn or result is created or destroyed (PQreset for conn as well).

http://archives.postgresql.org/pgsql-patches/2008-04/msg00309.php

For libpqtypes, this means it can operate outside of libpq. libpqtypes was initially designed as a direct extension to libpq (internal code), but the community preferred using a generic hook interface that allowed libpqtypes to work externally.

--
Andrew Chernow
eSilo, LLC
every bit counts
http://www.esilo.com/
Re: [HACKERS] libpq object hooks -- needs copy result
Merlin Moncure wrote:
> With an eye towards sideband utility, we are suggesting:
> PQcopyResult(PGresult *dest, const PGresult *src, int numAttributes,
>              PGresAttDesc *attDescs, int options);

Small correction on that prototype:

PGresult *
PQcopyResult(const PGresult *src,
             int numAttributes,
             PGresAttDesc *attDescs,
             int options);

The 'dest' result is returned, not provided as an argument.

andrew
[HACKERS] Odd timezone backend output
I am confused about the below results. The backend is in EDT but it is converting timestamps into EST ... excluding NOW(). Regardless of the timezone provided, the backend is dishing out EST.

[EMAIL PROTECTED] ~]# uname -a
2.6.9-67.0.4.EL #1 Sun Feb 3 06:53:29 EST 2008 i686 athlon i386 GNU/Linux

[EMAIL PROTECTED] ~]# date
Thu May 1 09:54:17 EDT 2008

postgres=# select version();
PostgreSQL 8.3devel on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 3.4.6 20060404 (Red Hat 3.4.6-8)

postgres=# set datestyle='Postgres, MDY';
postgres=# select now();
                 now
-------------------------------------
 Thu May 01 09:28:53.164084 2008 EDT

postgres=# select '1997-01-29 12:31:42.92214 EDT'::timestamptz;
            timestamptz
------------------------------------
 Wed Jan 29 11:31:42.92214 1997 EST

postgres=# select '1997-01-29 12:31:42.92214 PST'::timestamptz;
            timestamptz
------------------------------------
 Wed Jan 29 15:31:42.92214 1997 EST

Is this expected behavior? I am not sure if I am missing something or my results are wrong. Is there a setting that needs tweaking?

--
Andrew Chernow
eSilo, LLC
every bit counts
http://www.esilo.com/
Re: [HACKERS] Odd timezone backend output
Tom Lane wrote:
> Andrew Chernow [EMAIL PROTECTED] writes:
>> I am confused about the below results. The backend is in EDT but it is
>> converting timestamps into EST ... excluding NOW(). Regardless of the
>> timezone provided, the backend is dishing out EST.
> Try a date that's actually during the EDT part of the year.
>
> regards, tom lane

Different systems do different things with daylight time. For instance, NTFS adjusts winter file times while in daylight savings (a file time of Jan 20 6PM reads Jan 20 7PM while in daylight time). Whether that is good or bad is a different story. I don't really have a problem with either; I just needed a little clarity.

Thanks,

--
Andrew Chernow
eSilo, LLC
every bit counts
http://www.esilo.com/
Re: [HACKERS] Odd timezone backend output
>> NTFS adjusts winter file times while in daylight savings
> The only file times we should ever be interested in are surely epoch
> times, which should be unaffected by time zones.
>
> cheers
>
> andrew

Epoch, or at least non-timezone-adjusted time, is the way every modern FS stores file times; no one said otherwise. There is a big difference between time storage and display ... I was talking about display. Postgres deals with both, so at some point a file time (or any time) will get displayed. There are different ways of handling daylight display; I have yet to find a technically correct way of doing it. The more I think about it, I personally like the display behavior of NTFS file times over something like EXT3. When I am in EDT, it is useful to have all display times in that zone (regardless of whether that time falls within winter or summer) ... just as they would if I switched to PST.

--
andrew chernow
eSilo, LLC
every bit counts
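[Editor's illustration] The storage-versus-display distinction above amounts to arithmetic on the epoch value: displaying at a single fixed offset (the NTFS-style behavior being discussed) is just formatting (t + offset) as if it were UTC. The helper below is a hypothetical sketch that avoids tzset() and local zone tables, so it is deterministic on any system:

```c
#include <stdio.h>
#include <time.h>

/* Format an epoch value at a fixed offset from UTC (in seconds), by
 * shifting the epoch and formatting the shifted value with gmtime().
 * This models single-offset display; a zone-rule-aware conversion
 * would instead consult daylight rules for the given date. */
static void
fmt_at_offset(time_t t, long offset_sec, char *buf, size_t len)
{
    time_t     shifted = t + offset_sec;
    struct tm *tmv = gmtime(&shifted);  /* no local zone tables used */

    strftime(buf, len, "%Y-%m-%d %H:%M:%S", tmv);
}
```

With a fixed -4*3600 offset, every timestamp (winter or summer) displays four hours behind UTC, which is exactly the single-offset behavior preferred in the message.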
Re: [HACKERS] Odd timezone backend output
Andrew Sullivan wrote:
> On Thu, May 01, 2008 at 09:58:09PM -0400, Andrew Chernow wrote:
>> The more I think about it, I personally like the display behavior of
>> NTFS file times over something like EXT3. When I am in EDT, it is
>> useful to have all display times in that zone (regardless of whether
>> that time falls within winter or summer) ... just as they would if I
>> switched to PST.
> There's a difference between the two cases you're thinking of, though.
> In the case of you being in EDT, you didn't actually switch time zone.
> In Eastern time zones, the offset is -05 from UTC for part of the year,
> and -04 from UTC for part of the year. That's what it _means_ to be in
> Eastern time: you didn't really switch time zone at all. I'm in
> America/Toronto in January and in June. My display should show the time
> correct to my time zone, not according to the offset from UTC. If you
> really want that, change your time zone to be an offset from UTC rather
> than a particular zone. (We used to have to differentiate between EST
> and EDT during the summer months, because Indiana didn't switch. As far
> as I know, there is no longer a single jurisdiction where the summer
> time change doesn't happen in Eastern time. If not for historical
> reasons, I'd argue the name should be changed to Eastern time.) If you
> switch to Pacific time (and why is it that people say EDT but PST? I
> don't know of any Pacific time jurisdictions that don't switch, either.
> Arizona doesn't switch mostly, but they're in Mountain time), the same
> thing happens.
>
> A

I prefer offset from UTC; the timezone abbrevs are ambiguous and confusing. If I am in a timezone that is currently 4 hours behind UTC, I would prefer all times to display adjusted by that offset. I understand that technically it's Eastern time and EDT makes no sense in Jan (NOTE: the Linux 'date' command rejects this ... should the backend?), but it's friendlier to adjust by a single offset (at least me thinks). The individual's perception of time is -0400, thus displaying all times adjusted by that seems logical. My whole preference here is in regards to display times. I always try to see things from a non-technical end-user's perspective.

--
Andrew Chernow
eSilo, LLC
every bit counts
http://www.esilo.com/
Re: [HACKERS] libpq support for arrays and composites
Andrew Dunstan wrote:
> One of the areas where libpq seems to be severely lacking is in handling
> arrays and composites in query results. I'd like to set about rectifying
> that. Ideally this would mean that drivers using libpq could easily and
> reliably deliver such objects suitably structured in their particular
> languages (e.g. lists and hashes in Perl). One complicating factor I see
> is that there is no protocol level support for anything other than
> simple objects - each data value is simply a stream of bytes of a known
> length. We would therefore need some pretty robust processing to pick
> apart structured objects. We'll need a new API to handle such objects.
> I'm thinking of something like:
>
> PQarray *PQgetArray(const PGresult *res, int row_number, int column_number);
> int PQgetArrayNDims(PQarray *array);
> int PQgetArrayLower(PQarray *array, int dim);
> int PQgetArrayUpper(PQarray *array, int dim);
> int PQgetArrayElementLength(PQarray *array, int dim1, ...);
> bool PQgetArrayElementIsNull(PQarray *array, int dim1, ...);
> char *PQgetArrayElement(PQarray *array, int dim1, ...);
> PQcomposite *PQgetComposite(const PGresult *res, int row_number, int column_number);
> PQcomposite *PQgetArrayElementComposite(PQarray *array, int dim1, ...);
> int PQgetCompositeNFields(PQcomposite *composite);
> char *PQgetCompositeFName(PQcomposite *composite, int fnumber);
> int PQgetCompositeFNumber(PQcomposite *composite, char *fname);
> Oid PQgetCompositeFType(PQcomposite *composite, int fnumber);
> int PQgetCompositeFieldLength(PQcomposite *composite, int fnumber);
> bool PQgetCompositeFieldIsNull(PQcomposite *composite, int fnumber);
> char *PQgetCompositeField(PQcomposite *composite, int fnumber);
>
> Not sure if we need analogs for PQfformat, PQfmod or PQfsize - I suspect
> not, but right now I'm just thinking out loud. Thoughts? Is this worth
> doing?
>
> cheers
>
> andrew

libpqtypes already implemented this. It is a different approach but provides the same functionality, with the inclusion of being able to handle every data type. libpqtypes uses the PGresult API for composites and arrays, rather than adding a new set of functions. To support this, one must be able to convert all data types (unless you are only supporting text results), because composites can be made up of any data type.

Simple arrays: http://libpqtypes.esilo.com/man3/pqt-specs.html#array
Composite arrays: http://libpqtypes.esilo.com/man3/pqt-composites.html

EXAMPLE OF GETTING A COMPOSITE (taken from the http://libpqtypes.esilo.com/ home page):

/* Let's get a composite.
 * CREATE TYPE simple AS (a int4, t text);
 */
PGint4 i4;
PGtext text;
PGresult *res, *simple;
int resultFormat = 1;

/* Your composites need to be registered. */
PQregisterTypeHandler(conn, "simple", NULL, NULL);

/* 2nd arg, PGparam, can be NULL if there are no query params.
 * Composites require binary results, so we can't use PQexec().
 */
res = PQparamExec(conn, NULL, "SELECT my_simple FROM t", resultFormat);
if (!res)
  fprintf(stderr, "ERROR: %s\n", PQgeterror());

/* Get the simple composite, which is exposed as a PGresult. */
PQgetf(res, 0, "%simple", 0, &simple);
PQclear(res); /* no longer needed */

/* Get the simple composite attributes from the simple result.
 * Reference fields by name by using a '#' rather than a '%'.
 * The field names are the composite attribute names.
 */
PQgetf(simple, 0, "#int4 #text", "a", &i4, "t", &text);
PQclear(simple);

--
Andrew Chernow
eSilo, LLC
every bit counts
http://www.esilo.com/
Re: [HACKERS] libpq support for arrays and composites
That makes it quite useless for my intended purpose. I found no more use cases for text results after libpqtypes started to take shape, even though libpqtypes supports all data types in text and binary, excluding arrays and composites. Because of this, adding a text parser for arrays and composites felt like a lot of work for little gain. libpqtypes is really designed to be a binary interface. The text support offered allows existing applications to use the new interface with results generated by PQexec(), meaning you can use PQgetf without having to change code to use PQputf(). If you take another glance at libpqtypes, you may see that result format decisions are pretty well abstracted and there really is no need for text results anymore (okay, I'll categorize that as an opinion). I also am not particularly enamoured of the libpqtypes way of doing things, which feels rather foreign to me. Not sure we can fix this issue. We made every attempt to keep things familiar ... printf/scanf style. It's a new approach for libpq but an old one for C hackers. -- Andrew Chernow eSilo, LLC every bit counts http://www.esilo.com/
Re: [HACKERS] CommitFest dragging?
Tom Lane wrote: Joshua D. Drake [EMAIL PROTECTED] writes: On Sun, 2008-07-20 at 10:17 -0700, Josh Berkus wrote: It's been 3 weeks since the start of the commitfest, and less than half the patches have been committed or sent back. We seem to be stalled. libpq object hooks, now called libpq events, can be pushed back to the september commitfest. We would love to get it reviewed now but we are deeply involved in a company project and the commitfest seems to be a bit behind. The two combined make for a good push back candidate. Do we need to add this patch to the sept commitfest? How would we remove it from the current one? -- Andrew Chernow eSilo, LLC every bit counts http://www.esilo.com/ -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] C Extension woes
Tim Hawes wrote: text * pl_masterkey(PG_FUNCTION_ARGS) { char *e_var = getenv("PGMASTERKEY"); size_t length = VARSIZE(e_var) - VARHDRSZ; The VARSIZE macro is for variable-length structures, like a text or bytea, which contain a length and a data member. You are using this macro on a regular C string, e_var. Try this instead: size_t length = e_var != NULL ? strlen(e_var) : 0; -- Andrew Chernow eSilo, LLC every bit counts http://www.esilo.com/
Re: [HACKERS] Extending varlena
David Fetter wrote: Folks, As the things stored in databases grow, we're going to start needing to think about database objects that 4 bytes of size can't describe. People are already storing video in lo and bytea fields. To date, the sizes of media files have never trended downward. I always find these requests puzzling. Is it really useful to store the data for a jpeg, video file or a 10GB tar ball in a database column? Does anyone actually search for byte sequences within those data streams (maybe if it were text)? I would think that the metadata is what gets searched: title, track, name, file times, size, etc... Database storage is normally pricey, stocked with 15K drives, so wasting that expensive storage with non-searchable binary blobs doesn't make much sense. Why not offload the data to a file system with 7200 RPM SATA drives and store a reference to it in the db? Keep the db more compact and simpler to manage. Andrew Chernow eSilo, LLC -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Extending varlena
David Fetter wrote: On Mon, Aug 18, 2008 at 06:09:13PM -0400, Andrew Chernow wrote: David Fetter wrote: Folks, As the things stored in databases grow, we're going to start needing to think about database objects that 4 bytes of size can't describe. People are already storing video in lo and bytea fields. To date, the sizes of media files have never trended downward. I always find these requests puzzling. Is it really useful to store the data for a jpeg, video file or a 10GB tar ball in a database column? It is if you need transaction semantics. Think medical records, etc. Cheers, David. I see that, although developing the middleware between db and fs is rather trivial. I think that is the puzzling part. It just feels awkward to me to just stuff it in the db. You can do more by distributing. Anyways (back on topic), I am in favor of removing limits from any section of the database ... not just your suggestion. The end-user application should impose limits. Andrew
Re: [HACKERS] Extending varlena
Tom Lane wrote: Andrew Chernow [EMAIL PROTECTED] writes: Anyways (back on topic), I am in favor of removing limits from any section of the database ... not just your suggestion. The end-user application should impose limits. That's nice as an abstract principle, but there are only so many hours in the day, so we need to prioritize which limits we're going to get rid of. The 4-byte limit on individual Datum sizes does not strike me as a limit that's going to be significant for practical use any time soon. (I grant David's premise that people will soon want to work with objects that are larger than that --- but not that they'll want to push them around as indivisible, store-and-fetch-as-a-unit field values.) regards, tom lane Yeah, my comments were overly general. I wasn't suggesting attention be put on one limit over another. I was only saying that the act of removing a limit (of which many are arbitrary) is most often a good one. andrew -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Extending varlena
Jeff Davis wrote: I always find these requests puzzling. Is it really useful to store the data for a jpeg, video file or a 10GB tar ball in a database column? One use case is that it can use the existing postgresql protocol, So can what I am suggesting. How about a user-defined C function in the backend that talks to the fs and uses SPI to sync info with a record? Now the operation is behind a transaction. Yet, one must handle fs orphans from evil crash cases. Just one solution, but other more creative cats may have better ideas. the point is, it can be done without too much effort. A little TLC :) Andrew -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
[HACKERS] PGparam timestamp question
I am trying to add support for timestamps in our proposed libpq PGparam patch. I ran into something I don't really understand. I wasn't sure if it was my libpq code that was wrong (it converts a binary timestamp into a time_t or struct tm), so I tried it from psql. Server is using EST (8.3devel, x86_64, CentOS 5). TIMESTAMP WITHOUT TIME ZONE appears to be incorrectly adjusting the time.

postgres=# create table t (ts timestamp);
postgres=# insert into t values (now());
postgres=# select * from t;
             ts
 2007-12-09 08:00:00.056244

postgres=# select ts at time zone 'UTC' from t;
           timezone
 2007-12-09 03:00:00.056244-05

Shouldn't this be 13:00? TIMESTAMP WITH TIME ZONE returns the result I would expect.

postgres=# create table t (ts timestamp with time zone);
postgres=# insert into t values (now());
postgres=# select * from t;
             ts
 2007-12-09 08:00:00.056244

postgres=# select ts at time zone 'UTC' from t;
           timezone
 2007-12-09 13:00:00.056244-05

Is this expected/desired behavior? If it is, how are timestamps stored internally for WITHOUT TIME ZONE types? The docs don't really say. They do discuss how WITH TIME ZONE stores things, but WITHOUT is a little blurry. Maybe I am missing something simple. Andrew ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [HACKERS] PGparam timestamp question
Okay, thanks. So using WITHOUT TIME ZONE basically means, store the provided value as UTC. Meaning, 8AM EST NOW() is stored/treated as 8AM UTC. That explains why my libpq code was getting 3AM for without time zone values. I am using code from src/interfaces/ecpg/pgtypeslib/timestamp.c timestamp2tm(). That uses localtime() after converting the timestamp to an epoch value. I changed this code so that it calls gmtime() for TIMESTAMPOID and localtime() for TIMESTAMPTZOID. Now it works perfectly :) Need to figure out how to handle times outside of the system time_t range. Thanks again, Andrew Robert Treat wrote: On Sunday 09 December 2007 09:44, Andrew Chernow wrote: I am trying to add support for timestamps in our proposed libpq PGparam patch. I ran into something I don't really understand. I wasn't sure if it was my libpq code that was wrong (converts a binary timestamp into a time_t or struct tm) so I tried it from psql. Server is using EST (8.3devel) x86_64 centos 5 TIMESTAMP WITHOUT TIME ZONE appears to be incorrectly adjusting the time. postgres=# create table t (ts timestamp); postgres=# insert into t values (now()); postgres=# select * from t; ts 2007-12-09 08:00:00.056244 postgres=# select ts at time zone 'UTC' from t; timezone --- 2007-12-09 03:00:00.056244-05 Shouldn't this be 13:00 No. 8 AM UTC is 3 AM Eastern. TIMESTAMP WITH TIME ZONE returns the result I would expect. postgres=# create table t (ts timestamp with time zone); postgres=# insert into t values (now()); postgres=# select * from t; ts 2007-12-09 08:00:00.056244 postgres=# select ts at time zone 'UTC' from t; timezone --- 2007-12-09 13:00:00.056244-05 Correspondingly, 8 AM eastern is 1 PM UTC. Is this expected/desired behavior? If it is, how are timestamps stored internally for WITHOUT TIME ZONE types? The docs don't really say. They do discuss how WITH TIME ZONE stores things, but WITHOUT is a little blury. Maybe I am missing something simple. 
When timestamptzs are converted to timestamp, there is no time adjust, you simply lose the tz offset information: pagila=# select now(), now()::timestamp; -[ RECORD 1 ]-- now | 2007-12-09 11:25:52.923612-05 now | 2007-12-09 11:25:52.923612 If you store without timezone, you lose the original timezone information, so selecting out with time zone simply selects the stored time in the time zone you selected. HTH. ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [HACKERS] PGparam timestamp question
got it. stored vs. displayed was confusing me. Andrew Tom Lane wrote: Andrew Chernow [EMAIL PROTECTED] writes: Okay, thanks. So using WITHOUT TIME ZONE basically means, store the provided value as UTC. Meaning, 8AM EST NOW() is stored/treated as 8AM UTC. No, I think you are more confused now than you were before. For both types, the underlying stored value is just a number-of-seconds offset from 2000-01-01 00:00:00. The question is what's the reference time really. For WITHOUT TIME ZONE, what you see is what you get: it's just a date and time, and nobody is actually promising anything about timezone considerations. For WITH TIME ZONE, the convention is that the reference time is 2000-01-01 00:00:00 UTC, and therefore it is accurate to say that the *stored value* is always expressed in UTC. What confuses people is that for display purposes, a TIMESTAMP WITH TIME ZONE value is rotated to your local timezone (as set by the timezone variable) and printed with your local zone offset. But that's not what's really stored. Now, about AT TIME ZONE: that's a trickier operation than it looks. When you start with a timestamp WITH time zone, the meaning is "here's a UTC time, give me the equivalent local time in this time zone". What comes out is a timestamp WITHOUT time zone, which means it'll just be printed as-is. When you start with a timestamp WITHOUT time zone, the meaning is "here is a local time in this time zone, give me the equivalent UTC time". What comes out is a timestamp WITH time zone, which as we already saw is implicitly UTC inside the system, which is correct. But you have to remember that that value will be rotated back to your local zone for display. I think that extra conversion is what was confusing you to start with. Another point to keep in mind is that if the system is forced to assume something about the timezone of a WITHOUT TIME ZONE value, it will assume your local time zone setting.
In particular this happens during forced coercions between WITH and WITHOUT TIME ZONE. So for example, in regression=# select now(), now()::timestamp without time zone; now |now ---+ 2007-12-09 13:21:50.619644-05 | 2007-12-09 13:21:50.619644 (1 row) the two values are in fact different numbers-of-seconds internally. They print the same, but that's because in the first case the timestamp-with-time-zone output routine rotated from UTC to my local zone (EST) during printout. In the second case the same 5-hour offset was applied by the cast to without-time-zone, and then the timestamp-without-time-zone output routine just printed what it had without any magic. regards, tom lane ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [HACKERS] PGparam timestamp question
Or was the code incorrectly used? Hard for me to say, but I think its about caller context. The way I am using it might be different ... hey the function was static ... copy paster be warned! The code appears to be doing the same thing as the backend (with the exclusion of backend stuff like HasCTZSet and forced conversions). I plan to do an extensive test sometime today. So far, I am getting the correct timestamp conversions across the board. Andrew Michael Meskes wrote: On Sun, Dec 09, 2007 at 11:54:25AM -0500, Andrew Chernow wrote: That explains why my libpq code was getting 3AM for without time zone values. I am using code from src/interfaces/ecpg/pgtypeslib/timestamp.c timestamp2tm(). That uses localtime() after converting the timestamp to an epoch value. I changed this code so that it calls gmtime() for TIMESTAMPOID and localtime() for TIMESTAMPTZOID. Now it works perfectly :) Does this mean pgtypeslib is buggy? This code has been taken from the backend ages ago, so some changes might have occured that I'm not aware of. Or was the code incorrectly used? Michael ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
[HACKERS] PGparam proposal
We will have a 0.6 patch tomorrow. This is not a patch, it's a proposal. The implementation has been adjusted and is now a simple printf-style interface. This is just a design proposal to see if people like the idea and interface. Up to this point, we have not provided a formal proposal; just a few patches with some explanations. We would appreciate feedback! DESIGN PROPOSAL This proposal extends libpq by adding printf-style functions for sending and receiving through the parameterized interface. In addition, a number of structs were introduced for storing the binary version of built-in pgtypes. RATIONALE *) Removes the need to manually convert values to C types. *) Simplifies use of the binary interface, putting or getting values. *) Provides simple structures for many pgtypes, such as polygon, which are not documented for client use. *) Promotes use of the parameterized API, which has performance and security benefits. *) Support for arrays is a major plus; w/o parsing or dealing with the binary format. *) Only requires 4 new functions in exports.txt. INTERFACE *) PQputf *) PQgetf *) PQexecParamsf *) PQsendQueryParamsf NOTE: Only PQputf and PQgetf are required for this interface to work. With that in mind, the other two are really cool :) int PQputf(PGconn *conn, const char *paramspec, ...); PQputf offers a way of packing pgtypes for use with the parameterized functions. One or more values can be put at the same time. The params are stored within the PGconn struct as a PGparam structure (internal API only). The paramspec describes the pgtypes that you want to put. In the paramspec, anything other than a valid conversion specifier is ignored: "%n4, [EMAIL PROTECTED] %n8" is treated the same way as "%n4%n8". Once all params have been put, one of four parameterized functions that are aware of PGparam can be used: * PQexecParams * PQexecPrepared * PQsendQueryParams * PQsendQueryPrepared For a list of PQputf conversion specifiers, see format_spec.txt.
Example:

PGpoint pt = {1.2, 4.5};

/* This puts an int4, int8, point and a text */
PQputf(conn, "%n4 %n8 %gp %cT", 100, 123LL, &pt, "text");

/* execute: Only the conn, command and resultFormat args are used. */
PQexecParams(conn, "INSERT INTO t VALUES ($1,$2,$3,$4)", 0, NULL, NULL, NULL, NULL, 1);

int PQgetf(const PGresult *res, int tup_num, const char *fieldspec, ...);

PQgetf offers a way of getting result values from binary results. It currently offers the ability to get from text results as well, but we are not sure this should be supported. PQgetf is really a way of getting binary results. In the fieldspec, anything other than a valid conversion specifier is ignored: "%n4, [EMAIL PROTECTED] %n8" is treated the same way as "%n4%n8". For a list of PQgetf conversion specifiers, see format_spec.txt.

Example:

int i4;
long long i8;
PGpoint pt;
char *text;

/* From tuple 0, get an int4 from field 0, an int8 from field 1, a point
 * from field 2 and a text from field 3. */
PQgetf(res, 0, "%n4 %n8 %gp %cT", 0, &i4, 1, &i8, 2, &pt, 3, &text);

PUT EXEC

We also propose two other functions that allow putting parameters and executing all in one call. This is basically a wrapper for PQputf + exec/send. These are the natural evolution of PQputf.

extern PGresult *PQexecParamsf(PGconn *conn, const char *cmdspec, int resultFormat, ...);
extern int PQsendQueryParamsf(PGconn *conn, const char *cmdspec, int resultFormat, ...);

Example:

int format = 1;
PGpoint pt = {1.2, 4.5};

/* 2 step example */
PQputf(conn, "%n4 %n8 %gp %cT", 100, 123LL, &pt, "text");
PQexecParams(conn, "INSERT INTO t VALUES ($1,$2,$3,$4)", 0, NULL, NULL, NULL, NULL, 1);

/* 1 step example */
PQexecParamsf(conn, "INSERT INTO t VALUES (%n4, %n8, %gp, %cT)", format, 100, 123LL, &pt, "text");

This causes the four params to be put. Then the parameterized function arrays are built and the below query is executed:

INSERT INTO t VALUES ($1, $2, $3, $4)

If you use PQputf prior to execf/sendf, then those parameters are included.
Doing this is basically appending more params during the exec/send call.

PQputf(conn, "%n4", 100);
PQexecParamsf(conn, "INSERT INTO t VALUES (%cT, $1)", format, "text");

Resulting query assigns an int4 to $1 and a text to $2:

INSERT INTO t VALUES ($2, $1)

andrew merlin

For putf or getf, the conversion specifier is a % followed by a two character encoding. The first character indicates the type class while the second character identifies the data type within that class. The byteaptr and textptr are really bytea and text. The ptr extension indicates that only a pointer assignment should occur rather than a copy. Most of the below types are already implemented. Some are still being worked on.

Character types:
cc char
ct text, varchar, char
cT textptr

Boolean types:
bb bool

Numeric types:
n2 int2
n4 int4
n8 int8
nf float4
nd float8
nn numeric

Bytea types:
Bb bytea
BB
Re: [HACKERS] PGparam proposal
For starters, if binary results is a feature you wish you could uninvent then we are probably dead in the water for that reason. This goes to the core of our concept. If there is no desire to synch client and server in regards to type handling, then this is a waste of time. I think all of this would make libpq more powerful. With that said, my follow up: I think a printf-style API is fundamentally a bad idea in this context. printf only works well when the set of concepts (datatypes, format specifiers, etc) is small and fixed; As of now, including numbers and alpha chars, there are 62 classes of which we used 11 (a class is the first char). That leaves 51 classes with 62 types per class. Where I would agree with you is that over time things can become cryptic. There are only several ways to do this. 1. Use Oids - doesn't work because they are not publically exposed by libpq. 2. Create a function for each type. This idea was rejected because it bloated the API (I'll buy this). 3. Type aliasing schema - naturally, printf-style comes to mind but may become cryptic. I should mention that we were only trying to support built-in pgtypes in libpq. If you are looking to support all external types, then we propose the below: For #3, maybe it would be better to abandon %c or %cc type encoding and move into something more verbose. We could just spell out the type: %int4, %point, etc... Maybe for built-in types you could prepend 'pg': %pgint4, %pgpoint. This opens up the namespace and removes scalability and cryptic issues. Expanding on the %pgint4 idea, 3rd party types can supply their own %typname handlers (a more modular approach). You can install them at runtime, PQinstallTypeHandler(typname, etc..), or something like that. When a 3rd party %typname is encountered, the appropriate handler would be used. Standard pgtypes would be installed by default. PQinstallTypeHandler(... gisbox2d ...); PQputf(... 
%gisbox2d %pgpolygon %pgint4 ...); //PQgetf would use the same %typname The only thing libpq should support by default is the built-in pgtypes. A handler can expand on this. I find the idea of embedding state like that into the PGconn to be pretty horrid, as well. It makes the design non-reentrant You can't just randomly change the behavior of existing API functions. Okay. We initially had the PGparam as a public opaque, but changed it. We will stop piggy-backing off the existing parameterized functions. Instead, we will supply PGparam exec/send functions.

typedef struct pg_param PGparam; //opaque

param = PQparamCreate(conn);
PQputf(param, "%pgint4 %pgtimestamptz", 62, &tstz);
res = PQparamExec(conn, param, command, resfmt);
//PQparamExec will always PQparamClear(param), whether it failed or not
//That means after an exec/send, the param object is ready for puts
PQparamFinish(param); // free it

// This causes the below sequence of function calls:
// PQparamCreate, PQputf, PQexecParams(... VALUES ($1)), PQparamFinish
res = PQparamExecf(conn, resfmt, "INSERT INTO t VALUES (%pgint)", 62);

* the 8.2-to-8.3 change in the width of type money We have code comments throughout the patches, as well as documented in the release notes. At this point, we solved getf by checking PQgetlength. If it's 4, read 4, otherwise read 8. For putf, we would have to check the server version. * the likely future change to type timestamptz to store original timezone explicitly We would have to change how timestamptz handles the binary format from that version forward; looks like a switch on sversion for back and forwards compatibility. * the likely future change to type text to store encoding/collation info explicitly Would the server do text conversion and then pass the converted text back to the client? Or, would it choose a common encoding like UTF-8 and return the text with encoding and let the client convert. How does this affect text format? 
In the end, some of these changes would change the text format, right? That would push these changes into the API user's lap, to parse and fuss with. I just think it is cleaner to synch the binary and/or text formats with the server. If you are looking for ways to change the binary/text format of types w/o having to make the most recent clients aware of this, then I think we have lost this battle. Another solution is revamping utils/adt so that it is a shared API for client and server. If you upgrade a client, you would automatically get the latest formatting functions. Just like libpq checks protocol version in several places, conn->sversion would have to be checked, or maybe have a typefmt_api_version. andrew merlin
Re: [HACKERS] PGparam proposal
library users would have no idea whether the values they provide are being sent as text or binary. The putf interface currently abstracts how it actually sends it. Although, you do put a C type rather than a string. There is a putstr %pqtypstr, which puts the string representation of a type. providing an easier-to-use API for PQexecParams and friends, and what parts are actually interested in binary data transmission (and why) There are two things happening here and two things we are trying to solve: 1. putf and PGparam - a simpler interface for executing queries using parameterized functions. 2. getf - doesn't use PGparam at all. You pass it a PGresult. The goal of this is to translate either text or binary results into a uniform C type or structure. For instance, we expose the below structure for inet.

/* This struct works with CIDR as well. */
typedef struct {
  /* convenience, value the same as first 2 bytes of sa_buf */
  unsigned short sa_family;
  /* mask, ie. /24 */
  int mask;
  int is_cidr;
  /* Cast to: sockaddr, sockaddr_in, sockaddr_in6, sockaddr_storage */
  int sa_len;
  char sa_buf[128]; /* avoid referencing sockaddr structs */
} PGinet;

// res could be text or binary results, but PGinet remains the same.
PGinet inet;
PQgetf(res, tup_num, "%pginet", field_num, &inet);
connect(sock, (const struct sockaddr *)inet.sa_buf, (socklen_t)inet.sa_len);

The above is simpler than examining 10.0.0.1/32 and converting it to a struct sockaddr. The same struct is used when putting a type as a C type rather than as a string. You can use getf without ever using putf+PGparam, and vice versa. your proposal completely glossed over the issue of exactly what data structure would be exposed to clients for anything more complex than an integer Complex types require a receiving structure on the client side, thus the types we added to libpq-fe.h: PGinet, PGpolygon, PGarray, PGtimestamp, etc... 
But keep in mind that these structs are the result of libpq extracting the data from text/binary formats and assigning data to struct members. It does not expose raw format, the API user can already get that via PQgetvalue(). If we could have libpq insulate client apps from these kinds of changes, that would be one thing; This is the goal of getf. The API user interfaces through PGinet, not through the output of PQgetvalue(). We propose that its libpq's job internally to handle changes to text or binary formats and expose consistent types/structures. type NUMERIC (say, as a bignum integer plus exponent instead of the current BCD-ish format) This is what we want to hide inside libpq's getf. Just expose a PGnumeric that has been translated into c types. We never expose the wire format, only the C translated version of it. without any reliance on binary data transmission whatsoever. Yes this is possible, but at a performance ease-of-use cost. Our performance tests didn't show much gain with strings or ints, but complex types were 10 times faster (putting/getting arrays, polygons, paths, etc...). So, the trade-off is a little more maintainence overhead on libpq. BTW, previously we mentioned a 3rd party handler api concept. This is not needed to get libpq up and going with built-in types (which is all we feel it should be responsible for). the handler API can always be added later w/o changing existing functions ... have to add a couple though. andrew merlin ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [HACKERS] PGparam proposal
your proposal completely glossed over the issue of exactly what data structure would be exposed to clients for anything more complex than an integer Yeah. Forgot to include the below in the proposal and the last email. Here is the latest list for complex types. Most have been around since the last 0.5 patch. Still need numeric, date, time and a couple others.

typedef struct { double x; double y; } PGpoint;

typedef struct { PGpoint pts[2]; } PGlseg;

typedef struct { PGpoint high; PGpoint low; } PGbox;

typedef struct { PGpoint center; double radius; } PGcircle;

/* When used with PQgetf, 'pts' must be freed with PQfreemem(). */
typedef struct { int npts; int closed; PGpoint *pts; } PGpath;

/* When used with PQgetf, 'pts' must be freed with PQfreemem(). */
typedef struct { int npts; PGpoint *pts; } PGpolygon;

/* This struct works with CIDR as well. */
typedef struct {
  /* here for convenience, value the same as first 2 bytes of sa_buf */
  unsigned short sa_family;
  /* mask, ie. /24 */
  int mask;
  int is_cidr;
  /* Cast to: sockaddr, sockaddr_in, sockaddr_in6, sockaddr_storage */
  int sa_len;
  char sa_buf[128];
} PGinet;

typedef struct { int a; int b; int c; int d; int e; int f; } PGmacaddr;

/* main problem with this type is that it can be a double
 * or int64 and that is a compile-time decision. This means
 * the client has a 50% chance of getting it wrong. It would
 * be nice if the server included an indicator or converted
 * the external format to one or the other. OR, make client
 * aware of server's timestamp encoding when it connects. */
typedef struct {
  /* When non-zero, this is a TIMESTAMP WITH TIME ZONE. When zero,
   * this is a TIMESTAMP WITHOUT TIME ZONE. When WITHOUT, gmtoff
   * will always be 0 and tzn will be GMT. */
  int withtz;
  /* binary timestamp from server (in host order). If haveint64 is
   * non-zero, use the 'ts.asint64' value otherwise use 'ts.asdouble'. 
   */
  int haveint64;
  union {
    struct { unsigned int a; signed int b; } asint64;
    double asdouble;
  } ts;
  /* microseconds */
  int usec;
  /* GMT offset, some systems don't have this in struct tm */
  int gmtoff;
  /* time zone name, some systems don't have this in struct tm */
  char *tzn;
  /* broken-down time */
  struct tm tm;
} PGtimestamp;

/* still working on this, may need access macros */
typedef struct { int dim; int lbound; } PGarraydim;

typedef struct {
  int len;
  /* already in PGtype format. For instance:
   * (PGpolygon *)arr->items[i].data
   * or
   * printf("text=%s\n", arr->items[i].data);
   *
   * Could have a union of all types here but that
   * doesn't help much for 3rd party types. */
  char *data;
} PGarrayitem;

typedef struct {
  Oid oid; /* type of items[].data */
  int ndim;
  PGarraydim dims[MAXDIM];
  int nitems;
  PGarrayitem *items;
} PGarray;

andrew merlin
[HACKERS] PGparam proposal v2
Here is our updated PGparam extension to the libpq api: http://archives.postgresql.org/pgsql-hackers/2007-12/msg00356.php We have a patch implementing the following which we are cleaning up. We are also kicking around some ideas for arrays and possibly composite types, which we may consider if the community wants to move forward with this proposal. Tom made a number of comments, some of which we have addressed: *) Separate PGparam from PGconn: we agree with this and separated them. *) Changes to existing API functions: we agreed and moved new behavior to new functions. *) 3rd party types: we now support this through a type registration interface. *) Internal type changes: We think changes to binary format are fairly rare and easily addressed. *) Type confusion was removed by giving each type its own specifier. *) Objections to printf: We agreed in part: we moved to natural names, from %n4 to %pgint for example. This addressed scalability concerns and should be less cryptic to use. *) Argument passing in putf and getf is identical to the previous proposal. All we changed was the naming schema for the %spec, and putf now takes a PGparam rather than a PGconn. * API INTERFACE /* opaque */ typedef struct pg_param PGparam; PGparam *PQparamCreate(PGconn *conn); /* manually reset a param struct. This is done by * all execution functions for you. 
 */
void PQparamReset(PGparam *param);

/* free a PGparam */
void PQparamClear(PGparam *param);

int PQputf(
    PGparam *param,
    const char *typeSpec,
    ...);

int PQgetf(
    const PGresult *res,
    int tup_num,
    const char *fieldSpec,
    ...);

/* PGparam Execution Functions */
PGresult *PQparamExec(
    PGconn *conn,
    PGparam *param,
    const char *command,
    int resultFormat);

int PQparamSendQuery(
    PGconn *conn,
    PGparam *param,
    const char *command,
    int resultFormat);

PGresult *PQparamExecPrepared(
    PGconn *conn,
    PGparam *param,
    const char *stmtName,
    int resultFormat);

int PQparamSendQueryPrepared(
    PGconn *conn,
    PGparam *param,
    const char *stmtName,
    int resultFormat);

/* All in wonder, no PGparam needed */
PGresult *PQexecParamsf(
    PGconn *conn,
    const char *commandSpec,
    int resultFormat,
    ...);

/* All in wonder, no PGparam needed */
int PQsendQueryParamsf(
    PGconn *conn,
    const char *commandSpec,
    int resultFormat,
    ...);

/* All in wonder, no PGparam needed */
PGresult *PQexecPreparedf(
    PGconn *conn,
    const char *stmtName,
    const char *typeSpec,
    int resultFormat,
    ...);

/* All in wonder, no PGparam needed */
int PQsendQueryPreparedf(
    PGconn *conn,
    const char *stmtName,
    const char *typeSpec,
    int resultFormat,
    ...);

/* gets the PGparam error message */
char *PQparamErrorMessage(const PGparam *param);

* TYPE ALIAS SPECIFIERS

The convention for postgresql built-in types is a % followed by the type alias. Every pgtype begins with pg. For example:

%pgint4
%pgpolygon
%pgbox

3rd party types can register their own specifiers, which is discussed in the TYPE HANDLER SYSTEM section. Type aliases must be unique.

* TYPE HANDLER SYSTEM

typedef struct pg_typeputargs {
    /* The out buffer will be at least 16K. If more room is needed,
     * use PQ_TYPE_SETOUT to grow the buffer. In most cases,
     * 16K is plenty of room.
     */
    char *out;

    /* the size in bytes of the out buffer */
    int outl;

    /* Should not use directly, see PQ_TYPE_SETOUT.
     * For the brave, set to 1 if you point the out buffer at memory
     * that should be freed after your put callback returns.
     */
    int free_out;

    /* The arguments to putf. Use PQ_TYPE_NEXTARG. */
    va_list *ap;

    /* The type's alias name, like 'pgint8'. */
    const char *type_alias;

    /* Sets an error message. This msg shows up in
     * PQparamErrorMessage().
     */
    int (*seterr)(struct pg_typeputargs *args, const char *format, ...);
} PGtypePutArgs;

typedef struct pg_typegetargs {
    const PGresult *res;
    int tup_num;
    int field_num;

    /* pointer to the output of PQgetvalue for this tup+field */
    char *value;

    /* The arguments to getf. Use PQ_TYPE_NEXTARG. NOTE: the field_num
     * supplied to getf has already been pulled out of the va_list and
     * assigned to this struct's field_num member.
     */
    va_list *ap;

    /* The type's alias name, like 'pgint8'. */
    const char *type_alias;

    /* Sets an error message. This msg shows up in
     * PQresultErrorMessage().
     */
    int (*seterr)(struct pg_typegetargs *args, const char *format, ...);
} PGtypeGetArgs;

#define PQ_TYPE_NEXTARG(typeArgs, type) va_arg(*(typeArgs)->ap, type)

/* makes sure that putArgs->out is large enough for new_outl */
#define PQ_TYPE_SETOUT(putArgs, new_outl) do{ \
    if((new_outl) > (putArgs)->outl) \
    { \
        (putArgs)->out = (char *)malloc(new_outl); \
        if(!(putArgs)->out) \
            return -1; \
        *(putArgs)->out = 0; \
        (putArgs)->outl = (new_outl); \
        (putArgs)->free_out = 1; \
    } \
} while(0)

/*
 * Returns - the number of bytes put or -1 for error.
 */
Re: [HACKERS] binary array and record recv
PQlookupOid(PGconn *conn, char **type_names, Oid *oids, int count);

We are backing away from this (not such a great idea). We are actually working hard at removing Oid dependencies from our PGparam idea. We think it is more generic to make the server allow InvalidOid for composites and array elmtypes, as Oids can change from server to server.

andrew
merlin
eSilo
Re: [HACKERS] binary array and record recv
Tom Lane wrote:
Andrew Chernow [EMAIL PROTECTED] writes: Both array and record binary recv functions require that the recv'd Oid match the Oid of what the backend has deduced. We don't think this behavior gets you very much.

Other than the ability to detect errors before the code goes off into the weeds, you mean?

regards, tom lane

When dealing with binary, the Oid the client sends may match what the server thinks but the data is wrong (the client sent binary formatted data of the wrong type). Thus, the only real check we saw was on the data length (which is rolling the dice). Plus, for non-array and non-composite types, specifying the Oid is optional. So why is it not optional for arrays/composites?

How is the client supposed to send back composite types without having a meaningful way to get the Oids from the server? Either you have to be flexible and trust what the client sends, or you have to make it possible for the client to get the proper information. A third option is to not allow the client to send binary composite/array types at all.

andrew
merlin
eSilo
Re: [HACKERS] query : max size Bytea
Arunachalam Parthasarathy wrote:
hi, i would request you to answer the following query: 1. What is the maximum size of a single bytea entry in a column? i would be thankful to you for the above request. P.S.: my current requirement is to add more than 1 GB of data in a single entry of a bytea column in each row. Thanks and regards, Arunachalam

bytea uses up to 4 bytes to represent the binary string length (signed 32-bit integer). So the max data size is 2G. http://www.postgresql.org/docs/8.3/static/datatype-binary.html

andrew
Re: [HACKERS] query : max size Bytea
Andrew Chernow wrote:
Arunachalam Parthasarathy wrote: hi, i would request you to answer the following query: 1. What is the maximum size of a single bytea entry in a column? i would be thankful to you for the above request. P.S.: my current requirement is to add more than 1 GB of data in a single entry of a bytea column in each row. Thanks and regards, Arunachalam

bytea uses up to 4 bytes to represent the binary string length (signed 32-bit integer). So the max data size is 2G. http://www.postgresql.org/docs/8.3/static/datatype-binary.html

Sorry, my mistake. Max size is 1G. oops.

andrew
[HACKERS] timetz range check issue
When inserting a timetz in binary mode, there are no range checks on the time value (nor on the zone). In text mode, things are fine:

postgres=# insert into t values ('24:00:00.01-05'::timetz);
ERROR: date/time field value out of range: 24:00:00.01-05

/* 24:00:00.01-05 */
double d = ((24 * MINS_PER_HOUR + 0) * SECS_PER_MINUTE) + 0 + 0.01;

postgres=# select * from t;
val
24:00:00.01-05

It displays it correctly, but it doesn't seem like proper behavior. If I insert 25:00:00.00-05, I get nothing:

postgres=# select * from t;
val
-
(1 row)

QUERY-LOG:
LOG: execute unnamed: INSERT INTO timetz_test VALUES ($1)
DETAIL: parameters: $1 = 'p£ '

I get no error from libpq. I think a range check is needed in timetz_recv and time_recv.

version: (integer_datetimes=off) PostgreSQL 8.3devel on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 4.1.1 20070105 (Red Hat 4.1.1-52)

Andrew
Re: [HACKERS] timetz range check issue
Tom Lane wrote:
range-checks are present only where needed for the backend to defend itself

Survival is very important, but so is maintaining data integrity. IMHO, data validation should be as consistent as possible. If the backend refuses data on one hand but allows it on the other, confusion sets in. I realize that binary input can't always be 100% validated, but a best effort is good form. It looks like most recv funcs already have checks, where a check is somewhat meaningful.

andrew
Re: [HACKERS] binary array and record recv
Tom Lane wrote:
Andrew Chernow [EMAIL PROTECTED] writes: When dealing with binary, the Oid the client sends may match what the server thinks but the data is wrong (client sent binary formatted data of the wrong type). Thus, the only real check we saw was on the data length (which is rolling the dice).

Yes, the available checks to detect client-vs-server datatype mismatch are very weak, but that hardly comes across as a good argument for removing one of the ones we have.

How is the client supposed to send back composite types without having a meaningful way to get the Oids from the server?

On what grounds do you claim that it can't get the type Oids from the server?

regards, tom lane

I was wondering if anyone would be opposed to a small allowance here. When trying to send a NULL value for a record/composite column, is it possible to simply ignore the recv'd oid? Currently, the oid of each column is read and checked against the server. If the oids don't match, an error is raised. I see Tom's take on this, a weak check is better than no check at all. But when passing in a NULL value, setting length to -1, I'm not sure this check even registers as a weak one. Maybe:

for each record column:
  1. read the oid
  2. read the len
  3. if oid != server_oid && len != -1, error out: wrong data type

andrew
merlin
eSilo
[HACKERS] Proposal - libpq Type System beta-0.8a (was PGparam)
We previously tried to send this proposal/patch, but it never showed up. We tried twice in a 16 hour period. It might be something with the attachment, who knows. This time, we are providing a URL instead of attaching the patch. Please disregard previous emails if they come back from never-never-land.

http://www.esilo.com/projects/postgresql/libpq/typesys-beta-0.8a.tar.gz

What was previously called PGparam is now called the libpq Type System. PGparam is just one structure within the Type System. This is an updated proposal/patch for a Type System in libpq. Basically, it allows applications to send binary formatted parameters (put) and receive text or binary formatted results (get) through a printf-style interface. It also adds the ability to register user-defined types, sub-classes of existing types, as well as composites. There is full support for arrays, composites, composite arrays and nested composites.

There are four documents included within the tar:

type-system-api.txt - Documents the API functions.
type-specifiers.txt - Explains the type specifier syntax (printf-style). It also documents how to construct parameters and get result values for every supported type.
composites-arrays.txt - Documents the use of arrays, composites and arrays of composites. This proposed API has full support for nested arrays or composites.
type-handlers.txt - Explains how to register and implement a libpq type handler. Types can be sub-classed.

The patch is named typesys.patch. There is a regression test named regression-test.c and a makefile for it named makefile.typesys.

Andrew
Merlin
eSilo
Re: [HACKERS] Proposal - libpq Type System beta-0.8a (was PGparam)
Markus Schiltknecht wrote:
Hi, Andrew Chernow wrote: It might be something with the attachment, who knows.

Most probably that was the case, yes. The -hackers list is limited, please use -patches to send patches. ;-)

Regards, Markus

Noted. In our case, it's a little ambiguous whether -hackers or -patches is the correct place. We are really posting a proposal that happens to have a working implementation. We are looking for feedback and/or a discussion.

andrew
Re: [HACKERS] Proposal - libpq Type System beta-0.8a (was PGparam)
Andrew Dunstan wrote:
Andrew Chernow wrote: Markus Schiltknecht wrote: Hi, Andrew Chernow wrote: It might be something with the attachment, who knows. Most probably that was the case, yes. The -hackers list is limited, please use -patches to send patches. ;-) Regards, Markus

Noted. In our case, it's a little ambiguous whether -hackers or -patches is the correct place. We are really posting a proposal that happens to have a working implementation. We are looking for feedback and/or a discussion.

You should split it and send the proposal to -hackers. Ideally, you would have had a proposal discussed before you wrote a line of code.

cheers, andrew

"proposal discussed before you wrote a line of code"

Yeah, we realize that. In our situation, we use this code internally, which is why it exists. Back in Aug 2007, we packaged it up and proposed it because we thought it would be useful to others. Since then, we have submitted several versions. Feedback was minimal. In fact, only Tom has made suggestions, which we have taken into consideration and adjusted the spec accordingly. We are interested in having a discussion about the beta-0.8a proposal and concept, not the implementation or submitting procedures. We provided the code in case someone wants to take a test drive.

Andrew
Re: [HACKERS] Proposal - libpq Type System beta-0.8a (was PGparam)
Tom Lane wrote:
Merlin Moncure [EMAIL PROTECTED] writes: That said, we have been a little dismayed in the lack of comment.

Personally, I'm ignoring this thread, as well as the one on partitioning, because I'm up to my arse in 8.3-release alligators. I'm going to try hard not to think about any 8.4 development issues until 8.3 is actually out the door and we branch for 8.4 development. I can't say how many other people are equally pressed for time ... but seeing the minimal attention that seems to be getting paid to open 8.3 issues, it doesn't look to me like the community as a whole has a lot of spare cycles right now.

regards, tom lane

Thanks for the heads up. We had a feeling the 8.3 crunch was a factor. We don't want to slow that down, because we are waiting for 8.3 to perform yet another mysql to postgresql migration :)

andrew
Re: [HACKERS] Postgresql Materialized views
I think, though, that telling them that they must work on a certain feature, because that's what the users are asking for, is the wrong approach. Not to say that is exactly what you are requesting, but I suggest that is where you are leading.

Cheers, mark

The more communication between users and developers the better. Companies would pay a fortune for the user complaints, moans and groans, suggestions, bug reports and demands that postgresql sees in a single day. IMHO developers need reality checks ... well, so do users, but for different reasons. There is no quicker way to kill a product or project than to ignore the needs/wants of the user/customer base (open-source or not).

andrew
Re: [HACKERS] Commitfest status
Bruce Momjian wrote:
Alvaro Herrera wrote: Merlin Moncure escribió: ISTM if we move to a 'wiki style' patch management, or something more formal like a bug tracker, the work becomes more decentralized and the patch developer becomes more involved in keeping the patch list up to date with the latest stuff. I think the wiki, being a more organic type of approach, is maybe a better fit for postgresql community style, and there is still a lot of 'plumbing' work to do.

Right. If the submitter thinks that the reviewer is going to need to have a look at the relevant email archives, he can post a link to the archives in the discussion web page for the patch.

True, but what are the odds that is going to happen. We have trouble getting context diffs.

I know merlin and I allocated a lot of resources towards our patch. If getting reviewed required us to jump through some hoops (like being responsible for updating a wiki), then so be it. If someone doesn't follow the patch submission rules, then the patch can't be reviewed, as it is not within the proper state ... not a punishment, just a patch not meeting review requirements. What are the requirements of a patch submission? I don't know. One thing is for sure: the patch submitter is probably the most familiar with it, so should be involved at some level of review preparation. This distributes the work and attempts to make patches more consistent when reviewed.

--
Andrew Chernow
eSilo, LLC
every bit counts
http://www.esilo.com/
Re: [HACKERS] Commitfest process
Heikki Linnakangas wrote:
The main point of my proposal is: let's make the *authors* who want their stuff to be reviewed as part of a commitfest do the extra work. There would be no extra work required for patch reviewers.

I think this makes the most sense. It distributes the work to authors who know the most about the patch/feature and have probably followed all discussions related to it. Updating a wiki or something similar is a brainless activity.

--
Andrew Chernow
eSilo, LLC
every bit counts
http://www.esilo.com/
Re: [HACKERS] Commitfest process
This is reasonable for the sort of medium-to-large patch that the author has put a lot of time into. But we also get a lot of small one-off patches where it's not so reasonable. Now of course many of those get applied right away, but not all.

Just a thought... maybe a distinction should be made between quick-fix patches and things that would require more review due to behavior changes or new features. Seems reasonable to treat something like HOT and a one line patch differently. Not sure where the magic line would be drawn: size of patch, is it a bug fix or a new feature, requires extensive testing (more time), etc... I do think it is overkill to have a wiki entry for a one line patch with a 30 minute life-span (either thrown out or applied).

--
Andrew Chernow
eSilo, LLC
every bit counts
http://www.esilo.com/