Re: [PATCHES] WIP patch for tuple freezing issues
Tom Lane wrote:
> Attached is a draft patch for the WAL-and-freezing issues we've been
> discussing. This incorporates Heikki and Simon's work on providing
> WAL-logging for tuple freezing actions and pg_clog truncation
> respectively, and adds on several other things:

Looks good. Just a few notes:

The patch seems to make the VACUUM FULL FREEZE combination valid again,
which should be noted in the docs.

This comment in vacuum.c:

	/*
	 * If we froze any tuples, mark the buffer dirty, and write a WAL
	 * record recording the changes. We must log the changes to be
	 * crash-safe if we truncate clog at conclusion of the vacuum.
	 */

is not accurate. As discussed earlier, a crash is a problem even if clog
is not truncated by this vacuum, because relfrozenxid is updated anyway.

-- 
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com

---(end of broadcast)---
TIP 4: Have you searched our list archives?

       http://archives.postgresql.org
Re: [HACKERS] [PATCHES] Bug in WAL backup documentation
On Sat, 2006-11-04 at 13:29 -0500, Tom Lane wrote:
> Simon Riggs [EMAIL PROTECTED] writes:
> > On Fri, Nov 03, 2006 at 11:25:09AM -0500, Tom Lane wrote:
> > > Since 8.1 has done this all along and no one's actually complained
> > > about it, I guess no one is using scripts that do cd. I'm inclined
> > > to go with Bernd's suggestion to change the docs to match the code,
> > > but does anyone have a contrary opinion?
> > +1 Doc bug for 8.2, feature request for 8.3, unless Windows bites.
> Looking back in the archives, I note that one of the arguments for
> making the server use relative paths everywhere was so that it'd be
> robust against things like DBAs moving directories that contain live
> postmasters. If we provide a %P option, or otherwise encourage people
> to write scripts that depend on the absolute path of $PGDATA, we'd
> lose some of this robustness. So that might be an argument for leaving
> the code as-is indefinitely ... not a very strong argument maybe, but
> it's more than just we're-too-lazy-to-add-%P. Anyway, I've corrected
> the documentation in HEAD and 8.1.

I think I can fulfil Bernd, Florian and Martijn's wishes by supplying an
additional substitutable parameter %d which is replaced by the DataDir.
This allows people to use an absolute directory if they wish, allows us
to continue with the functionality of %p as-is, and all without a
possible confusion between %p and %P. It also allows %d to be used as an
identifier which might be used to locate the appropriate archive for
those with multiple servers, without editing the archive_command for
each of those servers. So using %d/%p will give you the absolute path
for forward-slashers. Works for archive and recovery.

Patch included, code and docs. Code comments now discuss relative paths
also.

Comments?
-- 
  Simon Riggs
  EnterpriseDB   http://www.enterprisedb.com

Index: doc/src/sgml/backup.sgml
===================================================================
RCS file: /projects/cvsroot/pgsql/doc/src/sgml/backup.sgml,v
retrieving revision 2.93
diff -c -r2.93 backup.sgml
*** doc/src/sgml/backup.sgml	4 Nov 2006 18:20:27 -	2.93
--- doc/src/sgml/backup.sgml	5 Nov 2006 14:44:34 -
***************
*** 521,527 ****
      any <literal>%p</literal> is replaced by the path name of the file to
      archive, while any <literal>%f</literal> is replaced by the file name only.
      (The path name is relative to the working directory of the server,
!     i.e., the cluster's data directory.)
      Write <literal>%%</literal> if you need to embed an actual
      <literal>%</literal> character in the command.  The simplest useful
      command is something like
--- 521,528 ----
      any <literal>%p</literal> is replaced by the path name of the file to
      archive, while any <literal>%f</literal> is replaced by the file name only.
      (The path name is relative to the working directory of the server,
!     i.e., the cluster's data directory, which can also be specified
!     using %d, if required)
      Write <literal>%%</literal> if you need to embed an actual
      <literal>%</literal> character in the command.  The simplest useful
      command is something like
***************
*** 599,605 ****
      In writing your archive command, you should assume that the file names to
      be archived may be up to 64 characters long and may contain any
      combination of ASCII letters, digits, and dots.  It is not necessary to
!     remember the original full path (<literal>%p</literal>) but it is
      necessary to remember the file name (<literal>%f</literal>).
     </para>
--- 600,606 ----
      In writing your archive command, you should assume that the file names to
      be archived may be up to 64 characters long and may contain any
      combination of ASCII letters, digits, and dots.  It is not necessary to
!     remember the original relative path (<literal>%p</literal>) but it is
      necessary to remember the file name (<literal>%f</literal>).
     </para>
***************
*** 919,925 ****
      replaced by the name of the desired log file, and <literal>%p</literal>,
      which is replaced by the path name to copy the log file to.
      (The path name is relative to the working directory of the server,
!     i.e., the cluster's data directory.)
      Write <literal>%%</literal> if you need to embed an actual
      <literal>%</literal> character in the command.  The simplest useful
      command is something like
--- 920,927 ----
      replaced by the name of the desired log file, and <literal>%p</literal>,
      which is replaced by the path name to copy the log file to.
      (The path name is relative to the working directory of the server,
!     i.e., the cluster's data directory, which can also be specified
!     using %d, if required.)
      Write <literal>%%</literal> if you need to embed an actual
      <literal>%</literal> character in the command.  The simplest useful
      command is something like
***************
*** 1010,1016 ****
      and any <literal>%p</literal> is replaced by the path name to copy
      it to on the server.
      (The path name is relative to the working directory of the server,
!     i.e.,
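The %p/%f substitution discussed in this thread can be sketched concretely. This is a hedged illustration, not the server's actual code: the directory names and WAL segment name are invented for the demo, and temp directories stand in for $PGDATA and the archive location. The point it shows is that a relative %p only works because the server runs archive_command with the data directory as its working directory.

```shell
# Sketch of what the server effectively runs for
#   archive_command = 'cp %p /mnt/server/archivedir/%f'
# (all paths here are hypothetical stand-ins).
set -e
PGDATA=$(mktemp -d)                  # stand-in for the cluster data directory
ARCHIVE=$(mktemp -d)                 # stand-in for /mnt/server/archivedir
mkdir -p "$PGDATA/pg_xlog"
echo 'WAL segment payload' > "$PGDATA/pg_xlog/000000010000000000000001"

cd "$PGDATA"                         # the server runs archive_command from here
p='pg_xlog/000000010000000000000001' # what %p would expand to (relative path)
f='000000010000000000000001'         # what %f would expand to (file name only)
cp "$p" "$ARCHIVE/$f"

ls "$ARCHIVE"
```

Because %p is relative, a script that does cd before the copy breaks, which is exactly the behavior the documentation fix above describes.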
Re: [PATCHES] WIP patch for tuple freezing issues
Heikki Linnakangas [EMAIL PROTECTED] writes:
> The patch seems to make the VACUUM FULL FREEZE combination valid again,
> which should be noted in the docs.

Right, I haven't gotten around to fixing the VACUUM ref page yet, but
this change is needed. This is really fallout from Alvaro's previous
changes, BTW: there is no longer any need for VACUUM FREEZE to try to
absolutely guarantee a fully-frozen table state. All it is really doing
is guaranteeing there's nothing older than OldestXmin in there, and so
the fact that VACUUM FULL would put its own xid into the table isn't a
contradiction anymore.

BTW, I intend to say that FREEZE is deprecated in favor of the
equivalent of running VACUUM with vacuum_freeze_distance set to 0. I'd
like to remove it in 8.3 so as to get rid of one non-spec reserved word.
Any objections?

> This comment in vacuum.c:
>
> 	/*
> 	 * If we froze any tuples, mark the buffer dirty, and write a WAL
> 	 * record recording the changes. We must log the changes to be
> 	 * crash-safe if we truncate clog at conclusion of the vacuum.
> 	 */
>
> is not accurate.

OK, how about "... crash-safe against future truncation of clog."?

			regards, tom lane
Re: [HACKERS] [PATCHES] Bug in WAL backup documentation
Simon Riggs [EMAIL PROTECTED] writes:
> On Sat, 2006-11-04 at 13:29 -0500, Tom Lane wrote:
> > Looking back in the archives, I note that one of the arguments for
> > making the server use relative paths everywhere was so that it'd be
> > robust against things like DBAs moving directories that contain live
> > postmasters. If we provide a %P option, or otherwise encourage people
> > to write scripts that depend on the absolute path of $PGDATA, we'd
> > lose some of this robustness.
> I think I can fulfil Bernd, Florian and Martijn's wishes by supplying
> an additional substitutable parameter %d which is replaced by the
> DataDir.

This fails to respond to the concern that DataDir might be out-of-date.

			regards, tom lane
Re: [HACKERS] [PATCHES] Bug in WAL backup documentation
On Sun, 2006-11-05 at 11:10 -0500, Tom Lane wrote:
> Simon Riggs [EMAIL PROTECTED] writes:
> > On Sat, 2006-11-04 at 13:29 -0500, Tom Lane wrote:
> > > Looking back in the archives, I note that one of the arguments for
> > > making the server use relative paths everywhere was so that it'd be
> > > robust against things like DBAs moving directories that contain
> > > live postmasters. If we provide a %P option, or otherwise encourage
> > > people to write scripts that depend on the absolute path of
> > > $PGDATA, we'd lose some of this robustness.
> > I think I can fulfil Bernd, Florian and Martijn's wishes by supplying
> > an additional substitutable parameter %d which is replaced by the
> > DataDir.
> This fails to respond to the concern that DataDir might be out-of-date.

I'm not suggesting that the option is necessary, but I am suggesting
offering it to those who consider it useful. Let's allow it, but
document the concern about its use in certain circumstances.

I'm pretty sure most people don't move live postmasters very frequently;
plus, it isn't clear to me why we should support the people who want to
do that, yet not the people who want the absolute-path option.

-- 
  Simon Riggs
  EnterpriseDB   http://www.enterprisedb.com
Re: [HACKERS] [PATCHES] Bug in WAL backup documentation
Simon Riggs [EMAIL PROTECTED] writes:
> I'm pretty sure most people don't move live postmasters very
> frequently, plus it isn't clear to me why we should support the people
> that want that to do that, yet not the people who want the
> absolute-path option.

As already discussed upthread, anyone who wants the path can get it from
`pwd` or local equivalent --- and that mechanism is robust (as long as
the directory move doesn't happen while any particular instance of the
script is running). I don't see why we should go out of our way to
provide a bad substitute for pwd.

BTW, I note that some post-startup uses of DataDir have crept back in,
in places like utils/adt/dbsize.c. I'll be sure to clean those up
before release...

			regards, tom lane
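Tom's pwd argument can be sketched in a few lines of shell. This is a hypothetical archive script (the script name, directory names and segment name are invented), showing that because archive_command runs with the data directory as its working directory, the script can derive the absolute path itself without needing a %P or %d substitution:

```shell
# Minimal sketch of an archive script deriving the absolute WAL path from
# `pwd` rather than a hypothetical %P option. It would be configured as
#   archive_command = 'archive_wal.sh %p %f'
# Here we simulate the server-side call; all names are made up.
set -e
DATADIR=$(mktemp -d)                 # stand-in for the cluster data directory
mkdir -p "$DATADIR/pg_xlog"
echo 'payload' > "$DATADIR/pg_xlog/seg1"

cd "$DATADIR"                        # the server's CWD when the command runs
rel_path='pg_xlog/seg1'              # %p: relative to the data directory
abs_path="$(pwd)/$rel_path"          # absolute path, recomputed at call time
echo "$abs_path"
```

Because the path is recomputed on every invocation, it stays correct even if the data directory was moved between calls, which is the robustness point made upthread.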
Re: [HACKERS] [PATCHES] Bug in WAL backup documentation
On Sun, 2006-11-05 at 11:49 -0500, Tom Lane wrote:
> I don't see why we should go out of our way to provide a bad substitute
> for pwd.

That argument is conclusive. Agreed.

-- 
  Simon Riggs
  EnterpriseDB   http://www.enterprisedb.com
[PATCHES] WIP 2 interpreters for plperl
I have made some progress with what I think is needed to have two
interpreters for plperl. This is a lot harder than the pltcl case, for
two reasons: 1. there are no restrictions on having 2 tcl interpreters,
and 2. tcl does not need to save and restore context as we have to do
with perl.

I think I have a conceptual solution to these two problems, but what I
have is currently segfaulting somewhat mysteriously. Tracing a
dynamically loaded library in a postgres backend with a debugger is less
than fun, too.

I am attaching what I currently have, liberally sprinkled with
elog(NOTICE) calls as trace writes. I need to get some other work done
today too, so I will return to this later if I can. In the meanwhile, if
anybody cares to cast a fresh set of eyeballs over this, please do.

cheers

andrew

Index: plperl.c
===================================================================
RCS file: /cvsroot/pgsql/src/pl/plperl/plperl.c,v
retrieving revision 1.121
diff -c -r1.121 plperl.c
*** plperl.c	19 Oct 2006 18:32:47 -	1.121
--- plperl.c	5 Nov 2006 20:27:32 -
***************
*** 27,32 ****
--- 27,33 ----
  #include "utils/lsyscache.h"
  #include "utils/memutils.h"
  #include "utils/typcache.h"
+ #include "utils/hsearch.h"
  
  /* perl stuff */
  #include "plperl.h"
***************
*** 55,60 ****
--- 56,69 ----
  	SV		   *reference;
  } plperl_proc_desc;
  
+ /* hash table entry for proc desc */
+ 
+ typedef struct plperl_proc_entry
+ {
+ 	char		proc_name[NAMEDATALEN];
+ 	plperl_proc_desc *proc_data;
+ } plperl_proc_entry;
+ 
  /*
   * The information we cache for the duration of a single call to a
   * function.
***************
*** 82,94 ****
  	Oid		   *argtypioparams;
  } plperl_query_desc;
  
  /**********************************************************************
   * Global data
   **********************************************************************/
  static bool plperl_safe_init_done = false;
! static PerlInterpreter *plperl_interp = NULL;
! static HV *plperl_proc_hash = NULL;
! static HV *plperl_query_hash = NULL;
  
  static bool plperl_use_strict = false;
--- 91,128 ----
  	Oid		   *argtypioparams;
  } plperl_query_desc;
  
+ /* hash table entry for query desc */
+ 
+ typedef struct plperl_query_entry
+ {
+ 	char		query_name[NAMEDATALEN];
+ 	plperl_query_desc *query_data;
+ } plperl_query_entry;
+ 
  /**********************************************************************
   * Global data
   **********************************************************************/
+ 
+ typedef enum
+ {
+ 	INTERP_NONE,
+ 	INTERP_HELD,
+ 	INTERP_TRUSTED,
+ 	INTERP_UNTRUSTED,
+ 	INTERP_BOTH
+ } InterpState;
+ 
+ static InterpState interp_state = INTERP_NONE;
+ static bool can_run_two = false;
+ 
  static bool plperl_safe_init_done = false;
! static PerlInterpreter *plperl_trusted_interp = NULL;
! static PerlInterpreter *plperl_untrusted_interp = NULL;
! static PerlInterpreter *plperl_held_interp = NULL;
! static bool can_run_two;
! static bool trusted_context;
! static HTAB *plperl_proc_hash = NULL;
! static HTAB *plperl_query_hash = NULL;
  
  static bool plperl_use_strict = false;
***************
*** 144,149 ****
--- 178,184 ----
  {
  	/* Be sure we do initialization only once (should be redundant now) */
  	static bool inited = false;
+ 	HASHCTL		hash_ctl;
  
  	if (inited)
  		return;
***************
*** 157,162 ****
--- 192,213 ----
  
  	EmitWarningsOnPlaceholders("plperl");
  
+ 	MemSet(&hash_ctl, 0, sizeof(hash_ctl));
+ 
+ 	hash_ctl.keysize = NAMEDATALEN;
+ 	hash_ctl.entrysize = sizeof(plperl_proc_entry);
+ 
+ 	plperl_proc_hash = hash_create("PLPerl Procedures",
+ 								   32,
+ 								   &hash_ctl,
+ 								   HASH_ELEM);
+ 
+ 	hash_ctl.entrysize = sizeof(plperl_query_entry);
+ 	plperl_query_hash = hash_create("PLPerl Queries",
+ 									32,
+ 									&hash_ctl,
+ 									HASH_ELEM);
+ 
  	plperl_init_interp();
  
  	inited = true;
***************
*** 235,240 ****
--- 286,381 ----
  			elog(ERROR,'trusted Perl functions disabled - \
  			please upgrade Perl Safe module to version 2.09 or later');}]);
  }
  
+ #define TEST_FOR_MULTI \
+ 	"use Config; " \
+ 	"$Config{usemultiplicity} eq 'define' or " \
+ 	"($Config{usethreads} eq 'define' " \
+ 	" and $Config{useithreads} eq 'define')"
+ 
+ 
+ /********************************************************************
+  *
+  * We start out by creating a held interpreter that we can use in
+  * trusted or
Re: [PATCHES] Writing WAL for relcache invalidation:pg_internal.init
On Wed, 2006-11-01 at 12:05 -0500, Tom Lane wrote:
> I think we're probably better off to just forcibly remove the init file
> during post-recovery cleanup. The easiest place to do this might be
> BuildFlatFiles, which has to scan pg_database anyway ...

Patch enclosed. Clean apply to HEAD, make check OK, plus restart check.
No specific PITR test, since it exercises the same code path.

-- 
  Simon Riggs
  EnterpriseDB   http://www.enterprisedb.com

Index: src/backend/utils/cache/inval.c
===================================================================
RCS file: /projects/cvsroot/pgsql/src/backend/utils/cache/inval.c,v
retrieving revision 1.78
diff -c -r1.78 inval.c
*** src/backend/utils/cache/inval.c	4 Oct 2006 00:30:00 -	1.78
--- src/backend/utils/cache/inval.c	5 Nov 2006 22:00:03 -
***************
*** 767,776 ****
  			SendSharedInvalidMessage(&msg);
  			break;
  		case TWOPHASE_INFO_FILE_BEFORE:
! 			RelationCacheInitFileInvalidate(true);
  			break;
  		case TWOPHASE_INFO_FILE_AFTER:
! 			RelationCacheInitFileInvalidate(false);
  			break;
  		default:
  			Assert(false);
--- 767,776 ----
  			SendSharedInvalidMessage(&msg);
  			break;
  		case TWOPHASE_INFO_FILE_BEFORE:
! 			RelationCacheInitFileInvalidate(true, DatabasePath);
  			break;
  		case TWOPHASE_INFO_FILE_AFTER:
! 			RelationCacheInitFileInvalidate(false, DatabasePath);
  			break;
  		default:
  			Assert(false);
***************
*** 817,823 ****
  		 * unless we committed.
  		 */
  		if (transInvalInfo->RelcacheInitFileInval)
! 			RelationCacheInitFileInvalidate(true);
  
  		AppendInvalidationMessages(&transInvalInfo->PriorCmdInvalidMsgs,
  								   &transInvalInfo->CurrentCmdInvalidMsgs);
--- 817,823 ----
  		 * unless we committed.
  		 */
  		if (transInvalInfo->RelcacheInitFileInval)
! 			RelationCacheInitFileInvalidate(true, DatabasePath);
  
  		AppendInvalidationMessages(&transInvalInfo->PriorCmdInvalidMsgs,
  								   &transInvalInfo->CurrentCmdInvalidMsgs);
***************
*** 826,832 ****
  								   SendSharedInvalidMessage);
  
  		if (transInvalInfo->RelcacheInitFileInval)
! 			RelationCacheInitFileInvalidate(false);
  	}
  	else if (transInvalInfo != NULL)
  	{
--- 826,832 ----
  								   SendSharedInvalidMessage);
  
  		if (transInvalInfo->RelcacheInitFileInval)
! 			RelationCacheInitFileInvalidate(false, DatabasePath);
  	}
  	else if (transInvalInfo != NULL)
  	{

Index: src/backend/utils/cache/relcache.c
===================================================================
RCS file: /projects/cvsroot/pgsql/src/backend/utils/cache/relcache.c,v
retrieving revision 1.249
diff -c -r1.249 relcache.c
*** src/backend/utils/cache/relcache.c	4 Oct 2006 00:30:00 -	1.249
--- src/backend/utils/cache/relcache.c	5 Nov 2006 22:00:06 -
***************
*** 3559,3570 ****
   * no backend has been started since the last removal.
   */
  void
! RelationCacheInitFileInvalidate(bool beforeSend)
  {
  	char		initfilename[MAXPGPATH];
  
  	snprintf(initfilename, sizeof(initfilename), "%s/%s",
! 			 DatabasePath, RELCACHE_INIT_FILENAME);
  
  	if (beforeSend)
  	{
--- 3559,3570 ----
   * no backend has been started since the last removal.
   */
  void
! RelationCacheInitFileInvalidate(bool beforeSend, char *ForDatabasePath)
  {
  	char		initfilename[MAXPGPATH];
  
  	snprintf(initfilename, sizeof(initfilename), "%s/%s",
! 			 ForDatabasePath, RELCACHE_INIT_FILENAME);
  
  	if (beforeSend)
  	{

Index: src/backend/utils/init/flatfiles.c
===================================================================
RCS file: /projects/cvsroot/pgsql/src/backend/utils/init/flatfiles.c,v
retrieving revision 1.21
diff -c -r1.21 flatfiles.c
*** src/backend/utils/init/flatfiles.c	14 Jul 2006 14:52:25 -	1.21
--- src/backend/utils/init/flatfiles.c	5 Nov 2006 22:00:07 -
***************
*** 36,41 ****
--- 36,42 ----
  #include "access/transam.h"
  #include "access/twophase_rmgr.h"
  #include "access/xact.h"
+ #include "catalog/catalog.h"
  #include "catalog/pg_auth_members.h"
  #include "catalog/pg_authid.h"
  #include "catalog/pg_database.h"
***************
*** 47,52 ****
--- 48,54 ----
  #include "storage/pmsignal.h"
  #include "utils/builtins.h"
  #include "utils/flatfiles.h"
+ #include "utils/relcache.h"
  #include "utils/resowner.h"
***************
*** 165,173 ****
   *
   * A side effect is to determine the oldest database's datminxid
   * so we can set or update the XID wrap limit.
   */
  static void
! write_database_file(Relation drel)
  {
  	char	   *filename,
  			   *tempname;
--- 167,178 ----
   *
   * A side effect is to determine the oldest database's datminxid
   * so we can set or update the XID wrap limit.
+  *
+  * removeDBRelInitFile is true only at startup, to allow us to
+  * remove the relcache init file as we read each database
   */
  static void
! write_database_file(Relation drel, bool removeDBRelInitFile)
  {
  	char	   *filename,
  			   *tempname;
***************
*** 252,257 ****
--- 257,278 ----
  		fputs_quote(datname, fp);
  		fprintf(fp, "%u
Re: [PATCHES] WIP 2 interpreters for plperl
I wrote:
> I have made some progress with what I think is needed to have two
> interpreters for plperl. This is a lot harder than the pltcl case, for
> two reasons: 1. there are no restrictions on having 2 tcl interpreters,
> and 2. tcl does not need to save and restore context as we have to do
> with perl. I think I have a conceptual solution to these two problems,
> but what I have is currently segfaulting somewhat mysteriously. Tracing
> a dynamically loaded library in a postgres backend with a debugger is
> less than fun, too. I am attaching what I currently have, liberally
> sprinkled with elog(NOTICE) calls as trace writes.

With a little more perseverance I found the problem. The attached patch
passes regression, but it now needs plenty of eyeballs and testing.

cheers

andrew

Index: plperl.c
===================================================================
RCS file: /cvsroot/pgsql/src/pl/plperl/plperl.c,v
retrieving revision 1.121
diff -c -r1.121 plperl.c
*** plperl.c	19 Oct 2006 18:32:47 -	1.121
--- plperl.c	5 Nov 2006 22:20:16 -
***************
*** 27,32 ****
--- 27,33 ----
  #include "utils/lsyscache.h"
  #include "utils/memutils.h"
  #include "utils/typcache.h"
+ #include "utils/hsearch.h"
  
  /* perl stuff */
  #include "plperl.h"
***************
*** 55,60 ****
--- 56,69 ----
  	SV		   *reference;
  } plperl_proc_desc;
  
+ /* hash table entry for proc desc */
+ 
+ typedef struct plperl_proc_entry
+ {
+ 	char		proc_name[NAMEDATALEN];
+ 	plperl_proc_desc *proc_data;
+ } plperl_proc_entry;
+ 
  /*
   * The information we cache for the duration of a single call to a
   * function.
***************
*** 82,94 ****
  	Oid		   *argtypioparams;
  } plperl_query_desc;
  
  /**********************************************************************
   * Global data
   **********************************************************************/
  static bool plperl_safe_init_done = false;
! static PerlInterpreter *plperl_interp = NULL;
! static HV *plperl_proc_hash = NULL;
! static HV *plperl_query_hash = NULL;
  
  static bool plperl_use_strict = false;
--- 91,128 ----
  	Oid		   *argtypioparams;
  } plperl_query_desc;
  
+ /* hash table entry for query desc */
+ 
+ typedef struct plperl_query_entry
+ {
+ 	char		query_name[NAMEDATALEN];
+ 	plperl_query_desc *query_data;
+ } plperl_query_entry;
+ 
  /**********************************************************************
   * Global data
   **********************************************************************/
+ 
+ typedef enum
+ {
+ 	INTERP_NONE,
+ 	INTERP_HELD,
+ 	INTERP_TRUSTED,
+ 	INTERP_UNTRUSTED,
+ 	INTERP_BOTH
+ } InterpState;
+ 
+ static InterpState interp_state = INTERP_NONE;
+ static bool can_run_two = false;
+ 
  static bool plperl_safe_init_done = false;
! static PerlInterpreter *plperl_trusted_interp = NULL;
! static PerlInterpreter *plperl_untrusted_interp = NULL;
! static PerlInterpreter *plperl_held_interp = NULL;
! static bool can_run_two;
! static bool trusted_context;
! static HTAB *plperl_proc_hash = NULL;
! static HTAB *plperl_query_hash = NULL;
  
  static bool plperl_use_strict = false;
***************
*** 144,149 ****
--- 178,184 ----
  {
  	/* Be sure we do initialization only once (should be redundant now) */
  	static bool inited = false;
+ 	HASHCTL		hash_ctl;
  
  	if (inited)
  		return;
***************
*** 157,162 ****
--- 192,213 ----
  
  	EmitWarningsOnPlaceholders("plperl");
  
+ 	MemSet(&hash_ctl, 0, sizeof(hash_ctl));
+ 
+ 	hash_ctl.keysize = NAMEDATALEN;
+ 	hash_ctl.entrysize = sizeof(plperl_proc_entry);
+ 
+ 	plperl_proc_hash = hash_create("PLPerl Procedures",
+ 								   32,
+ 								   &hash_ctl,
+ 								   HASH_ELEM);
+ 
+ 	hash_ctl.entrysize = sizeof(plperl_query_entry);
+ 	plperl_query_hash = hash_create("PLPerl Queries",
+ 									32,
+ 									&hash_ctl,
+ 									HASH_ELEM);
+ 
  	plperl_init_interp();
  
  	inited = true;
***************
*** 235,240 ****
--- 286,375 ----
  			elog(ERROR,'trusted Perl functions disabled - \
  			please upgrade Perl Safe module to version 2.09 or later');}]);
  }
  
+ #define TEST_FOR_MULTI \
+ 	"use Config; " \
+ 	"$Config{usemultiplicity} eq 'define' or " \
+ 	"($Config{usethreads} eq 'define' " \
+ 	" and $Config{useithreads} eq 'define')"
+ 
+ 
+ /********************************************************************
+  *
+  * We start out by creating a held interpreter that we can use in
+  * trusted or untrusted mode (but
Re: [PATCHES] Writing WAL for relcache invalidation:pg_internal.init
Simon Riggs [EMAIL PROTECTED] writes:
> On Wed, 2006-11-01 at 12:05 -0500, Tom Lane wrote:
> > I think we're probably better off to just forcibly remove the init
> > file during post-recovery cleanup. The easiest place to do this might
> > be BuildFlatFiles, which has to scan pg_database anyway ...
> Patch enclosed.

Applied to HEAD and 8.1. It doesn't work in 8.0 though, because
flatfiles.c doesn't exist. AFAIR there isn't any startup-time scan of
pg_database at all before 8.1. Not sure if it's worth coming up with a
whole new patch for that branch.

			regards, tom lane
Re: [PATCHES] ldap: fix resource leak
On Sat, 2006-11-04 at 23:34 -0500, Tom Lane wrote:
> Perhaps use a PG_TRY construct?

At least for the existing code, this doesn't work well: the function
exits early via ereport(LOG) and then "return STATUS_ERROR;", so AFAICS
there isn't an easy way to simplify the existing error handling logic
via PG_TRY.

Note that this is related to a more general problem: if *any* backend
function allocates a resource that needs to be manually cleaned up, it
probably ought to be using a PG_TRY block. Otherwise, the resource will
be leaked on elog(ERROR). I wouldn't be surprised if various parts of
the backend neglected to get this right.

However, in this particular case, I didn't bother doing this, since it
didn't seem likely that anything we're going to invoke will report
errors via elog. One could make an argument for doing it, for the sake
of correctness/paranoia...

-Neil
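PG_TRY is specific to the backend's C error machinery, but the leak pattern Neil describes is general: acquire a resource, then take an error exit that skips the release. A loose shell analogy (not PostgreSQL code; the temp directory and trap stand in for the LDAP connection and its ldap_unbind) shows the guaranteed-cleanup discipline:

```shell
# Loose analogy to PG_TRY-style guaranteed cleanup: the EXIT trap plays
# the role of the cleanup block, firing even when an error aborts early.
scratch=$(mktemp -d)                  # the "resource" (like an LDAP handle)
(
    set -e
    trap 'rm -rf "$scratch"' EXIT     # release runs on ANY exit path
    echo working > "$scratch/state"
    false                             # simulated error: aborts the subshell...
) || echo "error path taken"          # ...but the trap still fired
test ! -e "$scratch" && echo "resource was released"
```

Without the trap, the early `false` exit would leave the directory behind, which is the shape of the leak being discussed for the LDAP handle.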
Re: [PATCHES] ldap: fix resource leak
Neil Conway [EMAIL PROTECTED] writes:
> On Sat, 2006-11-04 at 23:34 -0500, Tom Lane wrote:
> > Perhaps use a PG_TRY construct?
> At least for the existing code, this doesn't work well: the function
> exits early via ereport(LOG) and then "return STATUS_ERROR;", so AFAICS
> there isn't an easy way to simplify the existing error handling logic
> via PG_TRY.

OK, no biggie.

> Note that this is related to a more general problem: if *any* backend
> function allocates a resource that needs to be manually cleaned up, it
> probably ought to be using a PG_TRY block. Otherwise, the resource will
> be leaked on elog(ERROR). I wouldn't be surprised if various parts of
> the backend neglected to get this right.

For the most part we've tried to see to it that manual cleanup isn't
required, although I agree that ldap_unbind doesn't seem worth having a
tracking mechanism for.

> However, in this particular case, I didn't bother doing this, since it
> didn't seem likely that anything we're going to invoke will report
> errors via elog. One could make an argument for doing it, for the sake
> of correctness/paranoia...

In theory one could put START_CRIT_SECTION(); END_CRIT_SECTION(); around
the code, as a form of Assert(no elog here). Not sure that this is
actually a net win though, as a PANIC might well be considered a worse
problem than a one-time leak of some LDAP state.

			regards, tom lane
Re: [PATCHES] ldap: fix resource leak
I wrote:
> ... Not sure that this is actually a net win though, as a PANIC might
> well be considered a worse problem than a one-time leak of some LDAP
> state.

Come to think of it: either elog(ERROR) or a failure return from
CheckLDAPAuth is going to lead directly to backend exit, so the whole
thing is pretty much a cosmetic issue anyway. I don't object to adding
the ldap_unbind calls, but I'd not recommend adding a large pile of
code.

			regards, tom lane
Re: [PATCHES] ldap: fix resource leak
On Sun, 2006-11-05 at 19:28 -0500, Tom Lane wrote:
> Come to think of it: either elog(ERROR) or a failure return from
> CheckLDAPAuth is going to lead directly to backend exit, so the whole
> thing is pretty much a cosmetic issue anyway.

Thanks for the feedback. Patch applied, with an additional fix (noticed
some minor infelicities in the usage of snprintf()).

-Neil
Re: [PATCHES] [GENERAL] ISO week dates
On 10/13/06, Guillaume Lelarge [EMAIL PROTECTED] wrote:
> Peter Eisentraut wrote:
> > There is an inconsistency here: 'IYYY' is the four-digit ISO year,
> > 'IW' is the two-digit ISO week, but 'ID' would be the one-digit ISO
> > day-of-the-week. I'm not sure we can fix that, but I wanted to point
> > it out.
> Is there a two-digit ISO day of the week? If not, we should use ID. As
> you say, I don't know what we can do about that. I used Brendan Jurd's
> idea, perhaps he can tell us more on this matter.

Thanks for your work so far Guillaume.

I agree with Peter that it is inconsistent to have a one-digit field
represented by a two-character code. However, I don't see a way around
it. 'D' is already taken to mean the non-ISO day-of-week, and 'I' is
taken to mean the last digit of the ISO year (although to be honest I
don't see where this would be useful).

This sort of thing is not unprecedented in to_char(). For example, the
codes 'HH24' and 'HH12' are four characters long but resolve to a
two-digit result, 'DAY' resolves to nine characters, and so on.
Basically I think we're stuck with ID for day-of-week and IDDD for
day-of-year.

I will take a look at implementing 'isoyear' for extract(), and also
start putting together a patch for the documentation. If Guillaume is
still interested in adding the IDDD field to to_char(), wonderful; if
not, I will pick up from his ID patch and add IDDD to it.

Regards, BJ
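The ISO week-date fields under discussion can be cross-checked outside the database. This sketch assumes GNU date is available and maps the to_char() codes onto its format specifiers: IYYY roughly corresponds to %G (ISO year), IW to %V (ISO week), and ID to %u (ISO day-of-week, 1 = Monday). IDDD (ISO day-of-year) has no direct date(1) specifier, but follows from the identity IDDD = (IW - 1) * 7 + ID:

```shell
# Cross-check of the ISO week-date fields using GNU date (assumed):
#   IYYY ~ %G, IW ~ %V, ID ~ %u; IDDD computed from the identity above.
set -e
d='2006-01-01'                  # a Sunday, which falls in ISO week 52 of 2005
iyyy=$(date -u -d "$d" +%G)
iw=$(date -u -d "$d" +%V)
iw=${iw#0}                      # strip any leading zero before arithmetic
id=$(date -u -d "$d" +%u)
iddd=$(( (iw - 1) * 7 + id ))
echo "$iyyy $iw $id $iddd"      # 2005 52 7 364
```

The example also illustrates Peter's point about the Gregorian/ISO split: 2006-01-01 belongs to Gregorian year 2006 but to ISO year 2005, which is why an 'isoyear' field for extract() is worth having.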