Re: [HACKERS] Crash dumps
Craig Ringer, Thursday 07 of July 2011 01:05:48:
> On 6/07/2011 11:00 PM, Radosław Smogura wrote:
> > I think IPC for fast shut down of all backends and waiting for report
> > processing is quite enough.
>
> How do you propose to make that reliable, though?

I want to add an IPC layer to PostgreSQL. A few approaches may be considered:

1. System IPC
2. Urgent data on a socket
3. Sockets (at least descriptors) + threads
4. Not portable: fork in the segfault handler (I think the forked process would start in segfault too).

I actually think 3, sockets (on Linux, pipes) + threads, will be the best and most portable. For each backend, PostgreSQL will open two channels, urgent and normal; for each channel a thread will be spawned that just waits for data, and a backend will not start if it hasn't connected to the postmaster. Some security must be accounted for when opening a plain socket.

In the context of a crash, the segfault handler sends information on the urgent channel, and the postmaster kills all backends except the sender, giving it time to work inside the segfault handler. Normal channels may be used for scheduling some async operations, like reading the next n blocks when a sequential scan has started.

By the way, getting reports on segfault isn't something "unusual"; your favorite software, the Java(tm) Virtual Machine, does it.

Regards,
Radosław Smogura

-- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] spurious use of %m format in pg_upgrade
Peter Eisentraut writes: > pg_upgrade's pg_scandir_internal() makes use of the non-standard %m > format: > > pg_log(PG_FATAL, "could not open directory \"%s\": %m\n", dirname); > > Is this an oversight, or is there an undocumented assumption that this > code will only be used on platforms where %m works? Surely an oversight; everywhere else in frontend code, we take care to use strerror instead. Is there a way to persuade gcc to complain about such extensions when used in contexts where we don't know they work? > (Which platforms don't have scandir() anyway?) Hmmm ... my neolithic HPUX box has it, but OTOH the Open Group specs seem to have added it only in Issue 7 (2008), so I'd not want to bet money that any random Unix has got it. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
[HACKERS] spurious use of %m format in pg_upgrade
pg_upgrade's pg_scandir_internal() makes use of the non-standard %m format: pg_log(PG_FATAL, "could not open directory \"%s\": %m\n", dirname); Is this an oversight, or is there an undocumented assumption that this code will only be used on platforms where %m works? (Which platforms don't have scandir() anyway?) -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
[HACKERS] 9.2 CF2: 20 days in
Hackers,

Commitfest 2 continues to progress ... slowly. At this point, we have no hope of wrapping it up early; my best hope is to at least finish on time.

Statistics:

* Half of the patches are still pending development: 12 waiting on author, and 18 waiting for review. In addition, 7 patches are waiting for a committer.
* 28 patches have been committed or returned, about 45%.

In addition to nudging the reviewers who have not delivered (going out tomorrow), I really need reviewers for a few patches which have none. These are all difficult, complex patches to review:

1) Collect frequency statistics and selectivity estimation for arrays
2) KaiGai's remaining security patches
3) Robert's VXID and table locks patches.

If you are a seasoned PostgreSQL hacker and available to review any of the above, please volunteer now. Also, if you are a patch author or reviewer, please check that the status of your patch is updated in the commitfest application. I may have missed a few messages due to the US holiday.

-- Josh Berkus PostgreSQL Experts Inc. http://pgexperts.com

-- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Range Types, constructors, and the type system
On Wed, 2011-07-06 at 15:14 -0400, Tom Lane wrote: > > I ran into problems with that before... I think with the I/O functions. > > I don't think that's a problem here, but I thought I'd ask. > > I think it'd probably be all right to do that. The places where you > might find shortcuts being taken are where functions are called directly > by C code, such as I/O function calls --- but these constructors should > only ever get invoked from SQL queries, no? Perhaps index expressions/predicates as well (which are also fine). I was more worried about some case that I hadn't thought of. Regards, Jeff Davis -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Make relation_openrv atomic wrt DDL
On Wed, Jul 06, 2011 at 08:35:55PM -0400, Robert Haas wrote:
> On Wed, Jul 6, 2011 at 6:32 PM, Noah Misch wrote:
> > While a mere "LOCK bar.x" is sufficient to get a clean cutover with respect to
> > parsing, it fails to invalidate plans. To really cover all bases, you need
> > some no-op action that invalidates "bar.x". For actual practical use, I'd
> > recommend something like:
> >
> > BEGIN;
> > ALTER TABLE bar.x RENAME TO x0;
> > ALTER TABLE bar.x0 RENAME TO x;
> > CREATE TABLE foo.x ...
> > COMMIT;
> >
> > Probably worth making it more intuitive to DTRT here.
>
> Well, what would be really nice is if it just worked.

Yes.

> Care to submit an updated patch?

Attached. I made the counter 64 bits wide, handled the nothing-found case per your idea, and improved a few comments cosmetically.

I have not attempted to improve the search_path interposition case. We can recommend the workaround above, and doing better looks like an excursion much larger than the one represented by this patch.

Thanks,
nm

diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c
index c9b1d5f..a345e39 100644
*** a/src/backend/access/heap/heapam.c
--- b/src/backend/access/heap/heapam.c
***************
*** 975,1000 ****
  relation_openrv(const RangeVar *relation, LOCKMODE lockmode)
  {
  	Oid			relOid;
  
! 	/*
! 	 * Check for shared-cache-inval messages before trying to open the
! 	 * relation. This is needed to cover the case where the name identifies a
! 	 * rel that has been dropped and recreated since the start of our
! 	 * transaction: if we don't flush the old syscache entry then we'll latch
! 	 * onto that entry and suffer an error when we do RelationIdGetRelation.
! 	 * Note that relation_open does not need to do this, since a relation's
! 	 * OID never changes.
! 	 *
! 	 * We skip this if asked for NoLock, on the assumption that the caller has
! 	 * already ensured some appropriate lock is held.
! 	 */
! 	if (lockmode != NoLock)
! 		AcceptInvalidationMessages();
! 
! 	/* Look up the appropriate relation using namespace search */
! 	relOid = RangeVarGetRelid(relation, false);
  
  	/* Let relation_open do the rest */
! 	return relation_open(relOid, lockmode);
  }
--- 975,985 ----
  relation_openrv(const RangeVar *relation, LOCKMODE lockmode)
  {
  	Oid			relOid;
  
! 	/* Look up and lock the appropriate relation using namespace search */
! 	relOid = RangeVarLockRelid(relation, lockmode, false);
  
  	/* Let relation_open do the rest */
! 	return relation_open(relOid, NoLock);
  }
  
  /*
***************
*** 1012,1041 ****
  relation_openrv_extended(const RangeVar *relation, LOCKMODE lockmode,
  						 bool missing_ok)
  {
  	Oid			relOid;
  
! 	/*
! 	 * Check for shared-cache-inval messages before trying to open the
! 	 * relation. This is needed to cover the case where the name identifies a
! 	 * rel that has been dropped and recreated since the start of our
! 	 * transaction: if we don't flush the old syscache entry then we'll latch
! 	 * onto that entry and suffer an error when we do RelationIdGetRelation.
! 	 * Note that relation_open does not need to do this, since a relation's
! 	 * OID never changes.
! 	 *
! 	 * We skip this if asked for NoLock, on the assumption that the caller has
! 	 * already ensured some appropriate lock is held.
! 	 */
! 	if (lockmode != NoLock)
! 		AcceptInvalidationMessages();
! 
! 	/* Look up the appropriate relation using namespace search */
! 	relOid = RangeVarGetRelid(relation, missing_ok);
  
  	/* Return NULL on not-found */
  	if (!OidIsValid(relOid))
  		return NULL;
  
  	/* Let relation_open do the rest */
! 	return relation_open(relOid, lockmode);
  }
--- 997,1011 ----
  relation_openrv_extended(const RangeVar *relation, LOCKMODE lockmode,
  						 bool missing_ok)
  {
  	Oid			relOid;
  
! 	/* Look up and lock the appropriate relation using namespace search */
! 	relOid = RangeVarLockRelid(relation, lockmode, missing_ok);
  
  	/* Return NULL on not-found */
  	if (!OidIsValid(relOid))
  		return NULL;
  
  	/* Let relation_open do the rest */
! 	return relation_open(relOid, NoLock);
  }
  
  /*
diff --git a/src/backend/catalog/namespace.c b/src/backend/catalog/namespace.c
index ce795a6..f75fcef 100644
*** a/src/backend/catalog/namespace.c
--- b/src/backend/catalog/namespace.c
***************
*** 44,49 ****
--- 44,51 ----
  #include "parser/parse_func.h"
  #include "storage/backendid.h"
  #include "storage/ipc.h"
+ #include "storage/lmgr.h"
+ #include "storage/sinval.h"
  #include "utils/acl.h"
  #include "utils/builtins.h"
  #include "utils/guc.h"
*************** RangeVarGetRelid(const RangeVar *relation, bool failOK)
*** 285,290 ****
--- 287,372 ----
  }
  
  /*
+  * RangeVarLockR
Re: [HACKERS] spinlock contention
On Thu, Jun 23, 2011 at 11:42 AM, Robert Haas wrote: > On Wed, Jun 22, 2011 at 5:43 PM, Florian Pflug wrote: >> On Jun12, 2011, at 23:39 , Robert Haas wrote: >>> So, the majority (60%) of the excess spinning appears to be due to >>> SInvalReadLock. A good chunk are due to ProcArrayLock (25%). >> >> Hm, sizeof(LWLock) is 24 on X86-64, making sizeof(LWLockPadded) 32. >> However, cache lines are 64 bytes large on recent Intel CPUs AFAIK, >> so I guess that two adjacent LWLocks currently share one cache line. >> >> Currently, the ProcArrayLock has index 4 while SInvalReadLock has >> index 5, so if I'm not mistaken exactly the two locks where you saw >> the largest contention on are on the same cache line... >> >> Might make sense to try and see if these numbers change if you >> either make LWLockPadded 64bytes or arrange the locks differently... > > I fooled around with this a while back and saw no benefit. It's > possible a more careful test would turn up something, but I think the > only real way forward here is going to be to eliminate some of that > locking altogether. I did some benchmarking, on the 32-core system from Nate Boley, with pgbench -n -S -c 80 -j 80. With master+fastlock+lazyvxid, I can hit 180-200k TPS in the 32-client range. But at 80 clients throughput starts to fall off quite a bit, dropping down to about 80k TPS. So then, just for giggles, I inserted a "return;" statement at the top of AcceptInvalidationMessages(), turning it into a noop. Performance at 80 clients shot up to 210k TPS. I also tried inserting an acquire-and-release cycle on MyProc->backendLock, which was just as good. That seems like a pretty encouraging result, since there appear to be several ways of reimplementing SIGetDataEntries() that would work with a per-backend lock rather than a global one. I did some other experiments, too. 
In the hopes of finding a general way to reduce spinlock contention, I implemented a set of "elimination buffers", where backends that can't get a spinlock go and try to combine their requests and then send a designated representative to acquire the spinlock and acquire shared locks simultaneously for all group members. It took me a while to debug the code, and I still can't get it to do anything other than reduce performance, which may mean that I haven't found all the bugs yet, but I'm not feeling encouraged.

Some poking around suggests that the problem isn't that spinlocks are routinely contended - it seems that we nearly always get the spinlock right off the bat. I'm wondering if the problem may be not so much that we have continuous spinlock contention, but rather that every once in a while a process gets time-sliced out while it holds a spinlock. If it's an important spinlock (like the one protecting SInvalReadLock), the system will quickly evolve into a state where every single processor is doing nothing but trying to get that spinlock. Even after the hapless lock-holder gets to run again and lets go of the lock, you have a whole pile of other backends who are sitting there firing off lock xchgb in a tight loop, and they can only get it one at a time, so you have ferocious cache line contention until the backlog clears. Then things are OK again for a bit until the same thing happens to some other backend. This is just a theory, I might be totally wrong...

-- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] [v9.2] DROP Reworks Part.1 - Consolidate routines to handle DropStmt
On Wed, Jul 6, 2011 at 5:06 PM, Alvaro Herrera wrote: > Excerpts from Robert Haas's message of mié jul 06 14:02:13 -0400 2011: >> On Wed, Jul 6, 2011 at 1:09 PM, Alvaro Herrera >> wrote: >> > Excerpts from Robert Haas's message of mié jul 06 12:40:39 -0400 2011: >> > >> >> This patch removes an impressive amount of boilerplate code and >> >> replaces it with something much more compact. I like that. In the >> >> interest of full disclosure, I suggested this approach to KaiGai at >> >> PGCon, so I'm biased: but even so, I'm pleasantly surprised by the >> >> amount of consolidation that appears possible here. >> > >> > Yeah. Myself, I love the fact that the dropmsgstrings thing is gone. I >> > wonder if the routine to obtain "foo doesn't exist, skipping" messages >> > could be replaced by judicious use of getObjectDescription. >> >> I've been told we don't want to go further in that direction for >> reasons of translatability. > > Well, you can split sentences using a colon instead of building them; > something like > errmsg("object cannot be found, skipping: %s", > getObjectDescription(object)) > which renders as > object cannot be found, skipping: table foobar > > Now people can complain that these messages are "worse" than the > originals which were more specific in nature, but I don't personally see > a problem with that. I dunno what's the general opinion though. I'm going to try to stay out of it, since I only use PostgreSQL in English... -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Make relation_openrv atomic wrt DDL
On Wed, Jul 6, 2011 at 6:32 PM, Noah Misch wrote: >> Maybe this is a stupid idea, but what about changing the logic so >> that, if we get back InvalidOid, we AcceptInvalidationMessages() and >> retry if the counter has advanced? ISTM that might cover the example >> you mentioned in your last post, where we fail to detect a relation >> that has come into existence since our last call to >> AcceptInvalidationMessages(). It would cost an extra >> AcceptInvalidationMessages() only in the case where we haven't found >> the relation, which (a) seems like a good time to worry about whether >> we're missing something, since users generally try not to reference >> nonexistent tables and (b) should be rare enough to be ignorable from >> a performance perspective. > > Agreed on all points. Good idea. That improves our guarantee from "any > client-issued command will see tables committed before its submission" to > "_any command_ will see tables committed before its _parsing_". In > particular, commands submitted using SPI will no longer be subject to this > source of déjà vu. I, too, doubt that looking up nonexistent relations is a > performance-critical operation for anyone. > >> In the department of minor nitpicking, why not use a 64-bit counter >> for SharedInvalidMessageCounter? Then we don't have to think very >> hard about whether overflow can ever pose a problem. > > Overflow is fine because I only ever compare values for equality, and I use an > unsigned int to give defined behavior at overflow. However, the added cost of > a 64-bit counter should be negligible, and future use cases (including > external code) might appreciate it. No strong preference. Yeah, that's what I was thinking. I have a feeling we may want to use this mechanism in other places, including places where it would be nice to be able to assume that > has sensible semantics. >> It strikes me that, even with this patch, there is a fair amount of >> room for wonky behavior. 
For example, as your comment notes, if >> search_path = foo, bar, and we've previously referenced "x", getting >> "bar.x", the creation of "foo.x" will generate invalidation messages, >> but a subsequent reference - within the same transaction - to "x" will >> not cause us to read them. It would be nice to >> AcceptInvalidationMessages() unconditionally at the beginning of >> RangeVarGetRelid() [and then redo as necessary to get a stable >> answer], but that might have some performance consequence for >> transactions that repeatedly read the same tables. > > A user doing that should "LOCK bar.x" in the transaction that creates "foo.x", > giving a clean cutover. (I thought of documenting that somewhere, but it > seemed a tad esoteric.) In the absence of such a lock, an extra unconditional > call to AcceptInvalidationMessages() narrows the window in which his commands > parse as using the "wrong" table. However, commands that have already parsed > will still use the old table without interruption, with no particular bound on > when they may finish. I've failed to come up with a use case where the > narrower window for parse inconsistencies is valuable but the remaining > exposure is acceptable. There may very well be one I'm missing, though. > > While a mere "LOCK bar.x" is sufficient to get a clean cutover with respect to > parsing, it fails to invalidate plans. To really cover all bases, you need > some no-op action that invalidates "bar.x". For actual practical use, I'd > recommend something like: > > BEGIN; > ALTER TABLE bar.x RENAME TO x0; > ALTER TABLE bar.x0 RENAME TO x; > CREATE TABLE foo.x ... > COMMIT; > > Probably worth making it more intuitive to DTRT here. Well, what would be really nice is if it just worked. Care to submit an updated patch? 
-- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] [v9.2] Fix leaky-view problem, part 1
On Wed, Jul 06, 2011 at 10:25:12PM +0200, Kohei KaiGai wrote:
> *** a/src/backend/commands/view.c
> --- b/src/backend/commands/view.c
> --- 227,257 ----
>  			atcmd->def = (Node *) lfirst(c);
>  			atcmds = lappend(atcmds, atcmd);
>  		}
>  	}
>  
>  	/*
> +	 * If optional parameters are specified, we must set options
> +	 * using ALTER TABLE SET OPTION internally.
> +	 */
> +	if (list_length(options) > 0)
> +	{
> +		atcmd = makeNode(AlterTableCmd);
> +		atcmd->subtype = AT_SetRelOptions;
> +		atcmd->def = (List *)options;
> +
> +		atcmds = lappend(atcmds, atcmd);
> +	}
> +	else
> +	{
> +		atcmd = makeNode(AlterTableCmd);
> +		atcmd->subtype = AT_ResetRelOptions;
> +		atcmd->def = (Node *) list_make1(makeDefElem("security_barrier",
> +													 NULL));
> +	}
> +	if (atcmds != NIL)
> +		AlterTableInternal(viewOid, atcmds, true);
> +
> +	/*
>  	 * Seems okay, so return the OID of the pre-existing view.
>  	 */
>  	relation_close(rel, NoLock);	/* keep the lock! */

That gets the job done for today, but DefineVirtualRelation() should not need to know all view options by name to simply replace the existing list with a new one. I don't think you can cleanly use the ALTER TABLE SET/RESET code for this. Instead, compute an option list similar to how DefineRelation() does so at tablecmds.c:491, then update pg_class.

> 2011/7/5 Noah Misch :
> > On Sun, Jul 03, 2011 at 11:33:38AM +0200, Kohei KaiGai wrote:
> >> --- a/src/test/regress/sql/select_views.sql
> >> +++ b/src/test/regress/sql/select_views.sql
> >> +-- cleanups
> >> +DROP ROLE IF EXISTS alice;
> >> +DROP FUNCTION IF EXISTS f_leak(text);
> >> +DROP TABLE IF EXISTS credit_cards CASCADE;
> >
> > Keep the view around. That way, pg_dump tests of the regression database will
> > test the dumping of this view option. (Your pg_dump support for this feature
> > does work fine, though.)

The latest version of part 1 still drops everything here. 
-- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Crash dumps
On 6/07/2011 11:00 PM, Radosław Smogura wrote: I think IPC for fast shout down all backends and wait for report processing is quite enaugh. How do you propose to make that reliable, though? -- Craig Ringer POST Newspapers 276 Onslow Rd, Shenton Park Ph: 08 9381 3088 Fax: 08 9388 2258 ABN: 50 008 917 717 http://www.postnewspapers.com.au/ -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Old postgresql repository
On Wed, 2011-07-06 at 16:31 +0200, Magnus Hagander wrote: > When we did the migration to git, we decided to leave the old > postgresql git repository around "for a while", for people who had > clones around it. This is the repository that was live updated from > cvs while we were using cvs, and does *not* correspond to the current > git repository when it comes to hashes and other details. It's > available at http://git.postgresql.org/gitweb?p=postgresql-old.git;a=summary > > I think it's time to drop this repository now. Anybody who had active > clones on it should've moved to the new repository by now, I think. That's OK with me. Regards, Jeff Davis -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Make relation_openrv atomic wrt DDL
On Wed, Jul 06, 2011 at 03:06:40PM -0400, Robert Haas wrote: > On Sun, Jun 19, 2011 at 11:42 PM, Noah Misch wrote: > > On Sun, Jun 19, 2011 at 11:44:35PM +0100, Greg Stark wrote: > >> So I was the victim assigned to review this patch. > > > > Thanks for doing so. > > This discussion seems to have died off. Let's see if we can drive > this forward to some conclusion. > > I took a look at this patch and found that it had bit-rotted slightly. > I am attaching a rebased version. Thanks. > Maybe this is a stupid idea, but what about changing the logic so > that, if we get back InvalidOid, we AcceptInvalidationMessages() and > retry if the counter has advanced? ISTM that might cover the example > you mentioned in your last post, where we fail to detect a relation > that has come into existence since our last call to > AcceptInvalidationMessages(). It would cost an extra > AcceptInvalidationMessages() only in the case where we haven't found > the relation, which (a) seems like a good time to worry about whether > we're missing something, since users generally try not to reference > nonexistent tables and (b) should be rare enough to be ignorable from > a performance perspective. Agreed on all points. Good idea. That improves our guarantee from "any client-issued command will see tables committed before its submission" to "_any command_ will see tables committed before its _parsing_". In particular, commands submitted using SPI will no longer be subject to this source of déjà vu. I, too, doubt that looking up nonexistent relations is a performance-critical operation for anyone. > In the department of minor nitpicking, why not use a 64-bit counter > for SharedInvalidMessageCounter? Then we don't have to think very > hard about whether overflow can ever pose a problem. Overflow is fine because I only ever compare values for equality, and I use an unsigned int to give defined behavior at overflow. 
However, the added cost of a 64-bit counter should be negligible, and future use cases (including external code) might appreciate it. No strong preference. > It strikes me that, even with this patch, there is a fair amount of > room for wonky behavior. For example, as your comment notes, if > search_path = foo, bar, and we've previously referenced "x", getting > "bar.x", the creation of "foo.x" will generate invalidation messages, > but a subsequent reference - within the same transaction - to "x" will > not cause us to read them. It would be nice to > AcceptInvalidationMessages() unconditionally at the beginning of > RangeVarGetRelid() [and then redo as necessary to get a stable > answer], but that might have some performance consequence for > transactions that repeatedly read the same tables. A user doing that should "LOCK bar.x" in the transaction that creates "foo.x", giving a clean cutover. (I thought of documenting that somewhere, but it seemed a tad esoteric.) In the absence of such a lock, an extra unconditional call to AcceptInvalidationMessages() narrows the window in which his commands parse as using the "wrong" table. However, commands that have already parsed will still use the old table without interruption, with no particular bound on when they may finish. I've failed to come up with a use case where the narrower window for parse inconsistencies is valuable but the remaining exposure is acceptable. There may very well be one I'm missing, though. While a mere "LOCK bar.x" is sufficient to get a clean cutover with respect to parsing, it fails to invalidate plans. To really cover all bases, you need some no-op action that invalidates "bar.x". For actual practical use, I'd recommend something like: BEGIN; ALTER TABLE bar.x RENAME TO x0; ALTER TABLE bar.x0 RENAME TO x; CREATE TABLE foo.x ... COMMIT; Probably worth making it more intuitive to DTRT here. 
Thanks, nm -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] [v9.2] DROP Reworks Part.1 - Consolidate routines to handle DropStmt
Excerpts from Robert Haas's message of mié jul 06 14:02:13 -0400 2011: > On Wed, Jul 6, 2011 at 1:09 PM, Alvaro Herrera > wrote: > > Excerpts from Robert Haas's message of mié jul 06 12:40:39 -0400 2011: > > > >> This patch removes an impressive amount of boilerplate code and > >> replaces it with something much more compact. I like that. In the > >> interest of full disclosure, I suggested this approach to KaiGai at > >> PGCon, so I'm biased: but even so, I'm pleasantly surprised by the > >> amount of consolidation that appears possible here. > > > > Yeah. Myself, I love the fact that the dropmsgstrings thing is gone. I > > wonder if the routine to obtain "foo doesn't exist, skipping" messages > > could be replaced by judicious use of getObjectDescription. > > I've been told we don't want to go further in that direction for > reasons of translatability. Well, you can split sentences using a colon instead of building them; something like errmsg("object cannot be found, skipping: %s", getObjectDescription(object)) which renders as object cannot be found, skipping: table foobar Now people can complain that these messages are "worse" than the originals which were more specific in nature, but I don't personally see a problem with that. I dunno what's the general opinion though. -- Álvaro Herrera The PostgreSQL Company - Command Prompt, Inc. PostgreSQL Replication, Consulting, Custom Development, 24x7 support -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Review of VS 2010 support patches
On 07/06/2011 04:41 PM, Brar Piening wrote:
> I certainly could. But as those files are Andrew's work, which isn't really related to the VS2010 build and could as well be committed separately, I don't want to take credit for it. I'll remove my versions from the patch (v9 probably) if those files get committed.

I'm just doing some final testing and preparing to commit the new pgflex and pgbison.

cheers

andrew

-- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Review of VS 2010 support patches
Original Message Subject: Re: [HACKERS] Review of VS 2010 support patches From: Craig Ringer To: Brar Piening Date: 06.07.2011 14:56

> It turns out that VS2010v8.patch is also attached to the same message. Not that you'd know it from the ... interesting ... way the web ui presents attachments. Sorry I missed it.

Yes, I've also noticed that the web ui has somewhat screwed up the two patches attached to my email. This seems avoidable, as one can see in http://archives.postgresql.org/pgsql-hackers/2011-07/msg00140.php, but I don't know how.

[...]
> That makes sense. Do you want to integrate those in a v9 revision along with a docs patch?

I certainly could. But as those files are Andrew's work, which isn't really related to the VS2010 build and could as well be committed separately, I don't want to take credit for it. I'll remove my versions from the patch (v9 probably) if those files get committed.

[...]
> For the docs, it might be worth being more specific about the visual studio versions.
[...]

Thanks for the hints; I'll consider them once I get started with the docs.

[...]
> Now I just need to test with Windows SDK 6.0 (if I can even get it to install on win7 x64; the installer keeps crashing) as that's the SDK shipped with Visual Studio 2005 SP1.

Actually, I've also successfully tested an empty (no config.pl) 32-bit build using Visual Studio 2005 RTM.

Regards, Brar

-- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] reducing the overhead of frequent table locks, v4
On Wed, Jul 6, 2011 at 2:02 PM, Jeff Davis wrote: > On Thu, 2011-06-30 at 19:25 -0400, Robert Haas wrote: >> I'm really hurting >> for is some code review. > > I'm trying to get my head into this patch. I have a couple questions: > > Does this happen to be based on some academic research? I don't > necessarily expect it to be; just thought I'd ask. I did spend some time Googling around for papers and some of the other approaches I've tried (mostly unsuccessfully, thus far) were based on those papers, but I don't remember running across anything that resembled the approach taken in the patch. I think that the patch basically came out what I know about how PostgreSQL works: namely, we take tons of locks, but mostly of a sort that don't conflict with each other. > Here is my high-level understanding of the approach, please correct me > where I'm mistaken: > > Right now, concurrent activity on the same object, even with weak locks, > causes contention because everything has to hit the same global lock > partition. Because we expect an actual conflict to be rare, this patch > kind of turns the burden upside down such that: > (a) those taking weak locks need only acquire a lock on their own lock > in their own PGPROC, which means that it doesn't contend with anyone > else taking out a weak lock; and > (b) taking out a strong lock requires a lot more work, because it needs > to look at every backend in the proc array to see if it has conflicting > locks. > > Of course, those things both have some complexity, because the > operations need to be properly synchronized. You force a valid schedule > by using the memory synchronization guarantees provided by taking those > per-backend locks rather than a centralized lock, thus still avoiding > lock contention in the common (weak locks only) case. You got it. That's a very good summary. 
-- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Range Types, constructors, and the type system
Jeff Davis writes: > On Wed, 2011-07-06 at 12:51 -0400, Robert Haas wrote: >> On Wed, Jul 6, 2011 at 12:22 PM, Jeff Davis wrote: >>> To get into some more details: how exactly would this constructor be >>> generated on the fly? Clearly we want only one underlying C function >>> that accepts something like: >>> range_internal(lower, upper, flags, Oid rangetype) >>> So how do we get the rangetype in there? >> I think that the C function could call get_call_result_type() and get >> the return type OID back via the second argument. > I'm also a little unclear on the rules for when that might be set > properly or not. > I ran into problems with that before... I think with the I/O functions. > I don't think that's a problem here, but I thought I'd ask. I think it'd probably be all right to do that. The places where you might find shortcuts being taken are where functions are called directly by C code, such as I/O function calls --- but these constructors should only ever get invoked from SQL queries, no? regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Make relation_openrv atomic wrt DDL
On Sun, Jun 19, 2011 at 11:42 PM, Noah Misch wrote: > On Sun, Jun 19, 2011 at 11:44:35PM +0100, Greg Stark wrote: >> So I was the victim assigned to review this patch. > > Thanks for doing so. This discussion seems to have died off. Let's see if we can drive this forward to some conclusion. I took a look at this patch and found that it had bit-rotted slightly. I am attaching a rebased version. Maybe this is a stupid idea, but what about changing the logic so that, if we get back InvalidOid, we AcceptInvalidationMessages() and retry if the counter has advanced? ISTM that might cover the example you mentioned in your last post, where we fail to detect a relation that has come into existence since our last call to AcceptInvalidationMessages(). It would cost an extra AcceptInvalidationMessages() only in the case where we haven't found the relation, which (a) seems like a good time to worry about whether we're missing something, since users generally try not to reference nonexistent tables and (b) should be rare enough to be ignorable from a performance perspective. In the department of minor nitpicking, why not use a 64-bit counter for SharedInvalidMessageCounter? Then we don't have to think very hard about whether overflow can ever pose a problem. It strikes me that, even with this patch, there is a fair amount of room for wonky behavior. For example, as your comment notes, if search_path = foo, bar, and we've previously referenced "x", getting "bar.x", the creation of "foo.x" will generate invalidation messages, but a subsequent reference - within the same transaction - to "x" will not cause us to read them. It would be nice to AcceptInvalidationMessages() unconditionally at the beginning of RangeVarGetRelid() [and then redo as necessary to get a stable answer], but that might have some performance consequence for transactions that repeatedly read the same tables. 
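The retry idea above can be sketched concretely. The following is a toy model (invented names and a one-entry fake catalog; not the real backend code) of the proposed behaviour: on a lookup miss, process invalidation traffic and retry only if the counter actually advanced. The counter is 64-bit, as suggested, so overflow never needs thinking about.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

typedef unsigned int Oid;
#define InvalidOid 0u

static uint64_t SharedInvalidMessageCounter;

static char cat_name[64];     static Oid cat_oid;       /* the "catalog" */
static char pending_name[64]; static Oid pending_oid;   /* queued sinval */

static Oid lookup(const char *name)
{
    return (cat_oid != InvalidOid && strcmp(cat_name, name) == 0)
        ? cat_oid : InvalidOid;
}

static void AcceptInvalidationMessages(void)
{
    if (pending_oid != InvalidOid) {   /* apply the queued "CREATE TABLE" */
        strcpy(cat_name, pending_name);
        cat_oid = pending_oid;
        pending_oid = InvalidOid;
        SharedInvalidMessageCounter++;
    }
}

/* Another backend creates a relation; we only learn of it via sinval. */
void simulate_create(const char *name, Oid oid)
{
    strcpy(pending_name, name);
    pending_oid = oid;
}

/* The proposed logic: a miss costs one extra AcceptInvalidationMessages(),
 * and we loop only while new messages keep arriving. */
Oid RangeVarGetRelid_retry(const char *name)
{
    for (;;) {
        uint64_t start = SharedInvalidMessageCounter;
        Oid relid = lookup(name);
        if (relid != InvalidOid)
            return relid;
        AcceptInvalidationMessages();
        if (SharedInvalidMessageCounter == start)
            return InvalidOid;   /* nothing new arrived: genuinely absent */
    }
}
```

This captures why the extra cost lands only on the not-found path: a successful lookup returns before any sinval processing happens.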
-- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company atomic-openrv-v3.patch Description: Binary data -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Old postgresql repository
On Wed, Jul 06, 2011 at 04:31:53PM +0200, Magnus Hagander wrote: > When we did the migration to git, we decided to leave the old > postgresql git repository around "for a while", for people who had > clones around it. This is the repository that was live updated from > cvs while we were using cvs, and does *not* correspond to the current > git repository when it comes to hashes and other details. It's > available at http://git.postgresql.org/gitweb?p=postgresql-old.git;a=summary > > I think it's time to drop this repository now. Anybody who had active > clones on it should've moved to the new repository by now, I think. > > Comments? "...Linus will know his own" ;) Cheers, David. -- David Fetter http://fetter.org/ Phone: +1 415 235 3778 AIM: dfetter666 Yahoo!: dfetter Skype: davidfetter XMPP: david.fet...@gmail.com iCal: webcal://www.tripit.com/feed/ical/people/david74/tripit.ics Remember to vote! Consider donating to Postgres: http://www.postgresql.org/about/donate -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Avoid index rebuilds for no-rewrite ALTER TABLE ALTER TYPE
On Wed, Jul 06, 2011 at 09:55:01AM -0400, Robert Haas wrote: > On Thu, Jun 30, 2011 at 1:02 PM, Robert Haas wrote: > > On first blush, that looks a whole lot cleaner. ?I'll try to find some > > time for a more detailed review soon. > > This seems not to compile for me: > > gcc -O2 -Wall -Wmissing-prototypes -Wpointer-arith > -Wdeclaration-after-statement -Wendif-labels -Wformat-security > -fno-strict-aliasing -fwrapv -g -Werror -I../../../src/include > -I/opt/local/include -c -o index.o index.c -MMD -MP -MF > .deps/index.Po > index.c:692: error: conflicting types for ?index_create? > ../../../src/include/catalog/index.h:53: error: previous declaration > of ?index_create? was here > cc1: warnings being treated as errors > index.c: In function ?index_create?: > index.c:821: warning: passing argument 5 of ?heap_create? makes > integer from pointer without a cast > index.c:821: warning: passing argument 6 of ?heap_create? makes > pointer from integer without a cast > index.c:821: error: too few arguments to function ?heap_create? Drat; fixed in this version. My local branches contain a large test battery that I filter out of the diff before posting. This time, that filter also removed an essential part of the patch. Thanks, nm diff --git a/doc/src/sgml/xindex.sgml b/doc/src/sgml/xindex.sgml index a90b4e2..b6fe908 100644 *** a/doc/src/sgml/xindex.sgml --- b/doc/src/sgml/xindex.sgml *** *** 834,846 ALTER OPERATOR FAMILY integer_ops USING btree ADD In a B-tree operator family, all the operators in the family must sort compatibly, meaning that the transitive laws hold across all the data types !supported by the family: if A = B and B = C, then A = !C, and if A < B and B < C, then A < C. For each !operator in the family there must be a support function having the same !two input data types as the operator. It is recommended that a family be !complete, i.e., for each combination of data types, all operators are !included. 
Each operator class should include just the non-cross-type !operators and support function for its data type. --- 834,848 In a B-tree operator family, all the operators in the family must sort compatibly, meaning that the transitive laws hold across all the data types !supported by the family: if A = B and B = C, then A = C, !and if A < B and B < C, then A < C. Subjecting operands !to any number of implicit casts or binary coercion casts involving only !types represented in the family must not change the result other than to !require a similar cast thereon. For each operator in the family there must !be a support function having the same two input data types as the operator. !It is recommended that a family be complete, i.e., for each combination of !data types, all operators are included. Each operator class should include !just the non-cross-type operators and support function for its data type. *** *** 851,861 ALTER OPERATOR FAMILY integer_ops USING btree ADD by the family's equality operators, even when the values are of different types. This is usually difficult to accomplish when the types have different physical representations, but it can be done in some cases. !Notice that there is only one support function per data type, not one !per equality operator. It is recommended that a family be complete, i.e., !provide an equality operator for each combination of data types. !Each operator class should include just the non-cross-type equality !operator and the support function for its data type. --- 853,865 by the family's equality operators, even when the values are of different types. This is usually difficult to accomplish when the types have different physical representations, but it can be done in some cases. !Implicit casts and binary coercion casts among types represented in the !family must preserve this invariant. Notice that there is only one support !function per data type, not one per equality operator. 
It is recommended !that a family be complete, i.e., provide an equality operator for each !combination of data types. Each operator class should include just the !non-cross-type equality operator and the support function for its data !type. diff --git a/src/backend/bootstrap/bootparse.y b/src/backend/bootstrap/bootparse.y index f3e85aa..d0a0e92 100644 *** a/src/backend/bootstrap/bootparse.y --- b/src/backend/bootstrap/bootparse.y *** *** 217,222 Boot_CreateStmt: --- 217,223 PG_CATALOG_NAMESPACE, shared_relation ? GLOBALTABLESPACE_OID : 0,
Re: [HACKERS] reducing the overhead of frequent table locks, v4
On Thu, 2011-06-30 at 19:25 -0400, Robert Haas wrote: > I'm really hurting > for is some code review. I'm trying to get my head into this patch. I have a couple questions: Does this happen to be based on some academic research? I don't necessarily expect it to be; just thought I'd ask. Here is my high-level understanding of the approach, please correct me where I'm mistaken: Right now, concurrent activity on the same object, even with weak locks, causes contention because everything has to hit the same global lock partition. Because we expect an actual conflict to be rare, this patch kind of turns the burden upside down such that: (a) those taking weak locks need only acquire a lock on their own lock in their own PGPROC, which means that it doesn't contend with anyone else taking out a weak lock; and (b) taking out a strong lock requires a lot more work, because it needs to look at every backend in the proc array to see if it has conflicting locks. Of course, those things both have some complexity, because the operations need to be properly synchronized. You force a valid schedule by using the memory synchronization guarantees provided by taking those per-backend locks rather than a centralized lock, thus still avoiding lock contention in the common (weak locks only) case. Regards, Jeff Davis -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] [v9.2] DROP Reworks Part.1 - Consolidate routines to handle DropStmt
On Wed, Jul 6, 2011 at 1:09 PM, Alvaro Herrera wrote: > Excerpts from Robert Haas's message of mié jul 06 12:40:39 -0400 2011: > >> This patch removes an impressive amount of boilerplate code and >> replaces it with something much more compact. I like that. In the >> interest of full disclosure, I suggested this approach to KaiGai at >> PGCon, so I'm biased: but even so, I'm pleasantly surprised by the >> amount of consolidation that appears possible here. > > Yeah. Myself, I love the fact that the dropmsgstrings thing is gone. I > wonder if the routine to obtain "foo doesn't exist, skipping" messages > could be replaced by judicious use of getObjectDescription. I've been told we don't want to go further in that direction for reasons of translatability. -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] [v9.2] DROP Reworks Part.1 - Consolidate routines to handle DropStmt
Excerpts from Robert Haas's message of mié jul 06 12:40:39 -0400 2011: > This patch removes an impressive amount of boilerplate code and > replaces it with something much more compact. I like that. In the > interest of full disclosure, I suggested this approach to KaiGai at > PGCon, so I'm biased: but even so, I'm pleasantly surprised by the > amount of consolidation that appears possible here. Yeah. Myself, I love the fact that the dropmsgstrings thing is gone. I wonder if the routine to obtain "foo doesn't exist, skipping" messages could be replaced by judicious use of getObjectDescription. > Here, we're essentially duplicating that information in another > place, which doesn't seem good. I think perhaps we should create a > big static array, each row of which would contain: > > - ObjectType > - system cache ID for OID lookups > - system catalog table OID for scans > - attribute number for the name attribute, where applicable (see > AlterObjectNamespace) > - attribute number for the namespace attribute > - attribute number for the owner attribute > - ...and maybe some other properties +1 for this general approach. -- Álvaro Herrera The PostgreSQL Company - Command Prompt, Inc. PostgreSQL Replication, Consulting, Custom Development, 24x7 support -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Range Types, constructors, and the type system
On Wed, 2011-07-06 at 12:51 -0400, Robert Haas wrote: > On Wed, Jul 6, 2011 at 12:22 PM, Jeff Davis wrote: > > To get into some more details: how exactly would this constructor be > > generated on the fly? Clearly we want only one underlying C function > > that accepts something like: > > range_internal(lower, upper, flags, Oid rangetype) > > So how do we get the rangetype in there? > > I think that the C function could call get_call_result_type() and get > the return type OID back via the second argument. I'm also a little unclear on the rules for when that might be set properly or not. I ran into problems with that before... I think with the I/O functions. I don't think that's a problem here, but I thought I'd ask. Regards, Jeff Davis -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Review: psql include file using relative path
On Wed, Jul 6, 2011 at 11:58 AM, Robert Haas wrote: > On Mon, Jun 6, 2011 at 10:11 PM, Gurjeet Singh > wrote: > > On Mon, Jun 6, 2011 at 9:48 PM, Josh Kupershmidt > wrote: > >> > >> On Sun, Jun 5, 2011 at 8:16 PM, Gurjeet Singh > >> wrote: > >> > Attached an updated patch. > >> > > >> > If you find it ready for committer, please mark it so in the > commitfest > >> > app. > >> > >> I can't find anything further to nitpick with this patch, and have > >> marked it Ready For Committer in the CF. Thanks for your work on this, > >> I am looking forward to the feature. > > > > Thanks for your reviews and perseverance :) > > I committed this after substantial further revisions: > > - I rewrote the changes to process_file() to use the pathname-handling > functions in src/port, rather custom code. Along the way, relpath > became a constant-size buffer, which should be OK since > join_pathname_components() knows about MAXPGPATH. This has what I > consider to be a useful side effect of not calling pg_malloc() here, > which means we don't have to remember to free the memory. > > - I added a safeguard against someone doing something like "\ir E:foo" > on Windows. Although that's not an absolute path, for purposes of \ir > it needs to be treated as one. I don't have a Windows build > environment handy so someone may want to test that I haven't muffed > this. > > - I rewrote the documentation and a number of the comments to be (I > hope) more clear. > > - I reverted some unnecessary whitespace changes in exec_command(). > > - As proposed, the patch declared process_file with a non-constant > initialized and then declared another variable after that. I believe > some old compilers will barf on that. Since it isn't needed in that > block anyway, I moved it to an inner block. > > - I incremented the pager line count for psql's help. > Thank you Robert and Josh for all the help. -- Gurjeet Singh EnterpriseDB Corporation The Enterprise PostgreSQL Company
Re: [HACKERS] Range Types, constructors, and the type system
On Wed, Jul 6, 2011 at 12:22 PM, Jeff Davis wrote: > To get into some more details: how exactly would this constructor be > generated on the fly? Clearly we want only one underlying C function > that accepts something like: > range_internal(lower, upper, flags, Oid rangetype) > So how do we get the rangetype in there? I think that the C function could call get_call_result_type() and get the return type OID back via the second argument. > Also, are default arguments always applied in all the contexts where > this function might be called? Uh, I'm not sure. But I don't see why it would need different handling than any other function which takes default arguments. It shouldn't be needed during bootstrapping or anything funky like that. -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] [v9.2] DROP Reworks Part.1 - Consolidate routines to handle DropStmt
On Tue, Jun 28, 2011 at 4:17 AM, Kohei KaiGai wrote: > The attached patch is rebased one to consolidate routines to remove objects > using the revised get_object_address(). > > The new RemoveObjects() replaces the following routines; having > similar structure. > - RemoveRelations > - RemoveTypes > - DropCollationsCommand > - DropConversionsCommand > - RemoveSchemas > - RemoveTSParsers > - RemoveTSDictionaries > - RemoveTSTemplates > - RemoveTSConfigurations > - RemoveExtensions > > I guess the most arguable part of this patch is modifications to > get_relation_by_qualified_name(). > > This patch breaks down the relation_openrv_extended() into > a pair of RangeVarGetRelid() and relation_open() to inject > LockRelationOid() between them, because index_drop() logic > requires the table owning the target index to be locked prior to > the index itself. This patch removes an impressive amount of boilerplate code and replaces it with something much more compact. I like that. In the interest of full disclosure, I suggested this approach to KaiGai at PGCon, so I'm biased: but even so, I'm pleasantly surprised by the amount of consolidation that appears possible here. I think that get_object_namespace() needs to be rethought. If you take a look at AlterObjectNamespace() and its callers, you'll notice that we already have, encoded in those call sites, the knowledge of which object types can be looked up in which system caches, and which columns within the resulting heap tuples will contain the schema OIDs. Here, we're essentially duplicating that information in another place, which doesn't seem good. 
I think perhaps we should create a big static array, each row of which would contain: - ObjectType - system cache ID for OID lookups - system catalog table OID for scans - attribute number for the name attribute, where applicable (see AlterObjectNamespace) - attribute number for the namespace attribute - attribute number for the owner attribute - ...and maybe some other properties We could then create a function (in objectaddress.c, probably) that you call with an ObjectType argument, and it returns a pointer to the appropriate struct entry, or calls elog() if none is found. This function could be used by the existing object_exists() functions in lieu of having its own giant switch statement, by AlterObjectNamespace(), and by this new consolidated DROP code. If you agree with this approach, we should probably make it a separate patch. Other minor things I noticed: - get_object_namespace() itself does not need to take both an ObjectType and an ObjectAddress - just an ObjectAddress should be sufficient. - dropcmds.c has a header comment saying dropcmd.c - RemoveObjects() could use forboth() instead of inventing its own way to loop over two lists in parallel - Why does RemoveObjects() need to call RelationForgetRelation()? - It might be cleaner, rather than hacking up get_relation_by_qualified_name(), to just have special-purpose code for dropping relations. It looks like you will need at least two special cases for relations as opposed to other types of objects that someone might want to drop, so it may make sense to keep them separate. Remember, in any case, that we don't want to redefine get_relation_by_qualified_name() for other callers, who don't have the same needs as RemoveObjects(). This would also avoid things like this: -ERROR: view "atestv4" does not exist +ERROR: relation "atestv4" does not exist ...which are probably not desirable. 
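The proposed static array might look something like this sketch (field names abridged and the cache IDs illustrative rather than the actual syscache numbers; the catalog OIDs shown are the familiar pg_class/pg_type/pg_namespace values):

```c
#include <assert.h>
#include <stddef.h>

typedef enum { OBJ_TABLE, OBJ_TYPE, OBJ_SCHEMA } ObjType;   /* abridged */

typedef struct {
    ObjType  objtype;
    int      cache_id;       /* syscache for OID lookups */
    unsigned catalog_oid;    /* catalog to scan when no cache applies */
    int      name_attnum;    /* name column, or -1 where not applicable */
    int      nsp_attnum;     /* namespace column, or -1 */
    int      owner_attnum;   /* owner column, or -1 */
} ObjectPropertyType;

static const ObjectPropertyType ObjectProperty[] = {
    /* objtype     cache  catalog  name  nsp  owner  (cache ids invented) */
    { OBJ_TABLE,    41,    1259,    1,    2,    3 },
    { OBJ_TYPE,     75,    1247,    1,    2,    3 },
    { OBJ_SCHEMA,   62,    2615,    1,   -1,    2 },
};

/* Lookup helper; a real version would elog(ERROR) on a miss instead of
 * returning NULL, as suggested above. */
const ObjectPropertyType *get_object_property(ObjType t)
{
    for (size_t i = 0; i < sizeof(ObjectProperty) / sizeof(ObjectProperty[0]); i++)
        if (ObjectProperty[i].objtype == t)
            return &ObjectProperty[i];
    return NULL;
}
```

With one such table, object_exists(), AlterObjectNamespace(), and the consolidated DROP path would all consult the same row instead of each carrying its own switch statement.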
-- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Range Types, constructors, and the type system
On Wed, 2011-07-06 at 09:10 -0400, Robert Haas wrote: > > There's some slight ugliness around the NULL/infinity business, but I > > think that I could be convinced. I'd like to avoid confusion between > > NULL and infinity if possible. > > I was thinking that if you passed 'i' for one of the bounds, it would > ignore the supplied argument and substitute its special infinity > value. But you'd still need to supply some argument in that position, > which could be NULL or anything else. It doesn't really seem worth > having additional constructor functions to handle the case where one > or both arguments are infinite. Right, that's what I assumed that you meant. I can't think of anything better, either, because I like the fact that two arguments are there so that you can visually see which sides are bounded/unbounded. I suppose we could have constructors like: range(text, subtype) and range(subtype, text) where the text field is used to specify "infinity". But that has the obvious problem "what if the subtype is text?". So, of course, we make a special new pseudotype to represent infinity... ;) But seriously, your idea is starting to look more appealing. To get into some more details: how exactly would this constructor be generated on the fly? Clearly we want only one underlying C function that accepts something like: range_internal(lower, upper, flags, Oid rangetype) So how do we get the rangetype in there? I suppose a default 4th argument? That would be kind of an interesting option, but what if someone actually specified that 4th argument? We couldn't allow that. Also, are default arguments always applied in all the contexts where this function might be called? Regards, Jeff Davis -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
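The "one C workhorse, many generated SQL constructors" idea upthread can be modeled without a default fourth argument at all. This toy sketch (invented names; not the fmgr API) stands a struct in for FunctionCallInfo and a field in for what get_call_result_type() would report, to show where the range type comes from:

```c
#include <assert.h>
#include <stdbool.h>

typedef unsigned int Oid;

typedef struct {                 /* stands in for FunctionCallInfo */
    Oid declared_result_type;    /* what get_call_result_type() would return */
} FakeCallInfo;

typedef struct {
    Oid    rangetype;
    double lower;
    double upper;
    bool   lower_inf;            /* 'i' flag: bound is infinite */
    bool   upper_inf;
} FakeRange;

/* One workhorse serves every generated constructor: the range type is
 * recovered from the call site, so no extra SQL-visible "rangetype"
 * argument (defaulted or otherwise) is needed. */
FakeRange range_constructor(FakeCallInfo *fcinfo,
                            double lower, double upper,
                            bool lower_inf, bool upper_inf)
{
    FakeRange r;
    r.rangetype = fcinfo->declared_result_type;
    r.lower = lower;
    r.upper = upper;
    r.lower_inf = lower_inf;
    r.upper_inf = upper_inf;
    return r;
}
```

Each SQL-level constructor would be declared with a different return type but point at the same C function, which is why resolving the type at call time sidesteps the "what if someone specified the fourth argument" problem entirely.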
Re: [HACKERS] Old postgresql repository
On Wed, Jul 6, 2011 at 10:31 AM, Magnus Hagander wrote: > When we did the migration to git, we decided to leave the old > postgresql git repository around "for a while", for people who had > clones around it. This is the repository that was live updated from > cvs while we were using cvs, and does *not* correspond to the current > git repository when it comes to hashes and other details. It's > available at http://git.postgresql.org/gitweb?p=postgresql-old.git;a=summary > > I think it's time to drop this repository now. Anybody who had active > clones on it should've moved to the new repository by now, I think. > > Comments? OK by me. Thanks for holding off. -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Review: psql include file using relative path
On Mon, Jun 6, 2011 at 10:11 PM, Gurjeet Singh wrote: > On Mon, Jun 6, 2011 at 9:48 PM, Josh Kupershmidt wrote: >> >> On Sun, Jun 5, 2011 at 8:16 PM, Gurjeet Singh >> wrote: >> > Attached an updated patch. >> > >> > If you find it ready for committer, please mark it so in the commitfest >> > app. >> >> I can't find anything further to nitpick with this patch, and have >> marked it Ready For Committer in the CF. Thanks for your work on this, >> I am looking forward to the feature. > > Thanks for your reviews and perseverance :) I committed this after substantial further revisions: - I rewrote the changes to process_file() to use the pathname-handling functions in src/port, rather than custom code. Along the way, relpath became a constant-size buffer, which should be OK since join_pathname_components() knows about MAXPGPATH. This has what I consider to be a useful side effect of not calling pg_malloc() here, which means we don't have to remember to free the memory. - I added a safeguard against someone doing something like "\ir E:foo" on Windows. Although that's not an absolute path, for purposes of \ir it needs to be treated as one. I don't have a Windows build environment handy so someone may want to test that I haven't muffed this. - I rewrote the documentation and a number of the comments to be (I hope) more clear. - I reverted some unnecessary whitespace changes in exec_command(). - As proposed, the patch declared process_file with a non-constant initializer and then declared another variable after that. I believe some old compilers will barf on that. Since it isn't needed in that block anyway, I moved it to an inner block. - I incremented the pager line count for psql's help. -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
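The \ir resolution rule described above, including the "E:foo" safeguard, can be sketched roughly as follows (helper names invented; the real code uses the src/port pathname functions rather than hand-rolled string handling): a path is treated as absolute if it starts with a separator, or, on Windows, if it carries a drive prefix like "E:" -- even though "E:foo" is drive-relative rather than truly absolute, it must not be joined to the including file's directory.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define MAXPGPATH 1024   /* same spirit as the real constant */

static bool is_absolute_ish(const char *path, bool windows)
{
    if (path[0] == '/' || path[0] == '\\')
        return true;
    if (windows &&
        ((path[0] >= 'A' && path[0] <= 'Z') ||
         (path[0] >= 'a' && path[0] <= 'z')) &&
        path[1] == ':')
        return true;     /* drive prefix: treat as absolute for \ir */
    return false;
}

/* Resolve "filename" as \ir would, relative to the file currently
 * being read ("cur"; NULL means stdin / top level, so no joining). */
void resolve_ir(char *out, const char *cur, const char *filename, bool windows)
{
    if (cur == NULL || is_absolute_ish(filename, windows)) {
        snprintf(out, MAXPGPATH, "%s", filename);
        return;
    }
    /* join dirname(cur) with filename */
    const char *slash = strrchr(cur, '/');
    if (slash)
        snprintf(out, MAXPGPATH, "%.*s/%s", (int)(slash - cur), cur, filename);
    else
        snprintf(out, MAXPGPATH, "%s", filename);
}
```

The fixed-size output buffer mirrors the commit's switch to a constant-size relpath bounded by MAXPGPATH, avoiding the pg_malloc()/free bookkeeping.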
Re: [HACKERS] Crash dumps
On Wed, 06 Jul 2011 07:59:12 +0800, Craig Ringer wrote: On 5/07/2011 9:05 PM, Magnus Hagander wrote: On Tue, Jul 5, 2011 at 15:02, Robert Haas wrote: On Mon, Jul 4, 2011 at 12:47 PM, Radosław Smogura wrote: I asked about crash reports because at this time there was a thread about crashing in a live system. Yeah, I thought this was the result of that effort: [snip] That crash dump is basically the windows equivalent of a coredump, though. Just a different name... Yup, it's a cut-down core dump. In this case generated in-process by the crashing backend. It'd be nice to be able to generate the crash dump from out-of-process. Unfortunately, the automatic crash dump generation system on Windows doesn't appear to be available to system services running non-interactively. Not that I could see, anyway. As a result we had to trap the crashes within the crashing process and generate the dump from there. As previously stated, doing anything within a segfaulting process is unreliable, so it's not the best approach in the world. All I was saying in this thread is that it'd be nice to have a way for a crashing backend to request that another process capture diagnostic information from it before it exits with a fault, so it doesn't have to try to dump itself. As Tom said, though, anything like that is more likely to decrease the reliability of the overall system. You don't want a dead backend hanging around forever waiting for the postmaster to act on it, and you *really* don't want other backends still alive and potentially writing from shm that's in who-knows-what state while the postmaster is busy fiddling with a crashed backend. So, overall, I think "dump a simple core and die as quickly as possible" is the best option. That's how it already works on UNIX, and all the win32 crash dump patches do is make it work on Windows too. 
-- Craig Ringer POST Newspapers 276 Onslow Rd, Shenton Park Ph: 08 9381 3088 Fax: 08 9388 2258 ABN: 50 008 917 717 http://www.postnewspapers.com.au/ Personally, I will not send a core dump to anyone; a core dump may contain sensitive information not only from the postmaster, but from other applications too. Btw, I just took a core dump from the postmaster and found in it some DNS addresses I had connected to earlier from bash. The postmaster should not see those. I think IPC to quickly shut down all backends and wait for report processing is quite enough. Regards, Radosław Smogura -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
[HACKERS] Old postgresql repository
When we did the migration to git, we decided to leave the old postgresql git repository around "for a while", for people who had clones around it. This is the repository that was live updated from cvs while we were using cvs, and does *not* correspond to the current git repository when it comes to hashes and other details. It's available at http://git.postgresql.org/gitweb?p=postgresql-old.git;a=summary I think it's time to drop this repository now. Anybody who had active clones on it should've moved to the new repository by now, I think. Comments? -- Magnus Hagander Me: http://www.hagander.net/ Work: http://www.redpill-linpro.com/ -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] storing TZ along timestamps
Hi, On May 27, 2011, at 11:43 PM, Alvaro Herrera wrote: > > One of our customers is interested in being able to store original > timezone along with a certain timestamp. > > It is currently possible to store a TZ in a separate column, but this is > a bit wasteful and not very convenient anyway. > > There are all sorts of UI issues that need to be resolved in order for > this to be a complete feature proposal, but the first thing that we > discussed was what is the storage going to look like. Of course, one > thing we don't want is to store the complete TZ name as text. > > So the first thing is cataloguing timezone names, and assigning an ID to > each (maybe an OID). If we do that, then we can store the OID of the > timezone name along the int64/float8 of the actual timestamp value. So, I'd think there are 2 reasonable approaches to storing the timezone part: 1. Store the timezone abbreviation (i.e. 'EST') along w/ the timestamp data. 2. Assign an OID to each of the timezones and store it w/ the timestamp. The first option seems to avoid the necessity of creating a new system catalog for timezone information and the burden of updating it, because the current implementation is already capable of translating abbreviations to useful timezone information. The question is whether just a TZ abbreviation is sufficient to uniquely identify the timezone and get the offset and DST rules. If it's not sufficient, how are conflicting TZ short names handled in the current code (i.e. 'AT TIME ZONE ...')? The second choice avoids the issue of ambiguous names, although it requires moving TZ information inside the database and providing some means to update it. There were mentions of potential problems w/ pg_upgrade and pg_dump if we add a massive amount of OIDs for the timezones. What are these problems specifically? I'd think that storing TZ abbreviations is more straightforward and easier to implement, unless they are too ambiguous to identify the timezone correctly. 
> Note that I am currently proposing to store only the zone > names in the catalog, not the full TZ data. Where would we store other bits of timezone information? Wouldn't it be inconvenient to update names in system catalogs and DST rules elsewhere? Alexey. -- Command Prompt, Inc. http://www.CommandPrompt.com PostgreSQL Replication, Consulting, Custom Development, 24x7 support -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
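The trade-off between the two options above can be made concrete with a small sketch (the struct layout and the abbreviation table are illustrative, not any actual proposal's on-disk format). Option 2's storage shape is simply the timestamp plus a catalogued zone id; option 1's weak point is that a bare abbreviation can name several zones -- 'IST' really is used for India, Israel, and Ireland in the tz database's abbreviation sets.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Option 2: timestamp plus an OID-like reference into a (hypothetical)
 * timezone-name catalog, instead of storing the name as text. */
typedef struct {
    int64_t  ts;      /* microseconds since the epoch, like timestamptz */
    uint32_t tz_id;   /* references the catalogued zone name */
} TimestampWithZone;

/* Option 1's ambiguity problem, in miniature. */
typedef struct { const char *abbrev; const char *zone; } AbbrevEntry;

static const AbbrevEntry abbrevs[] = {
    { "IST", "Asia/Kolkata" },      /* India Standard Time */
    { "IST", "Asia/Jerusalem" },    /* Israel Standard Time */
    { "IST", "Europe/Dublin" },     /* Irish Standard Time */
    { "EST", "America/New_York" },
};

/* How many distinct zones does an abbreviation resolve to? Anything
 * above 1 means the abbreviation alone cannot recover offset/DST rules. */
int count_zones_for(const char *abbrev)
{
    int n = 0;
    for (size_t i = 0; i < sizeof(abbrevs) / sizeof(abbrevs[0]); i++)
        if (strcmp(abbrevs[i].abbrev, abbrev) == 0)
            n++;
    return n;
}
```

This is why option 2 pairs naturally with Alvaro's catalog-of-names idea: the id is unambiguous, while the abbreviation would need extra context to resolve.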
Re: [HACKERS] psql describe.c cleanup
On Thu, Jun 16, 2011 at 10:05 AM, Merlin Moncure wrote: > On Wed, Jun 15, 2011 at 9:01 AM, Merlin Moncure wrote: >> On Tue, Jun 14, 2011 at 9:08 PM, Josh Kupershmidt wrote: >>> On Tue, Jun 14, 2011 at 12:15 PM, Merlin Moncure wrote: What I do wonder though is if the ; appending should really be happening in printQuery() instead of in each query -- the idea being that formatting for external consumption should be happening in one place. Maybe that's over-thinking it though. >>> >>> That's a fair point, and hacking up printQuery() would indeed solve my >>> original gripe about copy-pasting queries out of psql -E. But more >>> generally, I think that standardizing the style of the code is a Good >>> Thing, particularly where style issues impact usability (like here). >> >> sure -- if anyone would like to comment on this one way or the other >> feel free -- otherwise I'll pass the patch up the chain as-is...it's >> not exactly the 'debate of the century' :-). > > done. marked ready for committer. Committed. -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Avoid index rebuilds for no-rewrite ALTER TABLE ALTER TYPE
On Thu, Jun 30, 2011 at 1:02 PM, Robert Haas wrote: > On first blush, that looks a whole lot cleaner. I'll try to find some > time for a more detailed review soon. This seems not to compile for me: gcc -O2 -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels -Wformat-security -fno-strict-aliasing -fwrapv -g -Werror -I../../../src/include -I/opt/local/include -c -o index.o index.c -MMD -MP -MF .deps/index.Po index.c:692: error: conflicting types for ‘index_create’ ../../../src/include/catalog/index.h:53: error: previous declaration of ‘index_create’ was here cc1: warnings being treated as errors index.c: In function ‘index_create’: index.c:821: warning: passing argument 5 of ‘heap_create’ makes integer from pointer without a cast index.c:821: warning: passing argument 6 of ‘heap_create’ makes pointer from integer without a cast index.c:821: error: too few arguments to function ‘heap_create’ -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Range Types, constructors, and the type system
On Wed, Jul 6, 2011 at 1:19 AM, Jeff Davis wrote: > On Tue, 2011-07-05 at 13:06 -0400, Robert Haas wrote: >> On Tue, Jul 5, 2011 at 12:54 PM, Jeff Davis wrote: >> > It would be something like: range_co(1,8)::int8range >> > >> > (just so we're comparing apples to apples) >> > >> > The intermediate type proposal doesn't require that we move the "c" and >> > "o" into the parameter list. >> >> Well, you have to specify the bounds somewhere... > > That's true. In my example it's in the function name. > >> OK, so let's pass the information on the bounds as a separate >> argument. Like this: >> >> int8range(1,8,'co') > > That has a lot going for it, in the sense that it avoids dealing with > the type problems. > >> Then you can instead pass 'o' for open or 'i' for infinity (passing >> NULL for the corresponding argument position in that case). The third >> argument can be optional and default to 'cc'. > > The fact that there can be a default for the third argument makes this > quite a lot more appealing than I had originally thought (although I > think 'co' is the generally-accepted default). > > There's some slight ugliness around the NULL/infinity business, but I > think that I could be convinced. I'd like to avoid confusion between > NULL and infinity if possible. I was thinking that if you passed 'i' for one of the bounds, it would ignore the supplied argument and substitute its special infinity value. But you'd still need to supply some argument in that position, which could be NULL or anything else. It doesn't really seem worth having additional constructor functions to handle the case where one or both arguments are infinite. -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Review of VS 2010 support patches
On 6/07/2011 2:15 AM, Brar Piening wrote:
> I've replied on-list, see:
> http://archives.postgresql.org/pgsql-hackers/2011-07/msg00066.php

Ah, sorry I missed that. I generally can't keep up with -hackers and have to rely on being cc'd.

>> The patch (VS2010v7.patch) seems to mix significant changes with
>> whitespace fixes etc.
>
> Current version (VS2010v8.patch) which I've submitted on-list about one
> month ago has fixed this as per Tom Lane's comment. See:
> http://archives.postgresql.org/message-id/4dedb6ee.9060...@gmx.de

That's what threw me, actually. The patch is named "perltidy_before.patch"; I didn't see a separate VS2010v8.patch or link to one and was trying to figure out how perltidy_before.patch related to VS2010v7.patch. It turns out that VS2010v8.patch is also attached to the same message. Not that you'd know it from the ... interesting ... way the web UI presents attachments. Sorry I missed it.

> I think the approach Andrew Dunstan chose (parsing the Makefiles) is
> even more flexible and future proof. We should probably be using his
> versions. See:
> http://archives.postgresql.org/pgsql-hackers/2011-07/msg00140.php
> and
> http://archives.postgresql.org/pgsql-hackers/2011-07/msg00185.php

That makes sense. Do you want to integrate those in a v9 revision along with a docs patch?

For the docs, it might be worth being more specific about the Visual Studio versions. Instead of:

"PostgreSQL supports the compilers from Visual Studio 2005 and Visual Studio 2008. When using the Platform SDK only, or when building for 64-bit Windows, only Visual Studio 2008 is supported."

I'd suggest writing:

"PostgreSQL supports compilation with the compilers shipped with Visual Studio 2005, 2008 and 2010 (including Express editions), as well as standalone Windows SDK releases 6.0 to 7.1. Only 32-bit PostgreSQL builds are supported with SDK versions prior to 6.1 and Visual Studio versions prior to 2008."

Additionally, it might be worth expanding on "If you wish to build a 64-bit version, you must use the 64-bit version of the command, and vice versa". The free SDKs don't install both 32-bit and 64-bit environment start menu items; they seem to just pick the local host architecture. My 7.1 SDK only has a start menu launcher for x64. So perhaps it's worth mentioning that the "setenv" command can be used from within a Windows SDK shell to switch architectures; "setenv /?" produces help. For Visual Studio, use \VC\vcvarsall.bat in your Visual Studio installation directory. See: http://msdn.microsoft.com/en-us/library/x4d2c09s(v=VS.100).aspx

> Actually my default builds are 64-bit builds as my PC is Win7 x64 and
> I'm using 64-Bit versions for my PostgreSQL work.

Ah, OK. Good to know. I had no problems doing an x64 build using the Windows SDK version 7.1, and tests passed fine. Now I just need to test with Windows SDK 6.0 (if I can even get it to install on Win7 x64; the installer keeps crashing) as that's the SDK shipped with Visual Studio 2005 SP1.

--
Craig Ringer

POST Newspapers
276 Onslow Rd, Shenton Park
Ph: 08 9381 3088     Fax: 08 9388 2258
ABN: 50 008 917 717
http://www.postnewspapers.com.au/

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Cascade replication
On Wed, Jul 6, 2011 at 12:27 PM, Fujii Masao wrote: > On Wed, Jul 6, 2011 at 4:53 PM, Fujii Masao wrote: >> On Wed, Jul 6, 2011 at 2:44 PM, Simon Riggs wrote: 1. De-archive the file to RECOVERYXLOG 2. If RECOVERYXLOG is valid, remove a pre-existing one and rename RECOVERYXLOG to the correct name 3. Replay the file with the correct name >>> >>> Yes please, that makes sense. > > In #2, if the server is killed with SIGKILL just after removing a pre-existing > file and before renaming RECOVERYXLOG, we lose the file with correct name. > Even in this case, we would be able to restore it from the archive, but what > if > unfortunately the archive is unavailable? We would lose the file infinitely. > So > we should introduce the following safeguard? > > 2'. If RECOVERYXLOG is valid, move a pre-existing file to pg_xlog/backup, > rename RECOVERYXLOG to the correct name, and remove the pre-existing > file from pg_xlog/backup > > Currently we give up a recovery if there is the target file in > neither the > archive nor pg_xlog. But, if we adopt the above safeguard, in that > case, > we should try to read the file from also pg_xlog/backup. > > In #2, there is another problem; walsender might have the pre-existing file > open, so the startup process would need to request walsenders to close the > file before removing (or renaming) it, wait for new file to appear and open it > again. This might make the code complicated. Does anyone have better > approach? The risk you describe already exists in current code. I regard it as a non-risk. The unlink() and the rename() are executed consecutively, so the gap between them is small, so the chance of a SIGKILL in that gap at the same time as losing the archive seems low, and we can always get that file from the master again if we are streaming. Any code you add to "fix" this will get executed so rarely it probably won't work when we need it to. 
In the current scheme we restart archiving from the last restartpoint, which exists only on the archive. This new patch improves on that by keeping the most recent files locally, so we are less exposed in the case of archive unavailability. So this patch already improves things and we don't need any more than that. No extra code please, IMHO.

--
Simon Riggs                   http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] proper format for printing GetLastError()
Yeah, I noticed that myself recently. On 6 July 2011 12:48, Magnus Hagander wrote: > Nope. I think it's only in there because of lazyness, in general. %lu > seems to be the correct choice. Yes, it's the correct choice. >> Thirdly, why are we not trying to print a textual message? > > I'd say that depends on where it is. In some cases probably because > it's "can never happen" messages. In other cases because, well, no > reason :) I'd also say it has something to do with the win32 API being as ugly and counter-intuitive as it is. Take a look at this: http://msdn.microsoft.com/en-us/library/ms680582(v=VS.85).aspx -- Peter Geoghegan http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Training and Services -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] proper format for printing GetLastError()
On Mon, Jul 4, 2011 at 17:29, Peter Eisentraut wrote: > About half of our code prints GetLastError() using %d after casting it > to int (actually, about half of that half uses %i, another thing to sort > out, perhaps), and the other half uses %lu without casting. I gather > from online documentation that GetLastError() returns DWORD, which > appears to be unsigned 32 bits. So using %lu appears to be more > correct. Any arguments against standardizing on %lu? Nope. I think it's only in there because of lazyness, in general. %lu seems to be the correct choice. > Secondly, it might also be good if we could standardize on printing > > actual message: error code %lu > > instead of just > > actual message: %lu Or "actual error code: %lu"? > Thirdly, why are we not trying to print a textual message? I'd say that depends on where it is. In some cases probably because it's "can never happen" messages. In other cases because, well, no reason :) -- Magnus Hagander Me: http://www.hagander.net/ Work: http://www.redpill-linpro.com/ -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Cascade replication
On Wed, Jul 6, 2011 at 4:53 PM, Fujii Masao wrote:
> On Wed, Jul 6, 2011 at 2:44 PM, Simon Riggs wrote:
>>> 1. De-archive the file to RECOVERYXLOG
>>> 2. If RECOVERYXLOG is valid, remove a pre-existing one and rename
>>> RECOVERYXLOG to the correct name
>>> 3. Replay the file with the correct name
>>
>> Yes please, that makes sense.

In #2, if the server is killed with SIGKILL just after removing a pre-existing file and before renaming RECOVERYXLOG, we lose the file with the correct name. Even in this case, we would be able to restore it from the archive, but what if, unfortunately, the archive is unavailable? We would lose the file for good. So should we introduce the following safeguard?

2'. If RECOVERYXLOG is valid, move the pre-existing file to pg_xlog/backup, rename RECOVERYXLOG to the correct name, and remove the pre-existing file from pg_xlog/backup.

Currently we give up recovery if the target file is in neither the archive nor pg_xlog. But if we adopt the above safeguard, in that case we should also try to read the file from pg_xlog/backup.

In #2, there is another problem; walsender might have the pre-existing file open, so the startup process would need to request walsenders to close the file before removing (or renaming) it, wait for the new file to appear, and open it again. This might make the code complicated. Does anyone have a better approach?

Regards,

--
Fujii Masao
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] patch: enhanced get diagnostics statement 2
(2011/06/02 17:39), Pavel Stehule wrote:
> This patch enhances the GET DIAGNOSTICS statement functionality. It adds
> the possibility of accessing an exception's data. These data are stored on
> the stack when an exception handler is activated - and they are
> accessible everywhere inside the handler. It has different behavior (the
> content is immutable inside the handler) and therefore it has modified
> syntax (using the keyword STACKED). This implementation is in conformance
> with ANSI SQL and SQL/PSM - it implements two standard fields -
> RETURNED_SQLSTATE and MESSAGE_TEXT - and three PostgreSQL-specific
> fields - PG_EXCEPTION_DETAIL, PG_EXCEPTION_HINT and
> PG_EXCEPTION_CONTEXT.
>
> The GET STACKED DIAGNOSTICS statement is allowed only inside an
> exception handler. When it is used outside a handler, diagnostics
> exception 0Z002 is raised.
>
> This patch has no impact on performance. It is just an interface to the
> existing stacked 'edata' structure. This patch doesn't change the
> current behavior of the GET DIAGNOSTICS statement.
>
> CREATE OR REPLACE FUNCTION public.stacked_diagnostics_test02()
>  RETURNS void
>  LANGUAGE plpgsql
> AS $function$
> declare _detail text; _hint text; _message text;
> begin
>   perform ...
> exception when others then
>   get stacked diagnostics
>     _message = message_text,
>     _detail = pg_exception_detail,
>     _hint = pg_exception_hint;
>   raise notice 'message: %, detail: %, hint: %', _message, _detail, _hint;
> end;
> $function$
>
> All regression tests passed.

Hi Pavel,

I've reviewed your patch according to the page "Reviewing a patch". During the review, I referred to the Working Draft of SQL:2003 to confirm the SQL specs.

Submission review
=================

* The patch is in context diff format.
* The patch couldn't be applied cleanly to the current head, but it requires only one hunk to be offset, which is easily fixed. I noticed that the new needs_xxx variables, which were added to struct PLpgSQL_condition, are not used at all. They should be removed, or something might have been overlooked.
* The patch includes reasonable regression tests. The patch also includes hunks for the pl/pgsql documentation which describe the new feature, but they would need some corrections:
  - folding too-long lines
  - fixing some grammatical errors (maybe)
  - clarifying the difference between CURRENT and STACKED

I think that adding a new section for GET STACKED DIAGNOSTICS would help to clarify the difference, because the keyword STACKED can be used only in an exception clause, and the available information is different from that available for GET CURRENT DIAGNOSTICS. Please find attached a patch which includes a proposal for the documentation, though it still needs review by an English speaker.

Usability review
================

* The patch extends the GET DIAGNOSTICS syntax to accept the new keywords CURRENT and STACKED, which are described in the SQL/PSM standard. This feature allows us to retrieve exception information in an EXCEPTION clause. Naming of the PG-specific fields might be debatable.
* I think it's useful to get detailed information inside an EXCEPTION clause.
* We don't have this feature yet.
* This patch follows the SQL spec of GET DIAGNOSTICS, and extends it with PG-specific variables.
* pg_dump support is not required for this feature.
* AFAICS, this patch doesn't have any danger, such as breakage of backward compatibility.

Feature test
============

* The new feature introduced by the patch works well. I tested that:
  - CURRENT doesn't affect the existing feature
  - STACKED couldn't be used outside an EXCEPTION clause
  - Values could be retrieved via RETURNED_SQLSTATE, MESSAGE_TEXT, PG_EXCEPTION_DETAIL, PG_EXCEPTION_HINT and PG_EXCEPTION_CONTEXT
  - Invalid item names properly cause an error.
* I'm not so familiar with pl/pgsql, but ISTM that enough cases are considered for the newly added diagnostics items.
* I didn't see any crash during my tests.

In conclusion, this patch still needs some effort to be "Ready for Committer", so I'll push it back to "Waiting on Author".
Regards, -- Shigeru Hanada diff --git a/doc/src/sgml/plpgsql.sgml b/doc/src/sgml/plpgsql.sgml index 3d07b6e..7df69a7 100644 *** a/doc/src/sgml/plpgsql.sgml --- b/doc/src/sgml/plpgsql.sgml *** EXECUTE format('UPDATE tbl SET %I = $1 W *** 1387,1393 command, which has the form: ! GET CURRENT | STACKED DIAGNOSTICS variable = item , ... ; This command allows retrieval of system status indicators. Each --- 1387,1393 command, which has the form: ! GET CURRENT DIAGNOSTICS variable = item , ... ; This command allows retrieval of system status indicators. Each *** GET DIAGNOSTICS integer_var = ROW_COUNT; *** 1486,1506 affect only the current function. ! Inside a exception handler is possible to use a stacked diagnostics statement. It ! allows a access to exception's data: the RETURNED_SQLSTATE contains ! a SQLSTATE of
Re: [HACKERS] Cascade replication
On Wed, Jul 6, 2011 at 8:53 AM, Fujii Masao wrote: >>> What about outputing something like the following message in that case? >>> >>> if ("walsender receives SIGUSR2") >>> ereport(LOG, "terminating walsender process due to >>> administrator command"); >> >> ...which doesn't explain the situation because we don't know why >> SIGUSR2 was sent. >> >> I was thinking of something like this >> >> LOG: requesting walsenders for cascading replication reconnect to >> update timeline > > Looks better than my proposal. > >> but then I ask: Why not simply send a new message type saying "new >> timeline id is X" and that way we don't need to restart the connection >> at all? > > Yeah, that's very useful. But I'd like to implement that as a separate patch. > > I'm thinking that in that case walsender should send the timeline history file > and walreceiver should write it down to the disk, instead of just sending > timeline ID. Otherwise, when the standby in receive side restarts, it cannot > calculate the latest timeline ID correctly. OK, happy to do that part as an additional patch. That way we can get this committed soon - in this CF. -- Simon Riggs http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Training & Services -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Cascade replication
On Wed, Jul 6, 2011 at 2:44 PM, Simon Riggs wrote: >> 1. De-archive the file to RECOVERYXLOG >> 2. If RECOVERYXLOG is valid, remove a pre-existing one and rename >> RECOVERYXLOG to the correct name >> 3. Replay the file with the correct name > > Yes please, that makes sense. Will do. >>> Those changes will make this code cleaner for the long term. >>> >>> I don't think we should simply shutdown a WALSender when we startup. >>> That is indistinguishable from a failure, which is going to be very >>> worrying if we do a switchover. Is there another way to do this? Or if >>> not, at least a log message to explain it was normal that we requested >>> this. >> >> What about outputing something like the following message in that case? >> >> if ("walsender receives SIGUSR2") >> ereport(LOG, "terminating walsender process due to >> administrator command"); > > ...which doesn't explain the situation because we don't know why > SIGUSR2 was sent. > > I was thinking of something like this > > LOG: requesting walsenders for cascading replication reconnect to > update timeline Looks better than my proposal. > but then I ask: Why not simply send a new message type saying "new > timeline id is X" and that way we don't need to restart the connection > at all? Yeah, that's very useful. But I'd like to implement that as a separate patch. I'm thinking that in that case walsender should send the timeline history file and walreceiver should write it down to the disk, instead of just sending timeline ID. Otherwise, when the standby in receive side restarts, it cannot calculate the latest timeline ID correctly. Regards, -- Fujii Masao NIPPON TELEGRAPH AND TELEPHONE CORPORATION NTT Open Source Software Center -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] fixing PQsetvalue()
Hello. Any news on these issues? Because beta3 is scheduled for July 11th...

You wrote:

MM> On Jun 6
MM> (http://archives.postgresql.org/pgsql-hackers/2011-06/msg00272.php),
MM> Pavel discovered an issue with PQsetvalue that could cause libpq to
MM> wander off into unallocated memory that was present in 9.0.x. A
MM> fairly uninteresting fix was quickly produced, but Tom indicated
MM> during subsequent review that he was not happy with the behavior of
MM> the function. Everyone was busy with the beta wrap at the time so I
MM> didn't press the issue.

MM> A little history here: PQsetvalue's
MM> (http://www.postgresql.org/docs/9.0/static/libpq-misc.html) main
MM> reason to exist is to allow creating a PGresult out of scratch data on
MM> a result created via PQmakeEmptyResult(). This behavior works as
MM> intended and is not controversial...it was initially done to support
MM> libpqtypes but has apparently found use in other places like ecpg.

MM> PQsetvalue *also* allows you to mess with results returned by the
MM> server using standard query methods for any tuple, not just the one
MM> you are creating as you iterate through. This behavior was
MM> unnecessary in terms of what libpqtypes and friends needed and may (as
MM> Tom suggested) come back to bite us at some point. As it turns out,
MM> PQsetvalue's operation on results that weren't created via
MM> PQmakeEmptyResult was totally busted because of the bug, so we have a
MM> unique opportunity to tinker with libpq here: you could argue that you
MM> have a window of opportunity to change the behavior here since we know
MM> it isn't being usefully used. I think it's too late for that but it's
MM> if it had to be done the time is now.

MM> Pavel actually has been requesting to go further with being able to
MM> mess around with PGresults (see:
MM> http://archives.postgresql.org/pgsql-interfaces/2011-06/msg0.php),
MM> such that the result would start to have some 'recordset' features you
MM> find in various other libraries.
MM> Maybe it's not the job of libpq to
MM> do that, but I happen to think it's pretty cool. Anyways, something
MM> has to be done -- I hate to see an unpatched known 9.0 issue remain in
MM> the wild.

MM> merlin

--
With best wishes, Pavel
mailto:pa...@gf.microolap.com

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Small documentation issue
Tom Lane wrote: >>> In fdwhandler.sgml, chapter fdwhandler has only one subsection >>> (fdw-routines). >>> >>> If there is only one subsection, no table of contents is generated in >>> the chapter. [...] > I don't know how to change the doc toolchain to do that either. But > on reflection it seemed to me that this documentation was badly designed > anyhow, because it mixed up description of the FDW's routines with > actual introductory text. I split that into an introduction and a > for the routines. Problem solved. Thanks! Laurenz Albe -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers