Re: [HACKERS] RFC: replace pg_stat_activity.waiting with something more descriptive
On 2016-01-26 13:22:09 +0530, Amit Kapila wrote:
> @@ -633,9 +633,11 @@ postgres 27093 0.0 0.0 30096 2752 ? Ss 11:34 0:00 postgres: ser
>        Time when the state was last changed
>
> -      waiting
> -      boolean
> -      True if this backend is currently waiting on a lock
> +      wait_event
> +      text
> +      Wait event name if backend is currently waiting, otherwise
> +      process not waiting

I still think this is a considerable regression in pg_stat_activity usability. There are lots of people out there that have queries that automatically monitor pg_stat_activity.waiting, and automatically go to pg_locks to understand what's going on, if there's one. With the above definition, that got much harder. Not only do I have to write WHERE wait_event <> 'process not waiting', but then also parse the wait event name, to know whether the process is waiting on a heavyweight lock, or something else!

I do think there's a considerable benefit in improving the instrumentation here, but this strikes me as making life more complex for more users than it makes it easier. At the very least this should be split into two fields (type & what we're actually waiting on). I also strongly suspect we shouldn't use in-band signaling ("process not waiting"), but rather make the field NULL if we're not waiting on anything.

Greetings,

Andres Freund

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] [PATCH] Refactoring of LWLock tranches
On 2016-01-05 10:48:43 -0500, Bruce Momjian wrote:
> On Tue, Jan 5, 2016 at 04:42:24PM +0100, Andres Freund wrote:
> > On 2016-01-05 10:40:13 -0500, Bruce Momjian wrote:
> > > On Tue, Jan 5, 2016 at 04:31:15PM +0100, Andres Freund wrote:
> > > > On 2016-01-05 10:28:25 -0500, Bruce Momjian wrote:
> > > > Yes? But it's ok sizewise on the common platforms?
> > >
> > > What is the uncommon part? I guess I missed that.
> >
> > http://archives.postgresql.org/message-id/20151212181702.GH17938%40alap3.anarazel.de
>
> Yes, I saw that, and the URL in the email, but what is the uncommon
> case?

Are you asking on which platforms s_lock is larger than a char? If so, grep s_lock.h for the typedefs. If not, I'm not following what you're asking for.
Re: [HACKERS] [PATCH] Refactoring of LWLock tranches
On 2016-01-05 10:40:13 -0500, Bruce Momjian wrote:
> On Tue, Jan 5, 2016 at 04:31:15PM +0100, Andres Freund wrote:
> > On 2016-01-05 10:28:25 -0500, Bruce Momjian wrote:
> > Yes? But it's ok sizewise on the common platforms?
>
> What is the uncommon part? I guess I missed that.

http://archives.postgresql.org/message-id/20151212181702.GH17938%40alap3.anarazel.de
Re: [HACKERS] [PATCH] Refactoring of LWLock tranches
On 2016-01-05 10:28:25 -0500, Bruce Momjian wrote:
> On Sun, Dec 13, 2015 at 12:35:34PM +0100, Andres Freund wrote:
> > > > One thing to call out is that an "oversized" s_lock can now make
> > > > BufferDesc exceed 64 bytes, right now that's just the case when it's
> > > > larger than 4 bytes. I'm not sure if that's cause for real concern,
> > > > given that it's not very concurrent or ancient platforms where that's
> > > > the case.
> > > > http://archives.postgresql.org/message-id/20150915020625.GI9666%40alap3.anarazel.de
> > > > would alleviate that concern again, as it collapses flags, usage_count,
> > > > buf_hdr_lock and refcount into one 32 bit int...
> > >
> > > I don't think that would be worth worrying about even if we didn't
> > > have a plan in mind that would make it go away again, and even less so
> > > given that we do have such a plan.
> >
> > Ok cool. I'm not particularly concerned either, just didn't want to slip
> > that in without having it called out.
>
> Uh, didn't you and I work in 9.5 to make sure the BufferDesc was 64-byte
> aligned to avoid double-CPU cache invalidation that was causing
> performance problems on a server you were testing?

Yes? But it's ok sizewise on the common platforms?

Andres
Re: [HACKERS] [PATCH] Refactoring of LWLock tranches
On 2015-12-12 21:15:52 -0500, Robert Haas wrote:
> On Sat, Dec 12, 2015 at 1:17 PM, and...@anarazel.de <and...@anarazel.de> wrote:
> > Here's two patches doing that. The first is an adaption of your
> > constants patch, using an enum and also converting xlog.c's locks.
> > The second is the separation into distinct tranches.
>
> Personally, I prefer the #define approach to the enum, but I can live
> with doing it this way.

I think not needing to adjust the 'last defined' var is worth it...

> Other than that, I think these patches look good, although if it's OK
> with you I would like to make a pass over the comments and the commit
> messages, which seem to me that they could benefit from a bit of
> editing (but not much substantive change).

Sounds good to me. You'll then commit that?

> > One thing to call out is that an "oversized" s_lock can now make
> > BufferDesc exceed 64 bytes, right now that's just the case when it's
> > larger than 4 bytes. I'm not sure if that's cause for real concern,
> > given that it's not very concurrent or ancient platforms where that's
> > the case.
> > http://archives.postgresql.org/message-id/20150915020625.GI9666%40alap3.anarazel.de
> > would alleviate that concern again, as it collapses flags, usage_count,
> > buf_hdr_lock and refcount into one 32 bit int...
>
> I don't think that would be worth worrying about even if we didn't
> have a plan in mind that would make it go away again, and even less so
> given that we do have such a plan.

Ok cool. I'm not particularly concerned either, just didn't want to slip that in without having it called out.

Andres
Re: [HACKERS] [PATCH] Refactoring of LWLock tranches
On 2015-11-15 16:24:24 -0500, Robert Haas wrote:
> I think what we should do about the buffer locks is polish up this
> patch and get it applied:
>
> http://www.postgresql.org/message-id/20150907175909.gd5...@alap3.anarazel.de
>
> I think it needs to be adapted to use predefined constants for the
> tranche IDs instead of hardcoded values, maybe based on the attached
> tranche-constants.patch.

Here's two patches doing that. The first is an adaption of your constants patch, using an enum and also converting xlog.c's locks. The second is the separation into distinct tranches.

One thing to call out is that an "oversized" s_lock can now make BufferDesc exceed 64 bytes; right now that's just the case when it's larger than 4 bytes. I'm not sure if that's cause for real concern, given that it's only not-very-concurrent or ancient platforms where that's the case. http://archives.postgresql.org/message-id/20150915020625.GI9666%40alap3.anarazel.de would alleviate that concern again, as it collapses flags, usage_count, buf_hdr_lock and refcount into one 32 bit int...

Greetings,

Andres Freund

>From 5f10d977f285109df16bfef6b79456564251aa54 Mon Sep 17 00:00:00 2001
From: Andres Freund <and...@anarazel.de>
Date: Sat, 12 Dec 2015 18:41:59 +0100
Subject: [PATCH 1/2] Define enum of builtin LWLock tranches.

It's a bit cumbersome having to allocate tranche IDs with
LWLockNewTrancheId() because the returned value needs to be shared
between backends, which all need to call LWLockRegisterTranche(),
somehow. For builtin tranches we can easily do better, and simply
pre-define tranche IDs for those.

This is motivated by an upcoming patch adding further builtin tranches.

Author: Robert Haas and Andres Freund
Discussion: CA+TgmoYciHS4FuU2rYNt8bX0+7ZcNRBGeO-L20apcXTeo7=4...@mail.gmail.com
---
 src/backend/access/transam/xlog.c | 10 +++---
 src/backend/storage/lmgr/lwlock.c |  7 ---
 src/include/storage/lwlock.h      | 11 +++
 3 files changed, 18 insertions(+), 10 deletions(-)

diff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c
index 71fc8ff..147fd53 100644
--- a/src/backend/access/transam/xlog.c
+++ b/src/backend/access/transam/xlog.c
@@ -512,7 +512,6 @@ typedef struct XLogCtlInsert
 	 */
 	WALInsertLockPadded *WALInsertLocks;
 	LWLockTranche WALInsertLockTranche;
-	int			WALInsertLockTrancheId;
 } XLogCtlInsert;

 /*
@@ -4653,7 +4652,7 @@ XLOGShmemInit(void)
 		/* Initialize local copy of WALInsertLocks and register the tranche */
 		WALInsertLocks = XLogCtl->Insert.WALInsertLocks;
-		LWLockRegisterTranche(XLogCtl->Insert.WALInsertLockTrancheId,
+		LWLockRegisterTranche(LWTRANCHE_WAL_INSERT,
 							  &XLogCtl->Insert.WALInsertLockTranche);
 		return;
 	}
@@ -4677,17 +4676,14 @@
 		(WALInsertLockPadded *) allocptr;
 	allocptr += sizeof(WALInsertLockPadded) * NUM_XLOGINSERT_LOCKS;

-	XLogCtl->Insert.WALInsertLockTrancheId = LWLockNewTrancheId();
-
 	XLogCtl->Insert.WALInsertLockTranche.name = "WALInsertLocks";
 	XLogCtl->Insert.WALInsertLockTranche.array_base = WALInsertLocks;
 	XLogCtl->Insert.WALInsertLockTranche.array_stride = sizeof(WALInsertLockPadded);

-	LWLockRegisterTranche(XLogCtl->Insert.WALInsertLockTrancheId, &XLogCtl->Insert.WALInsertLockTranche);
+	LWLockRegisterTranche(LWTRANCHE_WAL_INSERT, &XLogCtl->Insert.WALInsertLockTranche);

 	for (i = 0; i < NUM_XLOGINSERT_LOCKS; i++)
 	{
-		LWLockInitialize(&WALInsertLocks[i].l.lock,
-						 XLogCtl->Insert.WALInsertLockTrancheId);
+		LWLockInitialize(&WALInsertLocks[i].l.lock, LWTRANCHE_WAL_INSERT);
 		WALInsertLocks[i].l.insertingAt = InvalidXLogRecPtr;
 	}

diff --git a/src/backend/storage/lmgr/lwlock.c b/src/backend/storage/lmgr/lwlock.c
index b13ebc6..84691df 100644
--- a/src/backend/storage/lmgr/lwlock.c
+++ b/src/backend/storage/lmgr/lwlock.c
@@ -445,7 +445,7 @@ CreateLWLocks(void)

 	/* Initialize all LWLocks in main array */
 	for (id = 0, lock = MainLWLockArray; id < numLocks; id++, lock++)
-		LWLockInitialize(&lock->lock, 0);
+		LWLockInitialize(&lock->lock, LWTRANCHE_MAIN);

 	/*
 	 * Initialize the dynamic-allocation counters, which are stored just
@@ -457,7 +457,7 @@ CreateLWLocks(void)
 		LWLockCounter = (int *) ((char *) MainLWLockArray - 3 * sizeof(int));
 		LWLockCounter[0] = NUM_FIXED_LWLOCKS;
 		LWLockCounter[1] = numLocks;
-		LWLockCounter[2] = 1;	/* 0 is the main array */
+		LWLockCounter[2] = LWTRANCHE_FIRST_USER_DEFINED;
 	}

 	if (LWLockTrancheArray == NULL)
@@ -466,12 +466,13 @@ CreateLWLocks(void)
 		LWLockTrancheArray = (LWLockTranche **)
 			MemoryContextAlloc(TopMemoryContext,
 							   LWLockTranchesAllocated * sizeof(LWLockTranche *));
+		Assert(LWLockTranchesAllocated >= LWTRANCHE_FIRST_USER_DEFINED);
 	}

 	MainLWLockTranche.name = "main";
 	MainLWLockTranche.array_base = MainLWLockArray;
 	MainLWLockTranche.array_stride = sizeof(LWLockPadded);
-	LWLockRegisterTranche(0, &MainLWLockTranche);
+	LWLockRegisterTranche(LWTRANCHE_MAIN, &MainLWLockTranche);
Re: [HACKERS] [PATCH] Refactoring of LWLock tranches
On 2015-11-17 14:14:50 +0300, Ildus Kurbangaliev wrote:
> 1) We can avoid constants, and use standard steps for tranche
> creation.

The constants are actually a bit useful, to easily determine builtin/non-builtin stuff.

> 2) We have only one global variable (BufferCtl)

Don't see the advantage. This adds another, albeit predictable, indirection in frequent callsites. There's no architectural advantage in avoiding these.

> 3) Tranches moved to shared memory, so we won't need to do
> additional work here.

I can see some benefit, but it also doesn't seem like a huge benefit.

> 4) Also we can keep buffer locks from MainLWLockArray there (that was
> done in the attached patch).

That's the 'blockLocks' thing? That's a pretty bad name; these are buffer mapping locks. And this seems like a separate patch, to be debated separately.

> +	if (!foundCtl)
> 	{
> -		int			i;
> +		/* Align descriptors to a cacheline boundary. */
> +		ctl->base.bufferDescriptors = (void *) CACHELINEALIGN(ShmemAlloc(
> +			NBuffers * sizeof(BufferDescPadded) + PG_CACHE_LINE_SIZE));
> +
> +		ctl->base.bufferBlocks = (char *) ShmemAlloc(NBuffers * (Size) BLCKSZ);

I'm going to entirely veto this.

> +		strncpy(ctl->IOLWLockTrancheName, "buffer_io",
> +			BUFMGR_MAX_NAME_LENGTH);
> +		strncpy(ctl->contentLWLockTrancheName, "buffer_content",
> +			BUFMGR_MAX_NAME_LENGTH);
> +		strncpy(ctl->blockLWLockTrancheName, "buffer_blocks",
> +			BUFMGR_MAX_NAME_LENGTH);
> +
> +		ctl->IOLocks = (LWLockMinimallyPadded *) ShmemAlloc(
> +			sizeof(LWLockMinimallyPadded) * NBuffers);

This should be cacheline aligned.

> -		buf->io_in_progress_lock = LWLockAssign();
> -		buf->content_lock = LWLockAssign();
> +		LWLockInitialize(BufferDescriptorGetContentLock(buf),
> +			ctl->contentLWLockTrancheId);
> +		LWLockInitialize(&ctl->IOLocks[i].lock,
> +			ctl->IOLWLockTrancheId);

Seems weird to use the macro for accessing content locks, but not IO locks.

> /* Note: these two macros only work on shared buffers, not local ones! */
> -#define BufHdrGetBlock(bufHdr)	((Block) (BufferBlocks + ((Size) (bufHdr)->buf_id) * BLCKSZ))
> +#define BufHdrGetBlock(bufHdr)	((Block) (BufferCtl->bufferBlocks + ((Size) (bufHdr)->buf_id) * BLCKSZ))

That's the additional indirection I'm talking about.

> @@ -353,9 +353,6 @@ NumLWLocks(void)
> 	/* Predefined LWLocks */
> 	numLocks = NUM_FIXED_LWLOCKS;
>
> -	/* bufmgr.c needs two for each shared buffer */
> -	numLocks += 2 * NBuffers;
>
> 	/* proc.c needs one for each backend or auxiliary process */
> 	numLocks += MaxBackends + NUM_AUXILIARY_PROCS;

Didn't you also move the buffer mapping locks... ?

> diff --git a/src/include/storage/buf_internals.h b/src/include/storage/buf_internals.h
> index 521ee1c..e1795dc 100644
> --- a/src/include/storage/buf_internals.h
> +++ b/src/include/storage/buf_internals.h
> @@ -95,6 +95,7 @@ typedef struct buftag
> 	(a).forkNum == (b).forkNum \
> )
>
> +

Unrelated change.

> /*
>  * The shared buffer mapping table is partitioned to reduce contention.
>  * To determine which partition lock a given tag requires, compute the tag's
> @@ -104,10 +105,9 @@ typedef struct buftag
> #define BufTableHashPartition(hashcode) \
> 	((hashcode) % NUM_BUFFER_PARTITIONS)
> #define BufMappingPartitionLock(hashcode) \
> -	(&MainLWLockArray[BUFFER_MAPPING_LWLOCK_OFFSET + \
> -		BufTableHashPartition(hashcode)].lock)
> +	(&((BufferCtlData *) BufferCtl)->blockLocks[BufTableHashPartition(hashcode)].lock)
> #define BufMappingPartitionLockByIndex(i) \
> -	(&MainLWLockArray[BUFFER_MAPPING_LWLOCK_OFFSET + (i)].lock)
> +	(&((BufferCtlData *) BufferCtl)->blockLocks[i].lock)

Again, unnecessary indirections.

> +/* Maximum length of a bufmgr lock name */
> +#define BUFMGR_MAX_NAME_LENGTH	32

If we were to do this I'd just use NAMEDATALEN.

> +/*
> + * Base control struct for the buffer manager data.
> + * Located in shared memory.
> + *
> + * BufferCtl variable points to BufferCtlBase because only
> + * the backend code knows about BufferDescPadded, LWLock and
> + * other structures, and at the same time it must be usable in
> + * the frontend code.
> + */
> +typedef struct BufferCtlData
> +{
> +	BufferCtlBase base;

Eeek. What's the point here?

Andres
Re: [HACKERS] [PATCH] Refactoring of LWLock tranches
On 2015-11-15 16:24:24 -0500, Robert Haas wrote:
> I think it needs to be adapted to use predefined constants for the
> tranche IDs instead of hardcoded values, maybe based on the attached
> tranche-constants.patch.

Yea, that part is clearly not ok. Let me look at the patch.

> Also, I think we should rip all the volatile qualifiers out of
> bufmgr.c, using the second attached patch, devolatileize-bufmgr.patch.
> The comment claiming that we need this because spinlocks aren't
> compiler barriers is obsolete.

Same here.

> @@ -457,7 +457,7 @@ CreateLWLocks(void)
> 		LWLockCounter = (int *) ((char *) MainLWLockArray - 3 * sizeof(int));
> 		LWLockCounter[0] = NUM_FIXED_LWLOCKS;
> 		LWLockCounter[1] = numLocks;
> -		LWLockCounter[2] = 1;	/* 0 is the main array */
> +		LWLockCounter[2] = LWTRANCHE_LAST_BUILTIN_ID + 1;
> 	}

Man, this LWLockCounter[0] stuff is just awful. Nothing that needs to be fixed in this patch, but it really should be fixed someday.

> /*
> + * We reserve a few predefined tranche IDs. These values will never be
> + * returned by LWLockNewTrancheId.
> + */
> +#define LWTRANCHE_MAIN					0
> +#define LWTRANCHE_BUFFER_CONTENT		1
> +#define LWTRANCHE_BUFFER_IO_IN_PROGRESS	2
> +#define LWTRANCHE_LAST_BUILTIN_ID		LWTRANCHE_BUFFER_IO_IN_PROGRESS

Nitpick: I'm inclined to use an enum to avoid having to adjust the last builtin ID when adding a new builtin tranche. Besides that minor thing, I think this works for me. We might independently want something making the addition of non-builtin tranches easier, but that's really just independent.

> From 9e7f9219b5e752da46be0e26a0be074191ae8f62 Mon Sep 17 00:00:00 2001
> From: Robert Haas
> Date: Sun, 15 Nov 2015 10:24:03 -0500
> Subject: [PATCH 1/3] De-volatile-ize.

I very strongly think this should be done. It's painful having to cast away volatile, and it de-optimizes a fair bit of performance-relevant code.

> /* local state for StartBufferIO and related functions */
> -static volatile BufferDesc *InProgressBuf = NULL;
> +static BufferDesc *InProgressBuf = NULL;
> static bool IsForInput;
>
> /* local state for LockBufferForCleanup */
> -static volatile BufferDesc *PinCountWaitBuf = NULL;
> +static BufferDesc *PinCountWaitBuf = NULL;

Hm. These could also be relevant for sigsetjmp et al. Haven't found relevant code though.

> -	volatile BufferDesc *bufHdr;
> +	BufferDesc *bufHdr;
> 	Block		bufBlock;

Looks mis-indented now, similarly in a bunch of other places. Maybe pgindent afterwards?

> 	/*
> 	 * Header spinlock is enough to examine BM_DIRTY, see comment in
> @@ -1707,7 +1707,7 @@ BufferSync(int flags)
> 	num_written = 0;
> 	while (num_to_scan-- > 0)
> 	{
> -		volatile BufferDesc *bufHdr = GetBufferDescriptor(buf_id);
> +		BufferDesc *bufHdr = GetBufferDescriptor(buf_id);

This case has some theoretical behavioural implications: as bufHdr->flags is accessed without a spinlock, the compiler might re-use an older value. But that's ok: a) there's no old value it really could use, b) the whole race condition exists anyway, and the comment in the body explains why that's ok.

> BlockNumber
> BufferGetBlockNumber(Buffer buffer)
> {
> -	volatile BufferDesc *bufHdr;
> +	BufferDesc *bufHdr;
>
> 	Assert(BufferIsPinned(buffer));
>
> @@ -2346,7 +2346,7 @@ void
> BufferGetTag(Buffer buffer, RelFileNode *rnode, ForkNumber *forknum,
> 			 BlockNumber *blknum)
> {
> -	volatile BufferDesc *bufHdr;
> +	BufferDesc *bufHdr;
>
> 	/* Do the same checks as BufferGetBlockNumber. */
> 	Assert(BufferIsPinned(buffer));
> @@ -2382,7 +2382,7 @@ BufferGetTag(Buffer buffer, RelFileNode *rnode, ForkNumber *forknum,
>  * as the second parameter. If not, pass NULL.
>  */
> static void
> -FlushBuffer(volatile BufferDesc *buf, SMgrRelation reln)
> +FlushBuffer(BufferDesc *buf, SMgrRelation reln)
> {
> 	XLogRecPtr	recptr;
> 	ErrorContextCallback errcallback;
> @@ -2520,7 +2520,7 @@ RelationGetNumberOfBlocksInFork(Relation relation, ForkNumber forkNum)
> bool
> BufferIsPermanent(Buffer buffer)
> {
> -	volatile BufferDesc *bufHdr;
> +	BufferDesc *bufHdr;

These require that the buffer is pinned (a barrier-implying operation). The pin prevents the contents from changing in a relevant manner, and the barrier implied by the pin forces the core's view to be non-stale.

> @@ -2613,7 +2613,7 @@ DropRelFileNodeBuffers(RelFileNodeBackend rnode, ForkNumber forkNum,
>
> 	for (i = 0; i < NBuffers; i++)
> 	{
> -		volatile BufferDesc *bufHdr = GetBufferDescriptor(i);
> +		BufferDesc *bufHdr = GetBufferDescriptor(i);
>
> 	/*
Re: [HACKERS] [PATCH] Refactoring of LWLock tranches
On 2015-09-15 14:39:51 -0400, Robert Haas wrote:
> We could, but since that would be strictly more annoying and less
> flexible than what we've already got, why would we?

I don't find the current approach of having to define tranches in every backend all that convenient. It also completely breaks down if you want to display locks from tranches that are only visible within a subset of the backends - not that unlikely, given that shared_preload_libraries is a PITA.

Greetings,

Andres Freund
Re: [HACKERS] [PATCH] Refactoring of LWLock tranches
On 2015-09-06 22:57:04 +0300, Ildus Kurbangaliev wrote:
> Ok, I've kept only one tranche for individual LWLocks

But you've added the lock names as a statically sized array to all tranches? Why not just a pointer to an array that's either NULL ('not individually named locks') or appropriately sized?

> > I don't really like the tranche model as in the patch right now. I'd
> > rather have it in a way that we have one tranche for all the
> > individual lwlocks, where the tranche points to an array of names
> > alongside the tranche's name. And then for the others we just supply
> > the tranche name, but leave the name array empty, whereas a name can
> > be generated.
>
> Maybe I don't understand something here, but why add an extra field to
> all tranches if we need only one array (especially the array for
> individual LWLocks).

It's cheap to add an optional pointer field to an individual tranche. It'll be far less cheap to extend the max number of tranches. But it's also just ugly to use a tranche per lock - they're intended to describe 'runs' of locks.

Greetings,

Andres Freund
Re: [HACKERS] [PATCH] Refactoring of LWLock tranches
Hi,

On 2015-09-05 12:48:12 +0300, Ildus Kurbangaliev wrote:
> Other parts require some discussion so I didn't touch them yet.

At this point I don't see any point in further review until these are addressed.

> The idea to create individual tranches for individual LWLocks came
> from Heikki Linnakangas, and I also think that the tranche is a good
> place to keep the LWLock name.

I think it's rather ugly.

> Base of these tranches points to MainLWLockArray

And that's just plain wrong. The base of a tranche ought to point to the first lwlock in it.

Greetings,

Andres Freund
Re: [HACKERS] RFC: replace pg_stat_activity.waiting with something more descriptive
On 2015-08-04 23:37:08 +0300, Ildus Kurbangaliev wrote:
> diff --git a/src/backend/access/transam/clog.c b/src/backend/access/transam/clog.c
> index 3a58f1e..10c25cf 100644
> --- a/src/backend/access/transam/clog.c
> +++ b/src/backend/access/transam/clog.c
> @@ -457,7 +457,8 @@ CLOGShmemInit(void)
> 	{
> 		ClogCtl->PagePrecedes = CLOGPagePrecedes;
> 		SimpleLruInit(ClogCtl, "CLOG Ctl", CLOGShmemBuffers(), CLOG_LSNS_PER_PAGE,
> -			CLogControlLock, "pg_clog");
> +			CLogControlLock, "pg_clog",
> +			"CLogBufferLocks");
> 	}

I'd rather just add the name "clog" (etc.) as a string once to SimpleLruInit instead of now 3 times.

> +/* Lock names. For monitoring purposes */
> +const char *LOCK_NAMES[] =
> +{
> +	"Relation",
> +	"RelationExtend",
> +	"Page",
> +	"Tuple",
> +	"Transaction",
> +	"VirtualTransaction",
> +	"SpeculativeToken",
> +	"Object",
> +	"Userlock",
> +	"Advisory"
> +};

Why do we need this rather than DescribeLockTag?

> +	/* Create tranches for individual LWLocks */
> +	for (i = 0; i < NUM_INDIVIDUAL_LWLOCKS; i++, tranche++)
> +	{
> +		int			id = LWLockNewTrancheId();
> +
> +		/*
> +		 * We need to be sure that the generated id is equal to the index
> +		 * for individual LWLocks
> +		 */
> +		Assert(id == i);
> +
> +		tranche->array_base = MainLWLockArray;
> +		tranche->array_stride = sizeof(LWLockPadded);
> +		MemSet(tranche->name, 0, LWLOCK_MAX_TRANCHE_NAME);
> +
> +		/* Initialize individual LWLock */
> +		LWLockInitialize(&MainLWLockArray[i].lock, id);
> +
> +		/* Register new tranche in tranches array */
> +		LWLockRegisterTranche(id, tranche);
> +	}
> +
> +	/* Fill individual LWLock names */
> +	InitLWLockNames();

Why a new tranche for each of these? And it can't be correct that each has the same base?

I don't really like the tranche model as in the patch right now. I'd rather have it in a way that we have one tranche for all the individual lwlocks, where the tranche points to an array of names alongside the tranche's name. And then for the others we just supply the tranche name, but leave the name array empty, whereas a name can be generated.

Greetings,

Andres Freund
Re: [HACKERS] [ADMIN] Simultaneous index creates on different schemas cause deadlock?
Tom Lane <t...@sss.pgh.pa.us> schrieb:
> Andres Freund <and...@2ndquadrant.com> writes:
> > On 2013-04-25 13:17:31 -0400, Tom Lane wrote:
> > > Since we know that C.I.C. executes in its own transaction, and there
> > > can't be more than one on the same table due to locking, it seems to
> > > me that it'd be safe to drop our own snapshot before waiting for
> > > other xacts to end. That is, we could just rearrange the last few
> > > steps in DefineIndex(), taking care to save snapshot->xmin before we
> > > destroy the snapshot so that we still have that value to pass to
> > > GetCurrentVirtualXIDs(). Anybody see a flaw in that solution?
> >
> > Except that it still will unnecessarily wait for other CICs, just not
> > deadlock, I don't see a problem. We could have a PROC_IN_CIC flag or
> > something so we can ignore other index creations, but I am not sure if
> > it's worth the complication.
>
> I'm not sure it's a good idea to ignore other CICs altogether --- they
> could be executing user-defined index functions that do strange things
> like consult other tables. Since this seems to me to be a bit outside
> the intended use-case for CIC anyway, I think it's good enough if they
> just don't deadlock.

Fine with me, especially as nobody seems to have complained so far other than the OP, so it doesn't seem to be too common.

I don't have access to the code ATM, and I wonder whether DROP CONCURRENTLY has a similar problem? Depends a bit on how the waiting is done...

Andres

---
Please excuse brevity and formatting - I am writing this on my mobile phone.
Re: [HACKERS] Why are JSON extraction functions STABLE and not IMMUTABLE?
Andrew Dunstan <and...@dunslane.net> schrieb:
> On 04/15/2013 11:46 AM, Andres Freund wrote:
> > > Me either. It's an oversight, really. Unless there is any objection
> > > I'll change them toot sweet.
> >
> > What about the existing (as of 9.2) functions? ISTM json_in, out,
> > recv, send should also be immutable. array_to_json, row_to_json et al
> > can't be tho.
>
> OK, although these have been like this since 9.2. I'm not sure why
> json_out is immutable but json_in isn't.
>
> Does changing these require a catalog version bump?

Well, you could get away without one, since a more permissive value should only influence performance and not correctness. But there doesn't yet seem to be much reason to avoid it. It could cause confusion for someone at some point.

Andres
Re: [HACKERS] Ignore invalid indexes in pg_dump.
Tom Lane <t...@sss.pgh.pa.us> schrieb:
> Bruce Momjian <br...@momjian.us> writes:
> > Should I just patch pg_upgrade to remove the indisvalid, skip
> > indisvalid indexes, and backpatch it? Users should be using the
> > version of pg_upgrade to match new pg_dump. Is there any case where
> > they don't match? Do I still need to check for indisready?
>
> Yeah, if you can just ignore !indisvalid indexes that should work fine.
> I see no need to look at indisready if you're doing that.

You need to look at indisready in 9.2 since that's used for about-to-be-dropped indexes. No?

Andres
Re: [HACKERS] Ignore invalid indexes in pg_dump.
Tom Lane <t...@sss.pgh.pa.us> schrieb:
> anara...@anarazel.de <and...@anarazel.de> writes:
> > Tom Lane <t...@sss.pgh.pa.us> schrieb:
> > > Yeah, if you can just ignore !indisvalid indexes that should work
> > > fine. I see no need to look at indisready if you're doing that.
> >
> > You need to look at indisready in 9.2 since that's used for
> > about-to-be-dropped indexes. No?
>
> No, he doesn't need to look at indisready/indislive; if either of those
> flags are off then indisvalid should certainly be off too. (If it
> isn't, queries against the table are already in trouble.)

9.2 represents indisdead as live && !ready, doesn't it? So just looking at indislive will include about-to-be-dropped or partially dropped indexes?

Andres
Re: [HACKERS] REFRESH MATERIALIZED VIEW locklevel
Kevin Grittner <kgri...@ymail.com> schrieb:
> Andres Freund <and...@2ndquadrant.com> wrote:
> > if I understand things correctly REFRESH MATERIALIZED VIEW locks the
> > materialized view with an AccessExclusiveLock even if the view
> > already contains data.
>
> Yeah. At the time I had to make a decision on that, REINDEX
> CONCURRENTLY did not seem reliable with a weaker lock, and REFRESH
> MATERIALIZED VIEW has to rebuild indexes (among other things). If we
> have all the issues sorted out with REINDEX CONCURRENTLY then the same
> techniques will probably apply to RMV without too much difficulty.
> It's a bit late to think about that for 9.3, though.
>
> > I am pretty sure that will - understandably - confuse users, so I
> > vote for at least including a note about that in the docs.
>
> Will see about working that in.

On the ride home I realized that unless - not that unlikely - you thought about something I didn't, REFRESH will behave similarly to TRUNCATE for repeatable read+ transactions that only access it after REFRESH finished. That is, the view will appear empty to them. If that's the case, it needs to be documented prominently as well.

It would be possible to get different behaviour by immediately freezing all tuples, but that would also result in visibility violations by showing tuples that are not yet visible.

Andres
Re: [HACKERS] Support for REINDEX CONCURRENTLY
Hi,

Michael Paquier <michael.paqu...@gmail.com> schrieb:
> Andres, Masao, do you need an extra round of review or do you think
> this is ready to be marked as ready for committer? On my side I have
> nothing more to add to the existing patches.

I think they do need review before that - I won't be able to do another review before the weekend though.

Andres
Re: [HACKERS] overlapping strncpy/memcpy errors via valgrind
Tom Lane <t...@sss.pgh.pa.us> schrieb:
> Andres Freund <and...@2ndquadrant.com> writes:
> > On 2013-02-17 15:10:35 +, Greg Stark wrote:
> > > Peter G is sitting near me and reminded me that this issue came up
> > > in the past. Iirc the conclusion then was that we're calling memcpy
> > > where the source and destination pointers are sometimes identical.
> > > Tom decided there was really no realistic architecture where that
> > > wouldn't work.
> >
> > I am not so convinced that that is safe if libc turns that into some
> > optimized string instructions or even PCMPSTR...
>
> What would you envision happening that would be bad?

Afair some of the optimized instructions (like movdqa) don't necessarily work if source and target are in the same location. Not sure about it, but I wouldn't want to depend on it.

Andres
Re: [HACKERS] overlapping strncpy/memcpy errors via valgrind
Tom Lane t...@sss.pgh.pa.us schrieb:

> Boszormenyi Zoltan z...@cybertec.at writes:
>> Then, why isn't memcpy() skipped if the source and dest are the same? It would be a micro-optimization but a valid one.
>
> No, it'd be more like a micro-pessimization, because the test would be wasted effort in the vast majority of calls. The *only* reason to do this would be to shut up valgrind, and that seems annoying. I wonder if anyone's tried filing a bug against valgrind to say that it shouldn't complain about this case.

You already need a suppression file to use valgrind sensibly, it's easy enough to add it there. Perhaps we should add one to the tree?

---
Please excuse brevity and formatting - I am writing this on my mobile phone.
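[If such a suppression file were added to the tree, an entry along these lines would silence just this class of report - a sketch only; the suppression name is made up, and the frame list would need to match what Valgrind actually prints for the offending call sites:]

```
{
   pg_identical_pointer_memcpy
   Memcheck:Overlap
   fun:memcpy
   ...
}
```

The file is then passed on the command line, e.g. `valgrind --suppressions=pg.supp postgres ...`.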
Re: [HACKERS] overlapping strncpy/memcpy errors via valgrind
Peter Geoghegan peter.geoghega...@gmail.com schrieb:

> On 17 February 2013 18:52, anara...@anarazel.de and...@anarazel.de wrote:
>> You already need a suppression file to use valgrind sensibly, its easy enough to add it there. Perhaps we should add one to the tree?
>
> Perhaps you should take the time to submit a proper Valgrind patch first. The current approach of applying the patch that Noah Misch originally wrote (but did not publicly submit, iirc) on an ad-hoc basis isn't great. That is what you've done here, right?

What patch are you talking about? I have no knowledge about any pending valgrind patches except one I made upstream apply to make pg inside valgrind work on amd64.

Andres
---
Please excuse brevity and formatting - I am writing this on my mobile phone.
Re: [HACKERS] logical changeset generation v4
Josh Berkus j...@agliodbs.com schrieb:

> Andreas, Is there a git fork for logical replication somewhere?

Check the bottom of the email ;)

---
Please excuse brevity and formatting - I am writing this on my mobile phone.
Re: [HACKERS] Re: [PATCH 1/2] Provide a common malloc wrappers and palloc et al. emulation for frontend'ish environs
Tom Lane t...@sss.pgh.pa.us schrieb:

> Andres Freund and...@2ndquadrant.com writes:
>> On 2013-01-09 11:27:46 -0500, Tom Lane wrote:
>>> I'd prefer posting a single message with the discussion and the patch(es). If you think it's helpful to split a patch into separate parts for reviewing, add multiple attachments. But my experience is that such separation isn't nearly as useful as you seem to think.
>> Well, would it have been better if xlog reading, ilist, binaryheap, this cleanup, etc. had been in the same patch? They have originated out of the same work... Even the split-up in this thread seems to have helped, as you've jumped on the patches where you could give rather quick input (static relpathbackend(), central Assert definitions), probably without having read the xlogreader patch itself...
>
> No, I agree that global-impact things like this palloc rearrangement are much better proposed and debated separately than as part of something like xlogreader. What I was reacting to was the specific patch set associated with this thread. I don't see the point of breaking out a two-line sub-patch such as you did in http://archives.postgresql.org/message-id/1357730830-25999-3-git-send-email-and...@2ndquadrant.com

Ah, yes. I see your point. The not-all-that-good reasoning I had in mind was that that one should be uncontroversial, as it seemed to be the only unchecked malloc call in src/bin. So it could be committed independently from the more controversial stuff... Same with the single whitespace-removal patch upthread...

Andres
---
Please excuse brevity and formatting - I am writing this on my mobile phone.
Re: [HACKERS] enhanced error fields
Robert Haas robertmh...@gmail.com schrieb:

> On Sat, Dec 29, 2012 at 4:30 PM, Peter Geoghegan pe...@2ndquadrant.com wrote:
>> Ascertaining the identity of the object in question perfectly unambiguously, so that you can safely do something like look up a comment on the object, seems like something way beyond what I'd envisioned for this feature. Why should the comment be useful in an error handler anyway? At best, that seems like a nice-to-have extra to me. The vast majority are not even going to think about the ambiguity that may exist. They'll just write: if (constraint_name == upc) MessageBox(That is not a valid barcode.);
>
> The people who are content to do that don't need this patch at all. They can just apply a regexp to the message that comes back from the server and then set constraint_name based on what pops out of the regex. And then do just what you did there.

Easier said than done if you're dealing with pg installations with different lc_messages...

Andres
---
Please excuse the brevity and formatting - I am writing this on my mobile phone.
Re: [HACKERS] fix bgworkers in EXEC_BACKEND
Tom Lane t...@sss.pgh.pa.us schrieb:

> Andres Freund and...@2ndquadrant.com writes:
>> I am still worried about the following scenario in the EXEC_BACKEND case:
>> 1) postmaster starts
>> 2) reads config
>> 3) executes _PG_init for shared_preload_libraries
>> 4) library 'abc' gets config value 'abc.num_workers = 2' and registers as many workers
>> 5) some time goes by, workers, backends start
>> 6) abc.num_workers gets changed to 3, SIGHUP
>
> This is broken whether it's EXEC_BACKEND or not: you don't get to change anything that determines the number of workers post-startup. num_workers should have been declared PGC_POSTMASTER.

BTW, I think it happens not to be broken in the non-EXEC_BACKEND case because the registration point for bgworkers is _PG_init, which will only be called once without EXEC_BACKEND, so config changes to SIGHUP'able variables won't change anything dangerous, as the bgworker stuff is all set up.

Andres
---
Please excuse the brevity and formatting - I am writing this on my mobile phone.
Re: [HACKERS] buffer assertion tripping under repeat pgbench load
Tom Lane t...@sss.pgh.pa.us schrieb:

> Greg Smith g...@2ndquadrant.com writes:
>> To try and speed up replicating this problem I switched to a smaller database scale, 100, and I was able to get a crash there. Here's the latest:
>>
>> 2012-12-26 00:01:19 EST [2278]: WARNING: refcount of base/16384/57610 blockNum=118571, flags=0x106 is 1073741824 should be 0, globally: 0
>> 2012-12-26 00:01:19 EST [2278]: WARNING: buffers with non-zero refcount is 1
>> TRAP: FailedAssertion(!(RefCountErrors == 0), File: bufmgr.c, Line: 1720)
>>
>> That's the same weird 1073741824 count as before. I was planning to dump some index info, but then I saw this:
>>
>> $ psql -d pgbench -c "select relname,relkind,relfilenode from pg_class where relfilenode=57610"
>>      relname      | relkind | relfilenode
>> ------------------+---------+-------------
>>  pgbench_accounts | r       |       57610
>>
>> Making me think this isn't isolated to being an index problem.
>
> Yeah, that destroys my theory that there's something broken about index management specifically. Now we're looking for something that can affect any buffer's refcount, which more than likely means it has nothing to do with the buffer's contents ...
>
>> I tried to soldier on with pg_filedump anyway. It looks like the last version I saw there (9.2.0 from November) doesn't compile anymore:
>
> Meh, looks like it needs fixes for Heikki's int64-xlogrecoff patch. I haven't gotten around to doing that yet, but would gladly take a patch if anyone wants to do it. However, I now doubt that examining the buffer content will help much on this problem. Now that we know the bug's reproducible on smaller instances, could you put together an exact description of what you're doing to trigger it? What is the DB configuration, pgbench parameters, etc? Also, it'd be worthwhile to just repeat the test a few more times to see if there's any sort of pattern in which buffers get affected. I'm now suspicious that it might not always be just one buffer, for example.
I don't think it's necessarily only one buffer - if I read the above output correctly, Greg used the suggested debug output, which just put the elog(WARNING) before the Assert...

Greg, could you output all bad buffers and only assert after the loop if there was at least one refcounted buffer?

Andres
---
Please excuse the brevity and formatting - I am writing this on my mobile phone.
Re: [HACKERS] logical decoding - GetOldestXmin
Hi,

Robert Haas robertmh...@gmail.com schrieb:

> On Fri, Dec 14, 2012 at 7:19 PM, Andres Freund and...@2ndquadrant.com wrote:
>> On 2012-12-14 14:01:30 -0500, Robert Haas wrote:
>>> On Fri, Dec 14, 2012 at 6:46 AM, Andres Freund and...@2ndquadrant.com wrote:
>>>> Just moving that tidbit inside the lock seems to be the pragmatic choice. GetOldestXmin is called
>>>> * once per checkpoint
>>>> * once per index build
>>>> * once in analyze
>>>> * twice per vacuum
>>>> * once for HS feedback messages
>>>> Nothing of that occurs frequently enough that 5 instructions will make a difference. I would be happy to go an alternative path, but right now I don't see any nice one. An already_locked parameter to GetOldestXmin seems to be a cure worse than the disease.
>>> I'm not sure that would be so bad, but I guess I question the need to do it this way at all. Most of the time, if you need to advertise your global xmin, you use GetSnapshotData(), not GetOldestXmin(), and I guess I'm not seeing why that wouldn't also work here. Am I dumb?
>> I wondered upthread whether that would be better:
>> On 2012-12-13 21:03:44 +0100, Andres Freund wrote:
>>> Another alternative to this would be to get a snapshot with GetSnapshotData(), copy the xmin to the logical slot, then call ProcArrayEndTransaction(). But that doesn't really seem to be nicer to me.
>> Not sure why I considered it ugly anymore, but it actually has a noticeable disadvantage. GetOldestXmin is nicer than GetSnapshotData, as the latter sets a fairly new xid as xmin whereas GetOldestXmin returns the actual current xmin horizon. That's preferable because it allows us to start up more quickly. snapbuild.c can only start building a snapshot once it has seen a xl_running_xact with oldestRunningXid = own_xmin. Otherwise we cannot be sure that no relevant catalog tuples have been removed.
>
> I'm a bit confused. Are you talking about the difference between RecentGlobalXmin and RecentXmin? I think GetSnapshotData() updates both.
The problem is that at the time GetSnapshotData returns, the xmin horizon might have gone upwards and tuples required for decoding might get removed by other backends. That needs to be prevented while holding ProcArrayLock exclusively.

Does it make more sense now?

Andres
---
Please excuse the brevity and formatting - I am writing this on my mobile phone.
Re: [HACKERS] Proof of concept: standalone backend with full FE/BE protocol
Tom Lane t...@sss.pgh.pa.us schrieb:

> Andres Freund and...@2ndquadrant.com writes:
>> I don't find that a convincing comparison. Normally you don't need to shut down the server between two pg_dump commands. Which very well might be scripted. Especially as for now, without a background writer/checkpointer writing stuff beforehand, the shutdown checkpoint won't be fast. IO isn't unlikely if you're doing a pg_dump because of hint bits...
>
> I still think this is a straw-man argument. There is no expectation that a standalone PG implementation would provide performance for a series of standalone sessions that is equivalent to what you'd get from a persistent server. If that scenario is what's important to you, you'd use a persistent server. The case where this sort of thing would be interesting is where minimizing administration complexity (by not having a server) is more important than performance. People currently use, eg, SQLite for that type of application, and it's not because of performance.

I am not saying it's bad that it is slower, that's absolutely OK. Just that it will take a variable amount of time till you can run pg_dump again, and it's not easily detectable without looping and trying again.

Andres
---
Please excuse the brevity and formatting - I am writing this on my mobile phone.
Re: [HACKERS] Command Triggers, patch v11
anara...@anarazel.de and...@anarazel.de schrieb:

> Thom Brown t...@linux.com schrieb: On 2 March 2012 23:33, Thom Brown t...@linux.com wrote: On 2 March 2012 22:32, Dimitri Fontaine dimi...@2ndquadrant.fr wrote:
>
> test=# CREATE TABLE badname (id int, a int, b text);
> ERROR: invalid relation name: badname
> test=# CREATE TABLE badname AS SELECT 1::int id, 1::int a, ''::text b;
> SELECT 1
>
> This doesn't even get picked up by ANY COMMAND. I think Andres should add an entry for his patch on the commitfest. Is it ok? I'll try Andres' patch this weekend while I test yours. Actually no matter which patch I apply first, they cause the other to fail to apply.

I will try to fix that on the plane tomorrow (to NYC) but I am not yet sure when/where I will have internet access again.

One more try: To the list.

Andres

Please excuse the brevity and formatting - I am writing this on my mobile phone.
Re: [HACKERS] Command Triggers, patch v11
Tom Lane t...@sss.pgh.pa.us schrieb:

> Andres Freund and...@anarazel.de writes:
>> I refreshed the patch so it works again on current HEAD. Basically some trivial fixes and dfd26f9c5f371437f243249025863ea9911aacaa. The latter doesn't seem necessary to me after the changes, so I simply ditched it. Am I missing something?
>
> No, that was only needed because execMain.c was overriding somebody else's tuple receiver. If you're putting the right receiver into the QueryDesc to start with, it shouldn't be necessary. I'm confused by this though:
>
>> This basically includes a revert of d8fb1f9adbddd1eef123d66a89a9fc0ecd96f60b
>
> I don't find that commit ID anywhere.

That should have been the aforementioned commit. Must have screwed up the copy-paste buffer. Sorry for that.

Andres

Please excuse the brevity and formatting - I am writing this on my mobile phone.
Re: [HACKERS] testing ProcArrayLock patches
Kevin Grittner kevin.gritt...@wicourts.gov schrieb:

> Robert Haas robertmh...@gmail.com wrote:
>> Any chance you can run oprofile (on either branch, don't really care) against the 32 client test and post the results?
>
> Besides the other changes we discussed, I boosted scale to 150 and ran at READ COMMITTED isolation level (because all threads promptly crashed and burned at REPEATABLE READ -- we desperately need a pgbench option to retry a transaction on serialization failure). The oprofile hot spots at half a percent or higher:
>
> CPU: Intel Core/i7, speed 2262 MHz (estimated)
> Counted CPU_CLK_UNHALTED events (Clock cycles when not halted) with a unit mask of 0x00 (No unit mask) count 10
>
> samples  %       image name  symbol name
> 933394   4.9651  postgres    AllocSetAlloc
> 848476   4.5134  postgres    base_yyparse
> 719515   3.8274  postgres    SearchCatCache
> 461275   2.4537  postgres    hash_search_with_hash_value
> 426411   2.2682  postgres    GetSnapshotData
> 322938   1.7178  postgres    LWLockAcquire
> 322236   1.7141  postgres    core_yylex
> 305471   1.6249  postgres    MemoryContextAllocZeroAligned
> 281543   1.4976  postgres    expression_tree_walker
> 270241   1.4375  postgres    XLogInsert
> 234899   1.2495  postgres    MemoryContextAlloc
> 210137   1.1178  postgres    ScanKeywordLookup
> 184857   0.9833  postgres    heap_page_prune
> 173608   0.9235  postgres    hash_any
> 153011   0.8139  postgres    _bt_compare
> 144538   0.7689  postgres    nocachegetattr
> 131466   0.6993  postgres    fmgr_info_cxt_security
> 131001   0.6968  postgres    grouping_planner
> 130808   0.6958  postgres    LWLockRelease
> 124112   0.6602  postgres    PinBuffer
> 120745   0.6423  postgres    LockAcquireExtended
> 112992   0.6010  postgres    ExecInitExpr
> 112830   0.6002  postgres    lappend
> 112311   0.5974  postgres    new_list
> 110368   0.5871  postgres    check_stack_depth
> 106036   0.5640  postgres    AllocSetFree
> 102565   0.5456  postgres    MemoryContextAllocZero
> 94689    0.5037  postgres    SearchSysCache

That profile looks like you ran pgbench with -M simple. How does it look with prepared instead?
Andres
Re: [HACKERS] libpq, PQdescribePrepared - PQftype, PQfmod, no PQnullable
Alex Goncharov alex-goncha...@comcast.net schrieb:

> ,--- You/Andres (Fri, 7 Oct 2011 02:28:30 +0200) *
> | a lot of cases where the database could deduce (quite easily) that a
> | result column cannot be null
> |
> | Could you quickly explain what exactly you want that information for? Just
> | because it has been done before doesn't necessarily mean its a good idea...
>
> I am not writing a database application here (i.e. I am not storing the data). I am responding to a client requirement, basically: Given a SELECT (or possibly, simpler, a table name), tell me which columns are non-nullable?

That doesn't explain why it's needed. To get community buy-in into a feature the community - or at least parts of it - needs to understand why it's needed.

Greetings,

Andres