Re: [PATCHES] [PERFORM] Very slow (2 tuples/second) sequential scan after bulk insert; speed returns to ~500 tuples/second after commit

2008-03-16 Thread Tom Lane
Heikki Linnakangas [EMAIL PROTECTED] writes:
 Yep, patch attached. I also changed xactGetCommittedChildren to return 
 the original array instead of copying it, as Alvaro suggested.

Applied with minor corrections (mostly comment fixes, but there were
a couple of real mistakes).

regards, tom lane

-- 
Sent via pgsql-patches mailing list (pgsql-patches@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-patches


Re: [PATCHES] [PERFORM] Very slow (2 tuples/second) sequential scan after bulk insert; speed returns to ~500 tuples/second after commit

2008-03-16 Thread Bruce Momjian

This has been applied by Tom.

---

Heikki Linnakangas wrote:
 Tom Lane wrote:
  Heikki Linnakangas [EMAIL PROTECTED] writes:
  Elsewhere in our codebase where we use arrays that are enlarged as 
  needed, we keep track of the allocated size and the used size of the 
  array separately, and only call repalloc when the array fills up, and 
  repalloc a larger than necessary array when it does. I chose to just 
  call repalloc every time instead, as repalloc is smart enough to fall 
  out quickly if the chunk the allocation was made in is already larger 
  than the new size. There might be some gain avoiding the repeated 
  repalloc calls, but I doubt it's worth the code complexity, and calling 
  repalloc with a larger than necessary size can actually force it to 
  unnecessarily allocate a new, larger chunk instead of reusing the old 
  one. Thoughts on that?
  
  Seems like a pretty bad idea to me, as the behavior you're counting on
  only applies to chunks up to 8K or thereabouts. 
 
 Oh, you're right. Though I'm sure libc realloc has all kinds of smarts 
 as well, it does seem better to not rely too much on that.
 
  In a situation where
  you are subcommitting lots of XIDs one at a time, this is likely to have
  quite awful behavior (or at least, you're at the mercy of the local
  malloc library as to how bad it is).  I'd go with the same
  double-it-each-time-needed approach we use elsewhere.
 
 Yep, patch attached. I also changed xactGetCommittedChildren to return 
 the original array instead of copying it, as Alvaro suggested.
 
 -- 
Heikki Linnakangas
EnterpriseDB   http://www.enterprisedb.com


 

-- 
  Bruce Momjian  [EMAIL PROTECTED]http://momjian.us
  EnterpriseDB http://postgres.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +



Re: [PATCHES] [PERFORM] Very slow (2 tuples/second) sequential scan after bulk insert; speed returns to ~500 tuples/second after commit

2008-03-12 Thread Heikki Linnakangas

Tom Lane wrote:

Heikki Linnakangas [EMAIL PROTECTED] writes:
Elsewhere in our codebase where we use arrays that are enlarged as 
needed, we keep track of the allocated size and the used size of the 
array separately, and only call repalloc when the array fills up, and 
repalloc a larger than necessary array when it does. I chose to just 
call repalloc every time instead, as repalloc is smart enough to fall 
out quickly if the chunk the allocation was made in is already larger 
than the new size. There might be some gain avoiding the repeated 
repalloc calls, but I doubt it's worth the code complexity, and calling 
repalloc with a larger than necessary size can actually force it to 
unnecessarily allocate a new, larger chunk instead of reusing the old 
one. Thoughts on that?


Seems like a pretty bad idea to me, as the behavior you're counting on
only applies to chunks up to 8K or thereabouts. 


Oh, you're right. Though I'm sure libc realloc has all kinds of smarts 
as well, it does seem better to not rely too much on that.



In a situation where
you are subcommitting lots of XIDs one at a time, this is likely to have
quite awful behavior (or at least, you're at the mercy of the local
malloc library as to how bad it is).  I'd go with the same
double-it-each-time-needed approach we use elsewhere.


Yep, patch attached. I also changed xactGetCommittedChildren to return 
the original array instead of copying it, as Alvaro suggested.


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com
Index: src/backend/access/transam/twophase.c
===
RCS file: /home/hlinnaka/pgcvsrepository/pgsql/src/backend/access/transam/twophase.c,v
retrieving revision 1.39
diff -c -r1.39 twophase.c
*** src/backend/access/transam/twophase.c	1 Jan 2008 19:45:48 -	1.39
--- src/backend/access/transam/twophase.c	12 Mar 2008 12:45:13 -
***
*** 827,833 
  		save_state_data(children, hdr.nsubxacts * sizeof(TransactionId));
  		/* While we have the child-xact data, stuff it in the gxact too */
  		GXactLoadSubxactData(gxact, hdr.nsubxacts, children);
- 		pfree(children);
  	}
  	if (hdr.ncommitrels > 0)
  	{
--- 827,832 
Index: src/backend/access/transam/xact.c
===
RCS file: /home/hlinnaka/pgcvsrepository/pgsql/src/backend/access/transam/xact.c,v
retrieving revision 1.258
diff -c -r1.258 xact.c
*** src/backend/access/transam/xact.c	4 Mar 2008 19:54:06 -	1.258
--- src/backend/access/transam/xact.c	12 Mar 2008 13:41:10 -
***
*** 130,136 
  	int			gucNestLevel;	/* GUC context nesting depth */
  	MemoryContext curTransactionContext;		/* my xact-lifetime context */
  	ResourceOwner curTransactionOwner;	/* my query resources */
! 	List	   *childXids;		/* subcommitted child XIDs */
  	Oid			prevUser;		/* previous CurrentUserId setting */
  	bool		prevSecDefCxt;	/* previous SecurityDefinerContext setting */
  	bool		prevXactReadOnly;		/* entry-time xact r/o state */
--- 130,138 
  	int			gucNestLevel;	/* GUC context nesting depth */
  	MemoryContext curTransactionContext;		/* my xact-lifetime context */
  	ResourceOwner curTransactionOwner;	/* my query resources */
! 	TransactionId	*childXids;	/* subcommitted child XIDs, in XID order */
! 	int			nChildXids;		/* # of subcommitted child XIDs */
! 	int			maxChildXids;	/* allocated size of childXids */
  	Oid			prevUser;		/* previous CurrentUserId setting */
  	bool		prevSecDefCxt;	/* previous SecurityDefinerContext setting */
  	bool		prevXactReadOnly;		/* entry-time xact r/o state */
***
*** 156,162 
  	0,			/* GUC context nesting depth */
  	NULL,		/* cur transaction context */
  	NULL,		/* cur transaction resource owner */
! 	NIL,		/* subcommitted child Xids */
  	InvalidOid,	/* previous CurrentUserId setting */
  	false,		/* previous SecurityDefinerContext setting */
  	false,		/* entry-time xact r/o state */
--- 158,166 
  	0,			/* GUC context nesting depth */
  	NULL,		/* cur transaction context */
  	NULL,		/* cur transaction resource owner */
! 	NULL,		/* subcommitted child Xids */
! 	0,			/* # of subcommitted child Xids */
! 	0,			/* allocated size of childXids */
  	InvalidOid,	/* previous CurrentUserId setting */
  	false,		/* previous SecurityDefinerContext setting */
  	false,		/* entry-time xact r/o state */
***
*** 559,565 
  	 */
  	for (s = CurrentTransactionState; s != NULL; s = s->parent)
  	{
! 		ListCell   *cell;
  
  		if (s->state == TRANS_ABORT)
  			continue;
--- 563,569 
  	 */
  	for (s = CurrentTransactionState; s != NULL; s = s->parent)
  	{
! 		int low, high;
  
  		if (s->state == TRANS_ABORT)
  			continue;
***
*** 567,576 
  			continue;			/* it can't have any child XIDs either */
  		if (TransactionIdEquals(xid, s->transactionId))
  			return true;
! 		

Re: [PATCHES] [PERFORM] Very slow (2 tuples/second) sequential scan after bulk insert; speed returns to ~500 tuples/second after commit

2008-03-12 Thread Pavan Deolasee
On Wed, Mar 12, 2008 at 7:13 PM, Heikki Linnakangas
[EMAIL PROTECTED] wrote:


  Yep, patch attached. I also changed xactGetCommittedChildren to return
  the original array instead of copying it, as Alvaro suggested.


Any comments on the flag-based approach I suggested earlier? Am I
missing some normal scenario where it won't work well?

Thanks,
Pavan

-- 
Pavan Deolasee
EnterpriseDB http://www.enterprisedb.com



Re: [PATCHES] [PERFORM] Very slow (2 tuples/second) sequential scan after bulk insert; speed returns to ~500 tuples/second after commit

2008-03-12 Thread Tom Lane
Pavan Deolasee [EMAIL PROTECTED] writes:
 On Wed, Mar 12, 2008 at 7:13 PM, Heikki Linnakangas
 [EMAIL PROTECTED] wrote:
 Yep, patch attached. I also changed xactGetCommittedChildren to return
 the original array instead of copying it, as Alvaro suggested.

 Any comments on the flag-based approach I suggested earlier?

I didn't like it; it seemed overly complicated (consider dealing with
XID wraparound), and it would have problems with a slow transaction
generating a sparse set of subtransaction XIDs.  I think getting rid of
the linear search will be enough to fix the performance problem.

regards, tom lane



Re: [PATCHES] [PERFORM] Very slow (2 tuples/second) sequential scan after bulk insert; speed returns to ~500 tuples/second after commit

2008-03-12 Thread Pavan Deolasee
On Wed, Mar 12, 2008 at 9:27 PM, Tom Lane [EMAIL PROTECTED] wrote:


  I didn't like it; it seemed overly complicated (consider dealing with
  XID wraparound),

We are talking about subtransactions here. I don't think we support
subtransaction wrap-around, do we?

 and it would have problems with a slow transaction
  generating a sparse set of subtransaction XIDs.

I agree that's the worst case. But is that common? That's what I
was thinking when I proposed the alternate solution. I thought that can
happen only if most of the subtransactions abort, which again I thought
is not a normal case. But frankly, I am not sure if my assumption is correct.

 I think getting rid of
  the linear search will be enough to fix the performance problem.


I wonder if a skewed binary search would help more? For example,
if we know that the range of XIDs stored in the array is 1 to 1000 and
we are searching for a number closer to 1000, we can break the
array into large and small parts instead of equal parts and then
search.

Well, maybe I am making simple things complicated ;-)

Thanks,
Pavan

-- 
Pavan Deolasee
EnterpriseDB http://www.enterprisedb.com



Re: [PATCHES] [PERFORM] Very slow (2 tuples/second) sequential scan after bulk insert; speed returns to ~500 tuples/second after commit

2008-03-12 Thread Tom Lane
Pavan Deolasee [EMAIL PROTECTED] writes:
 On Wed, Mar 12, 2008 at 9:27 PM, Tom Lane [EMAIL PROTECTED] wrote:
 and it would have problems with a slow transaction
 generating a sparse set of subtransaction XIDs.

 I agree that's the worst case. But is that common? That's what I
 was thinking when I proposed the alternate solution. I thought that can
 happen only if most of the subtransactions abort, which again I thought
 is not a normal case.

No, I was thinking of the case where other sessions are chewing up XIDs
while the lots-of-subtransactions transaction runs.  If it's slow
enough, there could be very large gaps between the XIDs it acquires for
its subtransactions.  So you'd have a situation where the exact same
transaction processing might or might not run out of memory depending
on what else happened meanwhile.  Not a very pleasant property.

regards, tom lane



Re: [PATCHES] [PERFORM] Very slow (2 tuples/second) sequential scan after bulk insert; speed returns to ~500 tuples/second after commit

2008-03-12 Thread Heikki Linnakangas

Pavan Deolasee wrote:

Wait. Subtransaction ids are local to a transaction and always start from 1.
See this:

/*
 * reinitialize within-transaction counters
 */
s->subTransactionId = TopSubTransactionId;
currentSubTransactionId = TopSubTransactionId;


No, we're not talking about SubTransactionIds. We're talking about real 
XIDs assigned to subtransactions. Subtransactions are assigned regular 
XIDs as well, just like top-level transactions.


I can see where you were coming from with your suggestion now :-).

--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com



Re: [PATCHES] [PERFORM] Very slow (2 tuples/second) sequential scan after bulk insert; speed returns to ~500 tuples/second after commit

2008-03-12 Thread Heikki Linnakangas

Pavan Deolasee wrote:

On Wed, Mar 12, 2008 at 9:27 PM, Tom Lane [EMAIL PROTECTED] wrote:


 I didn't like it; it seemed overly complicated (consider dealing with
 XID wraparound),


We are talking about subtransactions here. I don't think we support
subtransaction wrap-around, do we ?


Imagine that you start a transaction just before transaction 
wrap-around, so that the top level XID is 2^31-10. Then you start 20 
subtransactions. What XIDs will they get? Now how would you map those to 
a bitmap?


It's certainly possible; you could index the bitmap by the offset from the 
top-level transaction XID, for example. But it does get a bit complicated.



and it would have problems with a slow transaction
 generating a sparse set of subtransaction XIDs.


I agree that's the worst case. But is that common? That's what I
was thinking when I proposed the alternate solution. I thought that can
happen only if most of the subtransactions abort, which again I thought
is not a normal case. But frankly, I am not sure if my assumption is correct.


It's not that common to have hundreds of thousands of subtransactions to 
begin with...



I think getting rid of
 the linear search will be enough to fix the performance problem.


I wonder if a skewed binary search would help more? For example,
if we know that the range of XIDs stored in the array is 1 to 1000 and
we are searching for a number closer to 1000, we can break the
array into large and small parts instead of equal parts and then
search.


Possibly, but I doubt it's worth the trouble. The simple binary search 
solved the performance problem well enough. In the test case of the OP, 
with 30 subtransactions, with the patch, there was no longer any 
measurable difference whether you ran the SELECT COUNT(*) in the same 
transaction as the INSERTs or after a COMMIT.
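The lookup the patch introduces boils down to a standard binary search over the sorted childXids array. As an illustrative sketch only (these are not the actual xact.c names, and plain `<` is used instead of PostgreSQL's wraparound-aware TransactionIdPrecedes), the O(log n) shape that replaced the linear list scan looks roughly like:

```c
#include <stdbool.h>

typedef unsigned int TransactionId;

/*
 * Sketch: test whether xid appears in a sorted array of subcommitted
 * child XIDs.  Real code would compare XIDs with TransactionIdPrecedes()
 * to be safe across wraparound; plain '<' is used here for clarity.
 */
static bool
child_xid_is_present(TransactionId xid, const TransactionId *xids, int nxids)
{
	int		low = 0;
	int		high = nxids - 1;

	while (low <= high)
	{
		int		mid = low + (high - low) / 2;

		if (xids[mid] == xid)
			return true;
		else if (xids[mid] < xid)
			low = mid + 1;
		else
			high = mid - 1;
	}
	return false;
}
```

Keeping the array in XID order costs nothing extra here, since subcommitted child XIDs are appended in assignment order anyway.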



Well, maybe I am making simple things complicated ;-)


I think so :-).

--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com



Re: [PATCHES] [PERFORM] Very slow (2 tuples/second) sequential scan after bulk insert; speed returns to ~500 tuples/second after commit

2008-03-12 Thread Pavan Deolasee
On Wed, Mar 12, 2008 at 10:44 PM, Heikki Linnakangas
[EMAIL PROTECTED] wrote:


  Imagine that you start a transaction just before transaction
  wrap-around, so that the top level XID is 2^31-10. Then you start 20
  subtransactions. What XIDs will they get? Now how would you map those to
  a bitmap?


Wait. Subtransaction ids are local to a transaction and always start from 1.
See this:

/*
 * reinitialize within-transaction counters
 */
s->subTransactionId = TopSubTransactionId;
currentSubTransactionId = TopSubTransactionId;



  It's not that common to have hundreds of thousands of subtransactions to
  begin with..

True. But that's the case we are trying to solve here :-)


Thanks,
Pavan

-- 
Pavan Deolasee
EnterpriseDB http://www.enterprisedb.com



Re: [PATCHES] [PERFORM] Very slow (2 tuples/second) sequential scan after bulk insert; speed returns to ~500 tuples/second after commit

2008-03-11 Thread Heikki Linnakangas

(moved to pgsql-patches, as there's a patch attached)

Tom Lane wrote:

Getting rid of the linked-list representation would be a win in a couple
of ways --- we'd not need the bogus list of XIDs support in pg_list.h,
and xactGetCommittedChildren would go away.  OTOH AtSubCommit_childXids
would get more expensive.


I couldn't let this case go, so I wrote a patch. I replaced the linked 
list with an array that's enlarged at AtSubCommit_childXids when necessary.


I couldn't measure any performance hit from the extra work of enlarging 
and memcpying the array. I tried two test cases:


1. Insert one row per subtransaction, and commit the subtransaction 
after each insert. This is like the OP's case.


printf("CREATE TABLE foo (id int4);\n");
printf("BEGIN;\n");
for (i = 1; i <= 10; i++)
{
    printf("SAVEPOINT sp%d;\n", i);
    printf("INSERT INTO foo VALUES (1);\n");
    printf("RELEASE SAVEPOINT sp%d;\n", i);
}
printf("COMMIT;\n");

2. Insert one row per subtransaction, but leave the subtransaction in 
not-committed state


printf("CREATE TABLE foo (id int4, t text);\n");
printf("BEGIN;\n");
for (i = 1; i <= 10; i++)
{
    printf("SAVEPOINT sp%d;\n", i);
    printf("INSERT INTO foo VALUES (1, 'f');\n");
}
printf("COMMIT;\n");

Test case 1 is not bad, because we just keep appending to the parent 
array one at a time. Test case 2 might become slower as the number of 
subtransactions increases, because at the commit of each subtransaction you 
need to enlarge the parent array and copy all the already-committed 
child XIDs to it. However, you hit the limit on the amount of shared memory 
required for holding the per-XID locks before that becomes a problem, 
and the performance becomes dominated by other things anyway (notably 
LockReassignCurrentOwner).


I initially thought that using a single palloc'd array to hold all the 
XIDs would introduce a new limit on the number of committed 
subtransactions, thanks to MaxAllocSize, but that's not the case. 
Without the patch, we actually allocate an array like that anyway in 
xactGetCommittedChildren.


Elsewhere in our codebase where we use arrays that are enlarged as 
needed, we keep track of the allocated size and the used size of the 
array separately, and only call repalloc when the array fills up, and 
repalloc a larger than necessary array when it does. I chose to just 
call repalloc every time instead, as repalloc is smart enough to fall 
out quickly if the chunk the allocation was made in is already larger 
than the new size. There might be some gain avoiding the repeated 
repalloc calls, but I doubt it's worth the code complexity, and calling 
repalloc with a larger than necessary size can actually force it to 
unnecessarily allocate a new, larger chunk instead of reusing the old 
one. Thoughts on that?


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com
Index: src/backend/access/transam/xact.c
===
RCS file: /home/hlinnaka/pgcvsrepository/pgsql/src/backend/access/transam/xact.c,v
retrieving revision 1.258
diff -c -r1.258 xact.c
*** src/backend/access/transam/xact.c	4 Mar 2008 19:54:06 -	1.258
--- src/backend/access/transam/xact.c	11 Mar 2008 12:31:28 -
***
*** 130,136 
  	int			gucNestLevel;	/* GUC context nesting depth */
  	MemoryContext curTransactionContext;		/* my xact-lifetime context */
  	ResourceOwner curTransactionOwner;	/* my query resources */
! 	List	   *childXids;		/* subcommitted child XIDs */
  	Oid			prevUser;		/* previous CurrentUserId setting */
  	bool		prevSecDefCxt;	/* previous SecurityDefinerContext setting */
  	bool		prevXactReadOnly;		/* entry-time xact r/o state */
--- 130,137 
  	int			gucNestLevel;	/* GUC context nesting depth */
  	MemoryContext curTransactionContext;		/* my xact-lifetime context */
  	ResourceOwner curTransactionOwner;	/* my query resources */
! 	TransactionId	*childXids;	/* subcommitted child XIDs, in XID order */
! 	int			nChildXids;		/* # of subcommitted child XIDs */
  	Oid			prevUser;		/* previous CurrentUserId setting */
  	bool		prevSecDefCxt;	/* previous SecurityDefinerContext setting */
  	bool		prevXactReadOnly;		/* entry-time xact r/o state */
***
*** 156,162 
  	0,			/* GUC context nesting depth */
  	NULL,		/* cur transaction context */
  	NULL,		/* cur transaction resource owner */
! 	NIL,		/* subcommitted child Xids */
  	InvalidOid,	/* previous CurrentUserId setting */
  	false,		/* previous SecurityDefinerContext setting */
  	false,		/* entry-time xact r/o state */
--- 157,164 
  	0,			/* GUC context nesting depth */
  	NULL,		/* cur transaction context */
  	NULL,		/* cur transaction resource owner */
! 	NULL,		/* subcommitted child Xids */
! 	0,			/* # of subcommitted child Xids */
  	InvalidOid,	

Re: [PATCHES] [PERFORM] Very slow (2 tuples/second) sequential scan after bulk insert; speed returns to ~500 tuples/second after commit

2008-03-11 Thread Pavan Deolasee
On Tue, Mar 11, 2008 at 6:04 PM, Heikki Linnakangas
[EMAIL PROTECTED] wrote:
 (moved to pgsql-patches, as there's a patch attached)


  I couldn't let this case go, so I wrote a patch. I replaced the linked
  list with an array that's enlarged at AtSubCommit_childXids when necessary.


We can replace the array of XIDs with an array of flags, where the i'th flag is
set to true if the corresponding subtransaction is committed. This would
make the lookup an O(1) operation. Of course, the downside is when the array
is sparse. Or we can switch to the flag-based representation once the array size
grows beyond a limit.
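A rough sketch of one way this flag idea could be realized (hypothetical names and layout, not a patch): index a bitmap by each child XID's offset from a base XID, giving O(1) membership tests at the cost of memory proportional to the covered XID range rather than the number of child XIDs:

```c
#include <stdint.h>
#include <stdlib.h>

typedef uint32_t TransactionId;

/* Hypothetical flag-based lookup: one bit per XID in [base, base + nbits). */
typedef struct
{
	TransactionId base;			/* lowest XID covered by the bitmap */
	uint32_t	nbits;			/* size of the covered XID range */
	uint8_t	   *bits;
} SubxactFlags;

static SubxactFlags *
flags_create(TransactionId base, uint32_t nbits)
{
	SubxactFlags *f = malloc(sizeof(SubxactFlags));

	if (f == NULL)
		abort();				/* sketch: minimal OOM handling */
	f->base = base;
	f->nbits = nbits;
	f->bits = calloc((nbits + 7) / 8, 1);
	if (f->bits == NULL)
		abort();
	return f;
}

static void
flags_set_committed(SubxactFlags *f, TransactionId xid)
{
	uint32_t	off = xid - f->base;	/* unsigned arithmetic survives wraparound */

	if (off < f->nbits)
		f->bits[off / 8] |= (uint8_t) (1 << (off % 8));
}

static int
flags_is_committed(const SubxactFlags *f, TransactionId xid)
{
	uint32_t	off = xid - f->base;

	if (off >= f->nbits)
		return 0;
	return (f->bits[off / 8] >> (off % 8)) & 1;
}
```

The sparse case the thread worries about is visible here: the bitmap must cover the entire XID range between the base and the newest child XID, however few of those XIDs actually belong to this transaction.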

Thanks,
Pavan


-- 
Pavan Deolasee
EnterpriseDB http://www.enterprisedb.com



Re: [PATCHES] [PERFORM] Very slow (2 tuples/second) sequential scan after bulk insert; speed returns to ~500 tuples/second after commit

2008-03-11 Thread Alvaro Herrera
Heikki Linnakangas wrote:

 I couldn't let this case go, so I wrote a patch. I replaced the linked  
 list with an array that's enlarged at AtSubCommit_childXids when 
 necessary.

Do you still need to palloc the return value from
xactGetCommittedChildren?  Perhaps you can save the palloc/memcpy/pfree
and just return the pointer to the array already in memory?

Not that it'll have much of a performance impact, but just for
cleanliness :-)

-- 
Alvaro Herrerahttp://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support



Re: [PATCHES] [PERFORM] Very slow (2 tuples/second) sequential scan after bulk insert; speed returns to ~500 tuples/second after commit

2008-03-11 Thread Tom Lane
Heikki Linnakangas [EMAIL PROTECTED] writes:
 I initially thought that using a single palloc'd array to hold all the 
 XIDs would introduce a new limit on the number of committed 
 subtransactions, thanks to MaxAllocSize, but that's not the case. 
 Without the patch, we actually allocate an array like that anyway in 
 xactGetCommittedChildren.

Right.

 Elsewhere in our codebase where we use arrays that are enlarged as 
 needed, we keep track of the allocated size and the used size of the 
 array separately, and only call repalloc when the array fills up, and 
 repalloc a larger than necessary array when it does. I chose to just 
 call repalloc every time instead, as repalloc is smart enough to fall 
 out quickly if the chunk the allocation was made in is already larger 
 than the new size. There might be some gain avoiding the repeated 
 repalloc calls, but I doubt it's worth the code complexity, and calling 
 repalloc with a larger than necessary size can actually force it to 
 unnecessarily allocate a new, larger chunk instead of reusing the old 
 one. Thoughts on that?

Seems like a pretty bad idea to me, as the behavior you're counting on
only applies to chunks up to 8K or thereabouts.  In a situation where
you are subcommitting lots of XIDs one at a time, this is likely to have
quite awful behavior (or at least, you're at the mercy of the local
malloc library as to how bad it is).  I'd go with the same
double-it-each-time-needed approach we use elsewhere.
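The double-when-full discipline described here looks roughly like the following. This is a generic sketch with illustrative names, using plain realloc rather than repalloc; the actual committed logic lives in AtSubCommit_childXids:

```c
#include <stdio.h>
#include <stdlib.h>

typedef unsigned int TransactionId;

/*
 * Sketch of geometric array growth: track the used and allocated sizes
 * separately and double the capacity only when the array is full, so N
 * appends do O(N) total copying instead of O(N^2).
 */
typedef struct
{
	TransactionId *xids;
	int			nused;			/* # of XIDs stored */
	int			nallocated;		/* current capacity */
} XidArray;

static void
xid_array_append(XidArray *a, TransactionId xid)
{
	if (a->nused >= a->nallocated)
	{
		int			newsize = (a->nallocated == 0) ? 8 : a->nallocated * 2;
		TransactionId *newxids =
			realloc(a->xids, newsize * sizeof(TransactionId));

		if (newxids == NULL)
		{
			fprintf(stderr, "out of memory\n");
			exit(1);
		}
		a->xids = newxids;
		a->nallocated = newsize;
	}
	a->xids[a->nused++] = xid;
}
```

With this shape, appending one XID at a time (the lots-of-subcommits case) triggers only O(log n) reallocations regardless of how the underlying allocator behaves.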

regards, tom lane
