Re: [PATCHES] [BUGS] BUG #3681: fillers are NULL in pgbench

2008-03-12 Thread Bruce Momjian

This was applied by Tom.  Thanks.

---

ITAGAKI Takahiro wrote:
 
 Tom Lane [EMAIL PROTECTED] wrote:
 
  ITAGAKI Takahiro [EMAIL PROTECTED] writes:
   All of the filler fields in branches, tellers and history are NULL. It is
   probably a mistake because there are char(22) to char(88) fields in the
   table definitions.
   TPC-B requires at least 100 bytes per row for all tables used in it.
  
  I'm not in favor of changing this.  pgbench has never pretended to be
  really TPC-B, nor has anyone ever tried to compare its numbers against
  other TPC-B numbers.  On the other hand, people *do* compare pgbench
  numbers to itself over time, and if we make a change like this it will
  break comparability of the results.
 
 Ok, that seems reasonable to me.
 Attached is a patch to mention it in the source code.
 
 Regards,
 ---
 ITAGAKI Takahiro
 NTT Open Source Software Center
 

[ Attachment, skipping... ]

 

-- 
  Bruce Momjian  [EMAIL PROTECTED]  http://momjian.us
  EnterpriseDB http://postgres.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +

-- 
Sent via pgsql-patches mailing list (pgsql-patches@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-patches


Re: [PATCHES] Proposed patch for LISTEN/NOTIFY race condition

2008-03-12 Thread Alvaro Herrera
Tom Lane wrote:

 The patch disallows LISTEN/NOTIFY in a prepared transaction, and there
 is also a side effect on whether a transaction can see entries in
 pg_listener for its own uncommitted LISTEN commands.

I wonder if it would've been easier to make NOTIFY use SnapshotDirty to
examine pg_listener, thus causing it to see the uncommitted rows.  Would
that work at all?

-- 
Alvaro Herrera                 http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.



Re: [PATCHES] [PERFORM] Very slow (2 tuples/second) sequential scan after bulk insert; speed returns to ~500 tuples/second after commit

2008-03-12 Thread Heikki Linnakangas

Tom Lane wrote:

Heikki Linnakangas [EMAIL PROTECTED] writes:
Elsewhere in our codebase where we use arrays that are enlarged as 
needed, we keep track of the allocated size and the used size of the 
array separately, and only call repalloc when the array fills up, and 
repalloc a larger than necessary array when it does. I chose to just 
call repalloc every time instead, as repalloc is smart enough to fall 
out quickly if the chunk the allocation was made in is already larger 
than the new size. There might be some gain avoiding the repeated 
repalloc calls, but I doubt it's worth the code complexity, and calling 
repalloc with a larger than necessary size can actually force it to 
unnecessarily allocate a new, larger chunk instead of reusing the old 
one. Thoughts on that?


Seems like a pretty bad idea to me, as the behavior you're counting on
only applies to chunks up to 8K or thereabouts. 


Oh, you're right. Though I'm sure libc realloc has all kinds of smarts 
as well, it does seem better to not rely too much on that.



In a situation where
you are subcommitting lots of XIDs one at a time, this is likely to have
quite awful behavior (or at least, you're at the mercy of the local
malloc library as to how bad it is).  I'd go with the same
double-it-each-time-needed approach we use elsewhere.


Yep, patch attached. I also changed xactGetCommittedChildren to return 
the original array instead of copying it, as Alvaro suggested.
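[For illustration, the double-when-full growth strategy Tom suggested can be sketched outside the backend like this. The struct and function names here are hypothetical, and plain realloc stands in for repalloc; error handling is omitted for brevity.]

```c
#include <stdlib.h>
#include <assert.h>

typedef unsigned int TransactionId;     /* stand-in for the real typedef */

/* Hypothetical container mirroring the patch's three fields */
typedef struct
{
    TransactionId *childXids;    /* subcommitted child XIDs, in XID order */
    int            nChildXids;   /* entries in use */
    int            maxChildXids; /* allocated size of childXids */
} ChildXidArray;

/*
 * Append one XID, doubling the allocation only when it fills up,
 * rather than reallocating on every insertion.
 */
static void
add_child_xid(ChildXidArray *a, TransactionId xid)
{
    if (a->nChildXids >= a->maxChildXids)
    {
        int newmax = (a->maxChildXids > 0) ? a->maxChildXids * 2 : 8;

        a->childXids = realloc(a->childXids,
                               newmax * sizeof(TransactionId));
        a->maxChildXids = newmax;
    }
    a->childXids[a->nChildXids++] = xid;
}
```

[Amortized over n appends this does O(log n) reallocations instead of n, which is the point of tracking the used and allocated sizes separately.]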


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com
Index: src/backend/access/transam/twophase.c
===
RCS file: /home/hlinnaka/pgcvsrepository/pgsql/src/backend/access/transam/twophase.c,v
retrieving revision 1.39
diff -c -r1.39 twophase.c
*** src/backend/access/transam/twophase.c	1 Jan 2008 19:45:48 -	1.39
--- src/backend/access/transam/twophase.c	12 Mar 2008 12:45:13 -
***
*** 827,833 
  		save_state_data(children, hdr.nsubxacts * sizeof(TransactionId));
  		/* While we have the child-xact data, stuff it in the gxact too */
  		GXactLoadSubxactData(gxact, hdr.nsubxacts, children);
- 		pfree(children);
  	}
  	if (hdr.ncommitrels > 0)
  	{
--- 827,832 
Index: src/backend/access/transam/xact.c
===
RCS file: /home/hlinnaka/pgcvsrepository/pgsql/src/backend/access/transam/xact.c,v
retrieving revision 1.258
diff -c -r1.258 xact.c
*** src/backend/access/transam/xact.c	4 Mar 2008 19:54:06 -	1.258
--- src/backend/access/transam/xact.c	12 Mar 2008 13:41:10 -
***
*** 130,136 
  	int			gucNestLevel;	/* GUC context nesting depth */
  	MemoryContext curTransactionContext;		/* my xact-lifetime context */
  	ResourceOwner curTransactionOwner;	/* my query resources */
! 	List	   *childXids;		/* subcommitted child XIDs */
  	Oid			prevUser;		/* previous CurrentUserId setting */
  	bool		prevSecDefCxt;	/* previous SecurityDefinerContext setting */
  	bool		prevXactReadOnly;		/* entry-time xact r/o state */
--- 130,138 
  	int			gucNestLevel;	/* GUC context nesting depth */
  	MemoryContext curTransactionContext;		/* my xact-lifetime context */
  	ResourceOwner curTransactionOwner;	/* my query resources */
! 	TransactionId	*childXids;	/* subcommitted child XIDs, in XID order */
! 	int			nChildXids;		/* # of subcommitted child XIDs */
! 	int			maxChildXids;	/* allocated size of childXids */
  	Oid			prevUser;		/* previous CurrentUserId setting */
  	bool		prevSecDefCxt;	/* previous SecurityDefinerContext setting */
  	bool		prevXactReadOnly;		/* entry-time xact r/o state */
***
*** 156,162 
  	0,			/* GUC context nesting depth */
  	NULL,		/* cur transaction context */
  	NULL,		/* cur transaction resource owner */
! 	NIL,		/* subcommitted child Xids */
  	InvalidOid,	/* previous CurrentUserId setting */
  	false,		/* previous SecurityDefinerContext setting */
  	false,		/* entry-time xact r/o state */
--- 158,166 
  	0,			/* GUC context nesting depth */
  	NULL,		/* cur transaction context */
  	NULL,		/* cur transaction resource owner */
! 	NULL,		/* subcommitted child Xids */
! 	0,			/* # of subcommitted child Xids */
! 	0,			/* allocated size of childXids */
  	InvalidOid,	/* previous CurrentUserId setting */
  	false,		/* previous SecurityDefinerContext setting */
  	false,		/* entry-time xact r/o state */
***
*** 559,565 
  	 */
  	for (s = CurrentTransactionState; s != NULL; s = s->parent)
  	{
! 		ListCell   *cell;
  
  		if (s->state == TRANS_ABORT)
  			continue;
--- 563,569 
  	 */
  	for (s = CurrentTransactionState; s != NULL; s = s->parent)
  	{
! 		int low, high;
  
  		if (s->state == TRANS_ABORT)
  			continue;
***
*** 567,576 
  			continue;			/* it can't have any child XIDs either */
  		if (TransactionIdEquals(xid, s->transactionId))
  			return true;
! 		

Re: [PATCHES] [PERFORM] Very slow (2 tuples/second) sequential scan after bulk insert; speed returns to ~500 tuples/second after commit

2008-03-12 Thread Pavan Deolasee
On Wed, Mar 12, 2008 at 7:13 PM, Heikki Linnakangas
[EMAIL PROTECTED] wrote:


  Yep, patch attached. I also changed xactGetCommittedChildren to return
  the original array instead of copying it, as Alvaro suggested.


Any comments on the flag-based approach I suggested earlier? Am I
missing some normal scenario where it won't work well?

Thanks,
Pavan

-- 
Pavan Deolasee
EnterpriseDB http://www.enterprisedb.com



Re: [PATCHES] [PERFORM] Very slow (2 tuples/second) sequential scan after bulk insert; speed returns to ~500 tuples/second after commit

2008-03-12 Thread Tom Lane
Pavan Deolasee [EMAIL PROTECTED] writes:
 On Wed, Mar 12, 2008 at 7:13 PM, Heikki Linnakangas
 [EMAIL PROTECTED] wrote:
 Yep, patch attached. I also changed xactGetCommittedChildren to return
 the original array instead of copying it, as Alvaro suggested.

 Any comments on the flag-based approach I suggested earlier?

I didn't like it; it seemed overly complicated (consider dealing with
XID wraparound), and it would have problems with a slow transaction
generating a sparse set of subtransaction XIDs.  I think getting rid of
the linear search will be enough to fix the performance problem.
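[For reference, the binary search that replaces the linear scan can be sketched as below. The function name is hypothetical, and plain integer comparison is assumed, which is a simplification: it holds for the sketch because child XIDs are assigned in increasing order within one top-level transaction.]

```c
#include <assert.h>

typedef unsigned int TransactionId;

/*
 * Membership test over a sorted child-XID array: O(log n) per probe
 * instead of the old O(n) linear scan.
 */
static int
child_xid_is_member(TransactionId xid, const TransactionId *xids, int nxids)
{
    int low = 0;
    int high = nxids - 1;

    while (low <= high)
    {
        int mid = low + (high - low) / 2;   /* overflow-safe midpoint */

        if (xids[mid] == xid)
            return 1;
        else if (xids[mid] < xid)
            low = mid + 1;
        else
            high = mid - 1;
    }
    return 0;
}
```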

regards, tom lane



Re: [PATCHES] [PERFORM] Very slow (2 tuples/second) sequential scan after bulk insert; speed returns to ~500 tuples/second after commit

2008-03-12 Thread Pavan Deolasee
On Wed, Mar 12, 2008 at 9:27 PM, Tom Lane [EMAIL PROTECTED] wrote:


  I didn't like it; it seemed overly complicated (consider dealing with
  XID wraparound),

We are talking about subtransactions here. I don't think we support
subtransaction wrap-around, do we?

 and it would have problems with a slow transaction
  generating a sparse set of subtransaction XIDs.

I agree that's the worst case. But is that common? That's what I
was thinking when I proposed the alternate solution. I thought it can
happen only if most of the subtransactions abort, which again I thought
is not a normal case. But frankly I am not sure if my assumption is correct.

 I think getting rid of
  the linear search will be enough to fix the performance problem.


I wonder if a skewed binary search would help more? For example,
if we know that the range of xids stored in the array is 1 to 1000 and
if we are searching for a number closer to 1000, we can break the
array into large/small parts instead of equal parts and then
search.
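[An illustrative sketch of such a skewed probe, along the lines of classic interpolation search; the name is hypothetical and this is not from any patch. On uniformly distributed keys it homes in on values near either end of the range in fewer steps than a midpoint split.]

```c
#include <assert.h>

typedef unsigned int TransactionId;

/*
 * Instead of always splitting at the midpoint, estimate the key's
 * position from the value range, then narrow as in binary search.
 * Returns the index of the key, or -1 if not present.
 */
static int
skewed_search(const TransactionId *a, int n, TransactionId key)
{
    int low = 0;
    int high = n - 1;

    while (low <= high && key >= a[low] && key <= a[high])
    {
        int pos;

        if (a[high] == a[low])
            pos = low;
        else
            pos = low + (int) (((double) (key - a[low]) * (high - low))
                               / (a[high] - a[low]));

        if (a[pos] == key)
            return pos;
        else if (a[pos] < key)
            low = pos + 1;
        else
            high = pos - 1;
    }
    return -1;                  /* not found */
}
```

[The worst case degrades to O(n) on badly skewed data, which is one reason the plain binary search is the safer default.]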

Well, maybe I'm making simple things complicated ;-)

Thanks,
Pavan

-- 
Pavan Deolasee
EnterpriseDB http://www.enterprisedb.com



Re: [PATCHES] [PERFORM] Very slow (2 tuples/second) sequential scan after bulk insert; speed returns to ~500 tuples/second after commit

2008-03-12 Thread Tom Lane
Pavan Deolasee [EMAIL PROTECTED] writes:
 On Wed, Mar 12, 2008 at 9:27 PM, Tom Lane [EMAIL PROTECTED] wrote:
 and it would have problems with a slow transaction
 generating a sparse set of subtransaction XIDs.

 I agree that's the worst case. But is that common? That's what I
 was thinking when I proposed the alternate solution. I thought it can
 happen only if most of the subtransactions abort, which again I thought
 is not a normal case.

No, I was thinking of the case where other sessions are chewing up XIDs
while the lots-of-subtransactions transaction runs.  If it's slow
enough, there could be very large gaps between the XIDs it acquires for
its subtransactions.  So you'd have a situation where the exact same
transaction processing might or might not run out of memory depending
on what else happened meanwhile.  Not a very pleasant property.

regards, tom lane



Re: [PATCHES] [PERFORM] Very slow (2 tuples/second) sequential scan after bulk insert; speed returns to ~500 tuples/second after commit

2008-03-12 Thread Heikki Linnakangas

Pavan Deolasee wrote:

Wait. Subtransaction ids are local to a transaction and always start from 1.
See this:

/*
 * reinitialize within-transaction counters
 */
s->subTransactionId = TopSubTransactionId;
currentSubTransactionId = TopSubTransactionId;


No, we're not talking about SubTransactionIds. We're talking about real 
XIDs assigned to subtransactions. Subtransactions are assigned regular 
XIDs as well, just like top-level transactions.


I can see where you were coming from with your suggestion now :-).

--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com



Re: [PATCHES] [PERFORM] Very slow (2 tuples/second) sequential scan after bulk insert; speed returns to ~500 tuples/second after commit

2008-03-12 Thread Heikki Linnakangas

Pavan Deolasee wrote:

On Wed, Mar 12, 2008 at 9:27 PM, Tom Lane [EMAIL PROTECTED] wrote:


 I didn't like it; it seemed overly complicated (consider dealing with
 XID wraparound),


We are talking about subtransactions here. I don't think we support
subtransaction wrap-around, do we?


Imagine that you start a transaction just before transaction 
wrap-around, so that the top level XID is 2^31-10. Then you start 20 
subtransactions. What XIDs will they get? Now how would you map those to 
a bitmap?


It's certainly possible; you could index the bitmap by the offset from the 
top-level transaction XID, for example. But it does get a bit complicated.
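[A minimal sketch of an offset-indexed bitmap of that kind, with entirely hypothetical names. The fixed cap shows exactly where the sparse-XID objection bites: a slow transaction whose subtransaction XIDs land far from the top-level XID simply doesn't fit.]

```c
#include <assert.h>
#include <string.h>

typedef unsigned int TransactionId;

#define MAX_OFFSET 4096         /* arbitrary cap, for the sketch only */

/*
 * Bitmap indexed by (xid - topXid).  Unsigned subtraction makes the
 * offset come out right even when the 32-bit XID counter wraps.
 */
typedef struct
{
    TransactionId topXid;
    unsigned char bits[MAX_OFFSET / 8];
} SubxactBitmap;

static void
bitmap_set(SubxactBitmap *b, TransactionId xid)
{
    unsigned int off = xid - b->topXid;

    assert(off < MAX_OFFSET);   /* sparse XIDs blow past any fixed cap */
    b->bits[off / 8] |= (unsigned char) (1u << (off % 8));
}

static int
bitmap_test(const SubxactBitmap *b, TransactionId xid)
{
    unsigned int off = xid - b->topXid;

    if (off >= MAX_OFFSET)
        return 0;
    return (b->bits[off / 8] >> (off % 8)) & 1;
}
```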



and it would have problems with a slow transaction
 generating a sparse set of subtransaction XIDs.


I agree that's the worst case. But is that common? That's what I
was thinking when I proposed the alternate solution. I thought it can
happen only if most of the subtransactions abort, which again I thought
is not a normal case. But frankly I am not sure if my assumption is correct.


It's not that common to have hundreds of thousands of subtransactions to 
begin with...



I think getting rid of
 the linear search will be enough to fix the performance problem.


I wonder if a skewed binary search would help more? For example,
if we know that the range of xids stored in the array is 1 to 1000 and
if we are searching for a number closer to 1000, we can break the
array into large/small parts instead of equal parts and then
search.


Possibly, but I doubt it's worth the trouble. The simple binary search 
solved the performance problem well enough. In the test case of the OP, 
with 30 subtransactions, with the patch, there was no longer any 
measurable difference whether you ran the SELECT COUNT(*) in the same 
transaction as the INSERTs or after a COMMIT.



Well, maybe I'm making simple things complicated ;-)


I think so :-).

--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com



Re: [PATCHES] [PERFORM] Very slow (2 tuples/second) sequential scan after bulk insert; speed returns to ~500 tuples/second after commit

2008-03-12 Thread Pavan Deolasee
On Wed, Mar 12, 2008 at 10:44 PM, Heikki Linnakangas
[EMAIL PROTECTED] wrote:


  Imagine that you start a transaction just before transaction
  wrap-around, so that the top level XID is 2^31-10. Then you start 20
  subtransactions. What XIDs will they get? Now how would you map those to
  a bitmap?


Wait. Subtransaction ids are local to a transaction and always start from 1.
See this:

/*
 * reinitialize within-transaction counters
 */
s->subTransactionId = TopSubTransactionId;
currentSubTransactionId = TopSubTransactionId;



  It's not that common to have hundreds of thousands of subtransactions to
  begin with...

True. But that's the case we are trying to solve here :-)


Thanks,
Pavan

-- 
Pavan Deolasee
EnterpriseDB http://www.enterprisedb.com



Re: [PATCHES] Proposed patch for LISTEN/NOTIFY race condition

2008-03-12 Thread Tom Lane
Alvaro Herrera [EMAIL PROTECTED] writes:
 Tom Lane wrote:
 The patch disallows LISTEN/NOTIFY in a prepared transaction, and there
 is also a side effect on whether a transaction can see entries in
 pg_listener for its own uncommitted LISTEN commands.

 I wonder if it would've been easier to make NOTIFY use SnapshotDirty to
 examine pg_listener, thus causing it to see the uncommitted rows.  Would
 that work at all?

I don't think so.  NOTIFY has to update the rows, not only see them,
and you can't update someone else's uncommitted row.  In any case we'd
be venturing into uncharted territory, which isn't something I'd want
to backpatch.  The existing patch is just altering the timing of the
same actions we take already, so it seems a lot safer.  IMHO anyway.

regards, tom lane
