Re: [HACKERS] some longer, larger pgbench tests with various performance-related patches

2012-02-11 Thread Jeff Janes
On Mon, Feb 6, 2012 at 6:38 AM, Robert Haas wrote: > On Sat, Feb 4, 2012 at 2:13 PM, Jeff Janes wrote: >> We really need to nail that down.  Could you post the scripts (on the >> wiki) you use for running the benchmark and making the graph?  I'd >> like to see how much work it would be for me to ...

Re: [HACKERS] some longer, larger pgbench tests with various performance-related patches

2012-02-07 Thread Greg Smith
On 01/24/2012 03:53 PM, Robert Haas wrote: There are two graphs for each branch. The first is a scatter plot of latency vs. transaction time. I found that graph hard to understand, though; I couldn't really tell what I was looking at. So I made a second set of graphs which graph number of comp...
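The per-second throughput graphs discussed here can be rebuilt from a pgbench -l log. The following is a rough sketch, not the scripts used in the thread; it assumes the 9.x per-transaction log line format (client_id txn_no latency_us file_no epoch_sec epoch_usec) and takes the log file name as its only argument.

    # Sketch only: bucket pgbench -l log lines by the epoch second in which each
    # transaction completed, then print "second count" pairs suitable for plotting.
    # Assumed log format: client_id txn_no latency_us file_no epoch_sec epoch_usec
    import sys
    from collections import Counter

    per_second = Counter()
    with open(sys.argv[1]) as log:
        for line in log:
            fields = line.split()
            per_second[int(fields[4])] += 1   # epoch_sec field (assumed position)

    for sec in sorted(per_second):
        print(sec, per_second[sec])

The resulting two-column output can be fed straight to gnuplot or a similar tool to get the completed-transactions-per-second view described above.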

Re: [HACKERS] some longer, larger pgbench tests with various performance-related patches

2012-02-06 Thread Robert Haas
On Sat, Feb 4, 2012 at 2:13 PM, Jeff Janes wrote: > We really need to nail that down.  Could you post the scripts (on the > wiki) you use for running the benchmark and making the graph?  I'd > like to see how much work it would be for me to change it to detect > checkpoints and do something like c...
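As a hypothetical illustration of the checkpoint detection Jeff asks about (not his or Robert's scripts): with log_checkpoints = on, the server logs "checkpoint starting" and "checkpoint complete" messages, and their timestamps can be extracted and overlaid on the throughput graph. The sketch below assumes a log_line_prefix that puts a timestamp at the start of each line, e.g. '%m '.

    # Hypothetical sketch: extract checkpoint start/complete timestamps from the
    # PostgreSQL server log (requires log_checkpoints = on and a timestamp-first
    # log_line_prefix such as '%m '). Takes the server log file as its argument.
    import re
    import sys

    pattern = re.compile(r"^(\d{4}-\d{2}-\d{2} \S+).*checkpoint (starting|complete)")
    for line in open(sys.argv[1]):
        m = pattern.search(line)
        if m:
            print(m.group(2), m.group(1))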

Re: [HACKERS] some longer, larger pgbench tests with various performance-related patches

2012-02-04 Thread Jeff Janes
On Wed, Feb 1, 2012 at 9:47 AM, Robert Haas wrote: > On Wed, Jan 25, 2012 at 8:49 AM, Robert Haas wrote: >> On Tue, Jan 24, 2012 at 4:28 PM, Simon Riggs wrote: >>> I think we should be working to commit XLogInsert and then Group >>> Commit, then come back to the discussion. >> >> I definitely ag...

Re: [HACKERS] some longer, larger pgbench tests with various performance-related patches

2012-02-03 Thread Kevin Grittner
Robert Haas wrote: > A couple of things stand out at me from these graphs. First, some > of these transactions had really long latency. Second, there are a > remarkable number of seconds all through the test during which no > transactions at all manage to complete, sometimes several seconds > in a ro...
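Those multi-second stalls can be located mechanically from the same per-transaction log. A sketch, under the same assumed log format as above (epoch_sec in the fifth column), that flags runs of whole seconds in which nothing completed:

    # Sketch: report runs of consecutive epoch seconds with zero completed
    # transactions, using the epoch_sec field (assumed to be column 5) of a
    # pgbench -l log given as the first argument.
    import sys

    seconds = sorted({int(line.split()[4]) for line in open(sys.argv[1])})
    for prev, cur in zip(seconds, seconds[1:]):
        gap = cur - prev - 1
        if gap > 0:
            print(f"{gap} second(s) with no completions after epoch {prev}")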

Re: [HACKERS] some longer, larger pgbench tests with various performance-related patches

2012-02-01 Thread Simon Riggs
On Wed, Feb 1, 2012 at 5:47 PM, Robert Haas wrote: > On Wed, Jan 25, 2012 at 8:49 AM, Robert Haas wrote: >> On Tue, Jan 24, 2012 at 4:28 PM, Simon Riggs wrote: >>> I think we should be working to commit XLogInsert and then Group >>> Commit, then come back to the discussion. >> >> I definitely ag...

Re: [HACKERS] some longer, larger pgbench tests with various performance-related patches

2012-02-01 Thread Robert Haas
On Wed, Jan 25, 2012 at 8:49 AM, Robert Haas wrote: > On Tue, Jan 24, 2012 at 4:28 PM, Simon Riggs wrote: >> I think we should be working to commit XLogInsert and then Group >> Commit, then come back to the discussion. > > I definitely agree that those two have way more promise than anything > el...

Re: [HACKERS] some longer, larger pgbench tests with various performance-related patches

2012-01-25 Thread Nathan Boley
> I actually don't know much about the I/O subsystem, but, no, WAL is > not separated from data.  I believe $PGDATA is on a SAN, but I don't > know anything about its characteristics. The computer is here: http://www.supermicro.nl/Aplus/system/2U/2042/AS-2042G-6RF.cfm $PGDATA is on a 5-disk SATA ...

Re: [HACKERS] some longer, larger pgbench tests with various performance-related patches

2012-01-25 Thread Robert Haas
On Wed, Jan 25, 2012 at 12:00 PM, Jeff Janes wrote: > On Tue, Jan 24, 2012 at 12:53 PM, Robert Haas wrote: >> Early yesterday morning, I was able to use Nate Boley's test machine >> to do a single 30-minute pgbench run at scale factor 300 using a variety >> of trees built with various patches, and w...

Re: [HACKERS] some longer, larger pgbench tests with various performance-related patches

2012-01-25 Thread Jeff Janes
On Wed, Jan 25, 2012 at 9:09 AM, Robert Haas wrote: > On Wed, Jan 25, 2012 at 12:00 PM, Jeff Janes wrote: >> On Tue, Jan 24, 2012 at 12:53 PM, Robert Haas wrote: >>> Early yesterday morning, I was able to use Nate Boley's test machine >>> to do a single 30-minute pgbench run at scale factor 300 usi...

Re: [HACKERS] some longer, larger pgbench tests with various performance-related patches

2012-01-25 Thread Jeff Janes
On Tue, Jan 24, 2012 at 12:53 PM, Robert Haas wrote: > Early yesterday morning, I was able to use Nate Boley's test machine > to do a single 30-minute pgbench run at scale factor 300 using a variety > of trees built with various patches, and with the -l option added to > track latency on a per-transa...

Re: [HACKERS] some longer, larger pgbench tests with various performance-related patches

2012-01-25 Thread Robert Haas
On Tue, Jan 24, 2012 at 4:28 PM, Simon Riggs wrote: > I think we should be working to commit XLogInsert and then Group > Commit, then come back to the discussion. I definitely agree that those two have way more promise than anything else on the table. However, now that I understand how badly we'...

Re: [HACKERS] some longer, larger pgbench tests with various performance-related patches

2012-01-24 Thread Pavan Deolasee
On Wed, Jan 25, 2012 at 2:23 AM, Robert Haas wrote: > Early yesterday morning, I was able to use Nate Boley's test machine > to do a single 30-minute pgbench run at scale factor 300 using a variety > of trees built with various patches, and with the -l option added to > track latency on a per-transac...

Re: [HACKERS] some longer, larger pgbench tests with various performance-related patches

2012-01-24 Thread Simon Riggs
On Tue, Jan 24, 2012 at 8:53 PM, Robert Haas wrote: > > do a single 30-minute pgbench run at scale factor 300 using a variety ... Nice. A minor but necessary point: Repeated testing of the Group commit patch when you have synch commit off is clearly pointless, so publishing numbers for that without s...

[HACKERS] some longer, larger pgbench tests with various performance-related patches

2012-01-24 Thread Robert Haas
Early yesterday morning, I was able to use Nate Boley's test machine to do a single 30-minute pgbench run at scale factor 300 using a variety of trees built with various patches, and with the -l option added to track latency on a per-transaction basis. All tests were done using 32 clients and permane...
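For readers who want to approximate this setup, the following is a minimal sketch of driving one such run from Python, not Robert's actual harness. Only the 32 clients, 30-minute duration, scale factor 300, and -l logging come from the post; the database name and thread count are assumptions.

    # Minimal sketch, assuming the database was initialized beforehand with
    # "pgbench -i -s 300 pgbench".
    import subprocess

    subprocess.run(
        ["pgbench",
         "-c", "32",      # 32 client connections, as described in the post
         "-j", "4",       # worker threads (assumed, not stated in the post)
         "-T", "1800",    # 30-minute run
         "-l",            # write a per-transaction latency log
         "pgbench"],      # target database name (assumed)
        check=True,
    )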