Re: [PERFORM] Postgresql in a Virtual Machine

2013-11-26 Thread David Kerr
On Tue, Nov 26, 2013 at 11:18:41AM -0800, Craig James wrote: > On Tue, Nov 26, 2013 at 10:40 AM, Ben Chobot wrote: >> On Nov 26, 2013, at 9:24 AM, Craig James wrote: > So far I'm impressed by what I've read about Amazon's Postgres instances. Maybe the reality will be disappointing, b

Re: [PERFORM] 57 minute SELECT

2013-10-03 Thread David Kerr
On Thu, Oct 03, 2013 at 09:20:52AM -0700, David Kerr wrote: > On Thu, Oct 03, 2013 at 01:47:29AM +0000, Samuel Stearns wrote: >> Thanks, Claudio: >> http://explain.depesz.com/s/WJQx > You're spending a lot of time in the hash join, which can kill a system with low RAM. > Yo

Re: [PERFORM] 57 minute SELECT

2013-10-03 Thread David Kerr
On Thu, Oct 03, 2013 at 01:47:29AM +0000, Samuel Stearns wrote: > Thanks, Claudio: > http://explain.depesz.com/s/WJQx You're spending a lot of time in the hash join, which can kill a system with low RAM. You may, just for fun, want to try the query with enable_hashjoin=false. -- Sent via pg
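The suggestion above can be tried per-session; the table and column names below are hypothetical stand-ins, not from the original query:

```sql
-- Diagnostic only: see whether the planner finds a better join strategy
-- once hash joins are disallowed. Names are made up for illustration.
SET enable_hashjoin = off;
EXPLAIN ANALYZE
SELECT *
FROM big_table b
JOIN dim_table d ON d.id = b.dim_id
WHERE b.created > now() - interval '1 day';
RESET enable_hashjoin;
```

SET only affects the current session, so this is safe to try on a live system without changing behavior for other connections.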

Re: [PERFORM] COPY TO and VACUUM

2013-09-05 Thread David Kerr
Hi Roberto, Yes, you could partition by vendor and then truncate the partition before loading. Truncate reclaims space immediately and is generally much faster than delete. On Thu, Sep 05, 2013 at 06:05:08PM +0200, Roberto Grandi wrote: > Hi Jeff, > the problem is that when continuously uploa
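A minimal sketch of the partition-and-truncate approach (inheritance-style partitioning, as was current in 2013; all names and the file path are hypothetical):

```sql
-- Parent table plus one child per vendor.
CREATE TABLE vendor_data (vendor_id int NOT NULL, payload text);
CREATE TABLE vendor_data_v42 (CHECK (vendor_id = 42)) INHERITS (vendor_data);

-- Reload vendor 42: TRUNCATE returns the space immediately and avoids
-- the dead rows (and later VACUUM work) that a DELETE would leave behind.
BEGIN;
TRUNCATE vendor_data_v42;
COPY vendor_data_v42 FROM '/path/to/vendor42.csv' WITH (FORMAT csv);
COMMIT;
```

TRUNCATE is transactional in PostgreSQL, so wrapping it with the COPY keeps readers from ever seeing an empty partition.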

Re: [PERFORM] Looks like merge join planning time is too big, 55 seconds

2013-08-01 Thread David Kerr
On Thu, Aug 01, 2013 at 07:17:27PM +0400, Sergey Burladyan wrote: > Sergey Burladyan writes: >> # explain select i.item_id, u.user_id from items i left join users u on u.user_id = i.user_id where item_id = 169946840; >> QUERY PLAN

Re: [PERFORM] Seq Scan vs Index on Identical Tables in Two Different Databases

2013-07-17 Thread David Kerr
On Wed, Jul 17, 2013 at 07:50:06PM +0000, Ellen Rothman wrote: > I have the same table definition in two different databases on the same computer. When I explain a simple query in both of them, one database uses a sequence scan and the other uses an index scan. If I try to run the Seq Scan vers

Re: [PERFORM] Process 11812 still waiting for ExclusiveLock on extension of relation

2012-07-18 Thread David Kerr
On Jul 18, 2012, at 5:08 AM, Sergey Konoplev wrote: > Hi, > > On Tue, Jul 17, 2012 at 7:57 PM, David Kerr wrote: >> I suspect that this is related to a sustained heavy load that would stop >> autovacuum from >> getting at this table... Does that sound plausible?

[PERFORM] Process 11812 still waiting for ExclusiveLock on extension of relation

2012-07-17 Thread David Kerr
Howdy, I've got a couple of tables that are taking a little longer than normal to extend, resulting in some slow inserts. They're fairly large tables, ~200GB pg_total_relation_size (90GB for just the table) I suspect that this is related to a sustained heavy load that would stop autovacuum f
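While a load like the one described is running, sessions blocked on relation extension can be spotted in pg_locks; this query is a general sketch, not something from the thread:

```sql
-- Relation-extension locks appear with locktype = 'extend'.
-- Rows with granted = false are the sessions stuck waiting.
SELECT pid, relation::regclass AS rel, mode, granted
FROM pg_locks
WHERE locktype = 'extend';
```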

[PERFORM] slow prepare, lots of semop calls.

2012-07-12 Thread David Kerr
I think my original post here might have gotten caught in a spam trap, so re-trying; I apologize if it ends up being a duplicate. I also forgot to mention that I'm on PG 9.1.1 / RHEL 6.2 x64. I believe this is the reason for the behavior I was seeing in this post as well: http://archives.postgresql

Re: [PERFORM] Massive I/O spikes during checkpoint

2012-07-09 Thread David Kerr
On 7/9/2012 11:14 PM, Maxim Boguk wrote: > On Tue, Jul 10, 2012 at 4:03 PM, David Kerr wrote: >> On Jul 9, 2012, at 10:51 PM, Maxim Boguk wrote: But what appears to be happening is that all of the data is being written out at the end

Re: [PERFORM] Massive I/O spikes during checkpoint

2012-07-09 Thread David Kerr
On Jul 9, 2012, at 10:51 PM, Maxim Boguk wrote: > > > But what appears to be happening is that all of the data is being written out > at the end of the checkpoint. > > This happens at every checkpoint while the system is under load. > > I get the feeling that this isn't the correct behavior

Re: [PERFORM] Massive I/O spikes during checkpoint

2012-07-09 Thread David Kerr
On Jul 9, 2012, at 10:52 PM, Jeff Janes wrote: > On Mon, Jul 9, 2012 at 10:39 PM, David Kerr wrote: >> >> I thought that the idea of checkpoint_completion_target was that we try to >> finish writing >> out the data throughout the entire checkpoint (leaving some room

[PERFORM] Massive I/O spikes during checkpoint

2012-07-09 Thread David Kerr
Howdy! I'm trying to figure out why checkpointing is completely pegging my I/O under moderate to high write load. I'm on PG 9.1.1, RHEL 6.2 x64. checkpoint_completion_target = 0.7, checkpoint_timeout = 10m. Jul 10 00:32:30 perf01 postgres[52619]: [1895-1] user=,db= LOG: checkpoint starting: time
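For reference, the settings in play look like this in postgresql.conf (the comments are illustrative, not a recommendation from the thread):

```
checkpoint_timeout = 10min
checkpoint_completion_target = 0.7   # aim to finish writes ~70% through the interval
# A higher target (e.g. 0.9) spreads the same writes over more of the interval;
# note the OS can still defer them, so Linux vm.dirty_* sysctls also matter here.
```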

[PERFORM] What would effect planning time?

2012-07-05 Thread David Kerr
I spent a good chunk of today trying to chase down why a query on one box ran in 110ms and on another, smaller box it ran in 10ms. There was no other activity on either box. Both boxes are PG 9.1.1, RHEL 6.2 x64. The faster box is a smallish VM; the other box is a big 40-core/256GB box. The plans b

Re: [PERFORM] Drop statistics?

2012-07-04 Thread David Kerr
On Jul 3, 2012, at 10:16 AM, Bruce Momjian wrote: > On Fri, Jun 22, 2012 at 11:04:36AM -0700, David Kerr wrote: >> On Fri, Jun 22, 2012 at 01:27:51PM -0400, Tom Lane wrote: >>> David Kerr writes: >>> I'm trying to work through a root cause on a performance probl

Re: [PERFORM] "global/pgstat.stat" corrupt

2012-06-22 Thread David Kerr
On Fri, Jun 22, 2012 at 03:49:00PM -0400, Tom Lane wrote: > David Kerr writes: >> I just restored a DB from a cold backup (pg_ctl stop -m fast) >> When starting the DB I see: LOG: corrupted statistics file "global/pgstat.stat" > Is that repeatable? It wouldn

[PERFORM] "global/pgstat.stat" corrupt

2012-06-22 Thread David Kerr
Howdy, I just restored a DB from a cold backup (pg_ctl stop -m fast). When starting the DB I see: LOG: corrupted statistics file "global/pgstat.stat". When I look at the filesystem I don't see a global/pgstat.stat file, but I do see a pg_stat_tmp/pgstat.stat. Is that PG rebuilding the corrupt fi

Re: [PERFORM] Drop statistics?

2012-06-22 Thread David Kerr
On Fri, Jun 22, 2012 at 01:27:51PM -0400, Tom Lane wrote: > David Kerr writes: >> I'm trying to work through a root cause on a performance problem. I'd like to be able to "show" that a problem was fixed by analyzing the table. >> What I've done is

[PERFORM] Drop statistics?

2012-06-22 Thread David Kerr
I'm trying to work through a root cause on a performance problem. I'd like to be able to "show" that a problem was fixed by analyzing the table. What I've done is: set default_statistics_target = 1; analyze. That gets rid of most of the rows in pg_stats, but I'm still getting decent performance. I
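The same effect can be had for a single table without touching the global default; the table and column names below are hypothetical:

```sql
-- Shrink the stored statistics for one column, then rebuild them
-- (diagnostic use only; restore the setting afterwards).
ALTER TABLE mytable ALTER COLUMN mycol SET STATISTICS 1;
ANALYZE mytable;

-- Or globally, as described in the post:
SET default_statistics_target = 1;
ANALYZE;
```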

Re: [PERFORM] pg_autovacuum in PG9.x

2012-04-03 Thread David Kerr
On 04/03/2012 06:40 PM, Brett Mc Bride wrote: > Hi Dave, it's part of core now: http://www.postgresql.org/docs/9.1/static/routine-vacuuming.html#AUTOVACUUM Ah, awesome, thanks.
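Since the old contrib pg_autovacuum daemon was folded into core, the knobs live in postgresql.conf; values below are illustrative, not from the thread:

```
autovacuum = on                        # on by default since 8.3
autovacuum_naptime = 1min
autovacuum_vacuum_scale_factor = 0.2
```

Per-table tuning, which replaced pg_autovacuum's catalog table, is done with storage parameters, e.g. `ALTER TABLE big_table SET (autovacuum_vacuum_scale_factor = 0.05);` (table name hypothetical).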

[PERFORM] pg_autovacuum in PG9.x

2012-04-03 Thread David Kerr
Howdy, What is (or is there) a replacement for pg_autovacuum in PG 9.0+? I haven't had much luck looking for it in the docs. Thanks! Dave

Re: [PERFORM] Very long deletion time on a 200 GB database

2012-02-28 Thread David Kerr
On 02/27/2012 12:08 AM, Reuven M. Lerner wrote: Hi, everyone. I wanted to thank you again for your help on the huge delete problem that I was experiencing. After a lot of trial and error, we finally came to the conclusion that deleting this much data in the time frame that they need, on underpow

Re: [PERFORM] Performance issues

2011-03-07 Thread David Kerr
On Mon, Mar 07, 2011 at 10:49:48PM +0100, Andreas For Tollefsen wrote: > The synchronous_commit off increased the TPS, but not the speed of the below query. > Oleg: This is a query I am working on now. It creates an intersection of two geometries. One is a grid of 0.5 x 0.5 decimal degree

Re: [PERFORM] pgbench could not send data to client: Broken pipe

2010-09-09 Thread David Kerr
On Thu, Sep 09, 2010 at 10:38:16AM -0400, Alvaro Herrera wrote: > Excerpts from David Kerr's message of Wed Sep 08 18:29:59 -0400 2010: >> Thanks for the insight. We're currently in performance testing of the app. Currently, the JVM is the bottleneck; once we get past that I'm sure it

Re: [PERFORM] pgbench could not send data to client: Broken pipe

2010-09-08 Thread David Kerr
On Wed, Sep 08, 2010 at 05:27:24PM -0500, Kevin Grittner wrote: > David Kerr wrote: >> My assertion/hope is that the saturation point on this machine should be higher than most. > Here's another way to think about it -- how long do you expect your average database req

Re: [PERFORM] pgbench could not send data to client: Broken pipe

2010-09-08 Thread David Kerr
On Wed, Sep 08, 2010 at 04:51:17PM -0500, Kevin Grittner wrote: > David Kerr wrote: >> Hmm, I'm not following you. I've got 48 cores; that means my sweet-spot active connections would be 96. > Plus your effective spindle count. That can be hard to calculate, but yo

Re: [PERFORM] pgbench could not send data to client: Broken pipe

2010-09-08 Thread David Kerr
On Wed, Sep 08, 2010 at 03:56:24PM -0500, Kevin Grittner wrote: > David Kerr wrote: >> Actually, this is real: that's 2000 connections, connection-pooled out to 20k or so (although I'm pushing for closer to 1000 connections). >> I know that

Re: [PERFORM] pgbench could not send data to client: Broken pipe

2010-09-08 Thread David Kerr
On Wed, Sep 08, 2010 at 04:35:28PM -0400, Tom Lane wrote: > David Kerr writes: >> Should I be running pgbench differently? I tried increasing the # of threads but that didn't increase the number of backends, and I'm trying to simulate 2000 physical backend proc

Re: [PERFORM] pgbench could not send data to client: Broken pipe

2010-09-08 Thread David Kerr
On Wed, Sep 08, 2010 at 03:44:36PM -0400, Tom Lane wrote: > Greg Smith writes: >> Tom Lane wrote: >>> So I think you could get above the FD_SETSIZE limit with a bit of hacking if you were using 9.0's pgbench. No chance with 8.3 though. >> I believe David can do this easily enough by co

Re: [PERFORM] pgbench could not send data to client: Broken pipe

2010-09-08 Thread David Kerr
On Wed, Sep 08, 2010 at 03:27:34PM -0400, Greg Smith wrote: > Tom Lane wrote: >> As of the 9.0 release, it's possible to run pgbench in a "multi thread" mode, and if you forced the subprocess rather than thread model it looks like the select() limit would be per subprocess rather than global.

[PERFORM] pgbench could not send data to client: Broken pipe

2010-09-08 Thread David Kerr
Howdy, I'm running pgbench with a fairly large # of clients and getting this error in my PG log file. Here's the command: ./pgbench -c 1100 testdb -l I get: LOG: could not send data to client: Broken pipe (I had to modify the pgbench.c file to make it go that high; I changed: MAXCLIENTS = 204

Re: [PERFORM] PARSE WAITING

2010-08-23 Thread David Kerr
On Mon, Aug 23, 2010 at 06:23:25PM -0400, Alvaro Herrera wrote: > Excerpts from David Kerr's message of Mon Aug 23 18:15:56 -0400 2010: >> Howdy all, we're doing some performance testing, and when we scaled our app up to about 250 concurrent users we started seeing a bunch of proc

Re: [PERFORM] PARSE WAITING

2010-08-23 Thread David Kerr
Probably waiting on the xlog directory that's filled up... blamo. Move along, nothing to see here =) Dave On Mon, Aug 23, 2010 at 03:15:56PM -0700, David Kerr wrote: > Howdy all, > We're doing some performance testing, and when we scaled our app up to about 250

[PERFORM] PARSE WAITING

2010-08-23 Thread David Kerr
Howdy all, We're doing some performance testing, and when we scaled our app up to about 250 concurrent users we started seeing a bunch of processes sitting in "PARSE WAITING" state. Can anyone give me insight on what this means? What's the parse waiting for? Thanks Dave

Re: [PERFORM] Very high effective_cache_size == worse performance?

2010-04-20 Thread David Kerr
On Tue, Apr 20, 2010 at 04:26:52PM -0400, Greg Smith wrote: > David Kerr wrote: >> The db, xlog and logs are all on separate areas of the SAN, with separate I/O controllers, etc. on the SAN. It's set up well; I wouldn't expect contention there. > Just be

Re: [PERFORM] Very high effective_cache_size == worse performance?

2010-04-20 Thread David Kerr
On Tue, Apr 20, 2010 at 02:15:19PM -0400, Robert Haas wrote: > On Tue, Apr 20, 2010 at 2:03 PM, David Kerr wrote: >> That thought occurred to me while I was testing this. I ran a vacuumdb -z on my database during the load and it didn't impact performance at all. > The window to

Re: [PERFORM] Very high effective_cache_size == worse performance?

2010-04-20 Thread David Kerr
On Tue, Apr 20, 2010 at 01:17:02PM -0500, Kevin Grittner wrote: > David Kerr wrote: >> Incidentally the code is written to work like this: while (read X lines in file) { process those lines; write lines to DB; } > Unless you're selecting from m

Re: [PERFORM] Very high effective_cache_size == worse performance?

2010-04-20 Thread David Kerr
On Tue, Apr 20, 2010 at 12:30:14PM -0600, Scott Marlowe wrote: > On Tue, Apr 20, 2010 at 12:28 PM, David Kerr wrote: >> I'm logging via syslog; I've had trouble with that before. When I moved to syslog-ng on my dev environments, that mostly resolved the problem for me.

Re: [PERFORM] Very high effective_cache_size == worse performance?

2010-04-20 Thread David Kerr
On Tue, Apr 20, 2010 at 12:23:51PM -0600, Scott Marlowe wrote: > On Tue, Apr 20, 2010 at 12:20 PM, David Kerr wrote: >> On Tue, Apr 20, 2010 at 02:12:15PM -0400, Nikolas Everett wrote: >>> On Tue, Apr 20, 2010 at 2:03 PM, David Kerr wrote: >>> You can absolutely use copy

Re: [PERFORM] Very high effective_cache_size == worse performance?

2010-04-20 Thread David Kerr
On Tue, Apr 20, 2010 at 02:12:15PM -0400, Nikolas Everett wrote: > On Tue, Apr 20, 2010 at 2:03 PM, David Kerr wrote: >> That thought occurred to me while I was testing this. I ran a vacuumdb -z on my database during the load and it didn't impact performance at all.

Re: [PERFORM] Very high effective_cache_size == worse performance?

2010-04-20 Thread David Kerr
On Tue, Apr 20, 2010 at 01:44:18PM -0400, Robert Haas wrote: > On Tue, Apr 20, 2010 at 1:39 PM, David Kerr wrote: >> My initial thought was hardware issues, so we got sar, vmstat, etc. all running on the box and they didn't give any indication that we had resource issues.

[PERFORM] Very high effective_cache_size == worse performance?

2010-04-20 Thread David Kerr
Howdy all, I've got a huge server running just Postgres. It's got 48 cores and 256GB of RAM. RedHat 5.4, Postgres 8.3.9, 64-bit OS. No users currently. I've got a J2EE app that loads data into the DB; it's got logic behind it, so it's not a simple bulk load, and I don't think we can use copy. Bas

Re: [PERFORM] Optimizer + bind variables

2009-11-03 Thread David Kerr
On Wed, Nov 04, 2009 at 11:02:22AM +1100, Chris wrote: > David Kerr wrote: >> On Wed, Nov 04, 2009 at 07:43:16AM +0800, Craig Ringer wrote: >>> David Kerr wrote: No. This is explained in the notes here: http://www.postgresql.org/docs/current/static/sql-prepare.html and I

[PERFORM] Optimizer + bind variables

2009-11-03 Thread David Kerr
Does/is it possible for the PG optimizer to come up with different plans when you're using bind variables vs. when you send static values? Like, if my query was select * from users (add a bunch of complex joins) where username = 'dave' vs. select * from users (add a bunch of complex joins) where user
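The question can be reproduced directly with PREPARE, which is roughly what a driver does with bind variables; at the time of this thread, a prepared statement was planned once, without seeing the parameter value:

```sql
-- Literal form: the planner sees 'dave' and can use column statistics.
EXPLAIN SELECT * FROM users WHERE username = 'dave';

-- Bind-variable form: the plan is built for $1 without knowing its value,
-- so a skewed username distribution can yield a different, generic plan.
PREPARE q(text) AS SELECT * FROM users WHERE username = $1;
EXPLAIN EXECUTE q('dave');
```

Comparing the two EXPLAIN outputs shows whether the bind-variable form is losing a statistics-informed choice such as an index scan.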

Re: [PERFORM] Optimizer + bind variables

2009-11-03 Thread David Kerr
On Wed, Nov 04, 2009 at 07:43:16AM +0800, Craig Ringer wrote: > David Kerr wrote: >> Does/is it possible for the PG optimizer to come up with different plans when you're using bind variables vs. when you send static values? > Yes, if the bind variable form causes your DB access dr

Re: [PERFORM] Under the hood of views

2009-08-13 Thread David Kerr
On Thu, Aug 13, 2009 at 05:28:01PM +0100, Richard Huxton wrote: > David Kerr wrote: >> create view test as select a,b,c,d,e,f,g from testtable; >> select a from test; >> (does the engine retrieve b-g?) > Shouldn't - the query just gets rewritten mac

[PERFORM] Under the hood of views

2009-08-13 Thread David Kerr
A developer came by and asked me an interesting question. If he has a view with 20 columns in it, and he selects a specific column from the view in his query, does the engine when accessing the view return all columns? Or is it smart enough to know to just retrieve the one? Example: create view
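The example from the post, spelled out. Since a simple view is inlined by the rewriter, the single-column select plans as if it were written against the table directly:

```sql
CREATE VIEW test AS
SELECT a, b, c, d, e, f, g FROM testtable;

-- Rewritten to roughly "SELECT a FROM testtable"; the other columns are
-- not computed or returned, though the heap pages holding them are still read.
EXPLAIN SELECT a FROM test;
```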

Re: [PERFORM] Looking for installations with a large number of concurrent users

2009-06-10 Thread David Kerr
On Wed, Jun 10, 2009 at 11:40:21AM -0500, Kevin Grittner wrote: > We're on SLES 10 SP 2 and are handling a web site which gets two to three million hits per day, running tens of millions of queries, while functioning as a replication target receiving about one million database transactions to

[PERFORM] Looking for installations with a large number of concurrent users

2009-06-09 Thread David Kerr
Hello all, We're implementing a fairly large J2EE application; I'm estimating around 450,000 concurrent users at high peak, performing reads and writes, and we have a very high performance requirement. I'll be using connection pooling (probably the pooling delivered with Geronimo). I'd like to

Re: [PERFORM] Question on pgbench output

2009-04-05 Thread David Kerr
Tom Lane wrote: Simon Riggs writes: On Fri, 2009-04-03 at 16:34 -0700, David Kerr wrote: 400 concurrent users doesn't mean that they're pulling 1.5 megs / second every second. There's a world of difference between 400 connected and 400 concurrent users. You've been te

Re: [PERFORM] Question on pgbench output

2009-04-04 Thread David Kerr
On Fri, Apr 03, 2009 at 10:35:58PM -0400, Greg Smith wrote: > On Fri, 3 Apr 2009, Tom Lane wrote: > and a bunch of postmaster ones; with "-c" (or by hitting "c" while top is running) you can even see what they're all doing. If the pgbench process is consuming close to 100% of a CPU's time

Re: [PERFORM] Question on pgbench output

2009-04-03 Thread David Kerr
Gah - sorry, setting up pgbouncer for my Plan B. I meant -pgbench-. Dave Kerr On Fri, Apr 03, 2009 at 04:34:58PM -0700, David Kerr wrote: > On Fri, Apr 03, 2009 at 06:52:26PM -0400, Tom Lane wrote: >> Greg Smith writes: >>> pgbench is extremely bad at simulating large numbers of clie

Re: [PERFORM] Question on pgbench output

2009-04-03 Thread David Kerr
On Fri, Apr 03, 2009 at 06:52:26PM -0400, Tom Lane wrote: > Greg Smith writes: >> pgbench is extremely bad at simulating large numbers of clients. The pgbench client operates as a single thread that handles both parsing the input files, sending things to clients, and processing their r

Re: [PERFORM] Question on pgbench output

2009-04-03 Thread David Kerr
On Fri, Apr 03, 2009 at 04:43:29PM -0400, Tom Lane wrote: >> I'm not really sure how to evaluate the tps; I've read in this forum that some folks are getting 2k tps, so this wouldn't appear to be good to me. > Well, you're running a custom transaction definition, so comparing your number to

[PERFORM] Question on pgbench output

2009-04-03 Thread David Kerr
Hello! Sorry for the wall of text here. I'm working on a performance POC and I'm using pgbench and could use some advice. Mostly I want to ensure that my test is valid and that I'm using pgbench properly. The story behind the POC is that my developers want to pull web items from the database (no