Re: [PERFORM] Very high effective_cache_size == worse performance?

2010-04-20 Thread Greg Smith
David Kerr wrote: I don't think it's anything in the Db that's causing it. (drop and re-create the db between tests) I actually suspect a hardware issue somewhere. You might find my "Database Hardware Benchmarking" talk, available at http://projects.2ndquadrant.com/talks, useful to help

Re: [PERFORM] performance change from 8.3.1 to later releases

2010-04-20 Thread Tom Lane
Scott Marlowe writes: > On Tue, Apr 20, 2010 at 12:38 PM, Roger Ging wrote: >> I have access to servers running 8.3.1, 8.3.8, 8.4.2 and 8.4.3.  I have >> noticed that on the 8.4.* versions, a lot of our code is either taking much >> longer to complete, or never completing.  I think I have isolate

Re: [PERFORM] significant slow down with various LIMIT

2010-04-20 Thread Kevin Grittner
norn wrote: >> (1) Try it without the ORDER BY clause and the LIMIT. > W/o the 'order by' it works instantly (about 1ms!) > W/o the limit it takes 1.4 seconds >>(2) Temporarily take that top index out of consideration > It works nice! Query takes about 0.6 seconds as expected! > So, as we

Re: [JDBC] SOLVED ... Re: Getting rid of a cursor from JDBC .... Re: [PERFORM] Re: HELP: How to tame the 8.3.x JDBC driver with a big query result set

2010-04-20 Thread Kris Jurka
On Tue, 20 Apr 2010, Dave Crooke wrote: a. Make setFetchSize(1) the default The reason this is not done is that the mechanism used for fetching a piece of the results at a time can change the query plan used if using a PreparedStatement. There are three ways to plan a PreparedStatemen

Re: [JDBC] SOLVED ... Re: Getting rid of a cursor from JDBC .... Re: [PERFORM] Re: HELP: How to tame the 8.3.x JDBC driver with a big query result set

2010-04-20 Thread Dave Crooke
I digest this down to "this is the best that can be achieved on a connection that's single threaded" I think the big difference with Oracle is this: i. in Oracle, a SELECT does not have to be a transaction, in the sense that PG's SELECT does ... but in Oracle, a SELECT can fail mid-stream if you

Re: [PERFORM] Very high effective_cache_size == worse performance?

2010-04-20 Thread David Kerr
On Tue, Apr 20, 2010 at 04:26:52PM -0400, Greg Smith wrote: - David Kerr wrote: - >the db, xlog and logs are all on separate areas of the SAN. - >separate I/O controllers, etc on the SAN. it's setup well, I wouldn't - >expect - >contention there. - > - - Just because you don't expect it doesn't

Re: [PERFORM] Very high effective_cache_size == worse performance?

2010-04-20 Thread Greg Smith
David Kerr wrote: the db, xlog and logs are all on separate areas of the SAN. separate I/O controllers, etc on the SAN. it's setup well, I wouldn't expect contention there. Just because you don't expect it doesn't mean it's not there. Particularly something as complicated as a SAN setup, p

Re: [JDBC] SOLVED ... Re: Getting rid of a cursor from JDBC .... Re: [PERFORM] Re: HELP: How to tame the 8.3.x JDBC driver with a big query result set

2010-04-20 Thread Kevin Grittner
Dave Crooke wrote: > a. the fact that Statement.executeQuery("select * from > huge_table") works out of the box with every one of those > databases, but results in java.lang.OutOfMemory with PG without > special setup. Again, this is to the letter of the standard, it's > just not very user friend

Re: [JDBC] SOLVED ... Re: Getting rid of a cursor from JDBC .... Re: [PERFORM] Re: HELP: How to tame the 8.3.x JDBC driver with a big query result set

2010-04-20 Thread Nikolas Everett
On Tue, Apr 20, 2010 at 3:29 PM, Dave Crooke wrote: > > I wouldn't hold MySQL up to be a particularly good implementation of > anything, other than speed (MyISAM) and usability (the CLI) I find > Oracle's JDBC implementation to be both user friendly and (largely) standards > compliant. > Dave,

Re: [PERFORM] performance change from 8.3.1 to later releases

2010-04-20 Thread Scott Marlowe
On Tue, Apr 20, 2010 at 12:38 PM, Roger Ging wrote: > Hi, > > I have access to servers running 8.3.1, 8.3.8, 8.4.2 and 8.4.3.  I have > noticed that on the 8.4.* versions, a lot of our code is either taking much > longer to complete, or never completing.  I think I have isolated the > problem to q

Re: [JDBC] SOLVED ... Re: Getting rid of a cursor from JDBC .... Re: [PERFORM] Re: HELP: How to tame the 8.3.x JDBC driver with a big query result set

2010-04-20 Thread Dave Crooke
I don't want to get into a big debate about standards, but I will clarify a couple of things inline below. My key point is that the PG JDBC driver resets people's expectations who have used JDBC with other databases, and that is going to reflect negatively on Postgres if Postgres is in the minority

Re: [PERFORM] Very high effective_cache_size == worse performance?

2010-04-20 Thread Scott Marlowe
On Tue, Apr 20, 2010 at 12:28 PM, David Kerr wrote: > On Tue, Apr 20, 2010 at 12:23:51PM -0600, Scott Marlowe wrote: > - So are you logging to the same drive that has pg_xlog and your > - data/base directory on this machine? > - > > the db, xlog and logs are all on separate areas of the SAN. > > s

Re: [PERFORM] Very high effective_cache_size == worse performance?

2010-04-20 Thread Scott Marlowe
On Tue, Apr 20, 2010 at 12:47 PM, David Kerr wrote: > On Tue, Apr 20, 2010 at 02:15:19PM -0400, Robert Haas wrote: > - On Tue, Apr 20, 2010 at 2:03 PM, David Kerr wrote: > - > that thought occured to me while I was testing this. I ran a vacuumdb -z > - > on my database during the load and it didn

[PERFORM] performance change from 8.3.1 to later releases

2010-04-20 Thread Roger Ging
Hi, I have access to servers running 8.3.1, 8.3.8, 8.4.2 and 8.4.3. I have noticed that on the 8.4.* versions, a lot of our code is either taking much longer to complete, or never completing. I think I have isolated the problem to queries using in(), not in() or not exists(). I've put toge

Re: [PERFORM] Very high effective_cache_size == worse performance?

2010-04-20 Thread David Kerr
On Tue, Apr 20, 2010 at 02:15:19PM -0400, Robert Haas wrote: - On Tue, Apr 20, 2010 at 2:03 PM, David Kerr wrote: - > that thought occured to me while I was testing this. I ran a vacuumdb -z - > on my database during the load and it didn't impact performance at all. - - The window to run ANALYZE

Re: [PERFORM] Very high effective_cache_size == worse performance?

2010-04-20 Thread David Kerr
On Tue, Apr 20, 2010 at 01:17:02PM -0500, Kevin Grittner wrote: - David Kerr wrote: - - > Incidentally the code is written to work like this : - > - > while (read X lines in file){ - > Process those lines. - > write lines to DB. - > } - - Unless you're selecting from multiple database tables

Re: [PERFORM] Very high effective_cache_size == worse performance?

2010-04-20 Thread David Kerr
On Tue, Apr 20, 2010 at 12:30:14PM -0600, Scott Marlowe wrote: - On Tue, Apr 20, 2010 at 12:28 PM, David Kerr wrote: - > - > I'm logging via syslog, I've had trouble with that before. when i moved to syslog-ng - > on my dev environments that mostly resolved the problem for me. but these machines

Re: [PERFORM] Very high effective_cache_size == worse performance?

2010-04-20 Thread Scott Marlowe
On Tue, Apr 20, 2010 at 12:28 PM, David Kerr wrote: > > I'm logging via syslog, I've had trouble with that before. when i moved to > syslog-ng > on my dev environments that mostly resolved the problem for me. but these > machines > still have vanilla syslog. Yea, I almost always log directly via

Re: [PERFORM] Very high effective_cache_size == worse performance?

2010-04-20 Thread David Kerr
On Tue, Apr 20, 2010 at 12:23:51PM -0600, Scott Marlowe wrote: - On Tue, Apr 20, 2010 at 12:20 PM, David Kerr wrote: - > On Tue, Apr 20, 2010 at 02:12:15PM -0400, Nikolas Everett wrote: - > - On Tue, Apr 20, 2010 at 2:03 PM, David Kerr wrote: - > - - > - You can absolutely use copy if you like bu

Re: [PERFORM] Very high effective_cache_size == worse performance?

2010-04-20 Thread Scott Marlowe
On Tue, Apr 20, 2010 at 12:20 PM, David Kerr wrote: > On Tue, Apr 20, 2010 at 02:12:15PM -0400, Nikolas Everett wrote: > - On Tue, Apr 20, 2010 at 2:03 PM, David Kerr wrote: > - > - > that thought occured to me while I was testing this. I ran a vacuumdb -z > - > on my database during the load and

Re: [PERFORM] Very high effective_cache_size == worse performance?

2010-04-20 Thread David Kerr
On Tue, Apr 20, 2010 at 02:12:15PM -0400, Nikolas Everett wrote: - On Tue, Apr 20, 2010 at 2:03 PM, David Kerr wrote: - - > that thought occured to me while I was testing this. I ran a vacuumdb -z - > on my database during the load and it didn't impact performance at all. - > - > Incidentally the

Re: [PERFORM] Very high effective_cache_size == worse performance?

2010-04-20 Thread Scott Marlowe
On Tue, Apr 20, 2010 at 12:15 PM, Scott Marlowe wrote: > On Tue, Apr 20, 2010 at 11:39 AM, David Kerr wrote: >> Howdy all, >> >> I've got a huge server running just postgres. It's got 48 cores and 256GB of >> ram. Redhat 5.4, Postgres 8.3.9. >> 64bit OS. No users currently. > > What's your IO su

Re: [PERFORM] Very high effective_cache_size == worse performance?

2010-04-20 Thread Kris Jurka
On Tue, 20 Apr 2010, Nikolas Everett wrote: You can absolutely use copy if you like but you need to use a non-standard jdbc driver:  kato.iki.fi/sw/db/postgresql/jdbc/copy/.  I've used it in the past and it worked. Copy support has been added to the 8.4 driver. Kris Jurka -- Sent via pgsql
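The COPY support Jurka mentions is exposed through org.postgresql.copy.CopyManager in the 8.4 driver. A rough sketch, assuming a direct (non-pooled) connection; the items table and connection details are invented for illustration:

```java
import java.io.StringReader;
import java.sql.Connection;
import java.sql.DriverManager;

import org.postgresql.PGConnection;
import org.postgresql.copy.CopyManager;

public class CopyLoad {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/test", "user", "secret")) {
            // getCopyAPI() needs the underlying PG connection, so this cast
            // fails behind pool wrappers like DBCP unless you unwrap first.
            CopyManager cm = ((PGConnection) conn).getCopyAPI();
            long rows = cm.copyIn(
                    "COPY items (id, name) FROM STDIN WITH CSV",
                    new StringReader("1,apple\n2,banana\n"));
            System.out.println(rows + " rows loaded");
        }
    }
}
```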

Re: [PERFORM] Very high effective_cache_size == worse performance?

2010-04-20 Thread Kevin Grittner
David Kerr wrote: > Incidentally the code is written to work like this : > > while (read X lines in file){ > Process those lines. > write lines to DB. > } Unless you're selecting from multiple database tables in one query, effective_cache_size shouldn't make any difference. There's probably

Re: [PERFORM] Very high effective_cache_size == worse performance?

2010-04-20 Thread Scott Marlowe
On Tue, Apr 20, 2010 at 11:39 AM, David Kerr wrote: > Howdy all, > > I've got a huge server running just postgres. It's got 48 cores and 256GB of > ram. Redhat 5.4, Postgres 8.3.9. > 64bit OS. No users currently. What's your IO subsystem look like? What did vmstat actually say? -- Sent via pg

Re: [PERFORM] Very high effective_cache_size == worse performance?

2010-04-20 Thread Robert Haas
On Tue, Apr 20, 2010 at 2:03 PM, David Kerr wrote: > that thought occured to me while I was testing this. I ran a vacuumdb -z > on my database during the load and it didn't impact performance at all. The window to run ANALYZE usefully is pretty short. If you run it before the load is complete, y
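In code, Haas's advice amounts to issuing ANALYZE only once the load has fully committed, so the sampled statistics reflect the final row counts. A tiny sketch of that step; the helper and table name are illustrative:

```java
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

public class AnalyzeAfterLoad {
    // Call this after the bulk load commits; running it mid-load samples
    // a half-populated table and can leave the planner with stale stats.
    static void analyze(Connection conn, String table) throws SQLException {
        try (Statement st = conn.createStatement()) {
            st.execute("ANALYZE " + table);
        }
    }
}
```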

Re: [PERFORM] Very high effective_cache_size == worse performance?

2010-04-20 Thread Nikolas Everett
On Tue, Apr 20, 2010 at 2:03 PM, David Kerr wrote: > that thought occured to me while I was testing this. I ran a vacuumdb -z > on my database during the load and it didn't impact performance at all. > > Incidentally the code is written to work like this : > > while (read X lines in file){ > Proc

Re: [PERFORM] Very high effective_cache_size == worse performance?

2010-04-20 Thread David Kerr
On Tue, Apr 20, 2010 at 01:44:18PM -0400, Robert Haas wrote: - On Tue, Apr 20, 2010 at 1:39 PM, David Kerr wrote: - > My initial thought was hardware issues so we got sar, vmstat, etc all running on the box and they didn't give - > any indication that we had resource issues. - > - > So I decided

Re: [PERFORM] Very high effective_cache_size == worse performance?

2010-04-20 Thread Robert Haas
On Tue, Apr 20, 2010 at 1:39 PM, David Kerr wrote: > Howdy all, > > I've got a huge server running just postgres. It's got 48 cores and 256GB of > ram. Redhat 5.4, Postgres 8.3.9. > 64bit OS. No users currently. > > I've got a J2EE app that loads data into the DB, it's got logic behind it so > i

Re: [PERFORM] Very high effective_cache_size == worse performance?

2010-04-20 Thread Joshua D. Drake
On Tue, 2010-04-20 at 10:39 -0700, David Kerr wrote: > Howdy all, > > I've got a huge server running just postgres. It's got 48 cores and 256GB of > ram. Redhat 5.4, Postgres 8.3.9. > 64bit OS. No users currently. > > I've got a J2EE app that loads data into the DB, it's got logic behind it so

[PERFORM] Very high effective_cache_size == worse performance?

2010-04-20 Thread David Kerr
Howdy all, I've got a huge server running just postgres. It's got 48 cores and 256GB of ram. Redhat 5.4, Postgres 8.3.9. 64bit OS. No users currently. I've got a J2EE app that loads data into the DB, it's got logic behind it so it's not a simple bulk load, so i don't think we can use copy. Bas
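The loading pattern Kerr describes later in the thread (read X lines, process, write) is essentially manual batching. The DB-free part of it, chunking the input into write-sized batches, can be sketched like this; the batch size and helper name are invented, and the actual write would be a batched INSERT (or COPY, where the per-row logic allows):

```java
import java.util.ArrayList;
import java.util.List;

public class Batcher {
    // Split the input lines into batches of at most batchSize,
    // mirroring the "read X lines, process, write to DB" loop.
    static List<List<String>> toBatches(List<String> lines, int batchSize) {
        List<List<String>> batches = new ArrayList<>();
        for (int i = 0; i < lines.size(); i += batchSize) {
            batches.add(new ArrayList<>(
                    lines.subList(i, Math.min(i + batchSize, lines.size()))));
        }
        return batches;
    }

    public static void main(String[] args) {
        List<List<String>> batches =
                toBatches(List.of("a", "b", "c", "d", "e"), 2);
        System.out.println(batches.size()); // 3 batches: [a, b], [c, d], [e]
        System.out.println(batches.get(2)); // [e]
    }
}
```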

Re: [JDBC] SOLVED ... Re: Getting rid of a cursor from JDBC .... Re: [PERFORM] Re: HELP: How to tame the 8.3.x JDBC driver with a big query result set

2010-04-20 Thread Kevin Grittner
Dave Crooke wrote: > AFAICT from the Java end, ResultSet.close() is supposed to be > final. For that ResultSet. That doesn't mean a ResultSet defines a cursor. Such methods as setCursorName, setFetchSize, and setFetchDirection are associated with a Statement. Think of the ResultSet as the re

Re: [JDBC] SOLVED ... Re: Getting rid of a cursor from JDBC .... Re: [PERFORM] Re: HELP: How to tame the 8.3.x JDBC driver with a big query result set

2010-04-20 Thread Kris Jurka
On Mon, 19 Apr 2010, Dave Crooke wrote: Statement.close() appears to get the job done (in my environment, PG's driver never sees a Connection.close() because of DBCP). I'd consider the fact that ResultSet.close() does not release the implicit cursor to be something of a bug, but it may well h

Re: [JDBC] SOLVED ... Re: Getting rid of a cursor from JDBC .... Re: [PERFORM] Re: HELP: How to tame the 8.3.x JDBC driver with a big query result set

2010-04-20 Thread Dave Crooke
AFAICT from the Java end, ResultSet.close() is supposed to be final. There is no way I know of in JDBC to get a handle back to the cursor on the server side once you have made this call - in fact, its sole purpose is to inform the server in a timely fashion that this cursor is no longer required, s

Re: [PERFORM] Planner not using column limit specified for one column for another column equal to first

2010-04-20 Thread Hannu Krosing
On Fri, 2010-04-16 at 11:02 +0300, Віталій Тимчишин wrote: > Hello. > > > I have a query that performs very poor because there is a limit on > join column that is not applied to other columns: > > > select * from company this_ left outer join company_tag this_1_ on > this_.id=this_1_.company_id

[PERFORM] HELP: How to tame the 8.3.x JDBC driver with a big query result set

2010-04-20 Thread Dave Crooke
Hey folks I am trying to do a full table scan on a large table from Java, using a straightforward "select * from foo". I've run into these problems: 1. By default, the PG JDBC driver attempts to suck the entire result set into RAM, resulting in *java.lang.OutOfMemoryError* ... this is not cool, i

Re: [PERFORM] significant slow down with various LIMIT

2010-04-20 Thread norn
Kevin, thanks for your time! Here the requested tests. > (1) Try it without the ORDER BY clause and the LIMIT. W/o the 'order by' it works instantly (about 1ms!) Limit (cost=0.00..3.59 rows=5 width=4) (actual time=0.127..0.229 rows=5 loops=1) -> Nested Loop (cost=0.00..277863.53 rows=386544

[PERFORM] stats collector suddenly causing lots of IO

2010-04-20 Thread Chris
I have a lot of centos servers which are running postgres. Postgres isn't used that heavily on any of them, but lately, the stats collector process keeps causing tons of IO load. It seems to happen only on servers with centos 5. The versions of postgres that are running are: 8.1.18 8.2.6 8.3.1 8

[JDBC] SOLVED ... Re: Getting rid of a cursor from JDBC .... Re: [PERFORM] Re: HELP: How to tame the 8.3.x JDBC driver with a big query result set

2010-04-20 Thread Kevin Grittner
Dave Crooke wrote: > I'd consider the fact that ResultSet.close() does not release the > implicit cursor to be something of a bug What's your reasoning on that? The definitions of cursors in the spec, if memory serves, allow a cursor to be closed and re-opened; why would this be treated diffe

[PERFORM] Dbt2 with postgres issues on CentOS-5.3

2010-04-20 Thread MUHAMMAD ASIF
Hi, I am using dbt2 on Linux 64 (CentOS release 5.3 (Final)). I have compiled the latest postgresql-8.4.3 code on the machine and run dbt2 against it. I am a little confused about the results. I ran dbt2 with the following configuration, i.e. DBT2 Options: WAREHOUSES=75 DB_CONNECTIONS=20