Re: [PERFORM] Poor query performance

2009-07-15 Thread Greg Stark
On Thu, Jul 9, 2009 at 10:35 PM, Alexa...@liivid.com wrote: Forgot to add: postg...@ec2-75-101-128-4:~$ psql --version psql (PostgreSQL) 8.3.5 How is the index sl_city_etc defined? -- greg http://mit.edu/~gsstark/resume.pdf
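For reference, one generic way to pull up that index definition (the index name comes from the thread; the query itself is standard and not part of the original message):

    -- In psql:  \d sl_city_etc
    -- Or from SQL, via the pg_indexes view:
    SELECT indexdef
    FROM pg_indexes
    WHERE indexname = 'sl_city_etc';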

Re: [PERFORM] Poor query performance

2009-07-15 Thread Greg Stark
On Wed, Jul 15, 2009 at 8:51 AM, Alexa...@liivid.com wrote: Also posted this to the list.  Thanks for your answer - still struggling. Staying on-list is always preferred. How is the index  sl_city_etc defined?         Index public.sl_city_etc    Column    |            Type

Re: [PERFORM] Repeated Query is much slower in PostgreSQL8.2.4 than DB2 9.1

2009-07-15 Thread Craig Ringer
On Wed, 2009-07-15 at 12:10 +0900, ning wrote: Hi, I am migrating an application to use PostgreSQL 8.2.4 instead of DB2 9.1. CLI was used to connect to DB2, and ODBC is used to connect to PostgreSQL. The query statement is as follows: PostgreSQL took nearly the same time, but DB2 ran 30

Re: [PERFORM] Repeated Query is much slower in PostgreSQL8.2.4 than DB2 9.1

2009-07-15 Thread Craig Ringer
On Wed, 2009-07-15 at 12:10 +0900, ning wrote: First execution: PostgreSQL 0.006277 seconds / DB2 0.009028 seconds Second execution: PostgreSQL 0.005932 seconds / DB2 0.000332 seconds Actually, on second thoughts that looks a lot like DB2 is caching the query results and is just returning the

Re: [PERFORM] Repeated Query is much slower in PostgreSQL8.2.4 than DB2 9.1

2009-07-15 Thread Greg Stark
On Wed, Jul 15, 2009 at 9:27 AM, Craig Ringer cr...@postnewspapers.com.au wrote: On Wed, 2009-07-15 at 12:10 +0900, ning wrote: First execution: PostgreSQL 0.006277 seconds / DB2 0.009028 seconds Second execution: PostgreSQL 0.005932 seconds / DB2 0.000332 seconds Actually, on second thoughts

Re: [PERFORM] Repeated Query is much slower in PostgreSQL8.2.4 than DB2 9.1

2009-07-15 Thread ning
Hi Craig, The log is really long, but I compared the results of explain analyze for the first and later executions; except for 3 time=XXX numbers, they are identical. I agree with you that PostgreSQL is doing a different level of caching; I just wonder if there is any way to speed up PostgreSQL in this
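A minimal sketch of the comparison being described, with a placeholder query (the real statement is not shown in this excerpt); running EXPLAIN ANALYZE twice makes the cold-cache versus warm-cache difference visible in the actual-time figures.

    -- Placeholder table and predicate; only the repetition matters here.
    EXPLAIN ANALYZE SELECT * FROM some_table WHERE id = 42;  -- first (cold) run
    EXPLAIN ANALYZE SELECT * FROM some_table WHERE id = 42;  -- repeated (warm) run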

Re: [PERFORM] Repeated Query is much slower in PostgreSQL8.2.4 than DB2 9.1

2009-07-15 Thread ning
Hi Greg, I am doing a performance test by running a unit test program to compare the time used on PostgreSQL and DB2. As you pointed out, there are cases where PostgreSQL is faster. Actually, in the real world, my application will hardly ever execute the same query statement repeatedly. I am investigating

Re: [PERFORM] Performance comparison between Postgres and Greenplum

2009-07-15 Thread Scott Marlowe
On Tue, Jul 14, 2009 at 11:33 PM, Suvankar Roy suvankar@tcs.com wrote: Hi Scott, This is what I have got - In Greenplum, version PostgreSQL 8.2.13 (Greenplum Database 3.3.0.1 build 4) on i686-pc-linux-gnu, compiled by GCC gcc (GCC) In Postgres, version PostgreSQL 8.3.7, compiled by

Re: [PERFORM] Very big insert/join performance problem (bacula)

2009-07-15 Thread Marc Cousin
This mail contains the asked plans : I've done them with the different configurations, since I had gone to the effort of setting up the whole thing :) Stats were updated between all runs. Each time shown is the first run of the query (that's what we have in production with bacula), and I added the executor
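For context, the per-configuration plans referred to above are typically produced along these lines (session-level settings only; the actual bacula query is not reproduced here):

    -- Try one cost configuration for the current session, then re-plan.
    SET seq_page_cost = 1;
    SET random_page_cost = 4;
    EXPLAIN ANALYZE
    SELECT 1;  -- placeholder for the real insert/join query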

Re: [PERFORM] Performance comparison between Postgres and Greenplum

2009-07-15 Thread Alex Goncharov
,--- You/Suvankar (Mon, 13 Jul 2009 16:53:41 +0530) * | I have some 99,000 records in a table (OBSERVATION_ALL) in a Postgres DB | as well as a Greenplum DB. | | The Primary key is a composite one comprising of 2 columns (so_no, | serial_no). | | The execution of the following query takes
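A sketch of the schema being discussed; only the table name and the two key columns appear in the thread, so the column types here are assumptions.

    -- Assumed types; only the composite key (so_no, serial_no) is stated above.
    CREATE TABLE observation_all (
        so_no     integer NOT NULL,
        serial_no integer NOT NULL,
        -- ... remaining columns ...
        PRIMARY KEY (so_no, serial_no)
    );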

Re: [PERFORM] Performance comparison between Postgres and Greenplum

2009-07-15 Thread Alex Goncharov
,--- You/Suvankar (Wed, 15 Jul 2009 18:32:12 +0530) * | Yes, I have got 2 segments and a master host. So, in a way processing | should be faster in Greenplum. No, it should not: it all depends on your data, SQL statements and setup. In my own experiments, with small amounts of stored data,

Re: [PERFORM] Very big insert/join performance problem (bacula)

2009-07-15 Thread Alvaro Herrera
Marc Cousin wrote: There are other things I am thinking of: maybe it would be better to have sort space on another (and not DRBD'ed) raid set? We have a quite cheap setup right now for the database, and I think maybe this would help scale better. I can get a filesystem in another
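One hedged way to put sort/temp space on a separate volume, assuming PostgreSQL 8.3 or later and a hypothetical mount point:

    -- Hypothetical path; the directory must exist and be owned by the postgres user.
    CREATE TABLESPACE sort_space LOCATION '/mnt/other_raid/pgsql_tmp';
    -- Send temporary files (sorts, hashes) there, per session or in postgresql.conf.
    SET temp_tablespaces = 'sort_space';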

Re: [PERFORM] Very big insert/join performance problem (bacula)

2009-07-15 Thread Marc Cousin
On Wednesday 15 July 2009 15:45:01, Alvaro Herrera wrote: Marc Cousin wrote: There are other things I am thinking of: maybe it would be better to have sort space on another (and not DRBD'ed) raid set? We have a quite cheap setup right now for the database, and I think maybe this

[PERFORM] Re: [BUGS] BUG #4919: CREATE USER command slows down system performance

2009-07-15 Thread Alvaro Herrera
Lauris Ulmanis wrote: Hello again! I did a test on my local test server: I created 500 000 users in a function loop very quickly - within 48 seconds. I ran this script again, reaching up to 1 billion users - the result was the same - 48 seconds. That is very quick. But the problem seems to be with
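A minimal sketch of the kind of loop being described (the function name, user-name prefix and count are assumptions, not taken from the original script):

    CREATE OR REPLACE FUNCTION create_test_users(n integer) RETURNS void AS $$
    DECLARE
        i integer;
    BEGIN
        FOR i IN 1 .. n LOOP
            EXECUTE 'CREATE USER test_user_' || i;  -- dynamic DDL, one user per iteration
        END LOOP;
    END;
    $$ LANGUAGE plpgsql;

    SELECT create_test_users(500000);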

Re: [PERFORM] Poor overall performance unless regular VACUUM FULL

2009-07-15 Thread Scott Marlowe
On Wed, Jul 15, 2009 at 6:51 AM, Wayne Conrad wa...@databill.com wrote: On Tue, 14 Jul 2009, Scott Marlowe wrote: Just wondering, which pgsql version, and also, do you have autovacuum turned on? Dang, I should have said in my initial message. 8.3.6, and autovacuum is turned on and has
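One quick way to confirm what autovacuum is configured to do on such a server (a standard catalog query, not part of the original message):

    -- List all autovacuum-related settings and their current values.
    SELECT name, setting
    FROM pg_settings
    WHERE name LIKE 'autovacuum%'
    ORDER BY name;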

[PERFORM] Re: [BUGS] BUG #4919: CREATE USER command slows down system performance

2009-07-15 Thread Alvaro Herrera
toruvinn wrote: On Wed, 15 Jul 2009 16:02:09 +0200, Alvaro Herrera alvhe...@commandprompt.com wrote: My bet is on the pg_auth flat file. I doubt we have ever tested the behavior of that code with 1 billion users ... I was always wondering, though, why PostgreSQL uses this approach and not

Re: [PERFORM] [BUGS] BUG #4919: CREATE USER command slows down system performance

2009-07-15 Thread Tom Lane
Alvaro Herrera alvhe...@commandprompt.com writes: toruvinn wrote: I was always wondering, though, why PostgreSQL uses this approach and not its catalogs. It does use the catalog for most things. The flatfile is used for the situations where the catalogs are not yet ready to be read. Now

Re: [PERFORM] cluster index on a table

2009-07-15 Thread Ibrahim Harrani
Hi, thanks for your suggestion. Is there any benefit of setting fillfactor to 70 or 80 on this table? On Wed, Jun 24, 2009 at 8:42 PM, Scott Marlowe scott.marl...@gmail.com wrote: As another poster pointed out, you cluster on ONE index and one index only. However, you can cluster on a
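A hedged sketch of both suggestions, with placeholder table and index names; note that CLUSTER is a one-time rewrite and the ordering is not maintained afterwards.

    -- Leave roughly 20% free space per page for future updates (placeholder name).
    ALTER TABLE my_table SET (fillfactor = 80);
    -- Rewrite the table once in index order (8.3+ syntax); must be repeated periodically.
    CLUSTER my_table USING my_index;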

Re: [PERFORM] [BUGS] BUG #4919: CREATE USER command slows down system performance

2009-07-15 Thread Marko Kreen
On 7/15/09, Tom Lane t...@sss.pgh.pa.us wrote: Alvaro Herrera alvhe...@commandprompt.com writes: toruvinn wrote: I was always wondering, though, why PostgreSQL uses this approach and not its catalogs. It does use the catalog for most things. The flatfile is used for the

Re: [PERFORM] [BUGS] BUG #4919: CREATE USER command slows down system performance

2009-07-15 Thread David Wilson
On Wed, Jul 15, 2009 at 11:10 AM, Marko Kreen mark...@gmail.com wrote: From security standpoint, wasting more cycles on bad passwords is good, as it decreases the rate bruteforce password scanning can happen. And I cannot imagine a scenario where performance on invalid logins can be

Re: [PERFORM] [BUGS] BUG #4919: CREATE USER command slows down system performance

2009-07-15 Thread Tom Lane
David Wilson david.t.wil...@gmail.com writes: On Wed, Jul 15, 2009 at 11:10 AM, Marko Kreen mark...@gmail.com wrote: From security standpoint, wasting more cycles on bad passwords is good, as it decreases the rate bruteforce password scanning can happen. And I cannot imagine a scenario where

Re: [PERFORM] [BUGS] BUG #4919: CREATE USER command slows down system performance

2009-07-15 Thread Marko Kreen
On 7/15/09, David Wilson david.t.wil...@gmail.com wrote: On Wed, Jul 15, 2009 at 11:10 AM, Marko Kreen mark...@gmail.com wrote: From security standpoint, wasting more cycles on bad passwords is good, as it decreases the rate bruteforce password scanning can happen. And I cannot imagine

Re: [PERFORM] Performance comparison between Postgres and Greenplum

2009-07-15 Thread Scott Mead
On Wed, Jul 15, 2009 at 9:18 AM, Alex Goncharov alex-goncha...@comcast.net wrote: ,--- You/Suvankar (Wed, 15 Jul 2009 18:32:12 +0530) * | Yes, I have got 2 segments and a master host. So, in a way processing | should be faster in Greenplum. No, it should not: it all depends on your data,

Re: [PERFORM] CREATE USER command slows down when user count per server reaches up to 500 000

2009-07-15 Thread Haszlakiewicz, Eric
-Original Message- From: pgsql-performance-ow...@postgresql.org When the user count in a Postgres database reaches 500 000, the 'CREATE USER' command slows down to 5-10 seconds per user. What could be the reason for this problem, and is there any solution how to

Re: [PERFORM] Repeated Query is much slower in PostgreSQL8.2.4 than DB2 9.1

2009-07-15 Thread Mike Ivanov
ning wrote: The log is really long, Which usually signals a problem with the query. but I compared the result of explain analyze for first and later executions, except for 3 time=XXX numbers, they are identical. They are supposed to be identical unless something is really badly broken.

Re: [PERFORM] Repeated Query is much slower in PostgreSQL8.2.4 than DB2 9.1

2009-07-15 Thread Andres Freund
On Wednesday 15 July 2009 10:27:50 Craig Ringer wrote: On Wed, 2009-07-15 at 12:10 +0900, ning wrote: First execution: PostgreSQL 0.006277 seconds / DB2 0.009028 seconds Second execution: PostgreSQL 0.005932 seconds / DB2 0.000332 seconds Actually, on second thoughts that looks a lot like

Re: [PERFORM] Very big insert/join performance problem (bacula)

2009-07-15 Thread Devin Ben-Hur
Marc Cousin wrote: This mail contains the asked plans : Plan 1 around 1 million records to insert, seq_page_cost 1, random_page_cost 4 - Hash (cost=425486.72..425486.72 rows=16746972 width=92) (actual time=23184.196..23184.196 rows=16732049 loops=1) - Seq Scan on

Re: [PERFORM] Poor overall performance unless regular VACUUM FULL

2009-07-15 Thread Scott Carey
On 7/14/09 9:53 PM, David Wilson david.t.wil...@gmail.com wrote: On Mon, Jul 13, 2009 at 3:31 PM, Wayne Conrad wa...@databill.com wrote: Howdy. Some months back, when advised on one of these lists that it should not be necessary to issue VACUUM FULL/REINDEX DATABASE, we quit this nightly

Re: [PERFORM] cluster index on a table

2009-07-15 Thread Scott Carey
If you have a lot of insert/update/delete activity on a table, fillfactor can help. I don't believe that Postgres will try to maintain the table in the cluster order, however. On 7/15/09 8:04 AM, Ibrahim Harrani ibrahim.harr...@gmail.com wrote: Hi, thanks for your suggestion. Is there any

Re: [PERFORM] Poor overall performance unless regular VACUUM FULL

2009-07-15 Thread Scott Marlowe
On Wed, Jul 15, 2009 at 4:03 PM, Wayne Conrad wa...@databill.com wrote: On Tue, 14 Jul 2009, Scott Marlowe wrote: Are you guys doing anything that could be deemed pathological, like full table updates on big tables over and over? Had an issue last year where a dev left a where clause off an

Re: [PERFORM] Very big insert/join performance problem (bacula)

2009-07-15 Thread Scott Carey
On 7/15/09 4:56 PM, Devin Ben-Hur dben...@whitepages.com wrote: Marc Cousin wrote: This mail contains the asked plans : Plan 1 around 1 million records to insert, seq_page_cost 1, random_page_cost 4 - Hash (cost=425486.72..425486.72 rows=16746972 width=92) (actual

Re: [PERFORM] Repeated Query is much slower in PostgreSQL8.2.4 than DB2 9.1

2009-07-15 Thread ning
Hi Mike, Thank you for your explanation. The explain analyze command used is as follows, several integers are bound to '?'. - SELECT oid_,void,nameId,tag,intval,lowerbound,upperbound,crossfeeddir,feeddir,units,opqval,bigval,strval FROM (SELECT

Re: [PERFORM] Repeated Query is much slower in PostgreSQL8.2.4 than DB2 9.1

2009-07-15 Thread ning
Hi Andres, The log for the test you suggested is as follows in PostgreSQL 8.2.4, but I cannot find a clue to prove or disprove that PostgreSQL is doing plan caching. Best regards, Ning - job=# prepare test_query as SELECT
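For what it's worth, a generic way to compare a freshly planned statement against a prepared one (the statement below is a placeholder; the real test_query body is elided in the excerpt):

    -- Placeholder statement; prepared statements are planned once per session.
    PREPARE test_query (integer) AS
        SELECT relname FROM pg_class WHERE oid = $1;

    EXPLAIN ANALYZE EXECUTE test_query(1259);  -- shows the plan the prepared statement uses
    EXECUTE test_query(1259);                  -- repeated executions reuse that plan

    DEALLOCATE test_query;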

Re: [PERFORM] Performance comparison between Postgres and Greenplum

2009-07-15 Thread Greg Smith
On Mon, 13 Jul 2009, Suvankar Roy wrote: I believe that execution time in greenplum should be less compared to postgres. Well, first off you don't even mention which PostgreSQL or Greenplum version you're comparing, which leaves a lot of variables we can't account for. Second, you'd need

Re: [PERFORM] Performance comparison between Postgres and Greenplum

2009-07-15 Thread Greg Smith
On Wed, 15 Jul 2009, Scott Marlowe wrote: On Tue, Jul 14, 2009 at 11:33 PM, Suvankar Roy suvankar@tcs.com wrote: Hi Scott, This is what I have got - In Greenplum, version PostgreSQL 8.2.13 (Greenplum Database 3.3.0.1 build 4) on i686-pc-linux-gnu, compiled by GCC gcc (GCC) In Postgres,

Re: [PERFORM] cluster index on a table

2009-07-15 Thread Scott Marlowe
I'd love to see it. On Wed, Jul 15, 2009 at 8:17 PM, Justin Pitts justinpi...@gmail.com wrote: Is there any interest in adding that (continual/automatic cluster order maintenance) to a future release? On Wed, Jul 15, 2009 at 8:33 PM, Scott Carey sc...@richrelevance.com wrote: If you have a lot

[PERFORM] Concurrency issue under very heavy loads

2009-07-15 Thread Raji Sridar (raji)
Hi, We use a typical counter within a transaction to generate the order sequence number and update the next sequence number. This is a simple next counter - nothing fancy about it. When multiple clients are concurrently accessing this table and updating it, under extremely heavy loads in the
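A minimal sketch of the pattern being described, with assumed table and column names; under concurrency every transaction serializes on the counter row's lock, which is usually where such setups struggle.

    -- Assumed counter table; names are illustrative only.
    CREATE TABLE order_counter (
        name     text PRIMARY KEY,
        next_val bigint NOT NULL
    );

    BEGIN;
    -- Locks the row, increments it, and returns the new value (RETURNING needs 8.2+).
    UPDATE order_counter
    SET next_val = next_val + 1
    WHERE name = 'order_no'
    RETURNING next_val;
    COMMIT;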

Re: [PERFORM] Very big insert/join performance problem (bacula)

2009-07-15 Thread Marc Cousin
On Thursday 16 July 2009 01:56:37, Devin Ben-Hur wrote: Marc Cousin wrote: This mail contains the asked plans : Plan 1 around 1 million records to insert, seq_page_cost 1, random_page_cost 4 - Hash (cost=425486.72..425486.72 rows=16746972 width=92) (actual

Re: [PERFORM] Concurrency issue under very heavy loads

2009-07-15 Thread ramasubramanian
Hi, Are you using an automatic sequence increment in the table? - Original Message - From: Raji Sridar (raji) To: pgsql-gene...@postgresql.org ; pgsql-performance@postgresql.org Sent: Thursday, July 16, 2009 10:29 AM Subject: [PERFORM] Concurrency issue under very heavy loads Hi,
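For contrast, a sketch of the sequence-based alternative being hinted at (names are placeholders); sequences hand out values without blocking concurrent transactions, at the cost of possible gaps in the numbering.

    -- serial creates and binds a sequence automatically; names are illustrative.
    CREATE TABLE orders (
        order_no  serial PRIMARY KEY,
        placed_at timestamptz NOT NULL DEFAULT now()
    );
    -- Or draw numbers explicitly from a standalone sequence.
    CREATE SEQUENCE order_no_seq;
    SELECT nextval('order_no_seq');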