Re: [PERFORM] How to get FK to use new index without restarting the database

2010-12-17 Thread Eric Comeau


On 10-12-16 11:27 AM, Tom Lane wrote: 

Eric Comeau <ecom...@signiant.com> writes:
 Is there a way to force the db to re-evaluate its execution plan for a FK
 without bouncing the DB?

   PostgreSQL 8.1.17

You don't need to bounce the whole DB, but you will need to start fresh
sessions.  We didn't add automatic invalidation of those plans until 
8.3.

regards, tom lane



We confirmed that disconnecting and reconnecting resolves the issue.

Thanks to all that helped.

Eric



[PERFORM] How to get FK to use new index without restarting the database

2010-12-16 Thread Eric Comeau


Is there a way to force the db to re-evaluate its execution plan for a FK
without bouncing the DB?


 PostgreSQL 8.1.17

In our latest release our developers have implemented some new foreign 
keys but forgot to create indexes on these keys.


The problem surfaced at one of our client installs where a maintenance 
DELETE query was running for over 24 hrs. We have since then identified 
the missing indexes and have sent the client a script to create them, 
but in our testing we have not been able to get postgres to use the new
index for the FK cascade delete without bouncing the database.


Here is an example of an added fk but missing index

ALTER TABLE scheduled_job_arg ADD CONSTRAINT sjr_scheduled_job_id_fk
  FOREIGN KEY (scheduled_job_id) REFERENCES scheduled_job (id)
 ON UPDATE CASCADE ON DELETE CASCADE;
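For anyone hitting the same problem, generating the missing covering indexes is mechanical. A small illustrative sketch (the helper function is made up for this example; the table and column names come from the DDL above) that turns FK definitions like the one above into CREATE INDEX statements:

```python
import re

def index_ddl_for_fks(ddl: str) -> list[str]:
    """Scan ALTER TABLE ... ADD CONSTRAINT ... FOREIGN KEY statements
    and emit a CREATE INDEX covering each referencing column list."""
    pattern = re.compile(
        r"ALTER\s+TABLE\s+(\w+)\s+ADD\s+CONSTRAINT\s+\w+\s+"
        r"FOREIGN\s+KEY\s*\(([^)]+)\)",
        re.IGNORECASE | re.DOTALL,
    )
    statements = []
    for table, cols in pattern.findall(ddl):
        col_list = ", ".join(c.strip() for c in cols.split(","))
        idx_name = f"{table}_{col_list.replace(', ', '_')}_idx"
        statements.append(f"CREATE INDEX {idx_name} ON {table} ({col_list});")
    return statements

ddl = """
ALTER TABLE scheduled_job_arg ADD CONSTRAINT sjr_scheduled_job_id_fk
  FOREIGN KEY (scheduled_job_id) REFERENCES scheduled_job (id)
 ON UPDATE CASCADE ON DELETE CASCADE;
"""
print(index_ddl_for_fks(ddl)[0])
```

Against the example this emits `CREATE INDEX scheduled_job_arg_scheduled_job_id_idx ON scheduled_job_arg (scheduled_job_id);` - though on 8.1 the FK enforcement triggers only pick the new index up in fresh sessions, as Tom notes elsewhere in this thread.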

Thanks in Advance,
Eric



--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] How to get FK to use new index without restarting the database

2010-12-16 Thread Eric Comeau

On 10-12-16 07:34 AM, Jayadevan M wrote:

Hello,

Is there a way to force the db to re-evaluate its execution plan for a FK
without bouncing the DB?

   PostgreSQL 8.1.17

In our latest release our developers have implemented some new foreign
keys but forgot to create indexes on these keys.

The problem surfaced at one of our client installs where a maintenance
DELETE query was running for over 24 hrs. We have since then identified
the missing indexes and have sent the client a script to create them,
but in our testing we have not been able to get postgres to use the new
index for the FK cascade delete without bouncing the database.

Did you try analyze? Maybe it will help.
http://www.postgresql.org/docs/9.0/static/sql-analyze.html


Yes we did. Thanks for the suggestion.



Regards,
Jayadevan







Re: [PERFORM] How to get FK to use new index without restarting the database

2010-12-16 Thread Eric Comeau

On 10-12-16 11:27 AM, Tom Lane wrote:

Eric Comeau <ecom...@signiant.com> writes:

Is there a way to force the db to re-evaluate its execution plan for a FK
without bouncing the DB?



   PostgreSQL 8.1.17


You don't need to bounce the whole DB, but you will need to start fresh
sessions.  We didn't add automatic invalidation of those plans until 8.3.

regards, tom lane



We confirmed that disconnecting and reconnecting resolves the issue.

Thanks to all that helped.

I replied to Tom and the list yesterday from my e-mail, but I don't see
my reply here, so it must be stuck in the ether somewhere.


Eric



[PERFORM] How to achieve sustained disk performance of 1.25 GB write for 5 mins

2010-11-17 Thread Eric Comeau
This is not directly a PostgreSQL performance question but I'm hoping 
some of the chaps that build high IO PostgreSQL servers on here can help.


We build file transfer acceleration s/w (and use PostgreSQL as our
database) but we need to build a test server that can handle a sustained
write throughput of 1.25 GB/s for 5 mins.


Why this number? Because we want to push a 10 Gbps network link for 5-8
mins; 10 Gbps = 1.25 GB/s of writes, and sustaining that for 5-8 mins
works out to roughly 375-600 GB.
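The sizing arithmetic is plain unit conversion, and it can be sketched exactly:

```python
# Convert the 10 Gbps target into bytes/s and total volume written.
GBPS = 10
GB_PER_SEC = GBPS / 8  # 10 gigabits/s = 1.25 gigabytes/s

for minutes in (5, 8):
    total_gb = GB_PER_SEC * minutes * 60
    print(f"{minutes} min at {GB_PER_SEC} GB/s -> {total_gb:.0f} GB")
```

This prints 375 GB for 5 minutes and 600 GB for 8. Note this is raw line rate; as pointed out downthread, protocol overhead means the achievable figure over a real 10 GigE link will be lower.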


Note this is just a test server therefore it does not need fault 
tolerance.


Thanks in advance,
Eric



Re: [PERFORM] How to achieve sustained disk performance of 1.25 GB write for 5 mins

2010-11-17 Thread Eric Comeau


On 10-11-17 12:28 PM, Merlin Moncure wrote: 

On Wed, Nov 17, 2010 at 9:26 AM, Eric Comeau <ecom...@signiant.com> wrote:
 This is not directly a PostgreSQL performance question but I'm hoping some
 of the chaps that build high IO PostgreSQL servers on here can help.

 We build file transfer acceleration s/w (and use PostgreSQL as our database)
 but we need to build a test server that can handle a sustained write
 throughput of 1.25 GB/s for 5 mins.

 Why this number? Because we want to push a 10 Gbps network link for 5-8
 mins; 10 Gbps = 1.25 GB/s of writes, and sustaining that for 5-8 mins
 works out to roughly 375-600 GB.

 Note this is just a test server therefore it does not need fault
 tolerance.

I really doubt you will see 1.25 GB/sec over a 10 GigE link.  Even if you
do though, you will hit a number of bottlenecks if you want to see
anything close to those numbers.  Even with really fast storage you
will probably become cpu bound, or bottlenecked in the WAL, or some
other place.

*) what kind of data do you expect to be writing out at this speed?


Large Video files ... our s/w is used to displace FTP.


*) how many transactions per second will you expect to have?


Ideally 1 large file, but it may have to be multiple. We find that if we send
multiple files it just causes the disk to thrash more, so we get better
throughput by sending one large file.


*) what is the architecture of the client? how many connections will
be open to postgres writing?


Our s/w can do multiple streams, but I believe we get better performance with 1
stream handling one large file; you could have 4 streams with 4 files in
flight, but the disk thrashes more. Postgres is not writing the file data -
our agent reports back to postgres stats on the transfer rate being achieved,
so postgres transactions are not the issue. The client and server are written
in C and use UDP (with our own error correction) to achieve high network
throughput as opposed to TCP.


*) how many cores are in this box? what kind?


Well, obviously that's part of the equation as well, but it's sort of unbounded
right now - not defined - but our s/w is multi-threaded and can make use of
multiple cores... so I'll say for now at a minimum 4.



merlin




Re: [PERFORM] Help with duration of statement: EXECUTE unnamed [PREPARE: COMMIT]

2010-10-19 Thread Eric Comeau

On 10-10-18 11:02 AM, Tom Lane wrote:

Mladen Gogalamladen.gog...@vmsinfo.com  writes:

Tom Lane wrote:

My guess would be overstressed disk subsystem.  A COMMIT doesn't require
much except fsync'ing the commit WAL record down to disk ...



Doesn't the commit statement also release all the locks held by the
transaction?


Yeah, and there's a nontrivial amount of other cleanup too, but it all
amounts to just changes in in-memory data structures.  I don't see that
taking five seconds, especially not if commits of similar transactions
usually take much less than that.

regards, tom lane



Thanks for the info. The system is a QA system under load. It is running
200 jobs per minute, so yes, the disk is stressed. Our application
bundles PG into its install and installs the app and database all on the
same filesystem. The QA folks probably have lots of logging turned on as
well.


I am not sure what front-end client is doing the prepare/execute on a
commit - I found it strange; I'm glad someone else does as well.


The web app is using jboss with connection pooling, but there is a 
scheduler built in C using libpq as well.


Thanks for the hint on deferred fk, I'll check, but I think if that was 
the case it would be happening much more often - like maybe almost all 
commits for this transaction type.


The OS is RH 5.2 64-bit, and I'm surprised they don't have iostat 
installed on it by default. There is vmstat. The load avg is


06:36:49 up 28 days, 15:20,  5 users,  load average: 19.44, 22.59, 22.50

Okay - I'm now seeing statements other than commits taking longer than
5 secs as well - the stress test has been running for 3 days now...
some commits at 17 and 15 secs, ouch...


2010-10-19 05:44:35 EDT [11760]: [10-1] LOG:  duration: 17137.425 ms  statement: commit
2010-10-19 05:44:36 EDT [10704]: [14-1] LOG:  duration: 14928.903 ms  statement: EXECUTE unnamed  [PREPARE:  COMMIT]
2010-10-19 05:44:36 EDT [12535]: [1-1] LOG:  duration: 13241.032 ms  statement: EXECUTE unnamed  [PREPARE:  update scheduled_job set active_filename=$1, active_state=$2, begin_time=$3, changed_by=$4, changed_on=$5, created_by=$6, created_on=$7, current_run=$8, deferred_time=$9, deleted=$10, end_time=$11, expire_at=$12, frequency_spec=$13, job_class=$14, contract_id=$15, job_name=$16, last_active_status_msg=$17, last_exit_code=$18, package_id=$19, perc_denominator=$20, perc_numerator=$21, retry_at=$22, scheduled_at=$23, scheduled_state=$24, start_at=$25, states_list=$26, timezone=$27, total_runs=$28 where id=$29]
2010-10-19 05:44:41 EDT [11760]: [11-1] LOG:  duration: 6000.118 ms  statement: commit
2010-10-19 05:44:49 EDT [10704]: [15-1] LOG:  duration: 13804.450 ms  statement: EXECUTE unnamed  [PREPARE:  COMMIT]
2010-10-19 05:44:49 EDT [12535]: [2-1] LOG:  duration: 13807.317 ms  statement: EXECUTE unnamed  [PREPARE:  COMMIT]
2010-10-19 05:45:00 EDT [11760]: [12-1] LOG:  duration: 18879.010 ms  statement: commit
2010-10-19 05:45:18 EDT [10704]: [16-1] LOG:  duration: 28177.626 ms  statement: EXECUTE unnamed  [PREPARE:  COMMIT]
2010-10-19 05:45:20 EDT [11760]: [13-1] LOG:  duration: 19740.822 ms  statement: commit
2010-10-19 05:45:20 EDT [13093]: [1-1] LOG:  duration: 20828.412 ms  statement: EXECUTE unnamed  [PREPARE:  COMMIT]



I do not have a vmstat to look at from when the stmts above executed -
wish I did. Here is vmstat 5 now, but at this time everything is
executing under 5 secs...

procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
 r  b   swpd   free   buff   cache  si  so  bi   bo   in    cs us sy id wa st
65  0   3232 340480 166812 2212884   0   0   0 1519 1676 50841 70 30  0  0  0
57  0   3232 273332 166856 2212796   0   0   0 1157 1609 52887 69 31  0  0  0
73  0   3232 320668 166884 2212668   0   0   0  781 1458 53420 70 30  0  0  0
44  0   3232 393240 166900 2213272   0   0  43 1336 1578 53155 70 30  0  0  0
42  0   3232 349176 166928 2213244   0   0   2  656 1449 52006 70 30  0  0  0
35  0   3232 299320 166972 2213436   0   0   3 1312 1582 51126 75 25  0  0  0
68  0   3232 265868 167012 2213420   0   0   0  739 1484 51982 74 26  0  0  0
42  0   3232 234672 167048 2212440   0   0   2  772 1550 50536 74 26  0  0  0
72  0   3232 252232 167080 2213004   0   0   0 1192 1616 48063 77 23  0  0  0
56  0   3232 336852 167112 2213220   0   0   0  699 1433 50655 78 22  0  0  0
38  0   3232 302212 167148 2213380   0   0   0  786 1578 49895 76 24  0  0  0
61  0   3232 381884 167180 2213260   0   0   6  943 1525 46474 77 23  0  0  0
66  0   3232 366568 167216 2213716   0   0   0 1150 1491 39232 82 18  0  0  0
93  0   3232 343792 167232 2213680   0   0   2  946 1504 39030 82 18  0  0  0
66  0   3232 377376 167268 2213260   0   0   0  954 1427 37206 84 16  0  0  0
60  0   3232 319552 167288 2212952   0   0   0  385 1365 34413 83 17  0  0  0
53  0
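The tail of that sample is truncated, but the CPU columns already tell the story: every sample shows us+sy at 100% with zero idle and zero I/O wait, so at this moment the box looks CPU-bound rather than disk-bound. A quick check over the us/sy values transcribed from the output above:

```python
# (us, sy, id) triples taken from the vmstat 5 sample above.
rows = [
    (70, 30, 0), (69, 31, 0), (70, 30, 0), (70, 30, 0),
    (70, 30, 0), (75, 25, 0), (74, 26, 0), (74, 26, 0),
    (77, 23, 0), (78, 22, 0), (76, 24, 0), (77, 23, 0),
    (82, 18, 0), (82, 18, 0), (84, 16, 0), (83, 17, 0),
]
busy = [us + sy for us, sy, _ in rows]
print(min(busy), max(busy))  # prints: 100 100 - every sample fully busy
```

The run queue (r column, 35-93) far exceeding the core count points the same way.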

[PERFORM] Help with duration of statement: EXECUTE unnamed [PREPARE: COMMIT]

2010-10-18 Thread Eric Comeau

We currently have

  log_min_duration_statement = 5000

and are seeing statements like the following logged

2010-10-16 05:55:52 EDT [6334]: [1-1] LOG:  duration: 5572.517 ms  statement: EXECUTE unnamed  [PREPARE:  COMMIT]
2010-10-16 06:06:24 EDT [26856]: [1-1] LOG:  duration: 5617.866 ms  statement: EXECUTE unnamed  [PREPARE:  COMMIT]
2010-10-16 06:06:24 EDT [20740]: [13-1] LOG:  duration: 5210.190 ms  statement: EXECUTE unnamed  [PREPARE:  COMMIT]
2010-10-16 08:24:06 EDT [8743]: [1-1] LOG:  duration: 6487.346 ms  statement: EXECUTE unnamed  [PREPARE:  COMMIT]


Questions:

1) What do these statements mean?
2) How do I dig deeper to determine why they are taking longer than 5 secs?
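For question 2, a first step that needs nothing beyond the existing logs is to rank the offenders straight out of the log file. A minimal sketch (the log-line shape is taken from the samples above; the function name is made up):

```python
import re

# Extract "duration: N ms  statement: ..." pairs emitted by
# log_min_duration_statement, then sort slowest-first.
LOG_RE = re.compile(r"duration: ([\d.]+) ms\s+statement: (.*)")

def slow_statements(lines):
    hits = []
    for line in lines:
        m = LOG_RE.search(line)
        if m:
            hits.append((float(m.group(1)), m.group(2).strip()))
    return sorted(hits, reverse=True)

sample = [
    "2010-10-16 05:55:52 EDT [6334]: [1-1] LOG:  duration: 5572.517 ms "
    "statement: EXECUTE unnamed  [PREPARE:  COMMIT]",
    "2010-10-16 06:06:24 EDT [20740]: [13-1] LOG:  duration: 5210.190 ms "
    "statement: EXECUTE unnamed  [PREPARE:  COMMIT]",
]
print(slow_statements(sample)[0])  # slowest entry first
```

Correlating the slowest timestamps with iostat/vmstat samples then shows whether the spikes line up with disk saturation, which is where the discussion in this thread points.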

Version Info --

select version();
                                version
------------------------------------------------------------------------
 PostgreSQL 8.1.17 on x86_64-unknown-linux-gnu, compiled by GCC gcc
 (GCC) 4.1.1 20070105 (Red Hat 4.1.1-52)




Re: [PERFORM] Very big insert/join performance problem (bacula)

2009-07-27 Thread Eric Comeau
 It really has very little impact. It only affects index scans, and
 even then only if effective_cache_size is less than the size of the
 table.

 Essentially, when this kicks in, it models the effect that if you are
 index scanning a table much larger than the size of your cache, you
 might have to reread some blocks that you previously read in during
 *that same index scan*.

 Ok, thanks for clearing that up for me. Still, I think the doc could be
 improved on this point (sorry to be a bit obsessed with that, but I'm one of
 the French translators, so I like the doc to be perfect :) )

Yes, I agree.  I was confused for quite a long time, too, until I read
the code.  I think many people think this value is much more important
than it really is.

(That having been said, I have no current plans to write such a doc
patch myself.)

...Robert

How about adding a comment to the wiki performance page
http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server


