Re: [PERFORM] Configuration settings for 32GB RAM server

2006-12-04 Thread Joshua Marsh
On 12/4/06, Mark Lonsdale [EMAIL PROTECTED] wrote: Maintenance_work_mem = 1048576 – Figured I'd allocate 1GB for this. Do you know how often and when you will be creating indexes or clustering? We set ours to 2GB because of the performance gains. We've also thought about testing it at 4GB.
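
A minimal sketch of how that setting is usually applied, assuming an 8.x-era server where a bare integer is read as kilobytes (the table and index names below are hypothetical, not from this thread):

-- raise maintenance_work_mem for one session before a big index build,
-- instead of changing it globally in postgresql.conf
SET maintenance_work_mem = 2097152;   -- 2097152 KB = 2GB
CREATE INDEX accounts_acctno_idx ON accounts (acctno);
RESET maintenance_work_mem;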

Re: [PERFORM] start up cost estimate

2006-11-18 Thread Joshua Marsh
On 11/18/06, rakesh kumar [EMAIL PROTECTED] wrote: Hi, I wanted to know how the start-up cost is computed in PostgreSQL. Can you give me an example to illustrate the estimation of start-up cost? Thank you, raa. It would
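
For reference, EXPLAIN prints both figures as cost=startup..total: the startup cost is the estimated cost incurred before the first row can be returned, the total cost covers all rows. An illustration with invented numbers (not taken from this thread):

EXPLAIN SELECT * FROM orders ORDER BY amount;
 Sort  (cost=809.39..834.39 rows=10000 width=8)
   ->  Seq Scan on orders  (cost=0.00..145.00 rows=10000 width=8)

The sequential scan can emit its first row immediately (startup cost 0.00), while the sort must read and sort every row before returning anything, so its startup cost is nearly equal to its total cost.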

Re: [PERFORM] Postgresql 8.1.4 - performance issues for select on view using max

2006-10-18 Thread Joshua Marsh
On 10/18/06, Ioana Danes [EMAIL PROTECTED] wrote: # explain select max(transid) from public.transaction; QUERY PLAN -- Result (cost=0.04..0.05 rows=1 width=0) InitPlan -> Limit (cost=0.00..0.04 rows=1
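
The plan above shows 8.1 rewriting max() on the base table into an index-backed InitPlan. When the same aggregate goes through a view and is not rewritten that way, the usual workaround from that era is to ask for the top value explicitly; a sketch, assuming an index on transid:

SELECT transid FROM public.transaction ORDER BY transid DESC LIMIT 1;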

Re: [PERFORM] Hints proposal

2006-10-12 Thread Joshua Marsh
On 10/12/06, Jim C. Nasby [EMAIL PROTECTED] wrote: Posting here instead of hackers since this is where the thread got started... The argument has been made that producing a hints system will be as hard as actually fixing the optimizer. There's also been clamoring for an actual proposal, so here's one

Re: [PERFORM] Performance With Joins on Large Tables

2006-09-14 Thread Joshua Marsh
Wow, that correlation value is *way* away from order. If they were really in exact order by dsiacctno then I'd expect to see 1.0 in that column. Can you take another look at the tables and confirm the ordering? Does the correlation change if you do an ANALYZE on the tables? (Some small change is to be
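
The correlation figure being discussed comes from the planner statistics and can be checked directly; a sketch (the column name follows the thread, adjust the filter to the tables involved):

SELECT tablename, attname, correlation
FROM pg_stats
WHERE attname = 'dsiacctno';

A value near 1.0 means the rows are stored almost exactly in ascending order of that column; values near 0 mean the physical order is unrelated to it.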

Re: [PERFORM] Performance With Joins on Large Tables

2006-09-13 Thread Joshua Marsh
On 9/13/06, Jim C. Nasby [EMAIL PROTECTED] wrote: On Tue, Sep 12, 2006 at 04:17:34PM -0600, Joshua Marsh wrote: data=# EXPLAIN SELECT v.phonedirect, v.editdrop, EXTRACT (EPOCH FROM v.custdate), EXTRACT (YEAR FROM s.custdate) || '-' || EXTRACT (MONTH FROM s.custdate) FROM view_505 v INNER JOIN

Re: [PERFORM] Performance With Joins on Large Tables

2006-09-13 Thread Joshua Marsh
On 9/13/06, Jeff Davis [EMAIL PROTECTED] wrote: On Wed, 2006-09-13 at 08:49 -0600, Joshua Marsh wrote: That seems to have done it. Are there any side effects to this change? I read about random_page_cost in the documentation and it seems like this is strictly for planning. All the tables
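
Assuming the value under discussion is random_page_cost, as the rest of the thread suggests, it can be tried per session before being committed to postgresql.conf; a sketch (the probe query is hypothetical, 4.0 was the default of that era):

SET random_page_cost = 0.1;   -- session-local; postgresql.conf is untouched
EXPLAIN SELECT * FROM view_505 WHERE dsiacctno = '12345';   -- re-check the plan
RESET random_page_cost;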

Re: [PERFORM] Query Progress (was: Performance With Joins on Large Tables)

2006-09-13 Thread Joshua Marsh
On 9/13/06, Bucky Jordan [EMAIL PROTECTED] wrote: Setting to 0.1 finally gave me the result I was looking for. I know that the index scan is faster though. The seq scan never finished (I killed it after 24+ hours) and I'm running the query now with indexes and it's progressing nicely (will

Re: [PERFORM] Performance With Joins on Large Tables

2006-09-13 Thread Joshua Marsh
Hmm... that sounds bad. I'm sure your system will always choose indexes with that value. Is it overestimating the cost of using indexes or underestimating the cost of a seq scan, or both? Maybe explain with the 0.1 setting will help? Regards, Jeff Davis data=# explain SELECT

Re: [PERFORM] Performance With Joins on Large Tables

2006-09-13 Thread Joshua Marsh
On 9/13/06, Terje Elde [EMAIL PROTECTED] wrote: Jeff Davis wrote: Is it overestimating the cost of using indexes or underestimating the cost of a seq scan, or both? Maybe explain with the 0.1 setting will help? If enable_seqscan is off, and cost is still set to 1, it could be that
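
Worth noting for the point above: enable_seqscan = off does not remove the sequential-scan path, it only adds a very large cost penalty to it, so a seqscan can still be chosen when no other path exists. A sketch of the usual diagnostic sequence (the probe query is hypothetical):

SET enable_seqscan = off;
EXPLAIN SELECT * FROM view_505 WHERE dsiacctno = '12345';
SET enable_seqscan = on;

If the plan still shows a Seq Scan with an enormous cost, the planner found no usable index path at all.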

Re: [PERFORM] Performance With Joins on Large Tables

2006-09-13 Thread Joshua Marsh
Are the tables perhaps nearly in order by the dsiacctno fields? If that were the case, and the planner were missing it for some reason, these results would be plausible. BTW, what are you using for work_mem, and how does that compare to your available RAM? regards, tom
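
To answer that kind of question it helps to look at the live setting; a sketch (the 256MB figure is only an example, to be sized against available RAM):

SHOW work_mem;
-- in 8.x a bare integer is kilobytes; raise it for the session running the big join
SET work_mem = 262144;   -- 262144 KB = 256MB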

[PERFORM] Performance With Joins on Large Tables

2006-09-12 Thread Joshua Marsh
I am having problems performing a join on two large tables. It seems to only want to use a sequential scan on the join, but that method seems to be slower than an index scan. I've never actually had it complete the sequential scan because I stop it after 24+ hours. I've run joins against large

Re: [PERFORM] Hardware/OS recommendations for large databases (

2005-11-17 Thread Joshua Marsh
On 11/17/05, William Yu [EMAIL PROTECTED] wrote: No argument there. But it's pointless if you are IO bound. Why would you just accept "we're IO bound, nothing we can do"? I'd do everything in my power to make my app go from IO bound to CPU bound -- whether by optimizing my code or buying more

Re: [PERFORM] Help speeding up delete

2005-11-14 Thread Joshua Marsh
On 11/14/05, Steve Wampler [EMAIL PROTECTED] wrote: However, even that seems to have a much higher cost than I'd expect: lab.devel.configdb=# explain delete from tmp_table2 where id in (select id from tmp_table2 where name='obsid' and value = 'oid080505'); NOTICE: QUERY PLAN: Seq Scan on
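
On releases of that era, IN (subquery) in a DELETE often planned poorly; one common rewrite is the EXISTS form, sketched here against the table from the thread (assuming id, name, and value are its columns):

DELETE FROM tmp_table2
WHERE EXISTS (SELECT 1
              FROM tmp_table2 s
              WHERE s.id = tmp_table2.id
                AND s.name = 'obsid'
                AND s.value = 'oid080505');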

Re: [PERFORM] Large Database Performance suggestions

2004-10-26 Thread Joshua Marsh
Joshua Marsh wrote: Hello everyone, I am currently working on a data project that uses PostgreSQL extensively to store, manage and maintain the data. We haven't had any problems regarding database size until recently. The three major tables we use never get bigger than 10 million records

Re: [PERFORM] Slow query

2004-10-23 Thread Joshua Marsh
Any time you run subqueries, it's going to slow down the update process a lot. Each record that is updated in source_song_title runs two additional queries. When I do large updates like this, I usually run a transaction that will select all the new data into a new table using a join. For example
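
A sketch of that pattern (the second table and the column names are hypothetical, since the thread's schema is only partly shown): build the corrected rows once with a join, then swap the result in for the old table:

BEGIN;
CREATE TABLE source_song_title_new AS
SELECT st.song_title_id, st.song_id, s.new_title AS title
FROM source_song_title st
JOIN staging_titles s ON s.song_id = st.song_id;
-- recreate any indexes on the new table here, then swap the names
DROP TABLE source_song_title;
ALTER TABLE source_song_title_new RENAME TO source_song_title;
COMMIT;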

[PERFORM] Large Database Performance suggestions

2004-10-21 Thread Joshua Marsh
with singular processes. We will have parallel processes running, but it's more important that each individual process be fast than that several run in parallel at once. Any help would be greatly appreciated! Thanks, Joshua Marsh P.S. Off-topic, I have a few invitations to gmail. If anyone would like