Re: [PERFORM] Adding more memory = hugh cpu load

2011-10-11 Thread Greg Smith
On 10/10/2011 01:31 PM, alexandre - aldeia digital wrote: I dropped checkpoint_timeout to 1min and turned on log_checkpoints: 2011-10-10 14:18:48 BRT LOG: checkpoint complete: wrote 6885 buffers (1.1%); 0 transaction log file(s) added, 0 removed, 1 recycled; write=29.862 s, sync=28.466 s,
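
For readers following this thread, a minimal sketch of what turning on checkpoint logging looks like; the values below are illustrative, not the poster's actual settings:

    -- postgresql.conf (illustrative values):
    --   log_checkpoints = on
    --   checkpoint_timeout = 1min
    -- Then reload and verify from psql:
    SELECT pg_reload_conf();
    SHOW log_checkpoints;
    SHOW checkpoint_timeout;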

Re: [PERFORM] Adding more memory = hugh cpu load

2011-10-11 Thread Greg Smith
On 10/10/2011 12:14 PM, Leonardo Francalanci wrote: database makes the fsync call, and suddenly the OS wants to flush 2-6GB of data straight to disk. Without that background trickle, you now have a flood that only the highest-end disk controller or a backing-store full of SSDs or PCIe NVRAM

Re: [PERFORM] Adding more memory = hugh cpu load

2011-10-11 Thread Leonardo Francalanci
checkpoint_completion_target spreads out the writes to disk. PostgreSQL doesn't make any attempt yet to spread out the sync calls. On a busy server, what can happen is that the whole OS write cache fills with dirty data--none of which is written out to disk because of the high kernel
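
As context for the setting named here, a hedged example of how checkpoint_completion_target is usually adjusted (numbers are examples only, not a recommendation for this server):

    -- postgresql.conf (example values only):
    --   checkpoint_completion_target = 0.9   -- spread writes over 90% of the interval
    --   checkpoint_segments = 16             -- pre-9.5 setting; sized to the write load
    SHOW checkpoint_completion_target;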

[PERFORM] Rapidly finding maximal rows

2011-10-11 Thread James Cranch
I have a slow query, based on the problem of finding the set of rows which are maximal in some sense. I expect that there is either a more intelligent way to write it, or that one could make some indexes that would speed it up. I've tried various indexes, and am not getting anywhere. I'd be
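
Without seeing the actual schema, one common shape for this kind of "maximal rows" problem is DISTINCT ON; the table and column names below are hypothetical, not James's:

    -- Hypothetical: highest score per competitor, ties broken arbitrarily.
    SELECT DISTINCT ON (competitor_id)
           competitor_id, event_id, score
    FROM results
    ORDER BY competitor_id, score DESC;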

Re: [PERFORM] Adding more memory = hugh cpu load

2011-10-11 Thread Ivan Voras
On 11/10/2011 00:02, Samuel Gendler wrote: The original question doesn't actually say that performance has gone down, only that cpu utilization has gone up. Presumably, with lots more RAM, it is blocking on I/O a lot less, so it isn't necessarily surprising that CPU utilization has gone up.

Re: [PERFORM] Adding more memory = hugh cpu load

2011-10-11 Thread alexandre - aldeia digital
On 10-10-2011 23:19, Claudio Freire wrote: On Tue, Oct 11, 2011 at 12:02 AM, Samuel Gendler sgend...@ideasculptor.com wrote: The original question doesn't actually say that performance has gone down, only that cpu utilization has gone up. Presumably, with lots more RAM, it is blocking on

Re: [PERFORM] Adding more memory = hugh cpu load

2011-10-11 Thread alexandre - aldeia digital
On 11-10-2011 03:42, Greg Smith wrote: On 10/10/2011 01:31 PM, alexandre - aldeia digital wrote: I dropped checkpoint_timeout to 1min and turned on log_checkpoints: 2011-10-10 14:18:48 BRT LOG: checkpoint complete: wrote 6885 buffers (1.1%); 0 transaction log file(s) added, 0 removed, 1 recycled;

Re: [PERFORM] Adding more memory = hugh cpu load

2011-10-11 Thread Luca Tettamanti
On Mon, Oct 10, 2011 at 3:26 PM, alexandre - aldeia digital adald...@gmail.com wrote: Hi, Yesterday, a customer increased the server memory from 16GB to 48GB. A shot in the dark... what is the content of /proc/mtrr? Luca

Re: [PERFORM] Adding more memory = hugh cpu load

2011-10-11 Thread Greg Smith
On 10/11/2011 04:57 AM, Leonardo Francalanci wrote: In fact, shouldn't those things be explained in the WAL Configuration section of the manual? It looks as important as configuring PostgreSQL itself... And: that applies to Linux. What about other OSes, such as Solaris and FreeBSD? There's

[PERFORM] Query tuning help

2011-10-11 Thread CS DBA
Hi all; I'm trying to tune a difficult query. I have 2 tables: cust_acct (9 million rows) and cust_orders (200,000 rows). Here's the query: SELECT a.account_id, a.customer_id, a.order_id, a.primary_contact_id, a.status, a.customer_location_id, a.added_date, o.agent_id,

Re: [PERFORM] Adding more memory = hugh cpu load [solved]

2011-10-11 Thread alexandre - aldeia digital
Hi, About 3 hours ago, the client contacted Dell and they suggested 2 things: 1) Update the baseboard firmware (the only component that hadn't been updated yesterday). 2) Replace all memory chips with new ones, instead of keeping the old (16 GB) + new (32 GB). After doing this, until now,

Re: [PERFORM] Query tuning help

2011-10-11 Thread Pavel Stehule
Hello, please send EXPLAIN ANALYZE output instead. Regards Pavel Stehule 2011/10/11 CS DBA cs_...@consistentstate.com: Hi all; I'm trying to tune a difficult query. I have 2 tables: cust_acct (9 million rows) cust_orders (200,000 rows) Here's the query: SELECT a.account_id,
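
For anyone following along, the requested output is produced by prefixing the query with EXPLAIN ANALYZE; the query below is only a guess at the shape of the original, using the column names quoted in the thread:

    EXPLAIN ANALYZE
    SELECT a.account_id, a.customer_id, a.order_id, o.agent_id
    FROM cust_acct a
    JOIN cust_orders o ON o.order_id = a.order_id;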

Re: [PERFORM] Query tuning help

2011-10-11 Thread Szymon Guz
On 11 October 2011 19:52, CS DBA cs_...@consistentstate.com wrote: Hi all; I'm trying to tune a difficult query. I have 2 tables: cust_acct (9 million rows) cust_orders (200,000 rows) Here's the query: SELECT a.account_id, a.customer_id, a.order_id, a.primary_contact_id,

Re: [PERFORM] Adding more memory = hugh cpu load [solved]

2011-10-11 Thread Claudio Freire
On Tue, Oct 11, 2011 at 3:02 PM, alexandre - aldeia digital adald...@gmail.com wrote: 2) Replace all memory chips with new ones, instead of keeping the old (16 GB) + new (32 GB). Of course, mixing disables double/triple/whatuple channel, and makes your memory subsystem correspondingly slower.

Re: [PERFORM] Query tuning help

2011-10-11 Thread CS DBA
On 10/11/2011 12:02 PM, Pavel Stehule wrote: Hello, please send EXPLAIN ANALYZE output instead. Regards Pavel Stehule 2011/10/11 CS DBA cs_...@consistentstate.com: Hi all; I'm trying to tune a difficult query. I have 2 tables: cust_acct (9 million rows) cust_orders (200,000 rows) Here's

Re: [PERFORM] should i expected performance degradation over time

2011-10-11 Thread Anibal David Acosta
Do you know if check_postgres.pl can run on Windows (with Perl installed)? Our Postgres installation is running on a Windows 2008 R2 server, but I can't find any tool like this for Windows :( Thanks! -Original Message- From: Scott Marlowe [mailto:scott.marl...@gmail.com] Sent

Re: [PERFORM] Query tuning help

2011-10-11 Thread CS DBA
On 10/11/2011 12:03 PM, Szymon Guz wrote: On 11 October 2011 19:52, CS DBA cs_...@consistentstate.com wrote: Hi all; I'm trying to tune a difficult query. I have 2 tables: cust_acct (9 million rows) cust_orders (200,000 rows)

Re: [PERFORM] Query tuning help

2011-10-11 Thread Pavel Stehule
Hash Join  (cost=154.46..691776.11 rows=10059626 width=100) (actual time=5.191..37551.360 rows=10063432 loops=1)
  Hash Cond: (a.order_id = o.order_id)
  ->  Seq Scan on cust_acct a  (cost=0.00..540727.26 rows=10059626 width=92) (actual time=0.022..18987.095 rows=10063432 loops=1)
  ->

Re: [PERFORM] postgresql query runtime

2011-10-11 Thread Pavel Stehule
2011/10/11 Radhya sahal rad_cs_2...@yahoo.com: Hi, I want to know how I can measure query runtime in PostgreSQL if I use command-line psql? Not the EXPLAIN runtime, but the runtime which appears in pgAdmin, such as Total query runtime: 203 ms. Hello, use \timing. \? is your

Re: [PERFORM] postgresql query runtime

2011-10-11 Thread Szymon Guz
On 11 October 2011 21:08, Radhya sahal rad_cs_2...@yahoo.com wrote: Hi, I want to know how I can measure query runtime in PostgreSQL if I use command-line psql? Not the EXPLAIN runtime, but the runtime which appears in pgAdmin, such as Total query runtime: 203 ms. run this in

Re: [PERFORM] postgresql query runtime

2011-10-11 Thread Szymon Guz
On 11 October 2011 21:13, Szymon Guz mabew...@gmail.com wrote: On 11 October 2011 21:08, Radhya sahal rad_cs_2...@yahoo.com wrote: Hi, I want to know how I can measure query runtime in PostgreSQL if I use command-line psql? Not the EXPLAIN runtime, but the runtime which appears

Re: [PERFORM] postgresql query runtime

2011-10-11 Thread Kevin Grittner
Radhya sahal rad_cs_2...@yahoo.com wrote: I want to know how I can measure query runtime in PostgreSQL if I use command-line psql? \timing on -Kevin
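
A quick illustration of the \timing answer in a psql session (the query and the reported time are made up):

    \timing on
    -- Timing is on.
    SELECT count(*) FROM pg_class;
    -- ... result rows ...
    -- Time: 12.345 ms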

Re: [PERFORM] Adding more memory = hugh cpu load [solved]

2011-10-11 Thread alexandre - aldeia digital
On 11-10-2011 15:05, Claudio Freire wrote: On Tue, Oct 11, 2011 at 3:02 PM, alexandre - aldeia digital adald...@gmail.com wrote: 2) Replace all memory chips with new ones, instead of keeping the old (16 GB) + new (32 GB). Of course, mixing disables double/triple/whatuple channel, and

Re: [PERFORM] Adding more memory = hugh cpu load [solved]

2011-10-11 Thread Claudio Freire
On Tue, Oct 11, 2011 at 5:02 PM, alexandre - aldeia digital adald...@gmail.com wrote: The initial change (adding more memory) was made by a Dell technician, and he told us that he used the same specification of memory chips. But, you know how it works... ;) Yeah, but different size ==

Re: [PERFORM] Adding more memory = hugh cpu load [solved]

2011-10-11 Thread Arjen van der Meijden
On 11-10-2011 20:05, Claudio Freire wrote: On Tue, Oct 11, 2011 at 3:02 PM, alexandre - aldeia digital adald...@gmail.com wrote: 2) Replace all memory chips with new ones, instead of keeping the old (16 GB) + new (32 GB). Of course, mixing disables double/triple/whatuple channel, and makes

Re: [PERFORM] Adding more memory = hugh cpu load [solved]

2011-10-11 Thread Claudio Freire
On Tue, Oct 11, 2011 at 5:33 PM, Arjen van der Meijden acmmail...@tweakers.net wrote: That really depends on the chipset/server. The current intel E56xx-chips (and previous E55xx) basically just expect groups of 3 modules per processor, but it doesn't really matter whether that's 3x2+3x4 or 6x4

[PERFORM] Composite keys

2011-10-11 Thread Carlo Stonebanks
Excuse the noob question; I couldn't find any reading material on this topic. Let's say my_table has two fields, pkey_id and another_id. The primary key is pkey_id and it is of course indexed. Then someone adds a composite index on btree(pkey_id, another_id). Question 1) Is there any benefit
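
The setup being asked about, sketched as DDL with the names from the question (purely illustrative):

    CREATE TABLE my_table (
        pkey_id    integer PRIMARY KEY,   -- gets its own unique index automatically
        another_id integer
    );
    -- The composite index in question:
    CREATE INDEX my_table_pkey_another_idx
        ON my_table USING btree (pkey_id, another_id);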

Re: [PERFORM] Rapidly finding maximal rows

2011-10-11 Thread bricklen
On Tue, Oct 11, 2011 at 3:16 AM, James Cranch jd...@cam.ac.uk wrote: This is EXPLAIN ANALYZEd here: http://explain.depesz.com/s/EiS Sort Method: external merge Disk: 35712kB SOFTWARE AND HARDWARE: I'm running PostgreSQL 8.4.8 on x86_64-pc-linux-gnu, compiled by GCC
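
The "Sort Method: external merge Disk: 35712kB" line quoted above is the usual sign that the sort spilled past work_mem; a hedged sketch of testing a larger per-session value (the number is illustrative):

    SET work_mem = '64MB';   -- session-local; choose a value the server can afford
    -- Re-run the query: EXPLAIN ANALYZE should then report
    -- "Sort Method: quicksort" instead of an external merge on disk.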

Re: [PERFORM] Composite keys

2011-10-11 Thread Claudio Freire
On Tue, Oct 11, 2011 at 5:16 PM, Carlo Stonebanks stonec.regis...@sympatico.ca wrote: Question 2) Regardless of the answer to Question 1 - if another_id is not guaranteed to be unique, whereas pkey_id is – is there any value to changing the order of declaration (more generally, is there a

Re: [PERFORM] Rapidly finding maximal rows

2011-10-11 Thread Dave Crooke
Hi James, I'm guessing the problem is that the combination of using a view, and the way the view is defined with an in-line temporary table, is too complex for the planner to introspect into, transform, and figure out the equivalent direct query, and so it's creating that entire temporary table
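
A hedged illustration of the general point, with hypothetical names rather than James's actual view: when a view wraps an in-line aggregate, querying the base tables directly sometimes gives the planner a simpler shape to work with.

    -- Hypothetical view wrapping an in-line aggregate:
    CREATE VIEW best_results AS
        SELECT r.*
        FROM results r
        JOIN (SELECT competitor_id, max(score) AS max_score
              FROM results
              GROUP BY competitor_id) m
          ON m.competitor_id = r.competitor_id
         AND r.score = m.max_score;

    -- The same rows via a correlated subquery against the base table,
    -- which the planner may handle differently:
    SELECT r.*
    FROM results r
    WHERE r.score = (SELECT max(s.score)
                     FROM results s
                     WHERE s.competitor_id = r.competitor_id);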

Re: [PERFORM] Composite keys

2011-10-11 Thread Dave Crooke
Claudio is on point; I'll be even more pointed. If pkey_id truly is a primary key in the database sense of the term, and thus unique, then IIUC there is no circumstance in which your composite index would ever even get used ... all it's doing is slowing down writes :-) If the query is

Re: [PERFORM] Composite keys

2011-10-11 Thread Carlo Stonebanks
Thanks Dave and Claudio. Unfortunately, my specific example had a primary key in it (based on a real-world case), but this kind of distracted from the general point. So with PG I will stick to the general SQL rule that IF I use compound keys, then put the most selective columns to the
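
A small sketch of that rule with a hypothetical table (whether it actually helps depends on the real predicates and data distribution):

    -- Suppose col_a has ~1,000,000 distinct values and col_b only 5.
    -- Lead the compound index with the more selective column:
    CREATE INDEX t_a_b_idx ON t (col_a, col_b);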