On 10/10/2011 01:31 PM, alexandre - aldeia digital wrote:
I dropped checkpoint_timeout to 1min and turned on log_checkpoints:
2011-10-10 14:18:48 BRT LOG: checkpoint complete: wrote 6885 buffers (1.1%); 0 transaction log file(s) added, 0 removed, 1 recycled; write=29.862 s, sync=28.466 s,
On 10/10/2011 12:14 PM, Leonardo Francalanci wrote:
database makes the fsync call, and suddenly the OS wants to flush 2-6GB of data
straight to disk. Without that background trickle, you now have a flood that
only the highest-end disk controller or a backing-store full of SSDs or PCIe
NVRAM
checkpoint_completion_target spreads out the writes to disk. PostgreSQL
doesn't make any attempt yet to spread out the sync calls. On a busy
server, what can happen is that the whole OS write cache fills with dirty
data--none of which is written out to disk because of the high kernel
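A minimal sketch of the knobs involved here, on the PostgreSQL side and the Linux side; the names are real (postgresql.conf GUCs and Linux sysctls), but the values are illustrative assumptions, not recommendations:

```
# postgresql.conf -- spread checkpoint writes over more of the interval
checkpoint_completion_target = 0.9   # default is 0.5 in 8.4
checkpoint_segments = 32             # fewer, larger checkpoints
log_checkpoints = on                 # watch the write= and sync= times

# /etc/sysctl.conf -- shrink the kernel's dirty-page cache so the
# flood hitting the disk at fsync time is smaller (illustrative values)
vm.dirty_background_ratio = 1
vm.dirty_ratio = 5
```

With a smaller dirty cache the OS starts its background trickle earlier, so less data is left to flush when the checkpoint's sync phase arrives.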
I have a slow query, based on the problem of finding the set of rows which
are maximal in some sense. I expect that there is either a more intelligent
way to write it, or that one could make some indexes that would speed it
up. I've tried various indexes, and am not getting anywhere.
I'd be
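For the archives, two common shapes for this kind of "maximal rows" query, sketched with hypothetical table and column names (t, grp, val); an index on (grp, val) helps both forms:

```sql
-- PostgreSQL-specific: one row per group, the one with the largest val
SELECT DISTINCT ON (grp) grp, val
FROM t
ORDER BY grp, val DESC;

-- Portable alternative: keep rows for which no strictly better row
-- exists (note this one keeps ties, unlike DISTINCT ON)
SELECT *
FROM t
WHERE NOT EXISTS (
    SELECT 1 FROM t AS better
    WHERE better.grp = t.grp AND better.val > t.val
);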
On 11/10/2011 00:02, Samuel Gendler wrote:
The original question doesn't actually say that performance has gone down,
only that cpu utilization has gone up. Presumably, with lots more RAM, it is
blocking on I/O a lot less, so it isn't necessarily surprising that CPU
utilization has gone up.
On 10-10-2011 23:19, Claudio Freire wrote:
On Tue, Oct 11, 2011 at 12:02 AM, Samuel Gendler
sgend...@ideasculptor.com wrote:
The original question doesn't actually say that performance has gone down,
only that cpu utilization has gone up. Presumably, with lots more RAM, it is
blocking on
On 11-10-2011 03:42, Greg Smith wrote:
On 10/10/2011 01:31 PM, alexandre - aldeia digital wrote:
I dropped checkpoint_timeout to 1min and turned on log_checkpoints:
2011-10-10 14:18:48 BRT LOG: checkpoint complete: wrote 6885 buffers (1.1%); 0 transaction log file(s) added, 0 removed, 1 recycled;
On Mon, Oct 10, 2011 at 3:26 PM, alexandre - aldeia digital
adald...@gmail.com wrote:
Hi,
Yesterday, a customer increased the server memory from 16GB to 48GB.
A shot in the dark... what is the content of /proc/mtrr?
Luca
--
Sent via pgsql-performance mailing list
On 10/11/2011 04:57 AM, Leonardo Francalanci wrote:
In fact, shouldn't those things be explained in the WAL Configuration section of the manual? They look as important as configuring PostgreSQL itself...
And: that applies to Linux. What about other OSes, such as Solaris and FreeBSD?
There's
Hi all ;
I'm trying to tune a difficult query.
I have 2 tables:
cust_acct (9 million rows)
cust_orders (200,000 rows)
Here's the query:
SELECT
a.account_id, a.customer_id, a.order_id, a.primary_contact_id,
a.status, a.customer_location_id, a.added_date,
o.agent_id,
Hi,
About 3 hours ago, the client contacted Dell and they suggested 2
things:
1) Update the baseboard firmware (the only component that hadn't been
updated yesterday).
2) Replace all the memory chips with new ones, instead of keeping the old
(16 GB) + new (32 GB) mix.
After doing this, until now,
Hello
please, send EXPLAIN ANALYZE output instead.
Regards
Pavel Stehule
2011/10/11 CS DBA cs_...@consistentstate.com:
Hi all ;
I'm trying to tune a difficult query.
I have 2 tables:
cust_acct (9 million rows)
cust_orders (200,000 rows)
Here's the query:
SELECT
a.account_id,
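For anyone following along, the output Pavel is asking for comes from prefixing the query with EXPLAIN ANALYZE; the table names and join condition below are from this thread, but the column list is abbreviated:

```sql
EXPLAIN ANALYZE
SELECT a.account_id, o.agent_id
FROM cust_acct a
JOIN cust_orders o ON a.order_id = o.order_id;
```

Unlike plain EXPLAIN, this actually runs the query and reports real row counts and timings for each plan node.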
On 11 October 2011 19:52, CS DBA cs_...@consistentstate.com wrote:
Hi all ;
I'm trying to tune a difficult query.
I have 2 tables:
cust_acct (9 million rows)
cust_orders (200,000 rows)
Here's the query:
SELECT
a.account_id, a.customer_id, a.order_id, a.primary_contact_id,
On Tue, Oct 11, 2011 at 3:02 PM, alexandre - aldeia digital
adald...@gmail.com wrote:
2) Replace all the memory chips with new ones, instead of keeping the old (16
GB) + new (32 GB) mix.
Of course, mixing disables double/triple/whatuple channel, and makes
your memory subsystem correspondingly slower.
On 10/11/2011 12:02 PM, Pavel Stehule wrote:
Hello
please, send EXPLAIN ANALYZE output instead.
Regards
Pavel Stehule
2011/10/11 CS DBA cs_...@consistentstate.com:
Hi all ;
I'm trying to tune a difficult query.
I have 2 tables:
cust_acct (9 million rows)
cust_orders (200,000 rows)
Here's
Do you know if check_postgresql.pl can run on Windows (with Perl installed)?
Our Postgres installation is running on a Windows 2008 R2 server, but we
can't find any tool like this for Windows :(
Thanks!
-Original Message-
From: Scott Marlowe [mailto:scott.marl...@gmail.com]
Sent
On 10/11/2011 12:03 PM, Szymon Guz wrote:
On 11 October 2011 19:52, CS DBA cs_...@consistentstate.com wrote:
Hi all ;
I'm trying to tune a difficult query.
I have 2 tables:
cust_acct (9 million rows)
cust_orders (200,000 rows)
Hash Join  (cost=154.46..691776.11 rows=10059626 width=100) (actual time=5.191..37551.360 rows=10063432 loops=1)
  Hash Cond: (a.order_id = o.order_id)
  ->  Seq Scan on cust_acct a  (cost=0.00..540727.26 rows=10059626 width=92) (actual time=0.022..18987.095 rows=10063432 loops=1)
  ->
2011/10/11 Radhya sahal rad_cs_2...@yahoo.com:
Hi,
I want to know how I can measure query runtime in PostgreSQL if I use the
command-line psql?
Not the EXPLAIN runtime, but the runtime which appears in pgAdmin,
such as "Total query runtime: 203 ms."
Hello
use
\timing
\? is your
On 11 October 2011 21:08, Radhya sahal rad_cs_2...@yahoo.com wrote:
Hi,
I want to know how I can measure query runtime in PostgreSQL if I use the
command-line psql?
Not the EXPLAIN runtime, but the runtime which appears in
pgAdmin, such as "Total query runtime: 203 ms."
run this in
On 11 October 2011 21:13, Szymon Guz mabew...@gmail.com wrote:
On 11 October 2011 21:08, Radhya sahal rad_cs_2...@yahoo.com wrote:
Hi,
I want to know how I can measure query runtime in PostgreSQL if I use the
command-line psql?
Not the EXPLAIN runtime, but the runtime which appears
Radhya sahal rad_cs_2...@yahoo.com wrote:
I want to know how I can measure query runtime in PostgreSQL if I
use the command-line psql?
\timing on
-Kevin
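For completeness, a sketch of what this looks like in a psql session; the table name is a placeholder and the reported time is illustrative:

```
\timing on
Timing is on.
SELECT count(*) FROM some_table;
 count
-------
   ...
Time: 203.000 ms
```

Once \timing is on, psql appends a Time: line after every statement, which is the client-side equivalent of pgAdmin's "Total query runtime".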
On 11-10-2011 15:05, Claudio Freire wrote:
On Tue, Oct 11, 2011 at 3:02 PM, alexandre - aldeia digital
adald...@gmail.com wrote:
2) Replace all the memory chips with new ones, instead of keeping the old (16
GB) + new (32 GB) mix.
Of course, mixing disables double/triple/whatuple channel, and
On Tue, Oct 11, 2011 at 5:02 PM, alexandre - aldeia digital
adald...@gmail.com wrote:
The initial change (adding more memory) was made by a Dell technician,
and he told us that he used the same specification of memory chips.
But, you know how it works... ;)
Yeah, but different size ==
On 11-10-2011 20:05 Claudio Freire wrote:
On Tue, Oct 11, 2011 at 3:02 PM, alexandre - aldeia digital
adald...@gmail.com wrote:
2) Replace all the memory chips with new ones, instead of keeping the old (16
GB) + new (32 GB) mix.
Of course, mixing disables double/triple/whatuple channel, and makes
On Tue, Oct 11, 2011 at 5:33 PM, Arjen van der Meijden
acmmail...@tweakers.net wrote:
That really depends on the chipset/server. The current intel E56xx-chips
(and previous E55xx) basically just expect groups of 3 modules per
processor, but it doesn't really matter whether that's 3x2+3x4 or 6x4
Excuse the noob question, I couldn't find any reading material on this
topic.
Let's say my_table has two fields, pkey_id and another_id. The primary key
is pkey_id and of course indexed.
Then someone adds a composite index on btree(pkey_id, another_id).
Question 1) Is there any benefit
On Tue, Oct 11, 2011 at 3:16 AM, James Cranch jd...@cam.ac.uk wrote:
This is EXPLAIN ANALYZEd here:
http://explain.depesz.com/s/EiS
Sort Method: external merge Disk: 35712kB
SOFTWARE AND HARDWARE
=====================
I'm running PostgreSQL 8.4.8 on x86_64-pc-linux-gnu, compiled by GCC
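An external merge sort spilling ~35 MB to disk suggests the sort did not fit in work_mem. One way to test that cheaply for a single session (the value here is an illustrative assumption, not a recommendation):

```sql
-- Raise work_mem for this session only, then re-run EXPLAIN ANALYZE;
-- if the plan now shows "Sort Method: quicksort Memory: ...", the
-- sort fit in memory and the disk spill is gone.
SET work_mem = '64MB';
```

Setting it per-session (or per-query) avoids handing every backend on the server a large memory allowance.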
On Tue, Oct 11, 2011 at 5:16 PM, Carlo Stonebanks
stonec.regis...@sympatico.ca wrote:
Question 2) Regardless of the answer to Question 1 - if another_id is not
guaranteed to be unique, whereas pkey_id is, is there any value to changing
the order of declaration (more generally, is there a
Hi James
I'm guessing the problem is that the combination of using a view and the way
the view is defined with an in-line temporary table is too complex for the
planner to introspect into, transform and figure out the equivalent direct
query, and so it's creating that entire temporary table
Claudio is on point; I'll be even more pointed.
If pkey_id truly is a primary key in the database sense of the term, and
thus unique, then IIUC there is no circumstance in which your composite
index would ever even get used ... all it's doing is slowing down writes :-)
If the query is
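A sketch of the advice above, with hypothetical index names; whether the composite index is truly redundant depends on the workload, so treat this as illustrative rather than a blanket rule:

```sql
-- The PRIMARY KEY on pkey_id already creates a unique index, so a
-- composite index whose leading column is the unique pkey_id rarely
-- adds anything; dropping it saves the write overhead of maintaining it.
DROP INDEX my_table_pkey_another_idx;  -- hypothetical index name
```

Before dropping anything in production, checking pg_stat_user_indexes for the index's scan counts is a cheap way to confirm it is unused.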
Thanks Dave, Claudio.
Unfortunately, my specific example had a primary key in it (based on a
real-world case), which somewhat distracted from the general point.
So with PG I will stick to the general SQL rule that if I use compound keys,
I put the most selective columns to the