Re: [PERFORM] Occasional giant spikes in CPU load

2010-04-08 Thread Greg Smith

David Rees wrote:

You need to find out what all those Postgres processes are doing.  You
might try enabling update_process_title and then using ps to see what
each instance is doing.


That's what the addition of -c to top that I suggested does on Linux; it 
shows the updated process titles in place of the command line shown by 
the default config.
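A minimal sketch of that workflow (assumes Linux with a procps ps; postgres may not actually be running on the machine where you try this):

```shell
# postgresql.conf must have (this is the default on most platforms):
#   update_process_title = on
# Each backend then reports its current activity in its process title.
# -C selects processes by command name (procps/Linux only):
ps -o pid,command -C postgres 2>/dev/null || true
# or run 'top -c' to watch the titles refresh live.
```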


--
Greg Smith  2ndQuadrant US  Baltimore, MD
PostgreSQL Training, Services and Support
g...@2ndquadrant.com   www.2ndQuadrant.us


--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] 3ware vs. MegaRAID

2010-04-08 Thread Greg Smith

Scott Carey wrote:

* Change the linux 'readahead' block device parameter to at least 4MB (8192, 
see blockdev --setra) -- I don't know if there is a FreeBSD equivalent.
  
I haven't tested them, but 3ware gives suggestions at 
http://www.3ware.com/kb/Article.aspx?id=14852 for tuning their cards 
properly under FreeBSD.  You cannot get good sequential read performance 
from 3ware's cards without doing something about this at the OS level; 
the read-ahead on the card itself is minimal and certainly a bottleneck.
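For reference, blockdev counts readahead in 512-byte sectors, so the 8192 mentioned in this thread is 4MB. A sketch (blockdev needs root, and /dev/sda is only an example device):

```shell
# 8192 sectors * 512 bytes = 4MB of OS-level readahead
SECTORS=8192
echo "$((SECTORS * 512 / 1048576)) MB"
# To inspect and apply on Linux (root required; device name is an example):
#   blockdev --getra /dev/sda
#   blockdev --setra 8192 /dev/sda
```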


As for your comments about drives being faster at the front than the 
end, the zcav tool that comes with bonnie++ is a good way to plot that 
out, rather than having to split partitions up and do a bunch of manual 
testing.


--
Greg Smith  2ndQuadrant US  Baltimore, MD
PostgreSQL Training, Services and Support
g...@2ndquadrant.com   www.2ndQuadrant.us




[PERFORM] PostgreSQL with Zabbix - problem of newbe

2010-04-08 Thread Krzysztof Kardas
Hi,
I am using the Zabbix monitoring software. The backbone database for
Zabbix is PostgreSQL 8.1 installed on Linux.

The database server has 3GB of RAM, 1 dual-core CPU and 2 SAS disks in RAID 1.

Zabbix makes a lot of inserts and updates on the database.  The problem is
that when autovacuum starts, the database freezes.
I am trying to get better performance; I have read a lot of documents
and sites about performance tuning, but still no luck.

My current database variables:

 add_missing_from                | off | Automatically adds missing table references to FROM clauses.
 archive_command                 | unset | WAL archiving command.
 australian_timezones            | off | Interprets ACST, CST, EST, and SAT as Australian time zones.
 authentication_timeout          | 60 | Sets the maximum time in seconds to complete client authentication.
 autovacuum                      | on | Starts the autovacuum subprocess.
 autovacuum_analyze_scale_factor | 0.1 | Number of tuple inserts, updates or deletes prior to analyze as a fraction of reltuples.
 autovacuum_analyze_threshold    | 5000 | Minimum number of tuple inserts, updates or deletes prior to analyze.
 autovacuum_naptime              | 60 | Time to sleep between autovacuum runs, in seconds.
 autovacuum_vacuum_cost_delay    | -1 | Vacuum cost delay in milliseconds, for autovacuum.
 autovacuum_vacuum_cost_limit    | -1 | Vacuum cost amount available before napping, for autovacuum.
 autovacuum_vacuum_scale_factor  | 0.2 | Number of tuple updates or deletes prior to vacuum as a fraction of reltuples.
 autovacuum_vacuum_threshold     | 10 | Minimum number of tuple updates or deletes prior to vacuum.
 backslash_quote                 | safe_encoding | Sets whether \' is allowed in string literals.
 bgwriter_all_maxpages           | 5 | Background writer maximum number of all pages to flush per round
 bgwriter_all_percent            | 0.333 | Background writer percentage of all buffers to flush per round
 bgwriter_delay                  | 200 | Background writer sleep time between rounds in milliseconds
 bgwriter_lru_maxpages           | 5 | Background writer maximum number of LRU pages to flush per round
 bgwriter_lru_percent            | 1 | Background writer percentage of LRU buffers to flush per round
 block_size                      | 8192 | Shows size of a disk block
 bonjour_name                    | unset | Sets the Bonjour broadcast service name.
 check_function_bodies           | on | Check function bodies during CREATE FUNCTION.
 checkpoint_segments             | 32 | Sets the maximum distance in log segments between automatic WAL checkpoints.
 checkpoint_timeout              | 300 | Sets the maximum time in seconds between automatic WAL checkpoints.
 checkpoint_warning              | 30 | Logs if filling of checkpoint segments happens more frequently than this (in seconds).
 client_encoding                 | UTF8 | Sets the client's character set encoding.
 client_min_messages             | notice | Sets the message levels that are sent to the client.
 commit_delay                    | 0 | Sets the delay in microseconds between transaction commit and flushing WAL to disk.
 commit_siblings                 | 5 | Sets the minimum concurrent open transactions before performing commit_delay.
 config_file                     | /var/lib/pgsql/data/postgresql.conf | Sets the server's main configuration file.
 constraint_exclusion            | off | Enables the planner to use constraints to optimize queries.
 cpu_index_tuple_cost            | 0.001 | Sets the planner's estimate of processing cost for each index tuple (row) during index scan.
 cpu_operator_cost               | 0.0025 | Sets the planner's estimate of processing cost of each operator in WHERE.
 cpu_tuple_cost                  | 0.01 | Sets the planner's estimate of the cost of processing each tuple (row).
 custom_variable_classes         | unset | Sets the list of known custom variable classes.
 data_directory                  | /var/lib/pgsql/data | Sets the server's data directory.
 DateStyle                       | ISO, MDY | Sets the display format for date and time values.
 db_user_namespace               | off | Enables per-database user names.
 deadlock_timeout                | 1000 | The time in milliseconds to wait on lock before checking for deadlock.
 debug_pretty_print              | off | Indents parse and plan tree displays.
 debug_print_parse               | off | Prints the parse tree to the server log.
 debug_print_plan                | off | Prints the execution plan to server log.
 debug_print_rewritten           | off | Prints the parse tree after rewriting to server log.
 default_statistics_target       | 100 | Sets the default statistics target.
 default_tablespace              | unset | Sets the default tablespace to create tables and indexes in.

Re: [PERFORM] PostgreSQL with Zabbix - problem of newbe

2010-04-08 Thread Grzegorz Jaśkiewicz
Starting with 8.3 there's a new feature called HOT, which helps a lot
when you do loads of updates.
Plus the writer is much quicker (30-40% sometimes), and autovacuum behaves
much more nicely.
Bottom line: upgrade to 8.3.  8.1 had autovacuum disabled by default for a
reason.


Re: [PERFORM] PostgreSQL with Zabbix - problem of newbe

2010-04-08 Thread Merlin Moncure
2010/4/8 Grzegorz Jaśkiewicz gryz...@gmail.com:
 starting with 8.3, there's this new feature called HOT, which helps a lot
 when you do loads of updates.
 Plus writer is much quicker (30-40% sometimes), and autovacuum behaves much
 nicer.
 Bottom line, upgrade to 8.3, 8.1 had autovacuum disabled by default for a
 reason.

postgresql 8.2: autovacuum enabled by default
postgresql 8.3: HOT (reduces update penalty -- zabbix does a lot of updates)

previous to 8.2, to get good performance on zabbix you need to
aggressively vacuum the heavily updated tables yourself.

merlin



[PERFORM] Query Optimization

2010-04-08 Thread Kaloyan Iliev Iliev

Hi,
Can anyone suggest why this query is so slow?

SELECT version();
                                  version
------------------------------------------------------------------------------
 PostgreSQL 8.4.2 on i386-portbld-freebsd7.2, compiled by GCC cc (GCC) 
 4.2.1 20070719  [FreeBSD], 32-bit
(1 row)
explain analyze SELECT  DT.value,
   DT.meassure_date,
   DT.ms_status_id as status_id,
   S.descr_bg as status_bg,
   S.descr_en as status_en,
   VT.id as value_type_id,
   VT.descr_en as value_type_en,
   VT.descr_bg as value_type_bg,
   T.unit as value_type_unit,
   T.name as general_value_type,
   T.ms_db_type_id
   FROM
   ms_data AS DT,
   ms_statuses AS S,
   ms_value_types AS VT,
   ms_types AS T,
   ms_commands_history AS CH
  WHERE  DT.ms_value_type_id = 88  AND
 DT.meassure_date >= '2010-04-01 1:00:00' AND
 DT.meassure_date <= '2010-04-01 1:10:00' AND
 DT.ms_command_history_id = CH.id AND
 CH.ms_device_id = 7 AND
 DT.ms_value_type_id = VT.id AND
 VT.ms_type_id = T.id AND
 DT.ms_status_id = S.id
  GROUP BY value,
   meassure_date,
   status_id,
   status_bg,
   status_en,
   value_type_id,
   value_type_en,
   value_type_bg,
   value_type_unit,
   general_value_type,
   ms_db_type_id
  ORDER BY meassure_date DESC;
                                  QUERY PLAN
------------------------------------------------------------------------------
 Group  (cost=23.93..23.96 rows=1 width=229) (actual time=63274.021..63274.021 rows=0 loops=1)
   ->  Sort  (cost=23.93..23.94 rows=1 width=229) (actual time=63274.016..63274.016 rows=0 loops=1)
         Sort Key: dt.meassure_date, dt.value, dt.ms_status_id, s.descr_bg, s.descr_en, vt.id, vt.descr_en, vt.descr_bg, t.unit, t.name, t.ms_db_type_id
         Sort Method:  quicksort  Memory: 17kB
         ->  Nested Loop  (cost=0.00..23.92 rows=1 width=229) (actual time=63273.982..63273.982 rows=0 loops=1)
               ->  Nested Loop  (cost=0.00..19.64 rows=1 width=165) (actual time=63273.977..63273.977 rows=0 loops=1)
                     ->  Nested Loop  (cost=0.00..15.36 rows=1 width=101) (actual time=63273.974..63273.974 rows=0 loops=1)
                           ->  Nested Loop  (cost=0.00..11.08 rows=1 width=23) (actual time=63273.970..63273.970 rows=0 loops=1)
                                 ->  Index Scan using ms_commands_history_ms_device_id_idx on ms_commands_history ch  (cost=0.00..4.33 rows=1 width=8) (actual time=0.163..25254.004 rows=9807 loops=1)
                                       Index Cond: (ms_device_id = 7)
                                 ->  Index Scan using ms_data_ms_command_history_id_idx on ms_data dt  (cost=0.00..6.74 rows=1 width=31) (actual time=3.868..3.868 rows=0 loops=9807)
                                       Index Cond: (dt.ms_command_history_id = ch.id)
                                       Filter: ((dt.meassure_date >= '2010-04-01 01:00:00'::timestamp without time zone) AND (dt.meassure_date <= '2010-04-01 01:10:00'::timestamp without time zone) AND (dt.ms_value_type_id = 88))
                           ->  Index Scan using ms_value_types_pkey on ms_value_types vt  (cost=0.00..4.27 rows=1 width=82) (never executed)
                                 Index Cond: (vt.id = 88)
                     ->  Index Scan using ms_types_pkey on ms_types t  (cost=0.00..4.27 rows=1 width=72) (never executed)
                           Index Cond: (t.id = vt.ms_type_id)
               ->  Index Scan using ms_statuses_pkey on ms_statuses s  (cost=0.00..4.27 rows=1 width=68) (never executed)
                     Index Cond: (s.id = dt.ms_status_id)
 Total runtime: 63274.256 ms


Thanks in advance.

Kaloyan Iliev


Re: [PERFORM] LIMIT causes planner to do Index Scan using a less optimal index

2010-04-08 Thread Joel Jacobson
Actually, swapping the order of the conditions did in fact make some
difference, strange.

I ran the query a couple of times for each variation to see if the
difference in speed was just a coincidence or a pattern. Looks like the
speed really is different.

EXPLAIN ANALYZE SELECT TransactionID FROM Transactions WHERE ((AccountID =
108) AND (Currency = 'SEK')) ORDER BY TransactionID LIMIT 1;
  QUERY
PLAN
--
 Limit  (cost=0.00..116.02 rows=1 width=4) (actual time=1384.401..1384.402
rows=1 loops=1)
   ->  Index Scan using transactions_pkey on transactions
 (cost=0.00..1260254.03 rows=10862 width=4) (actual time=1384.399..1384.399
rows=1 loops=1)
 Filter: ((accountid = 108) AND (currency = 'SEK'::bpchar))
 Total runtime: 1384.431 ms
(4 rows)

EXPLAIN ANALYZE SELECT TransactionID FROM Transactions WHERE ((Currency =
'SEK') AND (AccountID = 108)) ORDER BY TransactionID LIMIT 1;
  QUERY
PLAN
--
 Limit  (cost=0.00..116.02 rows=1 width=4) (actual time=1710.166..1710.167
rows=1 loops=1)
   ->  Index Scan using transactions_pkey on transactions
 (cost=0.00..1260254.03 rows=10862 width=4) (actual time=1710.164..1710.164
rows=1 loops=1)
 Filter: ((currency = 'SEK'::bpchar) AND (accountid = 108))
 Total runtime: 1710.200 ms
(4 rows)

EXPLAIN ANALYZE SELECT TransactionID FROM Transactions WHERE ((AccountID =
108) AND (Currency = 'SEK')) ORDER BY TransactionID LIMIT 1;
  QUERY
PLAN
--
 Limit  (cost=0.00..116.02 rows=1 width=4) (actual time=1366.526..1366.527
rows=1 loops=1)
   ->  Index Scan using transactions_pkey on transactions
 (cost=0.00..1260254.03 rows=10862 width=4) (actual time=1366.525..1366.525
rows=1 loops=1)
 Filter: ((accountid = 108) AND (currency = 'SEK'::bpchar))
 Total runtime: 1366.552 ms
(4 rows)

EXPLAIN ANALYZE SELECT TransactionID FROM Transactions WHERE ((Currency =
'SEK') AND (AccountID = 108)) ORDER BY TransactionID LIMIT 1;
 QUERY
PLAN
--
 Limit  (cost=0.00..116.02 rows=1 width=4) (actual time=1685.395..1685.396
rows=1 loops=1)
   ->  Index Scan using transactions_pkey on transactions
 (cost=0.00..1260254.03 rows=10862 width=4) (actual time=1685.394..1685.394
rows=1 loops=1)
 Filter: ((currency = 'SEK'::bpchar) AND (accountid = 108))
 Total runtime: 1685.423 ms
(4 rows)

EXPLAIN ANALYZE SELECT TransactionID FROM Transactions WHERE ((AccountID =
108) AND (Currency = 'SEK')) ORDER BY TransactionID LIMIT 1;
  QUERY
PLAN
--
 Limit  (cost=0.00..116.02 rows=1 width=4) (actual time=1403.904..1403.905
rows=1 loops=1)
   ->  Index Scan using transactions_pkey on transactions
 (cost=0.00..1260254.03 rows=10862 width=4) (actual time=1403.903..1403.903
rows=1 loops=1)
 Filter: ((accountid = 108) AND (currency = 'SEK'::bpchar))
 Total runtime: 1403.931 ms
(4 rows)

EXPLAIN ANALYZE SELECT TransactionID FROM Transactions WHERE ((Currency =
'SEK') AND (AccountID = 108)) ORDER BY TransactionID LIMIT 1;
  QUERY
PLAN
--
 Limit  (cost=0.00..116.02 rows=1 width=4) (actual time=1689.014..1689.014
rows=1 loops=1)
   ->  Index Scan using transactions_pkey on transactions
 (cost=0.00..1260254.03 rows=10862 width=4) (actual time=1689.012..1689.012
rows=1 loops=1)
 Filter: ((currency = 'SEK'::bpchar) AND (accountid = 108))
 Total runtime: 1689.041 ms
(4 rows)

EXPLAIN ANALYZE SELECT TransactionID FROM Transactions WHERE ((AccountID =
108) AND (Currency = 'SEK')) ORDER BY TransactionID LIMIT 1;
  QUERY
PLAN
--
 Limit  (cost=0.00..116.02 rows=1 width=4) (actual time=1378.322..1378.323
rows=1 

[PERFORM] How check execution plan of a function

2010-04-08 Thread Sabin Coanda
Hi there,

I have a function which returns setof record based on a specific query.
I try to check the execution plan of that query, so I put EXPLAIN ANALYZE 
before my SELECT; when I call the function, the result shows an actual 
time of about 5 seconds. But when I call my function after removing 
EXPLAIN ANALYZE, it takes more than 300 seconds, and I cancel it.

What is happening, and how can I see the function's execution plan so I 
can optimize it?

TIA,
Sabin 





[PERFORM] significant slow down with various LIMIT

2010-04-08 Thread norn
Hi there!

I have some mysterious slowdowns with ORDER BY and LIMIT. When the LIMIT
gets greater than some value (greater than 3 in my case), the query
takes 4-5 seconds instead of 0.25 ms. All of the necessary indexes are in
place. I have no idea what to do, so any advice is welcome!

Here are my queries and their EXPLAIN ANALYZE outputs:

First Query with LIMIT 3 (fast)
-
explain analyze SELECT core_object.id from core_object INNER JOIN
plugins_plugin_addr ON (core_object.id =
plugins_plugin_addr.oid_id) INNER JOIN plugins_guide_address ON
(plugins_plugin_addr.address_id = plugins_guide_address.id)
WHERE plugins_guide_address.city_id = 4535  ORDER BY
core_object.id DESC LIMIT 3;

 Limit  (cost=0.00..9.57 rows=3 width=4) (actual time=0.090..0.138 rows=3 loops=1)
   ->  Merge Join  (cost=0.00..1098182.56 rows=344125 width=4) (actual time=0.088..0.136 rows=3 loops=1)
         Merge Cond: (plugins_plugin_addr.oid_id = core_object.id)
         ->  Nested Loop  (cost=0.00..972804.02 rows=344125 width=4) (actual time=0.056..0.095 rows=3 loops=1)
               ->  Index Scan Backward using plugins_plugin_addr_oid_id on plugins_plugin_addr  (cost=0.00..52043.06 rows=1621103 width=8) (actual time=0.027..0.032 rows=3 loops=1)
               ->  Index Scan using plugins_guide_address_pkey on plugins_guide_address  (cost=0.00..0.56 rows=1 width=4) (actual time=0.017..0.018 rows=1 loops=3)
                     Index Cond: (plugins_guide_address.id = plugins_plugin_addr.address_id)
                     Filter: (plugins_guide_address.city_id = 4535)
         ->  Index Scan using core_object_pkey_desc on core_object  (cost=0.00..113516.08 rows=3091134 width=4) (actual time=0.026..0.028 rows=3 loops=1)
 Total runtime: 0.244 ms
(10 rows)

Second Query, the same, but with LIMIT 4 (slow)
-
explain analyze SELECT core_object.id from core_object INNER JOIN
plugins_plugin_addr ON (core_object.id =
plugins_plugin_addr.oid_id) INNER JOIN plugins_guide_address ON
(plugins_plugin_addr.address_id = plugins_guide_address.id)
WHERE plugins_guide_address.city_id = 4535  ORDER BY
core_object.id DESC LIMIT 4;

 Limit  (cost=0.00..12.76 rows=4 width=4) (actual time=0.091..4436.795 rows=4 loops=1)
   ->  Merge Join  (cost=0.00..1098182.56 rows=344125 width=4) (actual time=0.089..4436.791 rows=4 loops=1)
         Merge Cond: (plugins_plugin_addr.oid_id = core_object.id)
         ->  Nested Loop  (cost=0.00..972804.02 rows=344125 width=4) (actual time=0.056..3988.249 rows=4 loops=1)
               ->  Index Scan Backward using plugins_plugin_addr_oid_id on plugins_plugin_addr  (cost=0.00..52043.06 rows=1621103 width=8) (actual time=0.027..329.942 rows=1244476 loops=1)
               ->  Index Scan using plugins_guide_address_pkey on plugins_guide_address  (cost=0.00..0.56 rows=1 width=4) (actual time=0.003..0.003 rows=0 loops=1244476)
                     Index Cond: (plugins_guide_address.id = plugins_plugin_addr.address_id)
                     Filter: (plugins_guide_address.city_id = 4535)
         ->  Index Scan using core_object_pkey_desc on core_object  (cost=0.00..113516.08 rows=3091134 width=4) (actual time=0.027..284.195 rows=1244479 loops=1)
 Total runtime: 4436.894 ms
(10 rows)



Re: [PERFORM] Using high speed swap to improve performance?

2010-04-08 Thread Lew

Christiaan Willemsen wrote:

About a year ago we setup a machine with sixteen 15k disk spindles on
Solaris using ZFS. Now that Oracle has taken Sun, and is closing up Solaris,
we want to move away (we are more familiar with Linux anyway).


Lew no...@lwsc.ehost-services.com wrote:

What evidence do you have that Oracle is closing up Solaris?


Scott Marlowe wrote:

I don't think the other poster meant shutting down Solaris; that would
be insane.  I think he meant closing it, as in taking it closed
source, for which there is ample evidence.


Oh, that makes sense.  Yes, it does seem that they're doing that.

Some press hints that Oracle might keep OpenSolaris going, forked from the 
for-pay product.  If that really is true, I speculate that Oracle might be 
emulating the strategy in such things as Apache Geronimo - turn the 
open-source side loose on the world under a license that lets you dip into it 
for code in the closed-source product.  Innovation flows to the closed-source 
product rather than from it.  This empowers products like WebSphere 
Application Server, which includes a lot of reworked Apache code in the 
persistence layer, the web-services stack, the app-server engine and elsewhere.


I don't know Oracle's plans, but that sure would be a good move for them.

For me, I am quite satisfied with Linux.  I don't really know what the value 
proposition is for Solaris anyway.


--
Lew



Re: [PERFORM] Occasional giant spikes in CPU load

2010-04-08 Thread Alan Hodgson
On Wednesday 07 April 2010, Craig James craig_ja...@emolecules.com wrote:
 I thought so too, except that I can't login during the flood.  If the
 CPUs were all doing iowaits, logging in should be easy.

Busying out the drives is about the most reliable way to make logging in 
very slow (especially, but not only, if it's due to swapping).



Re: [PERFORM] How check execution plan of a function

2010-04-08 Thread Sabin Coanda
Maybe some other details about the source of the problem will help.

The problem occurred when I tried to optimize the specified function. It was 
running in about 3 seconds, and I needed it to be faster. I made some changes 
and ran the well-known CREATE OR REPLACE FUNCTION ... After that, my 
function's execution took so much time that I had to cancel it. I restored the 
previous function body, but the problem persisted. I tried to drop it and 
create it again, but there was no way to restore the original performance. So 
I was forced to restore the database from backup.

I also found the problem is reproducible, so a change to a function will 
damage its performance.

Can anyone explain what is happening? How can I find the problem?

TIA,
Sabin 





Re: [PERFORM] LIMIT causes planner to do Index Scan using a less optimal index

2010-04-08 Thread Joel Jacobson
Hi Xuefeng,

You have misunderstood the problem.

The index used in the query without the LIMIT 1 part is 
index_transactions_accountid_currency, which is indeed a two-column index.

The problem is that this index is not used when LIMIT 1 is added.
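As a hedged sketch (the index name here is invented): one common way out of this class of problem is an index whose leading columns match the WHERE clause and whose last column matches the ORDER BY, so the planner can read the first qualifying row already in TransactionID order instead of filtering its way down the primary key:

```sql
-- Not tested against this schema; columns taken from the query above.
CREATE INDEX transactions_accountid_currency_txid_idx
    ON Transactions (AccountID, Currency, TransactionID);
```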



2010/4/7 sherry.ctr@faa.gov


 Guys,

   Thanks for trying and keeping an open mind.
   If you want to know how Oracle addressed this issue, here it is: an index
 on two columns.  I remember being told in training that Postgres has no
 such index -- can someone clarify?

 Thanks much!

 Xuefeng Zhu (Sherry)
 Crown Consulting Inc. -- Oracle DBA
 AIM Lab Data Team
 (703) 925-3192



  *Joel Jacobson j...@gluefinance.com*

 04/06/2010 06:30 PM
   To
 Sherry CTR Zhu/AWA/CNTR/f...@faa, pgsql-performance@postgresql.org
 cc
 Robert Haas robertmh...@gmail.com
  Subject
 Re: [PERFORM] LIMIT causes planner to do Index Scan using a less
 optimal index




 Actually, swapping the order of the conditions did in fact make some
 difference, strange.

 I ran the query a couple of times for each variation to see if the
 difference in speed was just a coincidence or a pattern. Looks like the
 speed really is different.

 [repeated EXPLAIN ANALYZE output snipped; identical to the plans quoted
 earlier in this thread]

Re: [PERFORM] Query Optimization

2010-04-08 Thread A. Kretschmer
In response to Kaloyan Iliev Iliev :
 Hi,
 Can anyone suggest why this query so slow.


-  Index Scan using 
 ms_commands_history_ms_device_id_idx on ms_commands_history ch  
 (cost=0.00..4.33 rows=1 width=8) (actual time=0.163..25254.004 rows=9807 
 loops=1)

Estimated rows: 1, actual rows: 9807, that's a BIG difference and,
maybe, your problem.
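A hedged sketch of the usual first response to that kind of misestimate (the column choice is an assumption based on the plan above):

```sql
-- Re-gather statistics; if the estimate stays off, raise the sample
-- size for the skewed column and analyze again:
ANALYZE ms_commands_history;
ALTER TABLE ms_commands_history ALTER COLUMN ms_device_id SET STATISTICS 1000;
ANALYZE ms_commands_history;
```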

Btw.: your explain is hard to read (line-wrapping). It's better to
attach the explain as a file of its own...


Andreas
-- 
Andreas Kretschmer
Kontakt:  Heynitz: 035242/47150,   D1: 0160/7141639 (mehr: - Header)
GnuPG: 0x31720C99, 1006 CCB4 A326 1D42 6431  2EB0 389D 1DC2 3172 0C99



Re: [PERFORM] How check execution plan of a function

2010-04-08 Thread Andy Colson

On Wed Apr 7 2010 7:47 AM, Sabin Coanda wrote:

Hi there,

I have a function which returns setof record based on a specific query.
I try to check the execution plan of that query, so I write EXPLAIN ANALYZE
before my select, I call the function and I see the result which shows an
actual time about 5 seconds. But when I call my function after I remove
EXPLAIN ANALYZE it takes more than 300 seconds, and I cancel it.

What's happen, or how can I see the function execution plan to optimize it ?

TIA,
Sabin




I ran into the same problems; what I did was enable logging in 
postgresql.conf.  I don't recall exactly what I enabled, but it was 
something like:


track_functions = pl

log_statement_stats = on
log_duration = on

Then in the server log you can see each statement and how long it took. 
Once I found a statement that was slow, I used EXPLAIN ANALYZE on just 
that statement so I could optimize it.
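Another angle (a hedged sketch; the table, column, and parameter names here are made up): a query inside a function is planned with parameters rather than the constants you substitute by hand, which can produce a very different plan. You can approximate what the function sees with a prepared statement:

```sql
-- Replace the body with the query from your function, using $1, $2, ...
-- in place of the function's arguments:
PREPARE fn_query (integer) AS
    SELECT * FROM some_table WHERE some_col = $1;
EXPLAIN ANALYZE EXECUTE fn_query (42);
DEALLOCATE fn_query;
```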


-Andy




Re: [PERFORM] significant slow down with various LIMIT

2010-04-08 Thread Kevin Grittner
norn andrey.perl...@gmail.com wrote:
 
 I have some mysterious slow downs with ORDER BY and LIMIT. When
 LIMIT getting greater than some value (greater than 3 in my case),
 query takes 4-5 secs instead of 0.25ms. All of the necessary
 indexes are in place. I have no idea what to do, so any advices
 are welcome!
 
Could you show us the output from select version();, describe your
hardware and OS, and show us the contents of your postgresql.conf
file (with all comments removed)?  We can then give more concrete
advice than is possible with the information provided so far.
 
http://wiki.postgresql.org/wiki/SlowQueryQuestions
 
-Kevin



Re: [PERFORM] PostgreSQL with Zabbix - problem of newbe

2010-04-08 Thread Robert Haas
2010/4/8 Merlin Moncure mmonc...@gmail.com:
 previous to 8.2, to get good performance on zabbix you need to
 aggressively vacuum the heavily updated tables yourself.

Generally if you DON'T vacuum aggressively enough, then vacuums will
take a really long and painful amount of time, perhaps accounting for
the hang the OP observed.  There's really no help for it but to
sweat it out once, and then do it frequently enough afterward that it
doesn't become a problem.

...Robert



Re: [PERFORM] PostgreSQL with Zabbix - problem of newbe

2010-04-08 Thread Richard Yen
Kind of off-topic, but I've found that putting the history table on a separate 
spindle (using a separate tablespace) also helps improve performance.

--Richard



On Apr 8, 2010, at 12:44 PM, Robert Haas wrote:

 2010/4/8 Merlin Moncure mmonc...@gmail.com:
 previous to 8.2, to get good performance on zabbix you need to
 aggressively vacuum the heavily updated tables yourself.
 
 Generally if you DON'T vacuum aggressively enough, then vacuums will
 take a really long and painful amount of time, perhaps accounting for
 the hang the OP observed.  There's really no help for it but to
 sweat it out once, and then do it frequently enough afterward that it
 doesn't become a problem.
 
 ...Robert
 




Re: [PERFORM] PostgreSQL with Zabbix - problem of newbe

2010-04-08 Thread Greg Smith

Krzysztof Kardas wrote:

My current database variables:
  


That is way too much stuff to sort through.  Try this instead, to only 
get the values you've set to something rather than every single one:


select name,unit,current_setting(name) from pg_settings where 
source='configuration file' ;


Also, a snapshot of output from vmstat 1 during some period when the 
server is performing badly would be very helpful to narrow down what's 
going on.


The easy answer to your question is simply that autovacuum is terrible 
on PG 8.1.  You can tweak it to do better, but that topic isn't covered 
very well in the sort of tuning guides you'll find floating around.  
This is because most of the people who care about this sort of issue 
have simply upgraded to a later version where autovacuum is much better.


--
Greg Smith  2ndQuadrant US  Baltimore, MD
PostgreSQL Training, Services and Support
g...@2ndquadrant.com   www.2ndQuadrant.us




Re: [PERFORM] 3ware vs. MegaRAID

2010-04-08 Thread Scott Carey

On Apr 7, 2010, at 11:13 PM, Greg Smith wrote:

 Scott Carey wrote:
 * Change the linux 'readahead' block device parameter to at least 4MB (8192, 
 see blockdev --setra) -- I don't know if there is a FreeBSD equivalent.
 
 I haven't tested them, but 3ware gives suggestions at 
 http://www.3ware.com/kb/Article.aspx?id=14852 for tuning their cards 
 properly under FreeBSD.  You cannot get good sequential read performance 
 from 3ware's cards without doing something about this at the OS level; 
 the read-ahead on the card itself is minimal and certainly a bottleneck.
 
 As for your comments about drives being faster at the front than the 
 end, the zcav tool that comes with bonnie++ is a good way to plot that 
 out, rather than having to split partitions up and do a bunch of manual 
 testing.

There's an fio script that does something similar.
What I'm suggesting is that if you want to test a file system (or compare it to 
others) and get consistent results, run those tests on a smaller slice of the 
drive.  To tune a RAID card, there is little point in testing anything but the 
fast part of the drive: if it can keep up on the fast part, it should be able 
to keep up on the slow part.  I would not suggest splitting the drive up into 
chunks and doing many manual tests.

3.5" drives deliver a bit more than 50% of their starting sequential 
throughput at the end of the disk; 2.5" drives a bit less than 65%.  I haven't 
seen any significant variation from that rule in any benchmark I've run, or 
seen online, for 'standard' drives.  Occasionally there is a drive that 
doesn't use all its space and is a bit faster at the end.

My typical practice is to use the first 70% to 80% of a large volume for the 
main data, and use the slowest last chunk for archives and backups.
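The rule of thumb above as quick arithmetic (the 120 MB/s outer-track figure is only an example):

```shell
START=120  # example outer-track sequential throughput, MB/s
echo "3.5in drive, inner tracks: about $((START * 50 / 100)) MB/s"
echo "2.5in drive, inner tracks: about $((START * 65 / 100)) MB/s"
```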

 
 -- 
 Greg Smith  2ndQuadrant US  Baltimore, MD
 PostgreSQL Training, Services and Support
 g...@2ndquadrant.com   www.2ndQuadrant.us
 




Re: [PERFORM] PostgreSQL with Zabbix - problem of newbe

2010-04-08 Thread Greg Smith

Merlin Moncure wrote:

postgresql 8.2: autovacuum enabled by default
postgresql 8.3: HOT (reduces update penalty -- zabbix does a lot of updates)
  


autovacuum wasn't enabled by default until 8.3.  It didn't really work 
all that well out of the box until the support for multiple workers was 
added in that version, along with some tweaking to its default 
parameters.  8.3 also added a lot more logging information, in both the 
server logs and the statistics tables, for watching what autovacuum is 
doing.


--
Greg Smith  2ndQuadrant US  Baltimore, MD
PostgreSQL Training, Services and Support
g...@2ndquadrant.com   www.2ndQuadrant.us

