Re: [PERFORM] performance tuning queries

2008-11-27 Thread PFC



> First off, any thoughts per tuning inserts into large tables. I have a large
> table with an insert like this:
>
> insert into public.bigtab1 (text_col1, text_col2, id) values ...
>
> QUERY PLAN
> --
>  Result  (cost=0.00..0.01 rows=1 width=0)
> (1 row)
>
> The query cost is low but this is one of the slowest statements per pgfouine


Possible Causes of slow inserts :

- slow triggers ?
- slow foreign key checks ? (missing index on referenced table ?)
- functional index on a slow function ?
- crummy hardware (5 MB/s RAID cards, etc)
- too many indexes ?
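
	As a first step in ruling those causes in or out, the system catalogs
can be queried directly. A sketch, using the table from the original post:

```sql
-- List triggers defined on the table (a slow trigger body fires per row)
SELECT tgname
FROM pg_trigger
WHERE tgrelid = 'public.bigtab1'::regclass;

-- List the indexes every INSERT must maintain (each one adds write cost)
SELECT indexname, indexdef
FROM pg_indexes
WHERE schemaname = 'public' AND tablename = 'bigtab1';
```

Foreign key constraints also show up in pg_trigger as constraint triggers,
so an unexpectedly long list there is itself a hint.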


> Next we have a select count(*) that's also one of the top offenders:
>
> select count(*) from public.tab3  where user_id=31
> and state='A'
> and amount > 0;
>
>  QUERY PLAN
> -
>  Aggregate  (cost=3836.53..3836.54 rows=1 width=0)
>    ->  Index Scan using order_user_indx on tab3  (cost=0.00..3834.29 rows=897 width=0)
>          Index Cond: (idx_user_id = 31406948::numeric)
>          Filter: ((state = 'A'::bpchar) AND (amount > 0::numeric))
> (4 rows)
>
> We have an index on the user_id but not on the state or amount,
>
> add index to amount ?


Can we see EXPLAIN ANALYZE ?

	In this case the ideal index would be multicolumn: (user_id, state),
(user_id, amount), or (user_id, state, amount), but choosing between the
three depends on your data...
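
	The three options sketched out (index names are made up; you would
pick one based on the distribution, not create all three):

```sql
-- Equality on user_id plus equality on state
CREATE INDEX idx_tab3_user_state ON public.tab3 (user_id, state);

-- Equality on user_id plus the range filter on amount
CREATE INDEX idx_tab3_user_amount ON public.tab3 (user_id, amount);

-- All three columns: widest index, but covers both filters
CREATE INDEX idx_tab3_user_state_amount ON public.tab3 (user_id, state, amount);
```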


You could do :

SELECT count(*), state, amount > 0 FROM public.tab3 WHERE user_id=31
GROUP BY state, amount > 0;


And post the results.

--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] performance tuning queries

2008-11-27 Thread A. Kretschmer
On Wed, 26.11.2008 at 21:21:04 -0700, Kevin Kempter wrote the following:
> Next we have a select count(*) that's also one of the top offenders:
>
> select count(*) from public.tab3  where user_id=31
> and state='A'
> and amount > 0;
>
>  QUERY PLAN
> -
>  Aggregate  (cost=3836.53..3836.54 rows=1 width=0)
>    ->  Index Scan using order_user_indx on tab3  (cost=0.00..3834.29 rows=897 width=0)
>          Index Cond: (idx_user_id = 31406948::numeric)
>          Filter: ((state = 'A'::bpchar) AND (amount > 0::numeric))
> (4 rows)
>
> We have an index on the user_id but not on the state or amount,
>
> add index to amount ?

Depends.

- Is the index on user_id a unique index?
- How many different values are in the table for state? I.e., maybe an
  index on state can help.
- How many rows in the table have amount > 0? If almost all rows
  contain an amount > 0, an index can't help in this case.
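
If, contrary to that last point, only a small fraction of rows satisfy
both filters, a partial index is worth considering. A sketch (the index
name is made up):

```sql
-- Indexes only the rows the query actually touches, so it stays small,
-- and the planner can use it whenever the WHERE clause implies
-- state = 'A' AND amount > 0
CREATE INDEX idx_tab3_user_active ON public.tab3 (user_id)
WHERE state = 'A' AND amount > 0;
```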


Andreas
-- 
Andreas Kretschmer
Kontakt:  Heynitz: 035242/47150,   D1: 0160/7141639 (mehr: - Header)
GnuPG-ID:   0x3FFF606C, privat 0x7F4584DA   http://wwwkeys.de.pgp.net



Re: [PERFORM] performance tuning queries

2008-11-27 Thread Mario Weilguni

Kevin Kempter wrote:

> Hi All;
>
> I'm looking for tips / ideas per performance tuning some specific queries.
> These are generally large tables on a highly active OLTP system
> (100,000 - 200,000 plus queries per day)
>
> First off, any thoughts per tuning inserts into large tables. I have a large
> table with an insert like this:
>
> insert into public.bigtab1 (text_col1, text_col2, id) values ...
>
> QUERY PLAN
> --
>  Result  (cost=0.00..0.01 rows=1 width=0)
> (1 row)
>
> The query cost is low but this is one of the slowest statements per pgfouine
Do you insert multiple values in one transaction, or one transaction per 
insert?
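
For reference, the difference the question is getting at (multi-row
VALUES lists need PostgreSQL 8.2 or later; the table and columns are
from the original post, the literal values are made up):

```sql
-- One transaction per row: each INSERT pays its own commit/fsync cost
INSERT INTO public.bigtab1 (text_col1, text_col2, id) VALUES ('a', 'b', 1);

-- Many rows per transaction: one commit amortized over the whole batch
BEGIN;
INSERT INTO public.bigtab1 (text_col1, text_col2, id) VALUES
    ('a', 'b', 1),
    ('c', 'd', 2),
    ('e', 'f', 3);
COMMIT;
```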





[PERFORM] performance tuning queries

2008-11-26 Thread Kevin Kempter
Hi All;

I'm looking for tips / ideas per performance tuning some specific queries. 
These are generally large tables on a highly active OLTP system 
(100,000 - 200,000 plus queries per day)

First off, any thoughts per tuning inserts into large tables. I have a large 
table with an insert like this:

insert into public.bigtab1 (text_col1, text_col2, id) values ...

QUERY PLAN
--
 Result  (cost=0.00..0.01 rows=1 width=0)
(1 row)

The query cost is low but this is one of the slowest statements per pgfouine

Next we have a select count(*) that's also one of the top offenders:

select count(*) from public.tab3  where user_id=31
and state='A'
and amount > 0;

 QUERY PLAN
-
 Aggregate  (cost=3836.53..3836.54 rows=1 width=0)
   ->  Index Scan using order_user_indx on tab3  (cost=0.00..3834.29 rows=897 width=0)
         Index Cond: (idx_user_id = 31406948::numeric)
         Filter: ((state = 'A'::bpchar) AND (amount > 0::numeric))
(4 rows)

We have an index on the user_id but not on the state or amount, 

add index to amount ?



Thoughts ?
