Re: [PERFORM] Postgres Optimizer is not smart enough?

2005-01-13 Thread Mark Kirkwood
Tom Lane wrote:
Mark Kirkwood [EMAIL PROTECTED] writes:
the costs of paths using these indexes are
quite similar, so are quite sensitive to (some) parameter values.

They'll be exactly the same, actually, as long as the thing predicts
exactly one row retrieved.  So it's quasi-random which plan you get.
btcostestimate needs to be improved to understand that in multicolumn
index searches with inequality conditions, we may have to scan through
tuples that don't meet all the qualifications.  It's not accounting for
that cost at the moment, which is why the estimates are the same.
I see some small differences in the numbers - I am thinking that these
are due to the calculations etc. in cost_index(), e.g.:
create_index_paths : index oid 12616389 (test_id2)
cost_index : cost=2.839112 (startup_cost=0.00 run_cost=2.839112)
   : tuples=1.00 cpu_per_tuple=0.017500
   : selectivity=0.02
   : run_index_tot_cost=2.003500 run_io_cost=0.818112
create_index_paths : index oid 12616388 (test_id1)
cost_index : cost=2.933462 (startup_cost=0.002500 run_cost=2.930962)
   : tuples=1.00 cpu_per_tuple=0.01
   : selectivity=0.02
   : run_index_tot_cost=2.008500 run_io_cost=0.912462
Where:
run_index_tot_cost=indexTotalCost - indexStartupCost;
run_io_cost=max_IO_cost + csquared * (min_IO_cost - max_IO_cost)
selectivity=indexSelectivity
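The io cost line above is a correlation-weighted blend of the best and worst cases; rewriting the same formula (with csquared the squared index correlation, as in cost_index()) makes that explicit:

```
run_io_cost = max_IO_cost + csquared * (min_IO_cost - max_IO_cost)
            = (1 - csquared) * max_IO_cost + csquared * min_IO_cost
```

So with csquared near 1 (well-clustered data) the io cost approaches min_IO_cost, and with csquared near 0 it approaches max_IO_cost.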
Hmmm ... so it's only the selectivity that is the same (sourced from
index->amcostestimate which I am guessing points to btcostestimate), is
that correct?
cheers
Mark

---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [PERFORM] Postgres Optimizer is not smart enough?

2005-01-13 Thread Tom Lane
Mark Kirkwood [EMAIL PROTECTED] writes:
 Hmmm ... so it's only the selectivity that is the same (sourced from
 index->amcostestimate which I am guessing points to btcostestimate), is
 that correct?

No, the point is that btcostestimate will compute not only the same
selectivities but the identical index access cost values, because it
thinks that only one index entry will be fetched in both cases.  It
needs to account for the fact that the inequality condition will cause a
scan over a larger range of the index than is actually returned.  See
_bt_preprocess_keys() and _bt_checkkeys().

The small differences you are showing have to do with different
assumptions about where the now() function will get evaluated (once per
row or once at scan start).  That's not the effect that I'm worried
about.
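
To make the point concrete, a rough sketch (assuming the "test" table from Litao Wu's original post below): with test_id1 (customer_id, created, domain), only the leading equality plus the inequality on "created" bound the btree scan, and the trailing "domain" qualification is checked entry by entry. The number of entries scanned versus rows returned can be counted separately:

```sql
-- Sketch only; assumes the "test" table from the original post below.
-- Entries the btree must walk: bounded by the leading column plus the
-- inequality on "created"; "domain" is not usable as a scan boundary.
SELECT count(*) FROM test
WHERE customer_id = 100
  AND created >= now() - interval '1 day';

-- Rows actually returned once the trailing equality is applied:
SELECT count(*) FROM test
WHERE customer_id = 100
  AND created >= now() - interval '1 day'
  AND domain = '100';
```

If the first count is much larger than the second, the scanned-but-discarded entries are exactly the cost btcostestimate is not charging for.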

regards, tom lane

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faqs/FAQ.html


[PERFORM] Postgres Optimizer is not smart enough?

2005-01-12 Thread Litao Wu
Hi All,

Here is my test comparison between Postgres (7.3.2)
optimizer vs Oracle (10g) optimizer. 

It seems to me that Postgres optimizer is not smart 
enough.

Did I miss anything?

Thanks,

In Postgres:

drop table test;
create table test (
 module        character varying(50),
 action_deny   integer,
 created       timestamp with time zone,
 customer_id   integer,
 domain        character varying(255));
create or replace function insert_rows () returns
integer as '
BEGIN
   for i in 1 .. 50 loop
 insert into test values (i, 2, now(), 100,  i);
   end loop;
   return 1;
END;
' LANGUAGE 'plpgsql';

select insert_rows();

create index test_id1 on test (customer_id, created,
domain);

analyze test;

explain analyze
SELECT module,  sum(action_deny)
FROM test
WHERE  created >= ('now'::timestamptz - '1
day'::interval) AND customer_id='100'
  AND  domain='100'
GROUP BY module;

  
                                                                       QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------------------------
 Aggregate  (cost=3.12..3.13 rows=1 width=9) (actual time=91.05..91.05 rows=1 loops=1)
   ->  Group  (cost=3.12..3.12 rows=1 width=9) (actual time=91.04..91.04 rows=1 loops=1)
         ->  Sort  (cost=3.12..3.12 rows=1 width=9) (actual time=91.03..91.03 rows=1 loops=1)
               Sort Key: module
               ->  Index Scan using test_id1 on test  (cost=0.00..3.11 rows=1 width=9) (actual time=0.03..91.00 rows=1 loops=1)
                     Index Cond: ((customer_id = 100) AND (created >= '2005-01-11 14:48:44.832552-07'::timestamp with time zone) AND (domain = '100'::character varying))
 Total runtime: 91.13 msec
(7 rows)

create index test_id2 on test(domain);
analyze test;

explain analyze
SELECT module,  sum(action_deny)
FROM test
WHERE  created >= ('now'::timestamptz - '1
day'::interval) AND customer_id='100'
  AND  domain='100'
GROUP BY module;

  
                                                                       QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------------------------
 Aggregate  (cost=3.12..3.13 rows=1 width=9) (actual time=90.30..90.30 rows=1 loops=1)
   ->  Group  (cost=3.12..3.12 rows=1 width=9) (actual time=90.29..90.30 rows=1 loops=1)
         ->  Sort  (cost=3.12..3.12 rows=1 width=9) (actual time=90.29..90.29 rows=1 loops=1)
               Sort Key: module
               ->  Index Scan using test_id1 on test  (cost=0.00..3.11 rows=1 width=9) (actual time=0.03..90.25 rows=1 loops=1)
                     Index Cond: ((customer_id = 100) AND (created >= '2005-01-11 14:51:09.555974-07'::timestamp with time zone) AND (domain = '100'::character varying))
 Total runtime: 90.38 msec
(7 rows)

WHY DOES PG STILL CHOOSE INDEX test_id1???
BECAUSE THE QUERY WILL RUN MUCH FASTER USING test_id2!!!


drop index test_id1;
explain analyze
SELECT module,  sum(action_deny)
FROM test
WHERE  created >= ('now'::timestamptz - '1
day'::interval) AND customer_id='100'
  AND  domain='100'
GROUP BY module;
  
                                                                       QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------------------------
 Aggregate  (cost=3.12..3.13 rows=1 width=9) (actual time=0.08..0.08 rows=1 loops=1)
   ->  Group  (cost=3.12..3.13 rows=1 width=9) (actual time=0.08..0.08 rows=1 loops=1)
         ->  Sort  (cost=3.12..3.13 rows=1 width=9) (actual time=0.07..0.07 rows=1 loops=1)
               Sort Key: module
               ->  Index Scan using test_id2 on test  (cost=0.00..3.11 rows=1 width=9) (actual time=0.04..0.05 rows=1 loops=1)
                     Index Cond: (domain = '100'::character varying)
                     Filter: ((created >= '2005-01-11 14:53:58.806364-07'::timestamp with time zone) AND (customer_id = 100))
 Total runtime: 0.14 msec
(8 rows)

In Oracle:
==
drop table test;
create table test (
 module        character varying(50),
 action_deny   integer,
 created       timestamp with time zone,
 customer_id   integer,
 domain        character varying(255));

begin
   for i in 1..50 loop
 insert into test values (i, 2, current_timestamp,
100, i);
   end loop;
end;
/

create index test_id1 on test (customer_id, created,
domain);

analyze table test compute statistics;

set autot on
set timing on

SELECT module,  sum(action_deny)
FROM test
WHERE  created >= (current_timestamp - interval '1'
day) AND customer_id=100
  AND  domain='100'
GROUP BY module
/

MODULE
SUM(ACTION_DENY)

Re: [PERFORM] Postgres Optimizer is not smart enough?

2005-01-12 Thread Mike Mascari
Litao Wu wrote:
Hi All,
Here is my test comparison between Postgres (7.3.2)
optimizer vs Oracle (10g) optimizer. 

It seems to me that Postgres optimizer is not smart 
enough.

Did I miss anything?
Yeah, 7.4.
7.3.2 is *ancient*. Here's output from 7.4:
[EMAIL PROTECTED] explain analyze
test-# SELECT module,  sum(action_deny)
test-# FROM test
test-# WHERE  created >= ('now'::timestamptz - '1
test'# day'::interval) AND customer_id='100'
test-#   AND  domain='100'
test-# GROUP BY module;
                                                      QUERY PLAN
----------------------------------------------------------------------------------------------------------------------
 HashAggregate  (cost=5.69..5.69 rows=1 width=13) (actual time=715.058..715.060 rows=1 loops=1)
   ->  Index Scan using test_id1 on test  (cost=0.00..5.68 rows=1 width=13) (actual time=0.688..690.459 rows=1 loops=1)
         Index Cond: ((customer_id = 100) AND (created >= '2005-01-11 17:52:22.364145-05'::timestamp with time zone) AND ((domain)::text = '100'::text))
 Total runtime: 717.546 ms
(4 rows)

[EMAIL PROTECTED] create index test_id2 on test(domain);
CREATE INDEX
[EMAIL PROTECTED] analyze test;
ANALYZE
[EMAIL PROTECTED]
[EMAIL PROTECTED] explain analyze
test-# SELECT module,  sum(action_deny)
test-# FROM test
test-# WHERE  created >= ('now'::timestamptz - '1
test'# day'::interval) AND customer_id='100'
test-#   AND  domain='100'
test-# GROUP BY module;
                                                      QUERY PLAN
----------------------------------------------------------------------------------------------------------------------
 HashAggregate  (cost=5.68..5.69 rows=1 width=13) (actual time=10.778..10.780 rows=1 loops=1)
   ->  Index Scan using test_id2 on test  (cost=0.00..5.68 rows=1 width=13) (actual time=10.702..10.721 rows=1 loops=1)
         Index Cond: ((domain)::text = '100'::text)
         Filter: ((created >= '2005-01-11 17:53:16.720749-05'::timestamp with time zone) AND (customer_id = 100))
 Total runtime: 11.039 ms
(5 rows)

[EMAIL PROTECTED] select version();
 PostgreSQL 7.4.5 on i686-pc-linux-gnu, compiled by GCC 
i686-pc-linux-gnu-gcc (GCC) 3.4.0 20040204 (prerelease)
(1 row)

Hope that helps,
Mike Mascari
---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [PERFORM] Postgres Optimizer is not smart enough?

2005-01-12 Thread Mark Kirkwood
Litao Wu wrote:
explain analyze
SELECT module,  sum(action_deny)
FROM test
WHERE  created >= ('now'::timestamptz - '1
day'::interval) AND customer_id='100'
  AND  domain='100'
GROUP BY module;
Here is my output for this query:
                                                                QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------------
 HashAggregate  (cost=3.03..3.03 rows=1 width=13) (actual time=0.132..0.135 rows=1 loops=1)
   ->  Index Scan using test_id2 on test  (cost=0.00..3.02 rows=1 width=13) (actual time=0.085..0.096 rows=1 loops=1)
         Index Cond: ((domain)::text = '100'::text)
         Filter: ((created >= ('2005-01-13 11:57:34.673833+13'::timestamp with time zone - '1 day'::interval)) AND (customer_id = 100))
 Total runtime: 0.337 ms
(5 rows)
Time: 8.424 ms
The version is:
PostgreSQL 8.0.0rc5 on i386-unknown-freebsd5.3, compiled by GCC gcc
(GCC) 3.4.2 [FreeBSD] 20040728
I have random_page_cost = 0.8 in my postgresql.conf. Setting it back to
the default (4) results in a plan using test_id1. A little
experimentation showed that for my system random_page_cost=1 was where
it changed from using test_id1 to test_id2.
So changing this parameter may be helpful.
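For experimenting, the parameter can also be changed per session before committing anything to postgresql.conf; a sketch using the query from this thread:

```sql
SET random_page_cost = 1.0;   -- session-local; try values around the tipping point

EXPLAIN ANALYZE
SELECT module, sum(action_deny)
FROM test
WHERE created >= ('now'::timestamptz - '1 day'::interval)
  AND customer_id = '100'
  AND domain = '100'
GROUP BY module;

RESET random_page_cost;       -- back to the configured value
```

Repeating the EXPLAIN ANALYZE at a few settings shows where the plan flips between test_id1 and test_id2.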
I happen to have some debugging code enabled for the optimizer, and the
issue appears to be that the costs of paths using these indexes are
quite similar, so are quite sensitive to (some) parameter values.
regards
Mark
P.S.: 7.3.2 is quite old.
---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [PERFORM] Postgres Optimizer is not smart enough?

2005-01-12 Thread Ragnar HafstaĆ°
On Thu, 2005-01-13 at 12:14 +1300, Mark Kirkwood wrote:

[snip some explains]

 
 I have random_page_cost = 0.8 in my postgresql.conf. Setting it back to
 the default (4) results in a plan using test_id1.

it is not rational to have random_page_cost < 1.

if you see improvement with such a setting, it is as likely that
something else is wrong, such as higher statistics targets being needed,
or a much too low effective_cache_size setting.

gnari



---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [PERFORM] Postgres Optimizer is not smart enough?

2005-01-12 Thread Mark Kirkwood
Ragnar HafstaĆ° wrote:

it is not rational to have random_page_cost < 1.
I agree, in theory one should never *need* to set it < 1. However in
cases when the optimizer's understanding of things is a little off,
compensation may be required to achieve better plans (e.g. encouraging
index scans on data with funny distributions or correlations).
if you see improvement with such a setting, it is as likely that
something else is wrong, such as higher statistics targets being needed,
or a much too low effective_cache_size setting.

Although this is good advice, it is not always sufficient. For instance I
have my effective_cache_size=20000. Now the machine has 512MB RAM and
right now cache+buf+free is about 100M, and shared_buffers=2000. So in
fact I probably have it a bit high :-).
Increasing stats target will either make the situation better or worse -
a better sample of data is obtained for analysis, but this is not
*guaranteed* to lead to a faster execution plan, even if in
general/usually it does.
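For reference, raising the target is done per column, followed by a fresh ANALYZE (a sketch; the target value 100 here is purely illustrative - the default in releases of this era was 10):

```sql
ALTER TABLE test ALTER COLUMN domain SET STATISTICS 100;
ANALYZE test;
```

The larger sample improves the selectivity estimates for that column, which may or may not be enough to tip the plan choice.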
cheers
Mark
---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [PERFORM] Postgres Optimizer is not smart enough?

2005-01-12 Thread Tom Lane
Mark Kirkwood [EMAIL PROTECTED] writes:
 I happen to have some debugging code enabled for the optimizer, and the
 issue appears to be that the costs of paths using these indexes are
 quite similar, so are quite sensitive to (some) parameter values.

They'll be exactly the same, actually, as long as the thing predicts
exactly one row retrieved.  So it's quasi-random which plan you get.

btcostestimate needs to be improved to understand that in multicolumn
index searches with inequality conditions, we may have to scan through
tuples that don't meet all the qualifications.  It's not accounting for
that cost at the moment, which is why the estimates are the same.

regards, tom lane

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])