On Thu, 24 Apr 2008 03:14:54 +0200, Vlad Arkhipov <[EMAIL PROTECTED]>
wrote:
I found a strange issue in a very simple query. Statistics for all columns
are at level 1000, but I also tried other levels.
create table g (
    id bigint primary key,
    isgroup boolean not null);

create table a (
    groupid bigint references g(id),
    id bigint,
    unique(id, groupid));

analyze g;
analyze a;
select count(*) from a;
 294

select count(*) from g;
 320
explain analyze
select *
from g
join a on a.groupid = g.id
where g.isgroup;

Hash Join  (cost=5.35..11.50 rows=11 width=25) (actual time=0.261..1.755 rows=294 loops=1)
  Hash Cond: (a.groupid = g.id)
  ->  Seq Scan on a  (cost=0.00..4.94 rows=294 width=16) (actual time=0.047..0.482 rows=294 loops=1)
  ->  Hash  (cost=5.20..5.20 rows=12 width=9) (actual time=0.164..0.164 rows=12 loops=1)
        ->  Seq Scan on g  (cost=0.00..5.20 rows=12 width=9) (actual time=0.042..0.136 rows=12 loops=1)
              Filter: isgroup
Total runtime: 2.225 ms
You should really post an EXPLAIN ANALYZE of your big query.
This little query's plan seems OK to me.
Two very small tables, ok, hash'em, it's the best choice.
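If you want to convince yourself the hash join really is the cheapest option here, one way (for experimentation only, not for production code) is to disable it for the session and compare the plans the planner falls back to:

```
-- For testing only: forbid hash joins for this session and see what
-- plan (and what actual time) the planner chooses instead.
SET enable_hashjoin = off;

EXPLAIN ANALYZE
SELECT *
FROM g
JOIN a ON a.groupid = g.id
WHERE g.isgroup;

RESET enable_hashjoin;
```

On tables this small the alternative (typically a nested loop) will still be fast, but the cost estimates should show why the hash join was picked.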
Now, of course, if this join is repeated for every row in your big query, you
have a problem.
The question is: why is it repeated for every row?
This cannot be answered without seeing the whole query.
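For instance (a purely hypothetical shape, since the real query was not posted; "bigtable" and "ref_id" are invented names), a correlated subquery would re-execute this small join once per outer row:

```
-- Hypothetical: the inner join runs once for every row of bigtable,
-- so a cheap 2 ms plan becomes expensive when repeated thousands of times.
SELECT b.*,
       (SELECT count(*)
          FROM g
          JOIN a ON a.groupid = g.id
         WHERE g.isgroup
           AND a.id = b.ref_id) AS group_count
  FROM bigtable b;
```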
Another question would be: is there a way to structure the tables
differently?
Again, this cannot be answered without seeing the whole query, and some
explanation of what the data and fields mean.
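As a shot in the dark (speculative, since the schema's purpose is unknown), one restructuring would be to denormalize the flag into a, so the join can be skipped entirely when only the flag is needed:

```
-- Speculative denormalization: cache g.isgroup on each row of a.
-- Keeping it in sync would then need a trigger or application logic.
ALTER TABLE a ADD COLUMN group_isgroup boolean;

UPDATE a
   SET group_isgroup = g.isgroup
  FROM g
 WHERE g.id = a.groupid;
```

Whether this is worth the maintenance burden depends entirely on the real query and update patterns.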
Please provide more information...
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance