> On Mon, 20 Jan 2003 16:42:12 +0100 (CET), "Moritz Lennert"
> <[EMAIL PROTECTED]> wrote:
>>explain analyze select commune_residence, type_diplome from rec81 where
>>type_diplome = '11';
>>NOTICE:  QUERY PLAN:
>>
>>Seq Scan on rec81  (cost=0.00..120316.30 rows=177698 width=15) (actual
>>time=23.03..219164.82 rows=176621 loops=1)
>>Total runtime: 226149.03 msec
>
> Assuming that there are no NULLs, I guesstimate that 25 tuples should
> fit onto one page (could be more than 40 if you use smallint and
> "char").  That gives 88K pages for 2.2M tuples.  However,
> cost=0.00..120316.30  tells us that there are almost 120K pages.  Time
> to do a VACUUM FULL?
>

I'll try that, although I haven't changed any of the tuples since
importing the data (this is a static table...)
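As a sanity check, the guesstimate in the quoted reply works out like this (a sketch; the 25-tuples-per-page figure is from the quote, and the 8 kB page size is PostgreSQL's default build setting):

```python
# Sketch of the page-count arithmetic from the quoted guesstimate,
# assuming PostgreSQL's default 8 kB page size.
tuples = 2_200_000            # ~2.2M rows in rec81
tuples_per_page = 25          # guesstimate from the quote (no NULLs)
expected_pages = tuples // tuples_per_page
actual_pages = 120_316        # from cost=0.00..120316.30
bloat_ratio = actual_pages / expected_pages

print(expected_pages)         # 88000
print(round(bloat_ratio, 2))  # 1.37 -- ~37% more pages than expected
```

If the ratio stays well above 1 after a VACUUM FULL, the rows are simply wider than the guesstimate assumed.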

> From what I've seen I think that the planner is right to choose a seq
> scan.  226 seconds for reading 120K pages (~ 1GB) is not very
> impressive, though.  What kind of disk do you have?

IDE, Samsung, 7200rpm

>  Is your disk heavily fragmented?

It shouldn't be.

> Did you enable DMA?

No, should I?

>  What else was running on
> your machine while you did that VACUUM ANALYZE?

Mozilla, maybe xterm with vi...

>>shared_buffers = 128
>
> It won't help much for this query we are discussing, but I would
> recommend setting shared_buffers to something in the range [1000,
> 4000].
>
> And one of my favorites: effective_cache_size = 40000
>

I will have to increase /proc/sys/kernel/shmmax for that, won't I?
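A back-of-the-envelope sizing for that (a sketch; the 8 kB buffer size is the default build setting, and shmmax is specified in bytes):

```python
# Shared memory claimed by shared_buffers alone, assuming the default
# 8 kB buffer size. kernel.shmmax must exceed this plus the rest of
# PostgreSQL's shared structures, so leave some headroom.
shared_buffers = 4000
shmmax_floor = shared_buffers * 8 * 1024

print(shmmax_floor)   # 32768000 bytes, i.e. ~31 MB
```

Note that effective_cache_size is only a planner hint about the OS cache; it allocates nothing and does not affect shmmax.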

Thanks again!

Moritz
