On Tuesday 16 December 2003 17:06, Nick Fankhauser - Doxpop wrote:
Hi-
I'm trying to optimize a query that I *think* should run very fast. Essentially, I'm joining two tables that have very selective indexes and constraining the query on an indexed field. (There's a third small lookup table
It seems that your basic problem is that you're fetching lots of rows
from two big ol' tables.
It doesn't seem to me that there would be a substantially better plan
for this query with your tables as they stand.
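One way to sanity-check a conclusion like this is to compare the planner's row estimates against what actually comes back, since a plan is only "good" relative to how many rows it must fetch. A minimal sketch; the table and column names here are placeholders, not from the thread:

```sql
-- Hypothetical example: EXPLAIN ANALYZE runs the query and reports
-- estimated vs. actual row counts for each plan node. If the actual
-- counts are large, no plan will make the query cheap.
EXPLAIN ANALYZE
SELECT *
FROM big_table_a a
JOIN big_table_b b ON a.id = b.a_id
WHERE a.indexed_col = 'some_value';
```

If the estimates and actuals agree and the row counts are simply large, the bottleneck is data volume rather than plan choice, which matches the diagnosis above.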
That's more or less the conclusion I had come to. I was just hoping someone else
On Dec 17, 2003, at 11:57 AM, Nick Fankhauser wrote:
Hi-
After having done my best to squeeze better performance out of our application by tuning within our existing resources, I'm falling back on adding memory as a short-term solution while we get creative for a long-term fix. I'm curious
If you have 3 1.5GB tables then you might as well go for 4GB while you're at
it. Make sure you've got a bigmem kernel either running or available, and
boost effective_cache_size by whatever amount you increase the RAM by. We
run a Quad Xeon/4GB server on Redhat 7.3 and it's solid as a rock.
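To make the effective_cache_size advice concrete: on 7.x the setting is expressed in 8 kB disk pages, not bytes, so it has to be converted by hand. An illustrative fragment, assuming roughly 3 GB of the 4 GB ends up available to the OS buffer cache (the exact figure depends on what else the box runs):

```
# postgresql.conf -- illustrative values only.
# effective_cache_size is measured in 8 kB pages on 7.x,
# so 3 GB / 8 kB = 393216 pages.
effective_cache_size = 393216
```

This setting doesn't allocate anything itself; it just tells the planner how much of the tables it may assume are already cached, which tips it toward index scans on a big-RAM box.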
Dennis, Shridhar, and Neil,
Thanks for your input. Here are my responses:
I ran VACUUM FULL on the table in question. Although that did reduce Pages
and UnUsed, the SELECT * query is still much slower on this installation
than in the new, restored one.
On Thursday 18 December 2003 09:24, David Shadovitz wrote:
Old server:
# VACUUM FULL abc;
VACUUM
# VACUUM VERBOSE abc;
NOTICE: --Relation abc--
NOTICE: Pages 1526: Changed 0, Empty 0; Tup 91528; Vac 0, Keep 0, UnUsed 32. Total CPU 0.07s/0.52u sec elapsed 0.60 sec.
VACUUM
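The page count above translates directly into on-disk size: at 8 kB per page, 1526 pages is about 12 MB for the 91528 tuples. The same figures can also be read from the system catalog rather than from VACUUM VERBOSE output; a sketch:

```sql
-- relpages and reltuples are statistics maintained by VACUUM/ANALYZE;
-- multiplying relpages by the 8 kB page size gives an approximate
-- on-disk size for the table.
SELECT relname, relpages, reltuples,
       relpages * 8192 AS approx_bytes
FROM pg_class
WHERE relname = 'abc';
```

Comparing these numbers between the old and the restored server shows whether the slowdown is explained by table bloat or has to be blamed on something else.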
New server: