Rajesh Kumar Mallah wrote:


The problem was solved by reloading the table.
The query now takes only 3 seconds. But that is
not a real solution.

If dropping/recreating the table improves things, then we can reasonably assume that the table is pretty active with updates/inserts. Correct?

The problem is that such phenomena obscure the
judgement we use when optimising queries and the database.

Lots of phenomena obscure that ...

If a query runs slowly, we really can't tell whether the problem
is the query itself, the hardware, or dead rows.

I already did a VACUUM FULL on the table, but it still did not
have that effect on performance.
In fact, the last figures were from after doing a VACUUM FULL.

If the data gets too fragmented, a plain vacuum may not be enough. Also, read up on the recommendations _against_ VACUUM FULL (the usual advice is to rely on plain VACUUM for routine maintenance). VACUUM FULL compacts the table, which can actually hurt performance in some workloads. A regular VACUUM just marks dead space for reuse and may leave unused space lying around, but over time this should strike a balance between space usage and vacuuming. See the docs; they describe this much better than I can.
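A quick way to check how much dead space a table is carrying is the statistics views (a sketch; `pg_stat_user_tables` and its `n_dead_tup` column exist on reasonably modern PostgreSQL releases, and `rfis` is just the example table from later in this thread):

```sql
-- Compare live vs. dead tuples for one table.
SELECT relname, n_live_tup, n_dead_tup
FROM pg_stat_user_tables
WHERE relname = 'rfis';

-- A plain VACUUM marks dead space for reuse without an exclusive lock;
-- VACUUM FULL rewrites the table and locks out concurrent access.
VACUUM ANALYZE rfis;
```

If `n_dead_tup` is a large fraction of `n_live_tup` right after a vacuum, the table is being updated faster than routine vacuuming can keep up.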

Is there a more elegant solution to this problem?

As a guess, look into CLUSTER (a PostgreSQL SQL command). CLUSTER basically recreates the table while ordering rows based on an index (this might benefit you in other ways as well). Don't forget to ANALYZE after clustering. If the problem is caused by frequent updates/inserts, you may find that re-clustering the table on a regular schedule is worthwhile.

Be warned, this suggestion is based on an educated guess, I make no
guarantees that it will help your problem.  Read the docs on cluster
and come to your own conclusions.
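In concrete terms, the suggestion above looks something like this (a sketch only; the index name `rfis_sender_uid_idx` is hypothetical, and the exact CLUSTER syntax varies by PostgreSQL version, so check the docs for yours):

```sql
-- Rewrite the table in the physical order of the named index,
-- then refresh planner statistics.  Older releases use the form
-- "CLUSTER indexname ON tablename" instead of "USING".
CLUSTER rfis USING rfis_sender_uid_idx;
ANALYZE rfis;
```

Note that CLUSTER takes an exclusive lock on the table while it runs, so schedule it for a quiet period.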

Regds Mallah.

Richard Huxton wrote:

On Thursday 15 April 2004 08:10, Rajesh Kumar Mallah wrote:

The problem is that I want to know if I need a hardware upgrade
at the moment.

E.g., I have another table, rfis, which contains ~0.6 million records.

SELECT count(*) from rfis where sender_uid > 0;

Time: 117560.635 ms

That is approximately 4804 records per second. Is that acceptable
performance on the hardware below:

Disks: Ultra160, 10K RPM, 18 GB
Processor: 2 × 2.0 GHz Xeon
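To separate query cost from hardware cost, it helps to look at the actual plan and timing rather than just the wall-clock total (a sketch, using the table and column from the query above):

```sql
-- Shows whether the planner uses a sequential scan or an index,
-- and how long each plan node actually takes.
EXPLAIN ANALYZE
SELECT count(*) FROM rfis WHERE sender_uid > 0;
```

A sequential scan over most of the table here is expected, since count(*) with a broad filter has to visit the bulk of the rows; the interesting part is how the runtime compares on a second, cached execution.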

Hmm - doesn't seem good, does it? If you run it again, is it much faster (since the data should be cached then)? What does "vmstat 10" show while you're running the query?

One thing you should have done is read the performance tuning guide at:
The default values are very conservative, and you will need to change them.
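As an illustration of the kind of settings the tuning guide covers (the values below are assumptions for a machine of this class, not recommendations; names, units, and defaults vary by PostgreSQL version, and on older releases these are specified in 8 kB pages rather than memory units):

```
# postgresql.conf -- illustrative starting points only
shared_buffers = 128MB         # default is very small; raise for a dedicated server
effective_cache_size = 1GB     # rough estimate of OS file cache available to Postgres
```

After changing these, restart the server and re-test the query both cold and warm.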

What kind of upgrades should be put on the server for it to become
reasonably fast?
If you've only got one disk, then add a second disk for the OS/logging. Difficult to say more without knowing the number of users, level of activity, etc.

-- Bill Moran Potential Technologies http://www.potentialtech.com

---------------------------(end of broadcast)--------------------------- TIP 9: the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
