I recently migrated from MySQL. The database size in MySQL was 1.4 GB (it is a 
static database). It generated a dump file (.sql) of 8 GB, and it took two days 
to import the whole thing into Postgres. After all that, the response from 
Postgres is a disaster: it took 40 seconds to run 
select count(logrecno) from sf10001; which returned 197569, and it takes 
forever to display the table. How can I optimize the database so that I can 
expect faster access to the data?
 
Each table has 70 columns x 197569 rows (static data), and I have 40 tables 
like that. Everything is static.
 
System configuration: P4 2.8 GHz, 512 MB RAM; OS: Windows XP; Postgres version: 8.0.
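In case it helps to see what I have tried, this is the kind of first-pass 
tuning I have been reading about -- only a sketch, and the table and column 
names (sf10001, logrecno) are from my own schema:

```sql
-- Reclaim dead space and refresh the planner's statistics after the bulk load
VACUUM ANALYZE sf10001;

-- A count over a column always scans the whole table in Postgres, so an
-- index will not speed up a bare count; it does, however, speed up lookups
-- and range queries on that column
CREATE INDEX sf10001_logrecno_idx ON sf10001 (logrecno);

-- Fetch the table page by page instead of trying to display all rows at once
SELECT * FROM sf10001 ORDER BY logrecno LIMIT 100 OFFSET 0;
```

I do not know whether this is the right approach for a read-only database of 
this size on 512 MB of RAM, so any corrections are welcome.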
 
thanks a million in advance,
shashi.

---------------------------(end of broadcast)---------------------------
TIP 4: Don't 'kill -9' the postmaster
