6,000 inserts, each in its own transaction, will take a very long time.
Group your inserts into one transaction and it will be much faster (maybe 1-2 minutes).
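Something along these lines, for example (just a sketch, using the resultats table and sample values you describe below):

    BEGIN;
    INSERT INTO resultats (numbil, numpara, mesure, deviation) VALUES (200000, 1, 500, 3.5);
    INSERT INTO resultats (numbil, numpara, mesure, deviation) VALUES (200000, 2, 852, 4.2);
    -- ... the rest of the batch ...
    COMMIT;

A single COMMIT at the end means one WAL flush for the whole batch instead of one per row.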
If you have your program generate a tab-delimited text file and load it with COPY, you should be down to a few seconds.
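For example (again only a sketch; the path /tmp/resultats.txt is made up, tab is COPY's default delimiter, and with COPY FROM a file the path must be readable by the server process - from psql you can use \copy instead to read a client-side file):

    COPY resultats (numbil, numpara, mesure, deviation) FROM '/tmp/resultats.txt';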



On Tue, 21 Sep 2004 13:27:43 +0200, Alain Reymond <[EMAIL PROTECTED]> wrote:


Good afternoon,

I created a database with Postgres 7.3.4 under Linux RedHat 7.3 on a
Dell PowerEdge server.

One of the tables is
resultats(numbil, numpara, mesure, deviation)
with an index on numbil.

Each select on numbil returns up to 60 rows (that is, 60 rows for
one numbil with 60 different numpara values), for example:
(200000,1,500,3.5)
(200000,2,852,4.2)
(200000,12,325,2.8)
(200001,1,750,1.5)
(200001,2,325,-1.5)
(200001,8,328,1.2)
etc..

This table now contains more than 6,500,000 rows and grows by
6,000 rows a day, approximately 1,250,000 rows a year. So I
have 5 years of data online.
Now, inserting 6,000 rows takes a very long time, up to one hour...
I tried to insert 100,000 rows yesterday evening and it was not done
after 8 hours.

Do you have any idea how I can improve the speed - apart from splitting
the table every 2 or 3 years, which defeats the whole aim of a database!

I thank you for your suggestions.

Regards.

Alain Reymond
CEIA
Bd Saint-Michel 119
1040 Bruxelles
Tel: +32 2 736 04 58
Fax: +32 2 736 58 02
[EMAIL PROTECTED]
PGP key at http://pgpkeys.mit.edu:11371


