Rodrigo,
> 3. My transaction log configuration is: checkpoint_segments = 3 and
> checkpoint_timeout = 300, and my transaction logs are on the same disk.
Well, you need to move your transaction logs to another disk, and increase
checkpoint_segments to a larger number ... like 128, which is about 1GB
(you'll need the disk space for them).
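As a sketch of the kind of change Greg is describing (the values below are illustrative assumptions, not tested recommendations for Rodrigo's hardware):

```
# postgresql.conf -- illustrative values only; tune for your workload.
# Each WAL segment is 16 MB in a default build, so a larger
# checkpoint_segments trades disk space for fewer (and cheaper)
# checkpoints during a bulk load.
checkpoint_segments = 128
checkpoint_timeout  = 900     # seconds between forced checkpoints
```

Moving the logs themselves is usually done by stopping the server and relocating the pg_xlog directory to the other disk (e.g. with a symlink back to the data directory).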
HTH
Greg WIlliamson
DBA
GlobeXplorer LLC
-----Original Message-----
From: Rodrigo Carvalhaes [mailto:[EMAIL PROTECTED]
Sent: Sun 12/5/2004 11:52 AM
To: Christopher Browne
Cc: [EMAIL PROTECTED]
Subject: Re: [PERFORM] Improve BULK insertion
Hi!
1. I am doing the inserts using pg_restore. The dump was created using
pg_dump in the standard format (COPY statements).
2. See below the table schema. There are only 7 indexes.
3. My transaction log configuration is: checkpoint_segments = 3 and
checkpoint_timeout = 300, and my transaction logs are on the same disk.
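For reference, a load like the one Rodrigo describes can be reproduced roughly as follows (the database and file names here are made up for illustration):

```
# Custom-format dump, restorable with pg_restore:
pg_dump -Fc mydb -f mydb.dump
pg_restore -d mydb mydb.dump

# A plain-text dump (the default) also loads via COPY,
# but is replayed with psql rather than pg_restore:
pg_dump mydb > mydb.sql
psql -d mydb -f mydb.sql
```

Either way the table data travels as COPY statements, so the per-row overhead is not the INSERT path itself.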
Rodrigo,
> I need to insert 500,000 records into a table frequently. It's a bulk
> insertion from my application.
> I am getting very poor performance. PostgreSQL inserts very fast until
> tuple 200,000, and after that the insertion starts to be really slow.
> I am seeing on the log and there is a lo
I do mass inserts daily into PG. I drop all the indexes except my primary
key and then use the COPY FROM command. This usually takes less than 30
seconds. I spend more time waiting for indexes to recreate.

Patrick Hatcher
Macys.Com

[EMAIL PROTECTED] wrote:
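Patrick's drop-and-recreate pattern might look like the following; the table, index, column, and file names here are hypothetical, and COPY FROM a server-side file requires superuser privileges:

```sql
BEGIN;
-- Keep the primary key; drop only the secondary indexes before loading.
DROP INDEX idx_orders_customer;
DROP INDEX idx_orders_date;

-- One COPY instead of 500,000 individual INSERTs.
COPY orders FROM '/tmp/orders.dat';

-- Rebuild the indexes once, over the full data set.
CREATE INDEX idx_orders_customer ON orders (customer_id);
CREATE INDEX idx_orders_date ON orders (order_date);
COMMIT;
```

Rebuilding an index in one pass over the finished table is generally much cheaper than maintaining it incrementally for every inserted row, which is why the index recreation dominates Patrick's load time.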
In the last exciting episode, [EMAIL PROTECTED] (Grupos) wrote:
> Hi !
>
> I need to insert 500,000 records into a table frequently. It's a bulk
> insertion from my application.
> I am getting very poor performance. PostgreSQL inserts very fast until
> tuple 200,000, and after that the insertion starts to be really slow.