From: [EMAIL PROTECTED]
Sent: Tuesday, July 26, 2005 7:12 AM
To: Chris Isaacson
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] COPY insert performance
Hi Chris,
Have you considered breaking the data into multiple chunks and COPYing
each concurrently?
Also, have you ensured that [...]

[...] shared_buffers would take care of this. I'll
increase work_mem to 512MB and rerun my test. I have 1G of RAM, which
is less than we'll be running in production (likely 2G).

-Chris
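For reference, the settings under discussion live in postgresql.conf; the values below are illustrative (only the 512MB work_mem figure comes from this thread), not a recommendation:

```ini
# postgresql.conf sketch for the experiment above -- illustrative values only.
shared_buffers = 50000    # hypothetical; in 8.x this is counted in 8KB pages
work_mem = 524288         # 512MB (the setting is in KB), as planned above
# Note: work_mem governs sort/hash memory, so it has little effect on COPY
# itself; bulk loads on 8.x are usually more sensitive to WAL settings
# such as checkpoint_segments.
```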
-----Original Message-----
From: John A Meinel [mailto:[EMAIL PROTECTED]]
Sent: Monday, July 25, 2005 6:09 PM
To: Chris Isaacson; Postgresql Performance
Subject: Re: [PERFORM] COPY insert performance
I need COPY via libpqxx to insert millions of rows into two tables. One
table has roughly half as many rows and requires half the storage of the
other. In production, the largest table will grow by ~30M rows/day. To
test the COPY performance I split my transactions into 10,000 rows. I [...]