Re: [PERFORM] COPY insert performance

2005-07-26 Thread Gavin Sherry
Hi Chris,

Have you considered breaking the data into multiple chunks and COPYing
each concurrently?

Also, have you ensured that your table isn't storing OIDs?
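
For example, something along these lines should tell you (a rough sketch;
I'm guessing the table names from your index names):

  SELECT relname, relhasoids
  FROM pg_class
  WHERE relname IN ('order_main', 'order_transition');

  -- if relhasoids is true, 8.0 lets you drop the OID column in place:
  ALTER TABLE order_main SET WITHOUT OIDS;
  ALTER TABLE order_transition SET WITHOUT OIDS;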

On Mon, 25 Jul 2005, Chris Isaacson wrote:

 #---
 # RESOURCE USAGE (except WAL)
 #---

 shared_buffers = 65536  # min 16, at least max_connections*2, 8KB each

shared_buffers that high has been shown to hurt performance. Try 12000.

 wal_buffers = 64  # min 4, 8KB each

Increasing wal_buffers can also have an effect on performance.

Thanks,

Gavin


Re: [PERFORM] COPY insert performance

2005-07-26 Thread Chris Isaacson
John,

(FYI: got a failed to deliver to [EMAIL PROTECTED])

I do not have any foreign keys, and I need the indexes in place during the
insert/copy because in production a few queries that depend heavily on the
indexes will be issued.  These queries will be infrequent, but must be
fast when issued.

I am using InnoDB with MySQL which appears to enforce true transaction
support.  (http://dev.mysql.com/doc/mysql/en/innodb-overview.html)  If
not, how is InnoDB cheating?

Sorry for the confusion, but pg_xlog is currently on a dedicated drive
(10K SCSI, see below).  Would I realize further gains if I had a third
drive and put the indexes on that drive?

I've played with checkpoint_segments.  I noticed an enormous
improvement increasing from the default to 40, but negligible improvement
thereafter.  Do you have a recommendation for a value?

My bg_writer adjustments were a last-ditch effort.  I found your advice
correct and realized no gain.  I have not tested under a querying load,
which is a good next step.  I had not thought of the comparative
degradation of MySQL vs. PostgreSQL.

Thanks for the tip on the RAM usage by indexes.  I was under the
incorrect assumption that shared_buffers would take care of this.  I'll
increase work_mem to 512MB and rerun my test.  I have 1G of RAM, which
is less than we'll be running in production (likely 2G).

-Chris

-Original Message-
From: John A Meinel [mailto:[EMAIL PROTECTED] 
Sent: Monday, July 25, 2005 6:09 PM
To: Chris Isaacson; Postgresql Performance
Subject: Re: [PERFORM] COPY insert performance


Chris Isaacson wrote:
 I need COPY via libpqxx to insert millions of rows into two tables.  
 One table has roughly half as many rows and requires half the storage.

 In production, the largest table will grow by ~30M rows/day.  To test 
 the COPY performance I split my transactions into 10,000 rows.  I 
 insert roughly 5000 rows into table A for every 10,000 rows into table B.

 Table A has one unique index:

 order_main_pk UNIQUE, btree (cl_ord_id)

 Table B has 1 unique index and 2 non-unique indexes:

 order_transition_pk UNIQUE, btree (collating_seq)
 order_transition_ak2 btree (orig_cl_ord_id)
 order_transition_ak3 btree (exec_id)

Do you have any foreign key references?
If you are creating a table for the first time (or loading a large
fraction of the data), it is common to drop the indexes and foreign keys
first, and then insert/copy, and then re-create them afterwards.

Is InnoDB the backend with referential integrity, and true transaction
support? I believe the default backend does not support either (so it is
cheating to give you speed, which may be just fine for your needs,
especially since you are willing to run fsync=false).

I think moving pg_xlog to a dedicated drive (set of drives) could help
your performance, as would increasing checkpoint_segments.

I don't know if you gain much by changing the bg_writer settings; if you
are streaming everything in at once, you probably want to have it
written out right away. My understanding is that the bg_writer settings are
for the case where you have mixed reads and writes going on at the same
time, and you want to make sure that the reads have time to execute (i.e.
the writes are not saturating your I/O).

Also, is any of this tested under load, i.e. having a separate process
issue queries while you are loading in data? Traditionally MySQL is faster
with a single process inserting/querying for data, but once you have
multiple processes hitting it at the same time, its performance
degrades much faster than postgres.

You also seem to be giving MySQL 512M of RAM to work with, while only
giving 2M/200M to postgres. (Re)creating indexes uses
maintenance_work_mem, but updating indexes could easily use work_mem.
You may be RAM starved.

John
=:-



 My testing environment is as follows:
 -Postgresql 8.0.1
 -libpqxx 2.5.0
 -Linux 2.6.11.4-21.7-smp x86_64
 -Dual Opteron 246
 -System disk (postgres data resides on this SCSI disk) -  Seagate
 (ST373453LC) - 15K, 73 GB
 (http://www.seagate.com/cda/products/discsales/marketing/detail/0,1081
 ,549,00.html)
 -2nd logical disk - 10K, 36GB IBM SCSI (IC35L036UCDY10-0) - WAL resides
 on this disk
 -NO RAID

 *PostgreSQL*
 Here are the results of copying in 10M rows as fast as possible:
 (10K/transaction)
 Total Time:          1129.556 s
 Rows/sec:            9899.922
 Transactions > 1.2s: 225
 Transactions > 1.5s: 77
 Transactions > 2.0s: 4
 Max Transaction:     2.325s

 *MySQL*
 I ran a similar test with MySQL 4.1.10a (InnoDB) which produced these
 results: (I used MySQL's INSERT INTO x VALUES
 (1,2,3)(4,5,6)(...,...,...) syntax) (10K/transaction)
 Total Time:          860.000 s
 Rows/sec:            11627.91
 Transactions > 1.2s: 0
 Transactions > 1.5s: 0
 Transactions > 2.0s: 0
 Max Transaction:     1.175s

 Considering the configurations shown below, can anyone offer advice to
 close the 15% gap and the much worse variability I'm experiencing?
 Thanks

 My *postgresql.conf* has the following

Re: [PERFORM] COPY insert performance

2005-07-26 Thread Chris Isaacson
I need the chunks for each table COPYed within the same transaction,
which is why I'm not COPYing concurrently via multiple
threads/processes.  I will experiment w/o OIDs and with decreasing
shared_buffers and wal_buffers.
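
For reference, each batch transaction looks roughly like this (a sketch
only: column lists are omitted and the table names are inferred from the
index names):

  BEGIN;
  COPY order_main FROM STDIN;
  -- roughly 5,000 data rows streamed here, terminated by \.
  COPY order_transition FROM STDIN;
  -- roughly 10,000 data rows streamed here, terminated by \.
  COMMIT;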

Thanks,
Chris

-Original Message-
From: Gavin Sherry [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, July 26, 2005 7:12 AM
To: Chris Isaacson
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] COPY insert performance


Hi Chris,

Have you considered breaking the data into multiple chunks and COPYing
each concurrently?

Also, have you ensured that your table isn't storing OIDs?

On Mon, 25 Jul 2005, Chris Isaacson wrote:

 #---
 # RESOURCE USAGE (except WAL)
 #---

 shared_buffers = 65536  # min 16, at least max_connections*2, 8KB each

shared_buffers that high has been shown to hurt performance. Try 12000.

 wal_buffers = 64  # min 4, 8KB each

Increasing wal_buffers can also have an effect on performance.

Thanks,

Gavin



Re: [PERFORM] COPY insert performance

2005-07-26 Thread Jeff Trout


On Jul 26, 2005, at 8:15 AM, Chris Isaacson wrote:


I am using InnoDB with MySQL which appears to enforce true transaction
support.  (http://dev.mysql.com/doc/mysql/en/innodb-overview.html)  If
not, how is InnoDB cheating?



Are you sure your tables are InnoDB?
Chances are high that they are MyISAM unless you explicitly stated
TYPE = InnoDB when creating them.


Look at SHOW TABLE STATUS output to verify.
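
For instance (table name guessed from the index names in your earlier mail):

  SHOW TABLE STATUS LIKE 'order_transition';
  -- the Engine/Type column should say InnoDB; if it says MyISAM, convert it:
  ALTER TABLE order_transition TYPE = InnoDB;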



I've played with checkpoint_segments.  I noticed an enormous
improvement increasing from the default to 40, but negligible improvement
thereafter.  Do you have a recommendation for a value?


There's been a thread on -hackers recently about checkpoint issues;
in a nutshell there isn't much to do.  But I'd say give Bizgres a
try if you're going to be continually loading huge amounts of data.


--
Jeff Trout [EMAIL PROTECTED]
http://www.jefftrout.com/
http://www.stuarthamm.net/





Re: [PERFORM] COPY insert performance

2005-07-25 Thread John A Meinel
Chris Isaacson wrote:
 I need COPY via libpqxx to insert millions of rows into two tables.  One
 table has roughly half as many rows and requires half the storage.  In
 production, the largest table will grow by ~30M rows/day.  To test the
 COPY performance I split my transactions into 10,000 rows.  I insert
 roughly 5000 rows into table A for every 10,000 rows into table B.

 Table A has one unique index:

 order_main_pk UNIQUE, btree (cl_ord_id)

 Table B has 1 unique index and 2 non-unique indexes:

 order_transition_pk UNIQUE, btree (collating_seq)
 order_transition_ak2 btree (orig_cl_ord_id)
 order_transition_ak3 btree (exec_id)

Do you have any foreign key references?
If you are creating a table for the first time (or loading a large
fraction of the data), it is common to drop the indexes and foreign keys
first, and then insert/copy, and then re-create them afterwards.
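
As a sketch (index names taken from your listing above; the table name and
the file path are placeholders):

  -- drop the secondary indexes before the bulk load
  DROP INDEX order_transition_ak2;
  DROP INDEX order_transition_ak3;

  -- run the COPY batches
  COPY order_transition FROM '/path/to/chunk.dat';

  -- re-create the indexes in one pass afterwards (uses maintenance_work_mem)
  CREATE INDEX order_transition_ak2 ON order_transition (orig_cl_ord_id);
  CREATE INDEX order_transition_ak3 ON order_transition (exec_id);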

Is InnoDB the backend with referential integrity, and true transaction
support? I believe the default backend does not support either (so it is
cheating to give you speed, which may be just fine for your needs,
especially since you are willing to run fsync=false).

I think moving pg_xlog to a dedicated drive (set of drives) could help
your performance, as would increasing checkpoint_segments.

I don't know if you gain much by changing the bg_writer settings; if you
are streaming everything in at once, you probably want to have it
written out right away. My understanding is that the bg_writer settings are
for the case where you have mixed reads and writes going on at the same
time, and you want to make sure that the reads have time to execute (i.e.
the writes are not saturating your I/O).

Also, is any of this tested under load, i.e. having a separate process
issue queries while you are loading in data? Traditionally MySQL is faster
with a single process inserting/querying for data, but once you have
multiple processes hitting it at the same time, its performance
degrades much faster than postgres.

You also seem to be giving MySQL 512M of RAM to work with, while only
giving 2M/200M to postgres. (Re)creating indexes uses
maintenance_work_mem, but updating indexes could easily use work_mem.
You may be RAM starved.
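
If you don't want to edit postgresql.conf just for the test, you could bump
these per session before the load (8.0 reads the values in KB; the numbers
below are only examples):

  SET maintenance_work_mem = 524288;  -- 512 MB for CREATE INDEX in this session
  SET work_mem = 262144;              -- 256 MB available to sorts in this session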

John
=:-



 My testing environment is as follows:
 -Postgresql 8.0.1
 -libpqxx 2.5.0
 -Linux 2.6.11.4-21.7-smp x86_64
 -Dual Opteron 246
 -System disk (postgres data resides on this SCSI disk) -  Seagate
 (ST373453LC) - 15K, 73 GB
 (http://www.seagate.com/cda/products/discsales/marketing/detail/0,1081,549,00.html)
 -2nd logical disk - 10K, 36GB IBM SCSI (IC35L036UCDY10-0) - WAL resides
 on this disk
 -NO RAID

 *PostgreSQL*
 Here are the results of copying in 10M rows as fast as possible:
 (10K/transaction)
 Total Time:          1129.556 s
 Rows/sec:            9899.922
 Transactions > 1.2s: 225
 Transactions > 1.5s: 77
 Transactions > 2.0s: 4
 Max Transaction:     2.325s

 *MySQL*
 I ran a similar test with MySQL 4.1.10a (InnoDB) which produced these
 results: (I used MySQL's INSERT INTO x VALUES
 (1,2,3)(4,5,6)(...,...,...) syntax) (10K/transaction)
 Total Time:          860.000 s
 Rows/sec:            11627.91
 Transactions > 1.2s: 0
 Transactions > 1.5s: 0
 Transactions > 2.0s: 0
 Max Transaction:     1.175s

 Considering the configurations shown below, can anyone offer advice to
 close the 15% gap and the much worse variability I'm experiencing?  Thanks

 My *postgresql.conf* has the following non-default values:
 # -
 # PostgreSQL configuration file
 # -
 listen_addresses = '*' # what IP interface(s) to listen on;
 max_connections = 100
 #---
 # RESOURCE USAGE (except WAL)
 #---
 shared_buffers = 65536  # min 16, at least max_connections*2, 8KB each
 work_mem = 2048   # min 64, size in KB
 maintenance_work_mem = 204800 # min 1024, size in KB
 max_fsm_pages = 225  # min max_fsm_relations*16, 6 bytes each
 bgwriter_delay = 200  # 10-10000 milliseconds between rounds
 bgwriter_percent = 10  # 0-100% of dirty buffers in each round
 bgwriter_maxpages = 1000 # 0-1000 buffers max per round
 #---
 # WRITE AHEAD LOG
 #---
 fsync = false   # turns forced synchronization on or off
 wal_buffers = 64  # min 4, 8KB each
 checkpoint_segments = 40 # in logfile segments, min 1, 16MB each
 checkpoint_timeout = 600 # range 30-3600, in seconds
 #---
 # QUERY TUNING
 #---
 effective_cache_size = 65536 # typically 8KB each
 random_page_cost = 2  # units are one sequential page fetch cost
 #---
 # ERROR 

Re: [PERFORM] COPY insert performance

2005-07-25 Thread Luke Lonergan

Chris,

You can try the Bizgres distribution of Postgres (based on version 8.0.3); its COPY support is 30% faster, as reported by OSDL (without indexes). The gain comes from replacing the very slow parsing inside the COPY command with micro-optimized parsing logic. There is a patch pending for the development version of Postgres which implements the same code, but you can use Bizgres and get it now instead of waiting for Postgres 8.1 to come out. Also, Bizgres is QA tested with the enhanced features.

Bizgres is a free / open source distribution of Postgres for Business Intelligence / Data Warehousing. 

Bizgres currently features postgres 8.0.3 plus these patches:
Bypass WAL when performing CREATE TABLE AS SELECT
COPY is between 30% and 90% faster on machines with fast I/O
Enhanced support for data partitioning with partition elimination optimization
Bitmap Scan support for multiple index use in queries and better low cardinality column performance
Improved optimization of queries with LIMIT

See: http://www.bizgres.org for more.

- Luke


On 7/25/05 3:32 PM, Chris Isaacson [EMAIL PROTECTED] wrote:

I need COPY via libpqxx to insert millions of rows into two tables. One table has roughly half as many rows and requires half the storage. In production, the largest table will grow by ~30M rows/day. To test the COPY performance I split my transactions into 10,000 rows. I insert roughly 5000 rows into table A for every 10,000 rows into table B.
 
Table A has one unique index:
 
order_main_pk UNIQUE, btree (cl_ord_id)
 
Table B has 1 unique index and 2 non-unique indexes:
 
order_transition_pk UNIQUE, btree (collating_seq)
order_transition_ak2 btree (orig_cl_ord_id)
order_transition_ak3 btree (exec_id)
 
My testing environment is as follows:
-Postgresql 8.0.1
-libpqxx 2.5.0
-Linux 2.6.11.4-21.7-smp x86_64 
-Dual Opteron 246
-System disk (postgres data resides on this SCSI disk) -  Seagate (ST373453LC) - 15K, 73 GB (http://www.seagate.com/cda/products/discsales/marketing/detail/0,1081,549,00.html)
-2nd logical disk - 10K, 36GB IBM SCSI (IC35L036UCDY10-0) - WAL resides on this disk
-NO RAID

PostgreSQL
Here are the results of copying in 10M rows as fast as possible: (10K/transaction)
Total Time: 1129.556 s
Rows/sec: 9899.922
Transactions > 1.2s: 225
Transactions > 1.5s: 77
Transactions > 2.0s: 4
Max Transaction: 2.325s

MySQL
I ran a similar test with MySQL 4.1.10a (InnoDB) which produced these results: (I used MySQL's INSERT INTO x VALUES (1,2,3)(4,5,6)(...,...,...) syntax) (10K/transaction)
Total Time: 860.000 s
Rows/sec: 11627.91
Transactions > 1.2s: 0
Transactions > 1.5s: 0
Transactions > 2.0s: 0
Max Transaction: 1.175s

Considering the configurations shown below, can anyone offer advice to close the 15% gap and the much worse variability I'm experiencing? Thanks
 
My postgresql.conf has the following non-default values:
# -
# PostgreSQL configuration file
# -
listen_addresses = '*' # what IP interface(s) to listen on; 
max_connections = 100
#---
# RESOURCE USAGE (except WAL)
#---
shared_buffers = 65536 # min 16, at least max_connections*2, 8KB each
work_mem = 2048 # min 64, size in KB
maintenance_work_mem = 204800 # min 1024, size in KB
max_fsm_pages = 225 # min max_fsm_relations*16, 6 bytes each
bgwriter_delay = 200 # 10-10000 milliseconds between rounds
bgwriter_percent = 10 # 0-100% of dirty buffers in each round
bgwriter_maxpages = 1000 # 0-1000 buffers max per round
#---
# WRITE AHEAD LOG
#---
fsync = false # turns forced synchronization on or off
wal_buffers = 64 # min 4, 8KB each
checkpoint_segments = 40 # in logfile segments, min 1, 16MB each
checkpoint_timeout = 600 # range 30-3600, in seconds
#---
# QUERY TUNING
#---
effective_cache_size = 65536 # typically 8KB each
random_page_cost = 2 # units are one sequential page fetch cost
#---
# ERROR REPORTING AND LOGGING
#--- 
log_min_duration_statement = 250 # -1 is disabled, in milliseconds.
log_connections = true
log_disconnections = true
log_duration = true
log_line_prefix = '%r%u%p%t%d%%' # e.g. '%u%%%d ' 
# %u=user name %d=database name
# %r=remote host and port
# %p=PID %t=timestamp %i=command tag
# %c=session id %l=session line number
# %s=session start timestamp %x=transaction id
# %q=stop here in non-session processes
# %%='%'
log_statement = 'none' # none, mod, ddl, all