Re: [PERFORM] Very poor performance loading 100M of sql data using copy

2008-04-29 Thread John Rouillard
On Tue, Apr 29, 2008 at 05:19:59AM +0930, Shane Ambler wrote:
 John Rouillard wrote:
 
 We can't do this as we are backfilling a couple of months of data 
 into tables with existing data.
 
 Is this a one off data loading of historic data or an ongoing thing?

Yes, it's a one-off bulk data load of many days of data. The daily
loads will also take 3 hours, but that is ok since we only do those
once a day, so we have 21 hours of slack in the schedule 8-).

 The only indexes we have to drop are the ones on the primary keys
  (there is one non-primary key index in the database as well).
 
 If this amount of data importing is ongoing then one thought I would try
 is partitioning (this could be worthwhile anyway with the amount of data
 you appear to have).
 Create an inherited table for the month being imported, load the data 
 into it, then add the check constraints, indexes, and modify the 
 rules/triggers to handle the inserts to the parent table.

Hmm, interesting idea, worth considering if we have to do this again
(I hope not). 

Thanks for the reply.

-- 
-- rouilj

John Rouillard
System Administrator
Renesys Corporation
603-244-9084 (cell)
603-643-9300 x 111

-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Very poor performance loading 100M of sql data using copy

2008-04-29 Thread John Rouillard
On Mon, Apr 28, 2008 at 02:16:02PM -0400, Greg Smith wrote:
 On Mon, 28 Apr 2008, John Rouillard wrote:
 
2008-04-21 11:36:43 UTC @(2761)i: LOG: checkpoints ... (27 seconds apart)
  so I changed:
   checkpoint_segments = 30
   checkpoint_warning = 150
 
 That's good, but you might go higher than 30 for a bulk loading operation 
 like this, particularly on 8.1 where checkpoints are no fun.  Using 100 is 
 not unreasonable.

Ok. I can do that. I chose 30 to make the WAL logs span the 5 minute

  checkpoint_timeout = 300

so that the 30 segments wouldn't wrap over before the 5 minute
checkpoint that usually occurs. Maybe I should increase both the
timeout and the segments?
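
As an illustration only (the right values depend on how long the load runs
and how much pg_xlog space is available), raising both together in
postgresql.conf could look something like:

   checkpoint_segments = 100    # more WAL headroom during the bulk load
   checkpoint_timeout = 900     # seconds (15 minutes), so the timer doesn't fire first
   checkpoint_warning = 300     # still warn if checkpoints come under 5 minutes apart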
 
 shared_buffers = 3000
 I don't see any indication in the docs that increasing shared memory
 would help speed up a copy operation.
 
 The index blocks use buffer space, and what ends up happening if there's 
 not enough memory is they are written out more than they need to be (and 
 with your I/O hardware you need to avoid writes unless absolutely 
 necessary).

I forgot to mention that the array is on a 3ware 9550SX-4LP RAID card
set up as RAID 1/0. The write cache is on and autoverify is turned off.

 Theoretically the OS is caching around that situation but 
 better to avoid it. 

The system is using 6-8GB of memory for cache.

 You didn't say how much RAM you have,

16GB total, but 8GB or so is taken up with other processes.

 but you should 
 start by a factor of 10 increase to 30,000 and see if that helps; if so, 
 try making it large enough to use 1/4 of total server memory.  3000 is 
 only giving the server 24MB of RAM to work with, and it's unfair to expect 
 it to work well in that situation.

So swap the memory usage from the OS cache to the postgresql process.
Using 1/4 as a guideline it sounds like 600,000 (approx 4GB) is a
better setting. So I'll try 300,000 to start (1/8 of memory) and see
what it does to the other processes on the box.
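
For reference, with the default 8 kB block size the arithmetic works out
to roughly:

   300,000 buffers x 8 kB = ~2.3 GB  (about 1/8 of the 16 GB)
   600,000 buffers x 8 kB = ~4.6 GB  (a bit over 1/4 of the 16 GB)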
 
 While not relevant to this exercise you'll need to set 
 effective_cache_size to a useful value one day as well.

This is a very lightly loaded database, a few queries/hour usually
scattered across the data set, so hopefully that won't be much of an
issue.
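
If it does become relevant later, a rough starting value (assuming several
GB of OS cache remain available; in 8.1 this is a raw count of 8 kB pages)
might be:

   effective_cache_size = 786432    # ~6 GB, purely illustrative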

-- 
-- rouilj

John Rouillard
System Administrator
Renesys Corporation
603-244-9084 (cell)
603-643-9300 x 111

-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Very poor performance loading 100M of sql data using copy

2008-04-29 Thread Greg Smith

On Tue, 29 Apr 2008, John Rouillard wrote:


So swap the memory usage from the OS cache to the postgresql process.
Using 1/4 as a guideline it sounds like 600,000 (approx 4GB) is a
better setting. So I'll try 300,000 to start (1/8 of memory) and see
what it does to the other processes on the box.


That is potentially a good setting.  Just be warned that when you do hit a 
checkpoint with a high setting here, you can end up with a lot of data in 
memory that needs to be written out, and under 8.2 that can cause an ugly 
spike in disk writes.  The reason I usually threw out 30,000 as a 
suggested starting figure is that most caching disk controllers can buffer 
at least 256MB of writes to keep that situation from getting too bad. 
Try it out and see what happens, just be warned that's the possible 
downside of setting shared_buffers too high and therefore you might want 
to ease into that more gradually (particularly if this system is shared 
with other apps).
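
To put the 30,000 figure in context: 30,000 buffers x 8 kB is about 240 MB,
which is roughly what a 256 MB controller cache can absorb in one burst.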

--
* Greg Smith [EMAIL PROTECTED] http://www.gregsmith.com Baltimore, MD

--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


[PERFORM] Very poor performance loading 100M of sql data using copy

2008-04-28 Thread John Rouillard
Hi all:

We are loading in a number (100+) of SQL files that are each about 100M in
size.  It takes about three hours to load each file. There is very
little load on the database other than the COPY FROM operations.

We are running postgresql-8.1.3 under Centos 4 on a RAID 1/0 array
with 4 disks (so we have only one spindle). The partitions are set up
in an LVM and iostat 5 shows (for one report):

  avg-cpu:  %user   %nice    %sys  %iowait   %idle
             1.70    0.00    0.80    51.40   46.10

  Device:    tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
  sda     179.20      1472.00      2561.60       7360      12808
  sda1      0.00         0.00         0.00          0          0
  sda2    385.20      1462.40      2561.60       7312      12808
  dm-0      0.80         0.00         6.40          0         32
  dm-1      0.00         0.00         0.00          0          0
  dm-2      0.00         0.00         0.00          0          0
  dm-3      0.00         0.00         0.00          0          0
  dm-4      4.40         0.00        35.20          0        176
  dm-5      0.00         0.00         0.00          0          0
  dm-6    380.00      1462.40      2520.00       7312      12600

dm-6 is where the data files reside and dm-4 is where the WAL archives
are kept. Note all the DM's are on the same RAID 0 device /dev/sda2.

A sample psql command file to load the data is:

  BEGIN;
  COPY peers (observe_start, observe_end, geo_scope, geo_value,
  peer_a, peer_b) FROM stdin WITH NULL AS '';
  (data here)
  (3 more COPY commands to different tables w/ data)
  COMMIT;

The primary keys for the tables being loaded are composite keys using
4-7 columns, so that may be part of the issue.

From postgresql.conf:

  shared_buffers = 3000
  #temp_buffers = 1000              # min 100, 8KB each
  #max_prepared_transactions = 5    # can be 0 or more
  # note: increasing max_prepared_transactions costs ~600 bytes of shared memory
  # per transaction slot, plus lock space (see max_locks_per_transaction).
  work_mem = 2048                   # min 64, size in KB
  maintenance_work_mem = 65536      # min 1024, size in KB
  #max_stack_depth = 2048           # min 100, size in KB

The prior settings for work_mem/maintenance_work_mem were the
defaults:

  #work_mem = 1024   # min 64, size in KB
  #maintenance_work_mem = 16384  # min 1024, size in KB

I also took a look at disk-io hit rates:

# select * from pg_statio_user_tables;  
 relid | schemaname |   relname    | heap_blks_read | heap_blks_hit | idx_blks_read | idx_blks_hit
-------+------------+--------------+----------------+---------------+---------------+--------------
 17282 | public     | providers    |      179485097 |      78832253 |        835008 |    196903582
 17264 | public     | events       |              0 |             0 |               |
 17262 | public     | days         |            495 |           219 |           478 |           16
 17276 | public     | peers        |      147435004 |     114304828 |       1188908 |    295569499
 17288 | public     | rankings     |      564638938 |     345456664 |     275607291 |   1341727605
 17270 | public     | market_share |         131932 |         90048 |          5408 |       182100

market_share did have one tidx_blks_read reported, but all the other
fields (toast_blks_read, toast_blks_hit, tidx_blks_read,
tidx_blks_hit) were empty for all rows.

This looks like we have whole indexes in memory except for the days
table, which has a low update rate, so I am not worried about that.

However for the heap_blks_read and heap_blks_hit we get a different
story:

  relname      | hit_percent
  -------------+-------------
  providers    |       43.92
  days         |       44.24
  peers        |       77.52
  rankings     |       61.18
  market_share |       68.25

so we see hit ratios ranging from 43% for providers up to 77% for
peers. Not horrible hit rates given that we are more data warehousing
than OLTP, but I am not sure what effect increasing these (by
increasing shared_buffers, I think) will have on the COPY operation. I
would suspect none.
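
(The hit_percent column above comes from a query along these lines -- my
reconstruction, not necessarily the exact one used:)

  SELECT relname,
         round(100.0 * heap_blks_hit
               / nullif(heap_blks_read + heap_blks_hit, 0), 2) AS hit_percent
    FROM pg_statio_user_tables
   ORDER BY relname;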

To try to solve this speed issue:

 I checked the logs and was seeing a few

2008-04-21 11:36:43 UTC @(2761)i: LOG: checkpoints ... (27 seconds apart)

 of these, so I changed:

   checkpoint_segments = 30 
   checkpoint_warning = 150

 in postgresql.conf and reloaded postgres. I have only seen one of these
 log messages in the past week.
 
 I have turned off autovacuum.

 I have increased the maintenance_work_mem as mentioned
   above. (Although I didn't expect it to do anything unless we
   drop/recreate indexes).

 I have increased work_mem as mentioned above.

The only things I can think of are increasing shared memory or
dropping indexes.

I don't see any indication in the docs that increasing shared memory
would help speed up a copy operation.

The only indexes we have to drop are the ones on the primary keys
(there is one non-primary key index in the database as well).

Can you drop an index on the primary key for a table and add it back
later?  Am I correct in saying: the primary key index is what enforces
the unique constraint in the table? If the index is dropped and
non-unique primary key data has been added, what happens when you
re-add the index?

Re: [PERFORM] Very poor performance loading 100M of sql data using copy

2008-04-28 Thread Heikki Linnakangas

John Rouillard wrote:

We are running postgresql-8.1.3 under Centos 4


You should upgrade, at least to the latest minor release of the 8.1 
series (8.1.11), as there has been a bunch of important bug and security 
fixes. Or even better, upgrade to 8.3, which has reduced the storage 
size of variable-length datatypes like text/char/varchar in 
particular. As your COPY is I/O bound, reducing storage size will 
translate directly to improved performance.



dm-6 is where the data files reside and dm-4 is where the WAL archives
are kept. Note all the DM's are on the same RAID 0 device /dev/sda2.


Another reason to upgrade to 8.3: if you CREATE or TRUNCATE the table in 
the same transaction as you COPY into it, you can avoid WAL logging of 
the loaded data, which will in the best case double your performance as 
your WAL is on the same physical drives as the data files.
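
A minimal sketch of that pattern on 8.3 (assuming WAL archiving is not
enabled, and reusing the peers columns from the original post; the TRUNCATE
of course only applies when the existing contents can be discarded):

  BEGIN;
  TRUNCATE peers;   -- or CREATE TABLE peers (...) in this same transaction
  COPY peers (observe_start, observe_end, geo_scope, geo_value,
              peer_a, peer_b) FROM stdin WITH NULL AS '';
  (data here)
  \.
  COMMIT;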



The only indexes we have to drop are the ones on the primary keys
(there is one non-primary key index in the database as well).

Can you drop an index on the primary key for a table and add it back
later?  Am I correct in saying: the primary key index is what enforces
the unique constraint in the table? If the index is dropped and
non-unique primary key data has been added, what happens when you
re-add the index?


Yes, the index is what enforces the uniqueness. You can drop the primary 
key constraint, and add it back after the load with ALTER TABLE. If the 
load introduces any non-unique primary keys, adding the primary key 
constraint will give you an error and fail.
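
A sketch of that sequence, with the constraint name and key columns guessed
from the peers COPY earlier in the thread (the real names will differ):

  ALTER TABLE peers DROP CONSTRAINT peers_pkey;   -- default PK constraint name
  -- ... run the bulk COPY here ...
  ALTER TABLE peers ADD PRIMARY KEY
      (observe_start, observe_end, geo_scope, geo_value, peer_a, peer_b);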


Dropping and recreating the indexes is certainly worth trying.

--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com

--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Very poor performance loading 100M of sql data using copy

2008-04-28 Thread John Rouillard
On Mon, Apr 28, 2008 at 06:53:09PM +0100, Heikki Linnakangas wrote:
 John Rouillard wrote:
 We are running postgresql-8.1.3 under Centos 4
 You should upgrade, at least to the latest minor release of the 8.1 
 series (8.1.11), as there has been a bunch of important bug and security 
 fixes. Or even better, upgrade to 8.3, which has reduced the storage 
 size of variable-length datatypes like text/char/varchar in 
 particular. As your COPY is I/O bound, reducing storage size will 
 translate directly to improved performance.

Yup. Just saw that suggestion in an unrelated email.
 
 dm-6 is where the data files reside and dm-4 is where the WAL archives
 are kept. Note all the DM's are on the same RAID 0 device /dev/sda2.
 
 Another reason to upgrade to 8.3: if you CREATE or TRUNCATE the table in 
 the same transaction as you COPY into it, you can avoid WAL logging of 
 the loaded data, which will in the best case double your performance as 
 your WAL is on the same physical drives as the data files.

We can't do this as we are backfilling a couple of months of data into
tables with existing data.
 
 The only indexes we have to drop are the ones on the primary keys
 (there is one non-primary key index in the database as well).
 
 Can you drop an index on the primary key for a table and add it back
 later?  Am I correct in saying: the primary key index is what enforces
 the unique constraint in the table? If the index is dropped and
 non-unique primary key data has been added, what happens when you
 re-add the index?
 
 Yes, the index is what enforces the uniqueness. You can drop the primary 
 key constraint, and add it back after the load with ALTER TABLE. If the 
 load introduces any non-unique primary keys, adding the primary key 
 constraint will give you an error and fail.

That's the part I am worried about. I guess using psql to delete any
problem rows and then re-adding the index will work.
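
A hypothetical way to find such rows before re-adding the constraint (key
columns again guessed from the peers COPY):

  SELECT observe_start, observe_end, geo_scope, geo_value, peer_a, peer_b,
         count(*)
    FROM peers
   GROUP BY observe_start, observe_end, geo_scope, geo_value, peer_a, peer_b
  HAVING count(*) > 1;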
 
 Dropping and recreating the indexes is certainly worth trying.

Thanks for the info.

-- 
-- rouilj

John Rouillard
System Administrator
Renesys Corporation
603-244-9084 (cell)
603-643-9300 x 111

-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Very poor performance loading 100M of sql data using copy

2008-04-28 Thread Greg Smith

On Mon, 28 Apr 2008, John Rouillard wrote:


   2008-04-21 11:36:43 UTC @(2761)i: LOG: checkpoints ... (27 seconds apart)
so I changed:
  checkpoint_segments = 30
  checkpoint_warning = 150


That's good, but you might go higher than 30 for a bulk loading operation 
like this, particularly on 8.1 where checkpoints are no fun.  Using 100 is 
not unreasonable.



shared_buffers = 3000
I don't see any indication in the docs that increasing shared memory
would help speed up a copy operation.


The index blocks use buffer space, and what ends up happening if there's 
not enough memory is they are written out more than they need to be (and 
with your I/O hardware you need to avoid writes unless absolutely 
necessary).  Theoretically the OS is caching around that situation but 
better to avoid it.  You didn't say how much RAM you have, but you should 
start by a factor of 10 increase to 30,000 and see if that helps; if so, 
try making it large enough to use 1/4 of total server memory.  3000 is 
only giving the server 24MB of RAM to work with, and it's unfair to expect 
it to work well in that situation.


While not relevant to this exercise you'll need to set 
effective_cache_size to a useful value one day as well.


--
* Greg Smith [EMAIL PROTECTED] http://www.gregsmith.com Baltimore, MD

--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Very poor performance loading 100M of sql data using copy

2008-04-28 Thread Shane Ambler

John Rouillard wrote:

We can't do this as we are backfilling a couple of months of data 
into tables with existing data.


Is this a one off data loading of historic data or an ongoing thing?



The only indexes we have to drop are the ones on the primary keys
 (there is one non-primary key index in the database as well).


If this amount of data importing is ongoing then one thought I would try
is partitioning (this could be worthwhile anyway with the amount of data
you appear to have).
Create an inherited table for the month being imported, load the data 
into it, then add the check constraints, indexes, and modify the 
rules/triggers to handle the inserts to the parent table.
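
A rough sketch of that approach, with the child table, date range, and key
columns invented for illustration (the insert rule/trigger on the parent is
omitted):

  -- child table for the month being backfilled
  CREATE TABLE peers_2008_02 () INHERITS (peers);

  -- load the month's data directly into the child
  COPY peers_2008_02 FROM stdin WITH NULL AS '';
  (data here)
  \.

  -- then add the check constraint and the indexes
  ALTER TABLE peers_2008_02 ADD CONSTRAINT peers_2008_02_check
      CHECK (observe_start >= '2008-02-01' AND observe_start < '2008-03-01');
  ALTER TABLE peers_2008_02 ADD PRIMARY KEY
      (observe_start, observe_end, geo_scope, geo_value, peer_a, peer_b);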




--

Shane Ambler
pgSQL (at) Sheeky (dot) Biz

Get Sheeky @ http://Sheeky.Biz

--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance