Re: [PERFORM] pg_dump performance

2007-12-27 Thread Gregory Stark
Jared Mauch [EMAIL PROTECTED] writes:

   pg_dump is utilizing about 13% of the cpu and the
 corresponding postgres backend is at 100% cpu time.
 (multi-core, multi-cpu, lotsa ram, super-fast disk).
...
   pg8.3(beta) with the following variances from default

 checkpoint_segments = 300 # in logfile segments, min 1, 16MB each
 effective_cache_size = 512MB # typically 8KB each
 wal_buffers = 128MB # min 4, 8KB each
 shared_buffers = 128MB # min 16, at least max_connections*2, 8KB each
 work_mem = 512MB # min 64, size in KB

Fwiw those are pretty unusual numbers. Normally work_mem is much smaller than
shared_buffers, since shared_buffers is allocated only once while work_mem can
be allocated for every query (and for every sort within those queries). If you
have ten queries each running two sorts, this work_mem setting could consume
10GB (20 sorts x 512MB).

Raising shared_buffers could improve your pg_dump speed. If all the data is in
cache, it would reduce the time spent moving data between the filesystem cache
and the postgres shared buffers.

What made you raise wal_buffers so high? I don't think it hurts but that's a
few orders of magnitude higher than what I would expect to help.

-- 
  Gregory Stark
  EnterpriseDB  http://www.enterprisedb.com
  Ask me about EnterpriseDB's PostGIS support!



Re: [PERFORM] pg_dump performance

2007-12-27 Thread Jared Mauch
On Thu, Dec 27, 2007 at 01:14:25PM +, Gregory Stark wrote:
 Jared Mauch [EMAIL PROTECTED] writes:
 
  pg_dump is utilizing about 13% of the cpu and the
  corresponding postgres backend is at 100% cpu time.
  (multi-core, multi-cpu, lotsa ram, super-fast disk).
 ...
  pg8.3(beta) with the following variances from default
 
  checkpoint_segments = 300 # in logfile segments, min 1, 16MB each
  effective_cache_size = 512MB # typically 8KB each
  wal_buffers = 128MB # min 4, 8KB each
  shared_buffers = 128MB # min 16, at least max_connections*2, 8KB each
  work_mem = 512MB # min 64, size in KB
 
 Fwiw those are pretty unusual numbers. Normally work_mem is much smaller than
 shared_buffers, since shared_buffers is allocated only once while work_mem can
 be allocated for every query (and for every sort within those queries). If you
 have ten queries each running two sorts, this work_mem setting could consume
 10GB (20 sorts x 512MB).

I'd still have lots of ram left :)

I'm dealing with normal query results that end up matching 5-10 million
rows based on the index (starttime), not counting the filter applied
afterwards.  Each backend rarely makes it over 256MB.

 Raising shared_buffers could improve your pg_dump speed. If all the data is in
 cache, it would reduce the time spent moving data between the filesystem cache
 and the postgres shared buffers.

I doubt it's all in cache, but I can look at this.  I did not do a
lot of fine-tuning of the numbers, just enough to get past the defaults and
reach acceptable performance.

 What made you raise wal_buffers so high? I don't think it hurts but that's a
 few orders of magnitude higher than what I would expect to help.

I'm adding chunks of ~1.2m rows every other minute.  Once I increase
my data collection pool, this will go up to around 2-3m rows or so. [1]  I
found that having higher wal_buffers and checkpoint_segments helped.  I didn't
spend a lot of time tweaking these options.  Is there some way you know of to
determine high-watermark numbers for what is actually being used?

- Jared

[1] - I am concerned that with my 'insert' speed being around 100k/sec
  and raw pg_dump speed being around 182k/sec, I will start getting data
  faster than it can be stored and postprocessed.

-- 
Jared Mauch  | pgp key available via finger from [EMAIL PROTECTED]
clue++;  | http://puck.nether.net/~jared/  My statements are only mine.



Re: [PERFORM] pg_dump performance

2007-12-26 Thread Heikki Linnakangas

Jared Mauch wrote:

pg_dump is utilizing about 13% of the cpu and the
corresponding postgres backend is at 100% cpu time.
(multi-core, multi-cpu, lotsa ram, super-fast disk).
...
	Any tips on getting pg_dump (actually the backend) to perform 
much closer to 500k/sec or more?  This would also aid me when I upgrade 
pg versions and need to dump/restore with minimal downtime (as the data 
never stops coming.. whee).


I would suggest running oprofile to see where the time is spent. There 
might be some simple optimizations that you could do at the source level 
that would help.


Where the time is spent depends a lot on the schema and data. For 
example, I profiled a pg_dump run on a benchmark database a while ago, 
and found that most of the time was spent in sprintf, formatting 
timestamp columns. If you have a lot of timestamp columns that might be 
the bottleneck for you as well, or something else.


Or if you can post the schema for the table you're dumping, maybe we can 
 make a more educated guess.


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com



Re: [PERFORM] pg_dump performance

2007-12-26 Thread Jared Mauch
On Wed, Dec 26, 2007 at 10:52:08PM +0200, Heikki Linnakangas wrote:
 Jared Mauch wrote:
  pg_dump is utilizing about 13% of the cpu and the
 corresponding postgres backend is at 100% cpu time.
 (multi-core, multi-cpu, lotsa ram, super-fast disk).
 ...
  Any tips on getting pg_dump (actually the backend) to perform much closer 
 to 500k/sec or more?  This would also aid me when I upgrade pg versions 
 and need to dump/restore with minimal downtime (as the data never stops 
 coming.. whee).

 I would suggest running oprofile to see where the time is spent. There 
 might be some simple optimizations that you could do at the source level 
 that would help.

 Where the time is spent depends a lot on the schema and data. For example, 
 I profiled a pg_dump run on a benchmark database a while ago, and found 
 that most of the time was spent in sprintf, formatting timestamp columns. 
 If you have a lot of timestamp columns that might be the bottleneck for you 
 as well, or something else.

 Or if you can post the schema for the table you're dumping, maybe we can  
 make a more educated guess.

Here's the template table that they're all copies of:

CREATE TABLE template_flowdatas (
routerip inet,
starttime integer,
srcip inet,
dstip inet,
srcifc smallint,
dstifc smallint,
srcasn integer,
dstasn integer,
proto smallint,
srcport integer,
dstport integer,
flowlen integer,
tcpflags smallint,
tosbit smallint
);


-- 
Jared Mauch  | pgp key available via finger from [EMAIL PROTECTED]
clue++;  | http://puck.nether.net/~jared/  My statements are only mine.



Re: [PERFORM] pg_dump performance

2007-12-26 Thread Heikki Linnakangas

Jared Mauch wrote:

On Wed, Dec 26, 2007 at 10:52:08PM +0200, Heikki Linnakangas wrote:

Jared Mauch wrote:

pg_dump is utilizing about 13% of the cpu and the
corresponding postgres backend is at 100% cpu time.
(multi-core, multi-cpu, lotsa ram, super-fast disk).
...
	Any tips on getting pg_dump (actually the backend) to perform much closer 
to 500k/sec or more?  This would also aid me when I upgrade pg versions 
and need to dump/restore with minimal downtime (as the data never stops 
coming.. whee).
I would suggest running oprofile to see where the time is spent. There 
might be some simple optimizations that you could do at the source level 
that would help.


Where the time is spent depends a lot on the schema and data. For example, 
I profiled a pg_dump run on a benchmark database a while ago, and found 
that most of the time was spent in sprintf, formatting timestamp columns. 
If you have a lot of timestamp columns that might be the bottleneck for you 
as well, or something else.


Or if you can post the schema for the table you're dumping, maybe we can  
make a more educated guess.


Here's the template table that they're all copies of:

CREATE TABLE template_flowdatas (
routerip inet,
starttime integer,
srcip inet,
dstip inet,
srcifc smallint,
dstifc smallint,
srcasn integer,
dstasn integer,
proto smallint,
srcport integer,
dstport integer,
flowlen integer,
tcpflags smallint,
tosbit smallint
);


I ran a quick oprofile run on my laptop, with a table like that, filled 
with dummy data. It does indeed look like ~30% of the CPU time is spent in 
sprintf, converting the integers and inets to string format. I think you 
could speed that up by replacing the sprintf calls in the int2, int4 and 
inet output functions with faster, customized functions. We don't need 
all the bells and whistles of sprintf, which leaves room to optimize.
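
For illustration (this is only a sketch, not code from the PostgreSQL source 
tree, and the function name and signature are made up for the example), the 
kind of "faster, customized function" meant here could be a bare 
integer-to-decimal converter that skips sprintf's format-string parsing and 
locale handling entirely:

#include <stdint.h>

/* Write the decimal form of v into buf (at least 12 bytes: sign + 10
 * digits + NUL) and return the number of characters written. */
int
int32_to_text(int32_t v, char *buf)
{
    char        tmp[12];
    char       *p = tmp;
    uint32_t    u = (v < 0) ? (uint32_t) 0 - (uint32_t) v : (uint32_t) v;
    int         len = 0;

    /* Emit the digits in reverse order into a scratch buffer. */
    do
    {
        *p++ = '0' + (u % 10);
        u /= 10;
    } while (u != 0);

    if (v < 0)
        buf[len++] = '-';

    /* Copy the digits back out in the right order. */
    while (p > tmp)
        buf[len++] = *--p;
    buf[len] = '\0';

    return len;
}

Whether something like this actually wins anything would have to be confirmed 
by profiling the real int2/int4/inet output functions, but it shows why a 
special-purpose routine has far less to do per value than sprintf("%d", ...).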



A binary mode dump should go a lot faster, because it doesn't need to do 
those conversions, but binary dumps are not guaranteed to work across 
versions.


BTW, the profiling I did earlier led me to think this should be 
optimized in the compiler. I started a thread about that on the gcc 
mailing list but got busy with other stuff and didn't follow through on 
that idea: http://gcc.gnu.org/ml/gcc/2007-10/msg00073.html


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com



Re: [PERFORM] pg_dump performance

2007-12-26 Thread Jared Mauch
On Wed, Dec 26, 2007 at 11:35:59PM +0200, Heikki Linnakangas wrote:
 I ran a quick oprofile run on my laptop, with a table like that, filled 
 with dummy data. It does indeed look like ~30% of the CPU time is spent in 
 sprintf, converting the integers and inets to string format. I think you 
 could speed that up by replacing the sprintf calls in the int2, int4 and inet 
 output functions with faster, customized functions. We don't need all the 
 bells and whistles of sprintf, which leaves room to optimize.

Hmm.  Given the above and below, perhaps there's something that can
be tackled in the source here; I'll look at poking around in there.
Our sysadmin folks don't like the idea of running patched stuff (aside from
conf changes), as they're concerned about losing patches between upgrades.

I'm waiting on one of my hosts in Japan to come back online
so perhaps I can hack the source and attempt some optimization
after that point.  It's not the beefy host that I have this on
though and not even multi-{core,cpu} so my luck may be poor.

 A binary mode dump should go a lot faster, because it doesn't need to do 
 those conversions, but binary dumps are not guaranteed to work across 
 versions.

I'll look at this.  Since this stuff is going into something else,
perhaps I can make it slightly faster by not converting binary ->
string -> binary (in memory) again.  A number of the columns are unused in my
processing and some are used only when certain criteria are met (some
are always used).
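
On the client side, one way to skip that binary -> string -> binary round-trip 
is to ask libpq for binary-format results. This is only a sketch (the 
connection string, column choice and minimal error handling are made up for 
the example, and binary format carries the same cross-version caveats as a 
binary dump), but the relevant knob is the final resultFormat argument of 
PQexecParams:

#include <arpa/inet.h>   /* ntohl */
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <libpq-fe.h>

int
main(void)
{
    PGconn     *conn = PQconnectdb("dbname=flows");
    PGresult   *res;
    int         i;

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        return 1;
    }

    /* resultFormat = 1 requests binary-format columns, so the server
     * never runs the int4/inet text output functions for this query. */
    res = PQexecParams(conn,
                       "SELECT starttime FROM template_flowdatas",
                       0, NULL, NULL, NULL, NULL,
                       1 /* binary results */);

    if (PQresultStatus(res) == PGRES_TUPLES_OK)
    {
        for (i = 0; i < PQntuples(res); i++)
        {
            /* An int4 arrives as 4 bytes in network byte order. */
            uint32_t    raw;
            int32_t     starttime;

            memcpy(&raw, PQgetvalue(res, i, 0), sizeof(raw));
            starttime = (int32_t) ntohl(raw);
            printf("starttime = %d\n", starttime);
        }
    }

    PQclear(res);
    PQfinish(conn);
    return 0;
}

For result sets in the 5-10 million row range you would page through this with 
a cursor (DECLARE/FETCH) rather than pull everything into one PGresult, but the 
binary-versus-text trade-off is the same either way.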

 BTW, the profiling I did earlier led me to think this should be optimized 
 in the compiler. I started a thread about that on the gcc mailing list but 
 got busy with other stuff and didn't follow through on that idea: 
 http://gcc.gnu.org/ml/gcc/2007-10/msg00073.html

(* drift=off mode=drifting-fast *)
I'd have to say, after a quick review of this, it does look
like they're right and this should, at least in part, be handled in the
C library.  My host runs Solaris 10.  There may be some optimizations the
compiler could do when linking the C library, but I currently think they're
on sound footing.

(* drift=off  mode=end *)

- Jared


-- 
Jared Mauch  | pgp key available via finger from [EMAIL PROTECTED]
clue++;  | http://puck.nether.net/~jared/  My statements are only mine.



Re: [PERFORM] pg_dump performance?

2004-08-05 Thread Christopher Kings-Lynne
Is it the dump or the restore that's really slow?
Chris
Jesper Krogh wrote:
I have a database that I should migrate from 7.3 -> 7.4.3, but pg_dump |
psql seems to take forever (several hours). Is there anything I can do to
speed it up?

The database is primarily a table with 300,000 records of about 200 Kbytes
each, ~60 GB in total.

This is becoming an issue with the daily backup too (running pg_dump
overnight).
Jesper


Re: [PERFORM] pg_dump performance?

2004-08-05 Thread Jesper Krogh
In gmane.comp.db.postgresql.performance, Christopher Kings-Lynne wrote:
  Is it the dump or the restore that's really slow?

Primarily the dump; it seems to be CPU-bound in the postmaster process. 

No signs of an I/O bottleneck when I monitor with iostat or vmstat.

-- 
./Jesper Krogh, [EMAIL PROTECTED]
Jabber ID: [EMAIL PROTECTED]


