Hi, list. I've just upgraded pgsql from 8.3 to 8.4. I've used pgtune
before and everything worked fine for me.
And now I have ~93% CPU load. Here are the changed config values:
default_statistics_target = 50
maintenance_work_mem = 1GB
constraint_exclusion = on
checkpoint_completion_target = 0.9
Unfortunately I had to downgrade back to 8.3. I'm now having trouble with
that and am still working through it.
For a future upgrade, what are the basic steps?
1. create database
2. dump the data from the old database
3. load the data into the new database
4. analyze etc. (I prefer to do this manually at the
Was the database analyzed recently?
Hm... there was something like an auto-analyzer in the server log when I
started it the first time, but I didn't pay attention to it.
Should
OK, this seems to be the cause. What were the original values of the
config variables? If you've lowered the work_mem and you need to sort a
lot of data, this may be a problem. What amounts of data are you working
with? If the data were not analyzed recently, the execution plans will be
My additional comments:
t...@fuzzy.cz wrote:
...
For a future upgrade, what are the basic steps?
0. Create test database - work out bugs and performance issues before
going live.
1. create database
...cluster. You only need to create the individual database if the
options you
Steve Crawford scrawf...@pinpointresearch.com wrote:
benchmarks I've seen suggest that with 8 cores you may even see an
almost 8x restore speedup
I'm curious what sort of data in what environment showed that ratio.
-Kevin
--
Sent via pgsql-performance mailing list
Kevin Grittner wrote:
Steve Crawford scrawf...@pinpointresearch.com wrote:
benchmarks I've seen suggest that with 8 cores you may even see an
almost 8x restore speedup
I'm curious what sort of data in what environment showed that ratio.
Was going on memory from a presentation I
On Thu, 30 Jul 2009, Kevin Grittner wrote:
Steve Crawford scrawf...@pinpointresearch.com wrote:
benchmarks I've seen suggest that with 8 cores you may even see an
almost 8x restore speedup
I'm curious what sort of data in what environment showed that ratio.
It depends on a lot of things.
Matthew Wakeling matt...@flymine.org wrote:
tests on servers over here have indicated that running four CREATE
INDEX statements at a time runs four times as fast, assuming the
table fits in maintenance_work_mem.
I'm benchmarking a patch to the parallel restore, and just out of
curiosity
Kevin Grittner kevin.gritt...@wicourts.gov writes:
Since the dump to custom format ran longer than the full pg_dump
piped directly to psql would have taken, the overall time to use this
technique is clearly longer for our databases on our hardware.
Hmmm ... AFAIR there isn't a good reason for
Tom Lane t...@sss.pgh.pa.us wrote:
Hmmm ... AFAIR there isn't a good reason for dump to custom format
to take longer than plain text dump, except for applying
compression. Maybe -Z0 would be worth testing? Or is the problem
that you have to write the data to a disk file rather than just
Tom Lane t...@sss.pgh.pa.us wrote:
Kevin Grittner kevin.gritt...@wicourts.gov writes:
Since the dump to custom format ran longer than the full pg_dump
piped directly to psql would have taken, the overall time to use
this technique is clearly longer for our databases on our hardware.
Hmmm
Tom Lane wrote:
Kevin Grittner kevin.gritt...@wicourts.gov writes:
Since the dump to custom format ran longer than the full pg_dump
piped directly to psql would have taken, the overall time to use this
technique is clearly longer for our databases on our hardware.
Hmmm ... AFAIR there isn't a
Kevin Grittner wrote:
Tom Lane t...@sss.pgh.pa.us wrote:
Kevin Grittner kevin.gritt...@wicourts.gov writes:
Since the dump to custom format ran longer than the full pg_dump
piped directly to psql would have taken, the overall time to use
this technique is clearly longer for our databases on
On 7/30/09 11:14 AM, Kevin Grittner kevin.gritt...@wicourts.gov wrote:
Tom Lane t...@sss.pgh.pa.us wrote:
Kevin Grittner kevin.gritt...@wicourts.gov writes:
Since the dump to custom format ran longer than the full pg_dump
piped directly to psql would have taken, the overall time to use
Scott Carey sc...@richrelevance.com wrote:
Max GZIP compression speed with the newest Intel CPUs is something
like 50MB/sec (it is data dependent, obviously -- it is usually
closer to 30MB/sec).
Applying 30MB/sec to the 70GB accounts for 40 minutes. If those
numbers are good, there's
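Kevin's 40-minute figure checks out; a quick back-of-the-envelope
calculation using the 70GB dump size and the assumed 30MB/sec gzip
throughput from the discussion above:

```python
# Back-of-the-envelope: time to gzip a dump at a given throughput.
# The 70GB size and 30MB/sec rate come from the thread above.
dump_gb = 70
rate_mb_per_sec = 30

seconds = dump_gb * 1024 / rate_mb_per_sec  # size in MB / (MB/sec)
minutes = seconds / 60
print(f"{minutes:.0f} minutes")  # prints "40 minutes"
```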
On 7/30/09 11:14 AM, Stefan Kaltenbrunner ste...@kaltenbrunner.cc wrote:
Tom Lane wrote:
Kevin Grittner kevin.gritt...@wicourts.gov writes:
Since the dump to custom format ran longer than the full pg_dump
piped directly to psql would have taken, the overall time to use this
technique is
On 7/30/09 11:24 AM, Stefan Kaltenbrunner ste...@kaltenbrunner.cc wrote:
Kevin Grittner wrote:
Tom Lane t...@sss.pgh.pa.us wrote:
Kevin Grittner kevin.gritt...@wicourts.gov writes:
Since the dump to custom format ran longer than the full pg_dump
piped directly to psql would have taken,
On 30-7-2009 20:46 Scott Carey wrote:
Of course compression has a HUGE effect if your I/O system is half-decent.
Max GZIP compression speed with the newest Intel CPUs is something like
50MB/sec (it is data dependent, obviously -- it is usually closer to
30MB/sec). Max gzip decompression ranges
Scott Carey sc...@richrelevance.com writes:
Dump needs to be parallelized or at least pipelined to use more cores. COPY
on one thread, compression on another?
We already do that (since compression happens on the pg_dump side).
regards, tom lane
On 7/30/09 11:58 AM, Kevin Grittner kevin.gritt...@wicourts.gov wrote:
Scott Carey sc...@richrelevance.com wrote:
Max GZIP compression speed with the newest Intel CPUs is something
like 50MB/sec (it is data dependent, obviously -- it is usually
closer to 30MB/sec).
Applying 30MB/sec
On 7/30/09 1:15 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Scott Carey sc...@richrelevance.com writes:
Dump needs to be parallelized or at least pipelined to use more cores. COPY
on one thread, compression on another?
We already do that (since compression happens on the pg_dump side).
Scott Carey sc...@richrelevance.com wrote:
Now, what needs to be known with the pg_dump is not just how fast
compression can go (assuming it's gzip) but also what the duty cycle
time of the compression is. If it is single threaded, there is all
the network and disk time to cut out of this,
On 7/30/09 1:58 PM, Kevin Grittner kevin.gritt...@wicourts.gov wrote:
Scott Carey sc...@richrelevance.com wrote:
Now, what needs to be known with the pg_dump is not just how fast
compression can go (assuming it's gzip) but also what the duty cycle
time of the compression is. If it is
Scott Carey sc...@richrelevance.com wrote:
Gzip does have some quirky performance behavior depending on the
chunk size of data you stream into it.
Yeah, I've run into that before. If we're sending each individual
datum to a gzip function rather than waiting until we've got a
decent-size
On Wed, Jul 29, 2009 at 12:47 PM, Doug Hunley d...@hunley.homeip.net wrote:
When reviewing the vacuum logs, I notice that on any given day
autovacuum only seems to touch four of the tables in one of our
schemas (not counting toast tables). However, if I look at the
pgstatspack output for the
Kevin Grittner kevin.gritt...@wicourts.gov writes:
One thing I've been wondering about is what, exactly, is compressed in
custom format. Is it like a .tar.gz file, where the compression is a
layer over the top, or are individual entries compressed?
Individual entries. Eyeball examination of
Scott Carey sc...@richrelevance.com writes:
Gzip does have some quirky performance behavior depending on the chunk size
of data you stream into it.
Can you enlarge on that comment? I'm not sure that pg_dump is aware
that there's anything to worry about there.
regards,
Tom Lane t...@sss.pgh.pa.us wrote:
Scott Carey sc...@richrelevance.com writes:
Gzip does have some quirky performance behavior depending on the
chunk size of data you stream into it.
Can you enlarge on that comment? I'm not sure that pg_dump is aware
that there's anything to worry about
Hey guys, thanks for the help. I solved the problem. The cause was a
bad query that I accidentally committed right after upgrading.
PostgreSQL 8.4 is perfect! Analyze works like a charm, and MUCH better
than in 8.3.
2009/7/31 Kevin Grittner kevin.gritt...@wicourts.gov:
Tom Lane
On 7/30/09 2:53 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Scott Carey sc...@richrelevance.com writes:
Gzip does have some quirky performance behavior depending on the chunk size
of data you stream into it.
Can you enlarge on that comment? I'm not sure that pg_dump is aware
that there's
Scott Carey sc...@richrelevance.com writes:
On 7/30/09 2:53 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Scott Carey sc...@richrelevance.com writes:
Gzip does have some quirky performance behavior depending on the chunk size
of data you stream into it.
Can you enlarge on that comment? I'm not
On Thu, Jul 30, 2009 at 11:30 PM, Tom Lane t...@sss.pgh.pa.us wrote:
I did some tracing and verified that pg_dump passes data to deflate()
one table row at a time. I'm not sure about the performance
implications of that, but it does seem like it might be something to
look into.
I suspect if
Greg Stark gsst...@mit.edu wrote:
I suspect if this was a problem the zlib people would have added
internal buffering ages ago. I find it hard to believe we're not the
first application to use it this way.
I think that most uses of this library are on entire files or streams.
They may have
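Greg's suspicion about zlib's internal buffering is easy to check from
Python's zlib bindings, which wrap the same library. With no intermediate
flushes, deflate's output depends only on the cumulative byte stream, not
on how it is chunked; this is a sketch of that behavior, not a claim about
pg_dump's actual call pattern:

```python
import zlib

# Simulate "one row at a time" vs. one large buffer. zlib buffers
# input internally, so without intermediate flushes the compressed
# stream is identical either way; only per-call overhead differs.
rows = [(b"row-%06d|" % i) * 8 for i in range(10_000)]
data = b"".join(rows)

c1 = zlib.compressobj()
row_at_a_time = b"".join(c1.compress(r) for r in rows) + c1.flush()

c2 = zlib.compressobj()
one_big_write = c2.compress(data) + c2.flush()

assert row_at_a_time == one_big_write   # same compressed bytes
assert zlib.decompress(row_at_a_time) == data
```

So row-at-a-time calls cost function-call overhead, not worse compression.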
On Thu, 30 Jul 2009, Rauan Maemirov wrote:
maintenance_work_mem = 1GB
work_mem = 192MB
shared_buffers = 7680MB
max_connections = 80
My box is Nehalem 2xQuad 2.8 with RAM 32Gb
While it looks like you sorted out your issue downthread, I wanted to
point out that your setting for work_mem could
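For context on why that work_mem setting raises an eyebrow, here is the
worst-case arithmetic. Each backend may allocate work_mem per sort or hash
operation, so this is an upper bound on commitment, not a prediction of
actual use:

```python
# Worst-case memory arithmetic for the settings quoted above.
work_mem_mb = 192
max_connections = 80
shared_buffers_mb = 7680
ram_gb = 32

# Even a single sort/hash per connection could commit:
worst_case_gb = work_mem_mb * max_connections / 1024
total_gb = worst_case_gb + shared_buffers_mb / 1024
print(f"{worst_case_gb:.0f} GB work_mem + "
      f"{shared_buffers_mb / 1024:.1f} GB shared_buffers = "
      f"{total_gb:.1f} GB of {ram_gb} GB RAM")
```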
Hi,
I would like to know if my configuration is OK. We run a web application
with a high transaction rate, and the database machine on Mondays/Tuesdays
is always at 100% CPU with no I/O wait. The machine is a dual Xeon
quad-core, 12GB RAM, 4Gb/s Fibre Channel on a NetApp SAN, with pg_xlog on
Your settings look reasonable. I'd bump checkpoint_segments to at least
double its current value to lower general overhead a bit, though I doubt
that will help you much.
If you're at 100% CPU with no I/O wait, typically that means you have some
heavy queries running that are
On Thu, Jul 30, 2009 at 10:10 PM, Greg Smith gsm...@gregsmith.com wrote:
On Thu, 30 Jul 2009, Rauan Maemirov wrote:
maintenance_work_mem = 1GB
work_mem = 192MB
shared_buffers = 7680MB
max_connections = 80
My box is Nehalem 2xQuad 2.8 with RAM 32Gb
While it looks like you sorted out your