On Tue, Aug 4, 2009 at 4:40 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Scott Carey sc...@richrelevance.com writes:
There are a handful of other compression algorithms very similar to LZO in performance / compression level, under various licenses. LZO is just the best known and most widely used.
And after we get done with the license question, we need to ask about ...
On 8/4/09 8:30 AM, Alvaro Herrera alvhe...@commandprompt.com wrote:
Merlin Moncure wrote:
On Mon, Aug 3, 2009 at 5:30 PM, PFC li...@peufeu.com wrote:
lzo is much, much, (much) faster than zlib. decompression speed is even more awesome...
Note, I've tried several times to contact the author to get clarification on licensing terms and have been unable to get a response.
lzop and the LZO library are distributed under the terms of the GNU General Public License.
Scott Carey sc...@richrelevance.com writes:
I get very different (contradictory) behavior. Server with fast RAID, 32GB RAM, 2 x 4 core 3.16GHz Xeon 54xx CPUs, CentOS 5.2, PostgreSQL 8.3.6. No disk wait time during any test. One test beforehand was used to prime the disk cache. 100% CPU in the below ...
I get very different (contradictory) behavior. Server with fast RAID, 32GB RAM, 2 x 4 core 3.16GHz Xeon 54xx CPUs, CentOS 5.2, PostgreSQL 8.3.6.
That's a very different setup from my (much less powerful) box, so that would explain it...
No disk wait time during any test. One test beforehand was used to prime the disk cache.
On Mon, Aug 3, 2009 at 2:56 PM, Tom Lane t...@sss.pgh.pa.us wrote:
I don't see anything very contradictory here. What you're demonstrating is that it's nice to be able to throw a third CPU at the compression part of the problem. That's likely to remain true if we shift to a different compression algorithm. ...
That's true. I tried to lower work_mem from 192 to 64, and it caused a total slowdown.
By the way, are there any performance tips for tuning joins? I noticed that my joins on 8.4 slowed down; on 8.3 they were a bit faster.
On Thu, Jul 30, 2009 at 10:07 AM, Rauan Maemirov ra...@maemirov.com wrote:
Unfortunately I had to downgrade back to 8.3. Now having troubles with that and still solving them. For future upgrades, what are the basic steps?
Was the database analyzed recently?
Hm... there was something like an auto analyzer in the server log when I started it the first time, but I didn't mention that. Should ...
Tom Lane t...@sss.pgh.pa.us wrote:
I did some tracing and verified that pg_dump passes data to deflate() one table row at a time. I'm not sure about the performance implications of that, but it does seem like it might be something to look into.
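[Editorial note: to make the buffering question concrete, here is a minimal, hypothetical C sketch of the alternative being discussed -- accumulating rows in a local buffer and handing deflate() one large chunk instead of one call per row. This is not pg_dump's actual code; names and sizes are invented for illustration.]

/* buffered_deflate.c -- sketch only; build with: cc buffered_deflate.c -lz */
#include <stdio.h>
#include <string.h>
#include <zlib.h>

#define BUF_SZ 65536              /* accumulate ~64 kB per deflate() call */

static char   inbuf[BUF_SZ];      /* pending, not-yet-compressed rows */
static size_t inlen = 0;
static Bytef  outbuf[BUF_SZ];

/* Compress everything accumulated so far and write it out. */
static void flush_chunk(z_stream *zs, FILE *dst, int flush)
{
    zs->next_in  = (Bytef *) inbuf;
    zs->avail_in = inlen;
    do {                           /* standard zlib drain loop */
        zs->next_out  = outbuf;
        zs->avail_out = sizeof outbuf;
        deflate(zs, flush);
        fwrite(outbuf, 1, sizeof outbuf - zs->avail_out, dst);
    } while (zs->avail_out == 0);
    inlen = 0;
}

/* The buffering layer under discussion: each row is only memcpy'd here;
 * deflate() runs once per ~64 kB, not once per row. Assumes any single
 * row is shorter than BUF_SZ. */
static void write_row(z_stream *zs, FILE *dst, const char *row, size_t len)
{
    if (inlen + len > BUF_SZ)
        flush_chunk(zs, dst, Z_NO_FLUSH);
    memcpy(inbuf + inlen, row, len);
    inlen += len;
}

int main(void)
{
    z_stream zs = {0};             /* zalloc/zfree/opaque start out Z_NULL */
    FILE *dst = fopen("out.z", "wb");
    if (!dst)
        return 1;
    deflateInit(&zs, Z_DEFAULT_COMPRESSION);

    for (int i = 0; i < 100000; i++)          /* stand-in for table rows */
        write_row(&zs, dst, "42\tsome row text\n", 17);
    flush_chunk(&zs, dst, Z_FINISH);          /* drain remaining output */

    deflateEnd(&zs);
    fclose(dst);
    return 0;
}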
Hi, list. I've just upgraded pgsql from 8.3 to 8.4. I've used pgtune before and everything worked fine for me. And now I have ~93% CPU load. Here are the changed values of the config:
default_statistics_target = 50
maintenance_work_mem = 1GB
constraint_exclusion = on
checkpoint_completion_target = ...
Unfortunately I had to downgrade back to 8.3. Now having troubles with that and still solving them. For future upgrades, what are the basic steps?
1. create database
2. dump the data from the old database
3. load the data into the new database
4. analyze etc. (I prefer to do this manually at the ...
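[Editorial note: with the standard tools that sequence is roughly: dump with pg_dump -Fc olddb >olddb.dump against the old server, create the new database, restore with pg_restore -d newdb olddb.dump, then run ANALYZE; the database names here are placeholders.]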
OK, this seems to be the cause. What were the original values of the config variables? If you've lowered work_mem and you need to sort a lot of data, this may be a problem. What amounts of data are you working with? If the data were not analyzed recently, the execution plans will be ...
My additional comments:
t...@fuzzy.cz wrote:
...
For future upgrades, what are the basic steps?
0. Create a test database - work out bugs and performance issues before going live.
1. create database
... cluster. You only need to create the individual database if the options you ...
Steve Crawford scrawf...@pinpointresearch.com wrote:
benchmarks I've seen suggest that with 8 cores you may even see an
almost 8x restore speedup
I'm curious what sort of data in what environment showed that ratio.
-Kevin
Kevin Grittner wrote:
I'm curious what sort of data in what environment showed that ratio.
Was going on memory from a presentation I ...
On Thu, 30 Jul 2009, Kevin Grittner wrote:
I'm curious what sort of data in what environment showed that ratio.
It depends on a lot of things.
Matthew Wakeling matt...@flymine.org wrote:
tests on servers over here have indicated that running four CREATE INDEX statements at a time runs four times as fast, assuming the table fits in maintenance_work_mem.
I'm benchmarking a patch to the parallel restore, and just out of curiosity ...
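[Editorial note: a minimal sketch of the "several CREATE INDEX at once" idea using libpq, one connection per thread (libpq is thread-safe as long as each thread uses its own connection). The connection string and table/index names are hypothetical.]

/* par_index.c -- sketch only; build with: cc par_index.c -lpq -lpthread */
#include <stdio.h>
#include <pthread.h>
#include <libpq-fe.h>

static void *run_sql(void *arg)
{
    const char *sql = arg;
    PGconn *conn = PQconnectdb("dbname=mydb");   /* one connection per thread */
    if (PQstatus(conn) == CONNECTION_OK) {
        PGresult *res = PQexec(conn, sql);
        if (PQresultStatus(res) != PGRES_COMMAND_OK)
            fprintf(stderr, "%s", PQerrorMessage(conn));
        PQclear(res);
    }
    PQfinish(conn);
    return NULL;
}

int main(void)
{
    /* Four independent index builds on the same (hypothetical) table. */
    const char *stmts[] = {
        "CREATE INDEX t_a_idx ON t (a)",
        "CREATE INDEX t_b_idx ON t (b)",
        "CREATE INDEX t_c_idx ON t (c)",
        "CREATE INDEX t_d_idx ON t (d)",
    };
    pthread_t tid[4];

    for (int i = 0; i < 4; i++)
        pthread_create(&tid[i], NULL, run_sql, (void *) stmts[i]);
    for (int i = 0; i < 4; i++)
        pthread_join(tid[i], NULL);   /* each build can run on its own core */
    return 0;
}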
Tom Lane t...@sss.pgh.pa.us wrote:
Kevin Grittner kevin.gritt...@wicourts.gov writes:
Since the dump to custom format ran longer than the full pg_dump piped directly to psql would have taken, the overall time to use this technique is clearly longer for our databases on our hardware.
Hmmm ... AFAIR there isn't a good reason for dump to custom format to take longer than plain text dump, except for applying compression. Maybe -Z0 would be worth testing? Or is the problem that you have to write the data to a disk file rather than just ...
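[Editorial note: that test would presumably be something like pg_dump -Fc -Z0 dbname >dbname.dump, i.e. custom format with compression level zero.]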
Scott Carey sc...@richrelevance.com wrote:
Max GZIP compression speed with the newest Intel CPUs is something like 50MB/sec (it is data dependent, obviously -- it is usually closer to 30MB/sec).
Applying 30MB/sec to the 70GB accounts for 40 minutes. If those numbers are good, there's ...
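[Editorial note: checking the arithmetic: 70 GB / 30 MB/s is about 2,400 seconds, i.e. roughly 40 minutes, so the estimate is internally consistent.]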
On 30-7-2009 20:46, Scott Carey wrote:
Of course compression has a HUGE effect if your I/O system is half-decent. Max GZIP compression speed with the newest Intel CPUs is something like 50MB/sec (it is data dependent, obviously -- it is usually closer to 30MB/sec). Max gzip decompression ranges ...
Scott Carey sc...@richrelevance.com writes:
Dump needs to be parallelized or at least pipelined to use more cores. COPY
on one thread, compression on another?
We already do that (since compression happens on the pg_dump side).
regards, tom lane
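[Editorial note: that is, the COPY runs in the server backend while the compression runs in the pg_dump process, so those two stages already overlap across two CPUs; hence the "third CPU" remark elsewhere in the thread.]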
Scott Carey sc...@richrelevance.com wrote:
Now, what needs to be known with the pg_dump is not just how fast compression can go (assuming it's gzip) but also what the duty cycle time of the compression is. If it is single threaded, there is all the network and disk time to cut out of this, ...
Scott Carey sc...@richrelevance.com wrote:
Gzip does have some quirky performance behavior depending on the chunk size of data you stream into it.
Yeah, I've run into that before. If we're sending each individual datum to a gzip function rather than waiting until we've got a decent-size ...
Kevin Grittner kevin.gritt...@wicourts.gov writes:
One thing I've been wondering about is what, exactly, is compressed in custom format. Is it like a .tar.gz file, where the compression is a layer over the top, or are individual entries compressed?
Individual entries. Eyeball examination of ...
Scott Carey sc...@richrelevance.com writes:
Gzip does have some quirky performance behavior depending on the chunk size of data you stream into it.
Can you enlarge on that comment? I'm not sure that pg_dump is aware that there's anything to worry about there.
regards, tom lane
Hey guys, thanks for the help. I solved the problem. The cause was a bad query that I accidentally committed right after upgrading. PostgreSQL 8.4 is perfect! Analyze works like a charm, and MUCH better than in 8.3.
On Thu, Jul 30, 2009 at 11:30 PM, Tom Lane t...@sss.pgh.pa.us wrote:
I did some tracing and verified that pg_dump passes data to deflate() one table row at a time. ...
I suspect if this was a problem the zlib people would have added internal buffering ages ago. I find it hard to believe we're the first application to use it this way.
Greg Stark gsst...@mit.edu wrote:
I suspect if this was a problem the zlib people would have added internal buffering ages ago. ...
I think that most uses of this library are on entire files or streams. They may have ...
On Thu, Jul 30, 2009 at 10:10 PM, Greg Smith gsm...@gregsmith.com wrote:
On Thu, 30 Jul 2009, Rauan Maemirov wrote:
maintenance_work_mem = 1GB
work_mem = 192MB
shared_buffers = 7680MB
max_connections = 80
My box is Nehalem 2xQuad 2.8 with RAM 32Gb
While it looks like you sorted out your issue downthread, I wanted to point out that your setting for work_mem could ...
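[Editorial note: the likely concern: 80 connections x 192MB is roughly 15GB if every backend sorts at once (and a single query can use several multiples of work_mem), which is a large slice of a 32GB machine.]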