Hello everybody,
I'm experiencing a performance-related issue during some validation
measurements.
Let me first clarify what kind of problem I am facing.
I've set up a plain TPC-H database without any additional indexes or
anything like that.
To get some performance impressions I wrote a
Elanchezhiyan Elango elanela...@gmail.com writes:
The problem is that while this makes the checkpoints less frequent, it
accumulates more changes that need to be written to disk during the
checkpoint, which makes the impact more severe.
True. But the checkpoints finish in approximately 5-10
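The trade-off described above (fewer checkpoints, but more dirty data per checkpoint) is usually mitigated by spreading checkpoint I/O over most of the interval. A minimal postgresql.conf sketch for 9.3; the values are illustrative assumptions, not the poster's actual settings:

```
# postgresql.conf -- illustrative values only, not the poster's settings
checkpoint_segments = 64            # allow more WAL before a forced checkpoint (pre-9.5 parameter)
checkpoint_timeout = 15min          # upper bound on time between checkpoints
checkpoint_completion_target = 0.9  # spread checkpoint writes over 90% of the interval
```

With completion_target near 1.0 the checkpoint writes trickle out over almost the whole interval instead of arriving in a burst, which is what makes less-frequent checkpoints tolerable.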
I've been doing a bit of benchmarking and real-world performance
testing, and have found some curious results.
The load in question is a fairly-busy machine hosting a web service that
uses Postgresql as its back end.
Conventional Wisdom is that you want to run an 8k record size to match
Michael van Rooyen mich...@loot.co.za writes:
I'm trying to get to the bottom of a performance issue on a server
running PostgreSQL 9.3.1 on Centos 5.
Hm ... it seems pretty suspicious that all of these examples take just
about exactly 1 second longer than you might expect. I'm wondering
if
On Mon, Apr 28, 2014 at 10:12 AM, Michael van Rooyen mich...@loot.co.za wrote:
I'm trying to get to the bottom of a performance issue on a server running
PostgreSQL 9.3.1 on Centos 5. The machine is a dual quad-core Xeon E5620
with 24GB ECC RAM and four enterprise SATA Seagate Constellation ES
On 4/28/2014 1:04 PM, Heikki Linnakangas wrote:
On 04/28/2014 06:47 PM, Karl Denninger wrote:
What I am curious about, however, is the xlog -- that appears to suffer
pretty badly from a 128k record size, although it compresses even
more materially: 1.94x (!)
The files in the xlog directory are
On 04/28/2014 06:47 PM, Karl Denninger wrote:
What I am curious about, however, is the xlog -- that appears to suffer
pretty badly from a 128k record size, although it compresses even
more materially: 1.94x (!)
The files in the xlog directory are large (16MB each) and thus at first
blush would be
On 04/28/2014 09:07 PM, Karl Denninger wrote:
The WAL is fsync'd frequently. My guess is that that causes a lot of
extra work to repeatedly recompress the same data, or something like
that.
It shouldn't, as ZFS re-writes on change, and what's showing up is not
high I/O *count* but rather
On Mon, Apr 28, 2014 at 11:07 AM, Karl Denninger k...@denninger.net wrote:
On 4/28/2014 1:04 PM, Heikki Linnakangas wrote:
On 04/28/2014 06:47 PM, Karl Denninger wrote:
What I am curious about, however, is the xlog -- that appears to suffer
pretty badly from 128k record size, although it
On 4/28/2014 1:22 PM, Heikki Linnakangas wrote:
On 04/28/2014 09:07 PM, Karl Denninger wrote:
The WAL is fsync'd frequently. My guess is that that causes a lot of
extra work to repeatedly recompress the same data, or something like
that.
It shouldn't, as ZFS re-writes on change, and what's
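The recordsize discussion in this thread can be summarized as a ZFS dataset layout. A hedged sketch; the pool/dataset names are hypothetical and the 128k WAL recordsize is the configuration being debated here, not an established recommendation:

```
# Illustrative ZFS layout -- pool/dataset names are hypothetical
zfs create -o recordsize=8k -o compression=lz4 tank/pgdata   # match PostgreSQL's 8kB page size
zfs create -o recordsize=128k -o compression=off tank/pgwal  # WAL: 16MB segments, mostly sequential writes
```

Because ZFS is copy-on-write, each fsync'd partial rewrite of a compressed record forces a recompress of the whole record, which is one plausible reading of the extra work observed on the xlog dataset.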
On 4/28/2014 1:26 PM, Jeff Janes wrote:
On Mon, Apr 28, 2014 at 11:07 AM, Karl Denninger k...@denninger.net wrote:
Isn't WAL essentially sequential writes during normal operation?
Only if you have some sort of non-volatile intermediary, or are
willing to
On 28.4.2014 16:07, Tom Lane wrote:
Elanchezhiyan Elango elanela...@gmail.com writes:
The problem is that while this makes the checkpoints less
frequent, it accumulates more changes that need to be written to
disk during the checkpoint, which makes the impact more severe.
True. But the
On Mon, Apr 28, 2014 at 1:41 PM, Tomas Vondra t...@fuzzy.cz wrote:
On 28.4.2014 16:07, Tom Lane wrote:
Elanchezhiyan Elango elanela...@gmail.com writes:
The problem is that while this makes the checkpoints less
frequent, it accumulates more changes that need to be written to
disk during
Sorry, hit send too early by accident.
On 28.4.2014 16:07, Tom Lane wrote:
Elanchezhiyan Elango elanela...@gmail.com writes:
The problem is that while this makes the checkpoints less
frequent, it accumulates more changes that need to be written to
disk during the checkpoint, which makes the
On 28.4.2014 07:50, Elanchezhiyan Elango wrote:
So how much data in total are we talking about?
OK, so there are multiple tables, and you're updating 50k rows in all
tables in total?
Every 5 minutes: 50K rows are updated in 4 tables. 2K rows are updated
in 39 tables.
Every
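The figures quoted above imply a rough write volume per checkpoint cycle. A back-of-envelope calculation, assuming the checkpoint_timeout of 1h mentioned later in the thread; this is an upper bound, since repeated updates to the same 8kB page dirty it only once per cycle:

```python
# Back-of-envelope row-update volume from the numbers quoted above.
rows_per_5min = 50_000 * 4 + 2_000 * 39   # 4 big tables + 39 small tables
intervals_per_checkpoint = 12             # 1h checkpoint_timeout / 5-minute update cycle
rows_per_checkpoint = rows_per_5min * intervals_per_checkpoint

print(rows_per_5min, rows_per_checkpoint)  # 278000 3336000
```

So on the order of 3.3M row updates accumulate per hourly checkpoint cycle, before any deduplication of dirty pages.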
On 28.4.2014 22:54, Jeff Janes wrote:
On Mon, Apr 28, 2014 at 1:41 PM, Tomas Vondra t...@fuzzy.cz
There's certainly something fishy, because although this is the supposed
configuration:
checkpoint_segments = 250
checkpoint_timeout = 1h
checkpoint_completion_target = 0.9
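One way to check whether that configuration is actually in effect is to see what is triggering checkpoints. A sketch using the statistics view available in 9.3:

```sql
-- If checkpoints_req dominates, checkpoints are being forced by WAL volume
-- (checkpoint_segments) rather than by the 1h timeout.
SELECT checkpoints_timed, checkpoints_req
FROM pg_stat_bgwriter;
```

A high checkpoints_req count despite checkpoint_segments = 250 would indeed point at something fishy, such as the settings not having been applied.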
On 2014/04/28 07:50 PM, Tom Lane wrote:
Michael van Rooyen mich...@loot.co.za writes:
I'm trying to get to the bottom of a performance issue on a server
running PostgreSQL 9.3.1 on Centos 5.
Hm ... it seems pretty suspicious that all of these examples take just
about exactly 1 second longer
Michael van Rooyen mich...@loot.co.za writes:
On 2014/04/28 07:50 PM, Tom Lane wrote:
Hm ... it seems pretty suspicious that all of these examples take just
about exactly 1 second longer than you might expect. I'm wondering
if there is something sitting on an exclusive table lock somewhere,