Hi,
when regularly collecting and resetting query information from pg_stat_statements,
it's possible to trigger a situation where unnormalised queries are stored.
I think what happens is the following:
pgss_post_parse_analyze calls pgss_store with a non-null jstate, which will
cause the query
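As context for the normalisation the thread is about: a toy sketch in Python of what "normalising" a query means here, i.e. replacing constant literals with $n placeholders. This is only an illustration of the concept; the real pg_stat_statements does this in C by jumbling the parse tree, not with regexes, and the function name `normalize` below is invented for this sketch.

```python
import re

def normalize(query: str) -> str:
    """Replace string and numeric literals with $1, $2, ... placeholders,
    roughly mimicking the normalised form pg_stat_statements stores."""
    counter = 0

    def repl(match: re.Match) -> str:
        nonlocal counter
        counter += 1
        return f"${counter}"

    # Match quoted string literals (with '' escapes) or bare numbers.
    pattern = r"'(?:[^']|'')*'|\b\d+(?:\.\d+)?\b"
    return re.sub(pattern, repl, query)

print(normalize("SELECT * FROM t WHERE id = 42 AND name = 'bob'"))
# SELECT * FROM t WHERE id = $1 AND name = $2
```

The bug described in the mail is that, under reset, the raw (unnormalised) query text can end up stored instead of this placeholder form.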
Hi,
apparently a few users were puzzled that archive_command is ignored on slave
servers, which comes as a surprise since streaming replication works fine
from slaves and, as far as I've checked, the documentation also doesn't point out
the fact that archive_command gets a different
While preparing a replication test setup with 9.0beta1 I noticed strange
page allocation patterns which Andrew Gierth found interesting enough to
report here.
I've written a simple tool to generate traffic on a database [1], which
did about 30 TX/inserts per second to a table. Upon inspecting
On 16.05.2010 02:16, Tom Lane wrote:
Michael Renner <michael.ren...@amd.co.at> writes:
I've written a simple tool to generate traffic on a database [1], which
did about 30 TX/inserts per second to a table. Upon inspecting the data
in the table, I noticed the expected grouping of tuples which came
David Fetter wrote:
Folks,
As we move forward, we run into increasingly complex situations under
the general rubric of concurrency.
What test frameworks are already out there that we can use in our
regression test suite? If there aren't any, how might we build one?
Not entirely on-topic,
Greg Smith wrote:
On Wed, 7 Oct 2009, Michael Renner wrote:
I haven't thought about result aggregation rendering/UI part of the
whole thing so far, so if anyone has some ideas in that direction
they'd be very much appreciated when the time has come.
What I did in pgbench-tools (now
Bruce Momjian wrote:
Michael Renner wrote:
Hi,
this is a small update to the first paragraph of the WAL configuration
chapter, going into more detail WRT redo vs. checkpoint records, since
the underlying behavior is currently only deducible from the source. I'm
not perfectly sure if I got
Hi,
small patch for the documentation describing the current pg_start_backup
checkpoint behavior as per
http://archives.postgresql.org//pgsql-general/2008-09/msg01124.php .
Should we note down a TODO to revisit the current checkpoint handling?
best regards,
Michael
diff --git
regards,
Michael Renner
diff --git a/doc/src/sgml/wal.sgml b/doc/src/sgml/wal.sgml
index cff6fde..69b8b0a 100644
--- a/doc/src/sgml/wal.sgml
+++ b/doc/src/sgml/wal.sgml
@@ -322,19 +322,24 @@
</para>
<para>
- <firstterm>Checkpoints</firstterm><indexterm><primary>checkpoint</primary></indexterm>
- are points in the sequence
Hi,
the comment WRT WAL recovery and FS journals [1] is a bit misleading in
its current form.
First, none of the general-purpose filesystems I've seen so far do data
journalling by default, since it's a huge performance penalty, even for
non-RDBMS workloads. The feature you talk about is ext3
Greg Smith wrote:
The drives themselves, and possibly the OS and disk controller, are all
running read-ahead algorithms to accelerate this case. In fact, this
*exact* case is what the Linux read-ahead stuff that just went mainline
recently targets: http://kerneltrap.org/node/6642
Apparently only the
Gregory Stark wrote:
The reason I'm wondering about this is it seems out of line with raw I/O
numbers. Typical values for consumer drives are a sustained throughput
of 60MB/s (i.e. 0.2ms per 8k) and seek latency of 4ms. That gives a ratio of 20.
Server-class drives have an even higher ratio since
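The consumer-drive ratio quoted above can be checked with a few lines of arithmetic. The figures (60MB/s sequential, 4ms seek, 8k pages) are from the mail itself; note the mail rounds the per-page transfer time up to 0.2ms, which is what yields the round ratio of 20.

```python
# Consumer-drive figures from the thread.
seq_mb_s = 60          # sustained sequential throughput, MB/s
page_bytes = 8192      # one PostgreSQL page
seek_ms = 4.0          # typical consumer-drive seek latency

# Exact transfer time for one 8k page at 60 MB/s: ~0.137 ms.
transfer_ms = page_bytes / (seq_mb_s * 1e6) * 1e3

# The mail rounds this to 0.2 ms, giving the quoted ratio of 20.
ratio = seek_ms / 0.2
print(transfer_ms, ratio)
```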
Gregory Stark wrote:
But with your numbers things look even weirder. With a 90MB/s sequential speed
(91us) and 9ms seek latency that would be a random_page_cost of nearly 100!
Looks good :). If you actually want to base something on Real World
numbers I'd suggest that we collect them
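The second set of figures checks out the same way: at 90MB/s an 8k page transfers in about 91µs, and a 9ms seek divided by that is indeed just shy of 100, matching the "random_page_cost of nearly 100" claim.

```python
# Measured figures quoted in the thread.
seq_mb_s = 90          # sequential throughput, MB/s
page_bytes = 8192
seek_us = 9000         # 9 ms seek latency, in microseconds

# Transfer time for one 8k page, in microseconds: ~91 us.
transfer_us = page_bytes / (seq_mb_s * 1e6) * 1e6

# Seek-to-transfer ratio, i.e. the implied random_page_cost: ~99.
ratio = seek_us / transfer_us
print(transfer_us, ratio)
```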