Hi,
Alvaro Herrera wrote:
Simon Riggs wrote:
ISTM it's just autovacuum launcher + Hot Standby mixed.
I don't think you need a launcher at all. Just get the postmaster to
start a configurable number of wal-replay processes (currently the
number is hardcoded to 1).
I also see similarity to
Note that even though the processor is 99% in wait state the drive is
only handling about 3 MB/s. That translates into a seek time of 2.2ms
which is actually pretty fast... But note that if this were a raid array
Postgres wouldn't be getting any better results. A raid array wouldn't
On a fine day, Fri, 2007-12-14 at 10:39, Markus Schiltknecht wrote:
Hi,
(For parallelized queries, superuser privileges might appear wrong, but
I'm arguing that parallelizing the rights checking isn't worth the
trouble, so the initiating worker backend should do that and only
Hannu Krosing wrote:
until N fubbers used
..whatever a fubber is :-)
Nice typo!
Markus
On Fri, 2007-12-14 at 10:51 +0100, Zeugswetter Andreas ADI SD wrote:
The problem is not writes but reads.
That's what I see.
--
Simon Riggs
2ndQuadrant http://www.2ndQuadrant.com
Hello Hannu,
Hannu Krosing wrote:
(For parallelized queries, superuser privileges might appear wrong, but
I'm arguing that parallelizing the rights checking isn't worth the
trouble, so the initiating worker backend should do that and only
delegate safe jobs to helper backends. Or is that a
On a fine day, Thu, 2007-12-13 at 20:25, Heikki Linnakangas wrote:
...
Hmm. That assumes that nothing else than the WAL replay will read
pages into shared buffers. I guess that's true at the moment, but it
doesn't seem impossible that something like Florian's read-only queries
on a
On Fri, 14 Dec 2007, Zeugswetter Andreas ADI SD wrote:
I don't follow. The problem is not writes but reads. And if the reads
are random enough no cache controller will help.
The specific example Tom was running was, in his words, 100% disk write
bound. I was commenting on why I thought that
On Thu, 2007-12-13 at 06:27 +, Gregory Stark wrote:
Tom Lane [EMAIL PROTECTED] writes:
Joshua D. Drake [EMAIL PROTECTED] writes:
Exactly. Which is the point I am making. Five minutes of transactions
is nothing (speaking generally)... In short, if we are in recovery, and
we are not
Simon Riggs [EMAIL PROTECTED] writes:
On Thu, 2007-12-13 at 06:27 +, Gregory Stark wrote:
Heikki proposed a while back to use posix_fadvise() when processing logs to
read-ahead blocks which the recovery will need before actually attempting to
recover them. On a raid array that would bring
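
As a rough illustration of what that read-ahead could look like at the
syscall level (a standalone toy, not PostgreSQL code: only posix_fadvise()
itself is a real API; the helper, the hardcoded 8 kB block size and the
command-line handling are assumptions for the sketch):

#define _POSIX_C_SOURCE 200112L
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define BLCKSZ 8192                     /* PostgreSQL's default block size */

/*
 * Ask the kernel to start reading one relation block in the background,
 * so the physical read overlaps with replay of earlier WAL records.
 */
static int
prefetch_block(int fd, unsigned long blockno)
{
    return posix_fadvise(fd, (off_t) blockno * BLCKSZ, BLCKSZ,
                         POSIX_FADV_WILLNEED);
}

int
main(int argc, char **argv)
{
    if (argc < 3)
    {
        fprintf(stderr, "usage: %s datafile blockno\n", argv[0]);
        return 1;
    }

    int         fd = open(argv[1], O_RDONLY);

    if (fd < 0)
    {
        perror("open");
        return 1;
    }

    int         rc = prefetch_block(fd, strtoul(argv[2], NULL, 10));

    if (rc != 0)
        fprintf(stderr, "posix_fadvise: %s\n", strerror(rc));
    return 0;
}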
On Thu, 2007-12-13 at 09:45 +, Gregory Stark wrote:
Simon Riggs [EMAIL PROTECTED] writes:
On Thu, 2007-12-13 at 06:27 +, Gregory Stark wrote:
Heikki proposed a while back to use posix_fadvise() when processing logs to
read-ahead blocks which the recovery will need before actually
Simon Riggs [EMAIL PROTECTED] writes:
We would have readbuffers in shared memory, like wal_buffers in reverse.
Each worker would read the next WAL record and check that there is no
conflict with other concurrent WAL records. If there is none, it would apply
the record immediately; otherwise it would wait for the
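
One way to read that "no conflict" rule: a record may be applied as soon as
every earlier record touching the same block has been applied, which also
keeps two workers off the same block at once. A toy standalone model of that
scheme using pthreads (nothing here is actual PostgreSQL code; the queue,
record layout and apply function are invented):

#include <pthread.h>
#include <stdio.h>

#define NRECORDS 16
#define NWORKERS 4
#define NBLOCKS  8

typedef struct
{
    int         blockno;        /* block the record modifies */
    int         seq;            /* position among records for that block */
} WalRecord;

static WalRecord queue[NRECORDS];
static int  next_record = 0;            /* next queue slot to hand out */
static int  completed[NBLOCKS];         /* records already applied per block */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t progressed = PTHREAD_COND_INITIALIZER;

static void
apply_record(const WalRecord *rec)
{
    printf("applied block %d change #%d\n", rec->blockno, rec->seq);
}

static void *
worker(void *arg)
{
    (void) arg;
    for (;;)
    {
        pthread_mutex_lock(&lock);
        if (next_record >= NRECORDS)
        {
            pthread_mutex_unlock(&lock);
            return NULL;
        }
        WalRecord   rec = queue[next_record++];

        /*
         * "Conflict" check: wait until every earlier record touching the
         * same block has been applied; this also prevents two workers from
         * applying changes to one block at the same time.
         */
        while (completed[rec.blockno] != rec.seq)
            pthread_cond_wait(&progressed, &lock);
        pthread_mutex_unlock(&lock);

        apply_record(&rec);

        pthread_mutex_lock(&lock);
        completed[rec.blockno]++;
        pthread_cond_broadcast(&progressed);
        pthread_mutex_unlock(&lock);
    }
}

int
main(void)
{
    pthread_t   tid[NWORKERS];
    int         seen[NBLOCKS] = {0};

    for (int i = 0; i < NRECORDS; i++)
    {
        queue[i].blockno = i % NBLOCKS;
        queue[i].seq = seen[i % NBLOCKS]++;
    }
    for (int i = 0; i < NWORKERS; i++)
        pthread_create(&tid[i], NULL, worker, NULL);
    for (int i = 0; i < NWORKERS; i++)
        pthread_join(tid[i], NULL);
    return 0;
}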
Gregory Stark wrote:
Simon Riggs [EMAIL PROTECTED] writes:
We would have readbuffers in shared memory, like wal_buffers in reverse.
Each worker would read the next WAL record and check that there is no
conflict with other concurrent WAL records. If there is none, it would apply
the record immediately,
Simon,
On Dec 13, 2007 11:21 AM, Simon Riggs [EMAIL PROTECTED] wrote:
Anyway, I'll leave this now, since I think we need to do Florian's work
first either way and that is much more eagerly awaited I think.
Speaking of that, is there any news about it and about Florian? It was
a really
Gregory Stark wrote:
Simon Riggs [EMAIL PROTECTED] writes:
It's a good idea, but it will require more complex code. I prefer the
simpler solution of using more processes to solve the I/O problem.
Huh, I forgot about that idea. Ironically that was what I suggested when
Heikki described
Simon Riggs wrote:
ISTM it's just autovacuum launcher + Hot Standby mixed.
I don't think you need a launcher at all. Just get the postmaster to
start a configurable number of wal-replay processes (currently the
number is hardcoded to 1).
--
Alvaro Herrera
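
For what it's worth, the process-management half of that suggestion is
straightforward; here is a toy standalone illustration of a parent starting a
configurable number of replay children (the knob and the worker body are made
up for the sketch; this is not postmaster code):

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* stand-in for a real replay worker's main loop */
static void
replay_worker_main(int worker_id)
{
    printf("replay worker %d started (pid %d)\n", worker_id, (int) getpid());
    _exit(0);
}

int
main(int argc, char **argv)
{
    /* hypothetical knob; defaults to the current behaviour of one process */
    int         n_replay_workers = (argc > 1) ? atoi(argv[1]) : 1;

    for (int i = 0; i < n_replay_workers; i++)
    {
        pid_t       pid = fork();

        if (pid == 0)
            replay_worker_main(i);
        else if (pid < 0)
            perror("fork");
    }

    /* the parent waits for its children, as the postmaster does */
    while (wait(NULL) > 0)
        ;
    return 0;
}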
On Thu, 2007-12-13 at 12:28 +, Heikki Linnakangas wrote:
Gregory Stark wrote:
Simon Riggs [EMAIL PROTECTED] writes:
We would have readbuffers in shared memory, like wal_buffers in reverse.
Each worker would read the next WAL record and check that there is no
conflict with other
On Thu, 2007-12-13 at 10:18 -0300, Alvaro Herrera wrote:
Gregory Stark wrote:
Simon Riggs [EMAIL PROTECTED] writes:
It's a good idea, but it will require more complex code. I prefer the
simpler solution of using more processes to solve the I/O problem.
Huh, I forgot about that
Simon Riggs wrote:
Allocate a recovery cache of size maintenance_work_mem that goes away
when recovery ends.
For every block mentioned in a WAL record that isn't an overwrite, first
check shared_buffers. If it's in shared_buffers, apply immediately and
move on. If not in shared_buffers, then put in
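
The preview cuts off there, but one plausible reading is that records whose
block is not already resident get stashed in the recovery cache and applied
later in a batch, once the blocks can be read more efficiently. A toy
standalone sketch under that assumption (all structures and names are
invented, the cache size stands in for maintenance_work_mem, and the ordering
between deferred and immediately-applied changes to one page is glossed over):

#include <stdbool.h>
#include <stdio.h>

#define NBLOCKS        16
#define RECOVERY_CACHE 4        /* stands in for maintenance_work_mem */

typedef struct
{
    int         blockno;
} WalRecord;

static bool in_shared_buffers[NBLOCKS];
static WalRecord cache[RECOVERY_CACHE];
static int  cached = 0;

static void
apply_record(const WalRecord *rec)
{
    printf("apply record for block %d\n", rec->blockno);
    in_shared_buffers[rec->blockno] = true;     /* block is now resident */
}

/*
 * Deferred records are applied together; the real thing could sort them by
 * block or prefetch the blocks first.
 */
static void
drain_cache(void)
{
    for (int i = 0; i < cached; i++)
        apply_record(&cache[i]);
    cached = 0;
}

static void
replay_one(const WalRecord *rec, bool is_overwrite)
{
    if (is_overwrite || in_shared_buffers[rec->blockno])
    {
        apply_record(rec);      /* no read needed, apply immediately */
        return;
    }
    if (cached == RECOVERY_CACHE)
        drain_cache();
    cache[cached++] = *rec;     /* defer until the block is read in */
}

int
main(void)
{
    for (int i = 0; i < 12; i++)
    {
        WalRecord   rec = {.blockno = (i * 5) % NBLOCKS};

        replay_one(&rec, /* is_overwrite */ i % 4 == 0);
    }
    drain_cache();
    return 0;
}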
On Thu, 2007-12-13 at 20:25 +, Heikki Linnakangas wrote:
Simon Riggs wrote:
Allocate a recovery cache of size maintenance_work_mem that goes away
when recovery ends.
For every block mentioned in a WAL record that isn't an overwrite, first
check shared_buffers. If it's in
On Thu, 2007-12-13 at 21:13 +, Simon Riggs wrote:
Of course if we scan that far ahead we can start removing aborted
transactions also, which is the more standard optimization of
recovery.
Recall that thought!
--
Simon Riggs
2ndQuadrant http://www.2ndQuadrant.com
Heikki Linnakangas [EMAIL PROTECTED] writes:
Hmm. That assumes that nothing else than the WAL replay will read
pages into shared buffers. I guess that's true at the moment, but it
doesn't seem impossible that something like Florian's read-only queries
on a stand by server would change that.
On Thu, 13 Dec 2007, Gregory Stark wrote:
Note that even though the processor is 99% in wait state the drive is
only handling about 3 MB/s. That translates into a seek time of 2.2ms
which is actually pretty fast... But note that if this were a raid array
Postgres wouldn't be getting any
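
For reference, the arithmetic behind those figures, assuming PostgreSQL's
default 8 kB block size: 3 MB/s divided by 8 kB per random read is roughly
380 reads per second, i.e. about 2.6 ms per read; the quoted 2.2 ms
corresponds to roughly 450 reads per second, or about 3.7 MB/s, so the two
numbers are in the same ballpark.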
On Thu, 13 Dec 2007 11:12:26 -0800
Joshua D. Drake [EMAIL PROTECTED] wrote:
Hmm --- I was testing a straight crash-recovery scenario, not
restoring from archive. Are you sure your restore_command script
isn't responsible for a lot of the
On Thu, 2007-12-13 at 16:41 -0500, Tom Lane wrote:
Recovery is inherently one of the least-exercised parts of the system,
and it gets more so with each robustness improvement we make elsewhere.
Moreover, because it is fairly dumb, anything that does go wrong will
likely result in silent data
Tom Lane wrote:
Also, I have not seen anyone provide a very credible argument why
we should spend a lot of effort on optimizing a part of the system
that is so little-exercised. Don't tell me about warm standby
systems --- they are fine as long as recovery is at least as fast
as the original
Heikki Linnakangas [EMAIL PROTECTED] writes:
Koichi showed me and Simon graphs of DBT-2 runs in their test lab back in
May. They had set up two identical systems, one running the benchmark,
and another one as a warm stand-by. The stand-by couldn't keep up; it
couldn't replay the WAL as quickly
Heikki Linnakangas [EMAIL PROTECTED] writes:
It would be interesting to do something like that to speed up replay of long
PITR archives, though. You could scan all the WAL records (or at least scan
far ahead), and make note of where there are full page writes for each page.
Whenever there's a full
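
The preview is truncated, but one plausible completion is: wherever a later
full-page image of a page exists in the WAL about to be replayed, the changes
to that page that precede the image never need to be read from disk or
applied, because the image overwrites the whole page anyway. A standalone toy
sketch of that two-pass idea (record layout and the tiny WAL stream are
invented):

#include <stdbool.h>
#include <stdio.h>

#define NBLOCKS 4

typedef struct
{
    int         blockno;
    bool        full_page_write;    /* record carries a complete page image */
} WalRecord;

int
main(void)
{
    WalRecord   wal[] = {
        {0, false}, {1, false}, {0, true}, {2, false},
        {1, true}, {0, false}, {2, false}, {1, false},
    };
    int         nrecords = (int) (sizeof(wal) / sizeof(wal[0]));
    int         last_fpw[NBLOCKS];

    /* pass 1: note the position of the last full-page write per block */
    for (int b = 0; b < NBLOCKS; b++)
        last_fpw[b] = -1;
    for (int i = 0; i < nrecords; i++)
        if (wal[i].full_page_write)
            last_fpw[wal[i].blockno] = i;

    /* pass 2: replay, skipping changes hidden behind a later page image */
    for (int i = 0; i < nrecords; i++)
    {
        if (i < last_fpw[wal[i].blockno])
            printf("skip  record %d (block %d)\n", i, wal[i].blockno);
        else
            printf("apply record %d (block %d)\n", i, wal[i].blockno);
    }
    return 0;
}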
Tom,
[ shrug... ] This is not consistent with my experience. I can't help
suspecting misconfiguration; perhaps shared_buffers much smaller on the
backup, for example.
You're only going to see it on SMP systems which have a high degree of CPU
utilization. That is, when you have 16 cores
Josh Berkus [EMAIL PROTECTED] writes:
Tom,
[ shrug... ] This is not consistent with my experience. I can't help
suspecting misconfiguration; perhaps shared_buffers much smaller on the
backup, for example.
You're only going to see it on SMP systems which have a high degree of CPU
Tom Lane [EMAIL PROTECTED] writes:
The argument that Heikki actually made was that multiple parallel
queries could use more of the I/O bandwidth of a multi-disk array
than recovery could. Which I believe, but I question how much of a
real-world problem it is. For it to be an issue, you'd