Hi,
Please find my answers inline.

> Do you have evidence of that contention being so bad that it
> justifies the additional WAL reading from disk? (Assuming no WAL
> archiving.)

In a broader sense, the DSM is a bitmap index with some optimizations
that have been added to make the updates more efficient.

One more application of the same is asynchronous materialized views. I
hope you agree that asynchronous materialized views have to be updated
only through the WAL. If the WAL can be used for that purpose, why
can't we multiplex it?

Thanks,
Gokul.
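A rough sketch of that multiplexing idea (entirely hypothetical; none
of these types or functions exist in PostgreSQL): one process reads the
WAL stream once and hands each record to every registered consumer,
e.g. a DSM builder and an asynchronous materialized-view updater.

    #include <stddef.h>

    /* Hypothetical sketch: read the WAL once, feed many consumers. */
    typedef struct WalRecord WalRecord;      /* decoded WAL record    */
    typedef struct WalStream WalStream;      /* sequential WAL reader */

    extern WalRecord *wal_next_record(WalStream *stream);
    extern void dsm_apply_record(const WalRecord *rec);
    extern void matview_apply_record(const WalRecord *rec);

    typedef struct WalConsumer
    {
        const char *name;
        void      (*apply)(const WalRecord *rec);
    } WalConsumer;

    static const WalConsumer consumers[] = {
        { "dsm-builder",     dsm_apply_record },     /* dead-space map bits */
        { "matview-updater", matview_apply_record }, /* async matview delta */
    };

    static void
    wal_multiplex_loop(WalStream *stream)
    {
        WalRecord *rec;

        /* Each record is read from disk once, no matter how many
         * consumers are registered. */
        while ((rec = wal_next_record(stream)) != NULL)
        {
            for (size_t i = 0; i < sizeof(consumers) / sizeof(consumers[0]); i++)
                consumers[i].apply(rec);
        }
    }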
Well, one of the principal arguments for having VACUUM at all is that it
off-loads required maintenance effort from foreground transaction code
paths. I'm not really going to be in favor of solutions that put more
work into the transaction code paths (HOT already did more of that than
I...
Heikki Linnakangas escribió:
> Another issue is that reading WAL is inherently not very scalable.
> There's only one WAL for the whole cluster, and it needs to be read
> sequentially, so it can easily become a bottleneck on large systems.

I have wondered why we do it this way. Is there a...
I haven't been paying close attention to this thread, but there are a
couple of general issues with using the WAL for this kind of thing.
First of all, one extremely cool feature of PostgreSQL is that
transaction size is not limited by WAL space, unlike on many other
DBMSs. I think many of the...
Alvaro Herrera [EMAIL PROTECTED] writes:
> Tom Lane escribió:
>> It would only be useful to have one per spindle-dedicated-to-WAL, so
>> tying the division to databases doesn't seem like it'd be a good idea.
> Keep in mind that there are claims that a write-cache-enabled
> battery-backed RAID controller negates the effect of a separate
> spindle.

Possibly true, but if that's the underlying hardware then there's no
performance benefit in breaking WAL up at all, no?
On Jan 16, 2008 6:12 PM, Alvaro Herrera [EMAIL PROTECTED] wrote:
> Tom Lane escribió:
>> Possibly true, but if that's the underlying hardware then there's no
>> performance benefit in breaking WAL up at all, no?

Selective PITR shipping. If it was possible to launch a PITR only on a
given database,...
On Wed, 16 Jan 2008, Alvaro Herrera wrote:
> Keep in mind that there are claims that a write-cache-enabled
> battery-backed RAID controller negates the effect of a separate spindle.

"Negates" is a bit strong; there's still some performance advantage on
systems that write a serious amount of data.
For more usefulness, we'd need to keep databases more separate from each
other than we do now. Databases would need to have their own transaction
counters, for example. Shared relations would obviously need major
changes for that to work. If we ultimately could separate databases so
that...
On Jan 16, 2008 7:41 PM, Heikki Linnakangas [EMAIL PROTECTED] wrote:
> I don't think it's going to work too well, though, not without major
> changes at least.

Well, I know it's really not doable with the current behaviour of WAL.
I just wanted to point out this feature request because we had it a few...
Heikki Linnakangas escribió:
> I don't think it's going to work too well, though, not without major
> changes at least. What would happen when you restore a PITR backup of
> just one database? Would the other databases still be there in the
> restored cluster? What state would they be in? After...
Alvaro Herrera wrote:
> Heikki Linnakangas escribió:
>> For more usefulness, we'd need to keep databases more separate from
>> each other than we do now. Databases would need to have their own
>> transaction counters, for example.
> Hmm, why? Perhaps you are right but I don't see the reason.

If each...
On Wed, 16 Jan 2008, Kevin Grittner wrote:
> I haven't seen any benchmarks on the list or in our environment
> where the separate spindles gave more than a 1% increase in
> performance when using a good-quality BBC controller.

Well, even 1% isn't nothing, which is the main point I was making--it...
Hi,

Gokulakannan Somasundaram wrote:
>> I'm also not sure it really buys us anything over having a second
>> dead-space-map data structure. The WAL is much larger and serves
>> other purposes which would limit what we can do with it.
> Ok. One obvious advantage is that it saves the contention...
Markus Schiltknecht [EMAIL PROTECTED] writes:
>> Since the Vacuum process is going to have much more information on
>> what has happened in the database,
> Why should that be? IMO, collecting the information at transaction
> time can give you exactly the same information, if not more or better...
Hi,

Tom Lane wrote:
> Well, one of the principal arguments for having VACUUM at all is that
> it off-loads required maintenance effort from foreground transaction
> code paths.

Off-loading doesn't mean we don't have to do the work, so it's obviously
a compromise.

AFAICT, having to write some...
Sorry Greg, I missed reading this part earlier.

On Jan 9, 2008 8:40 PM, Gregory Stark [EMAIL PROTECTED] wrote:
> Markus Schiltknecht [EMAIL PROTECTED] writes:
>> Hi,
>> Gokulakannan Somasundaram wrote:
>>> If we can ask the Vacuum process to scan the WAL log, it can get
>>> all the relevant...
Markus,
I was re-thinking about what you said. I feel that if we read the WAL
through the archiver (where archiving is switched on), which in any
case reads the entire WAL, it might save some CPU cycles on updates,
inserts and deletes. The question is about reducing I/Os, and I have
no...
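To illustrate the archiver idea (hypothetical names throughout; the
real archiver only copies finished segment files and has no such hook):
since the archiver already reads every completed WAL segment, it could
scan the records it passes over and set the DSM bits there, keeping
that work out of the foreground backends entirely.

    /* Hypothetical: piggy-back DSM maintenance on the archiver's pass
     * over a completed WAL segment. */
    typedef struct WalSegment WalSegment;
    typedef struct WalRecord
    {
        int          rmgr;          /* resource manager of the record */
        int          info;          /* record subtype                 */
        unsigned int relfilenode;   /* which relation was touched     */
        unsigned int block;         /* which heap page was touched    */
    } WalRecord;

    /* Placeholder constants, not the real rmgr/info ids. */
    enum { RMGR_HEAP = 10, INFO_HEAP_UPDATE = 0x20, INFO_HEAP_DELETE = 0x10 };

    extern WalSegment *wal_open_segment(const char *path);
    extern WalRecord *wal_segment_next(WalSegment *seg);
    extern void wal_close_segment(WalSegment *seg);
    extern void dsm_mark_page(unsigned int relfilenode, unsigned int block);
    extern void copy_segment_to_archive(const char *path);

    static void
    archive_segment_and_update_dsm(const char *segpath)
    {
        WalSegment *seg = wal_open_segment(segpath);
        WalRecord  *rec;

        while ((rec = wal_segment_next(seg)) != NULL)
        {
            /* Only updates and deletes can create dead tuples. */
            if (rec->rmgr == RMGR_HEAP &&
                (rec->info == INFO_HEAP_UPDATE || rec->info == INFO_HEAP_DELETE))
                dsm_mark_page(rec->relfilenode, rec->block);
        }

        wal_close_segment(seg);
        copy_segment_to_archive(segpath);   /* the archiver's actual job */
    }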
Hi,

Gokulakannan Somasundaram wrote:
> But I am just thinking of creating the DSM by reading through the WAL
> logs, instead of asking the inserts, updates and deletes to do the
> DSM creation.

What's the advantage of that? What's wrong with collecting the
information for the DSM at transaction time?
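For contrast, a minimal sketch of that transaction-time approach
(hypothetical helper names): the backend performing the delete already
knows exactly which page it dirtied, so recording it is one extra
in-line operation, with no WAL reading at all.

    /* Hypothetical: the writing backend sets the DSM bit itself, since
     * it already knows which heap page it just dirtied. */
    extern void perform_heap_delete(void *rel, unsigned int block,
                                    unsigned int offset);
    extern void dsm_set_page_bit(void *rel, unsigned int block);

    static void
    heap_delete_with_dsm(void *rel, unsigned int block, unsigned int offset)
    {
        perform_heap_delete(rel, block, offset);  /* the existing delete work */
        dsm_set_page_bit(rel, block);             /* one extra bit flip       */
    }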
Hi,

Gokulakannan Somasundaram wrote:
> because of the contention. Am I missing something here? While Vacuum
> is reading the DSM, operations may not be able to update the bits. We
> need to put the DSM in shared memory if all the processes are going
> to update it, whereas if Vacuum is going to form...
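The contention concern can be made concrete with a naive sketch
(hypothetical layout; a real map would surely be partitioned): if the
DSM is one shared bitmap behind a single lock, every UPDATE and DELETE
serializes on that lock, and a VACUUM scanning the map blocks them all.

    #include <stdint.h>

    /* Hypothetical shared-memory DSM: one bit per heap page, one lock. */
    typedef struct Lock Lock;                /* stand-in for an LWLock */
    extern void lock_acquire(Lock *lock);
    extern void lock_release(Lock *lock);

    typedef struct DeadSpaceMap
    {
        Lock    *lock;       /* the single point of contention       */
        uint32_t nbits;
        uint8_t  bits[1];    /* variable-length bitmap follows       */
    } DeadSpaceMap;

    static void
    dsm_set_bit(DeadSpaceMap *dsm, uint32_t pageno)
    {
        lock_acquire(dsm->lock);     /* every writer queues here      */
        dsm->bits[pageno / 8] |= (uint8_t) (1 << (pageno % 8));
        lock_release(dsm->lock);
    }

Finer-grained locking (one lock per chunk of the bitmap, say) would
presumably reduce that cost a lot, which is exactly where the
disagreement about how bad the contention really is comes from.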
Hi,
Maybe I am reposting something which has already been discussed to its
conclusion in this forum. I searched the archives and couldn't find
anything immediately.
With my relatively small experience in performance testing and tuning,
one of the rules of thumb for getting performance is "Don't do...
Hi,

Gokulakannan Somasundaram wrote:
> If we can ask the Vacuum process to scan the WAL log, it can get all
> the relevant details on where it needs to go.

You seem to be assuming that only a few tuples have changed between
vacuums, so that the WAL could quickly guide the VACUUM process to the...
> So it's easily possible to have more dead tuples than live ones. In
> such cases, scanning the WAL can easily take *longer* than scanning
> the table, because the amount of WAL to read would be bigger.

Yes... I made a wrong assumption there, so the idea is totally useless.

Thanks,
Gokul.
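To put rough, purely illustrative numbers behind Markus's point: take a
1 GB table of ten million 100-byte rows, with half the rows updated
between two vacuums. That is about five million update records; at an
assumed 150 bytes of WAL per record, it comes to roughly 750 MB of WAL
to read, already close to the cost of scanning the 1 GB heap, and with
full-page writes or several updates per row the WAL volume easily
exceeds the table itself.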
On Wed, 2008-01-09 at 15:10, Gregory Stark wrote:
> The goal should be to improve vacuum, then adjust the
> autovacuum_scale_factor as low as we can. As vacuum gets cheaper the
> scale factor can go lower and lower. We shouldn't allow the existing
> autovacuum behaviour to control the way...
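For concreteness, "adjust the scale factor as low as we can" refers to
settings like these (the actual GUC names from postgresql.conf; the
values below are only illustrative, not recommendations):

    # postgresql.conf -- illustrative values only
    autovacuum = on
    autovacuum_vacuum_scale_factor = 0.05   # vacuum when ~5% of a table is dead
    autovacuum_analyze_scale_factor = 0.02  # analyze on ~2% churn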
Hi,

Gregory Stark wrote:
> That's an interesting thought. I think your caveats are right but with
> some more work it might be possible to work it out. For example if a
> background process processed the WAL and accumulated an array of
> possibly-dead tuples to process in batch. It would wait whenever...
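A minimal sketch of that batching idea (hypothetical names; no such
background process exists): harvest candidate dead-tuple pointers from
the WAL into a fixed array, then hand each full batch to VACUUM in
sorted order so the heap pages are visited sequentially.

    #include <stdlib.h>

    /* Hypothetical background collector of possibly-dead tuple ids. */
    typedef struct WalStream WalStream;
    typedef struct WalRecord WalRecord;

    typedef struct DeadTupleId
    {
        unsigned int relfilenode;   /* which relation     */
        unsigned int block;         /* which heap page    */
        unsigned int offset;        /* which line pointer */
    } DeadTupleId;

    extern WalRecord *wal_next_record(WalStream *stream);
    extern int record_may_create_dead_tuple(const WalRecord *rec);
    extern DeadTupleId record_old_tuple_id(const WalRecord *rec);
    extern int deadtuple_cmp(const void *a, const void *b);
    extern void hand_batch_to_vacuum(const DeadTupleId *batch, int n);

    #define BATCH_SIZE 8192

    static void
    wal_deadtuple_collector(WalStream *stream)
    {
        DeadTupleId batch[BATCH_SIZE];
        int         n = 0;
        WalRecord  *rec;

        while ((rec = wal_next_record(stream)) != NULL)
        {
            if (!record_may_create_dead_tuple(rec))   /* updates, deletes */
                continue;

            batch[n++] = record_old_tuple_id(rec);
            if (n == BATCH_SIZE)
            {
                /* Sort so VACUUM touches each heap page once, in order. */
                qsort(batch, n, sizeof(DeadTupleId), deadtuple_cmp);
                hand_batch_to_vacuum(batch, n);
                n = 0;
            }
        }
    }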