(cleaning up my inbox, and bumped into this..)
On 22.04.2010 12:31, Simon Riggs wrote:
On Thu, 2010-04-22 at 12:18 +0300, Heikki Linnakangas wrote:
Simon Riggs wrote:
On Thu, 2010-04-22 at 11:56 +0300, Heikki Linnakangas wrote:
If none of the removed heap tuples were present anymore, we
On Thu, Apr 29, 2010 at 4:12 PM, Simon Riggs si...@2ndquadrant.com wrote:
Simple tuning of btree_xlog_vacuum() using an idea I had a while back,
just never implemented. XXX comments removed.
Allows us to avoid reading in blocks during VACUUM replay that are only
required for correctness of
On Apr 29, 2010, at 3:20 PM, Tom Lane wrote:
Simon Riggs si...@2ndquadrant.com writes:
Objections to commit?
This is not the time to be hacking stuff like this. You haven't even
demonstrated that there's a significant performance issue here.
I tend to agree that this point of the cycle
Jim Nasby deci...@decibel.org writes:
On Apr 29, 2010, at 3:20 PM, Tom Lane wrote:
This is not the time to be hacking stuff like this. You haven't even
demonstrated that there's a significant performance issue here.
I tend to agree that this point of the cycle isn't a good one to be making
On Mon, 2010-05-17 at 16:10 -0400, Tom Lane wrote:
Jim Nasby deci...@decibel.org writes:
On Apr 29, 2010, at 3:20 PM, Tom Lane wrote:
This is not the time to be hacking stuff like this. You haven't even
demonstrated that there's a significant performance issue here.
I tend to agree
Simon Riggs si...@2ndquadrant.com writes:
Objections to commit?
This is not the time to be hacking stuff like this. You haven't even
demonstrated that there's a significant performance issue here.
regards, tom lane
On Thu, 2010-04-22 at 10:24 +0300, Heikki Linnakangas wrote:
btree_redo:
case XLOG_BTREE_DELETE:
/*
* Btree delete records can conflict with standby queries. You
* might think that vacuum records would conflict as well, but
*
Simon Riggs wrote:
On Thu, 2010-04-22 at 10:24 +0300, Heikki Linnakangas wrote:
btree_redo:
/*
* Note that if all heap tuples were LP_DEAD then we will be
* returning InvalidTransactionId here. This seems very unlikely
* in practice.
*/
If none of the removed heap
On Thu, 2010-04-22 at 11:28 +0300, Heikki Linnakangas wrote:
Simon Riggs wrote:
On Thu, 2010-04-22 at 10:24 +0300, Heikki Linnakangas wrote:
btree_redo:
/*
* Note that if all heap tuples were LP_DEAD then we will be
* returning InvalidTransactionId here. This seems very unlikely
Simon Riggs wrote:
On Thu, 2010-04-22 at 11:28 +0300, Heikki Linnakangas wrote:
Simon Riggs wrote:
On Thu, 2010-04-22 at 10:24 +0300, Heikki Linnakangas wrote:
btree_redo:
/*
* Note that if all heap tuples were LP_DEAD then we will be
* returning InvalidTransactionId here. This
On Thu, 2010-04-22 at 11:56 +0300, Heikki Linnakangas wrote:
If none of the removed heap tuples were present anymore, we currently
return InvalidTransactionId, which kills/waits out all read-only
queries. But if none of the tuples were present anymore, the read-only
queries wouldn't have
Simon Riggs wrote:
On Thu, 2010-04-22 at 11:56 +0300, Heikki Linnakangas wrote:
If none of the removed heap tuples were present anymore, we currently
return InvalidTransactionId, which kills/waits out all read-only
queries. But if none of the tuples were present anymore, the read-only
On Thu, 2010-04-22 at 12:18 +0300, Heikki Linnakangas wrote:
Simon Riggs wrote:
On Thu, 2010-04-22 at 11:56 +0300, Heikki Linnakangas wrote:
If none of the removed heap tuples were present anymore, we currently
return InvalidTransactionId, which kills/waits out all read-only
queries.
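
A minimal C sketch of the decision being discussed, illustrative only: it is
not the actual nbtree redo code, and latest_removed_xid / query_conflicts are
hypothetical stand-ins for the real routines.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef uint32_t TransactionId;
#define InvalidTransactionId ((TransactionId) 0)

/* Hypothetical: return the newest xmax among heap tuples that could still be
 * found, or InvalidTransactionId if every referenced item was already LP_DEAD. */
static TransactionId
latest_removed_xid(const TransactionId *present_xmaxes, int n)
{
    TransactionId latest = InvalidTransactionId;
    for (int i = 0; i < n; i++)
        if (present_xmaxes[i] > latest)
            latest = present_xmaxes[i];
    return latest;
}

/* Hypothetical conflict test: true means the standby query must be cancelled
 * (or waited out). */
static bool
query_conflicts(TransactionId latestRemovedXid, TransactionId snapshot_xmin)
{
    if (latestRemovedXid == InvalidTransactionId)
        return true;                              /* conservative: conflict with everyone */
    return snapshot_xmin <= latestRemovedXid;     /* snapshot might still see removed rows */
}

int main(void)
{
    TransactionId xmaxes[] = { 1005, 1012, 1003 };
    TransactionId latest = latest_removed_xid(xmaxes, 3);

    printf("latestRemovedXid = %u\n", (unsigned) latest);                 /* 1012 */
    printf("query with xmin 1020: conflict=%d\n",
           query_conflicts(latest, 1020));                                /* 0 */
    printf("all items LP_DEAD:    conflict=%d\n",
           query_conflicts(latest_removed_xid(xmaxes, 0), 1020));         /* 1 */
    return 0;
}

The last case is the one being questioned here: when nothing was left to look
at, the conservative fallback conflicts with every standby query even though
none of them could have seen the removed rows.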
On Tue, 2010-04-13 at 17:18 +0300, Heikki Linnakangas wrote:
I've reviewed your changes and they look correct to me; the main chunk
of code is mine and that was tested by me.
Ok, committed after fixing an obsoleted comment and other small
editorialization.
Looks good, thanks.
--
Simon
Simon Riggs wrote:
On Thu, 2010-04-08 at 19:02 +0300, Heikki Linnakangas wrote:
Simon Riggs wrote:
OK, that seems better. I'm happy with that instead.
Have you tested this? Is it ready to commit?
Only very briefly. I think the code is ready, but please review and test
to see I didn't miss
On Tue, 2010-04-06 at 10:22 +0100, Simon Riggs wrote:
Initial patch. I will be testing over next day. No commit before at
least midday on Wed 7 Apr.
Various previous discussions sidelined a very important point: what
exactly does it mean to start recovery from a shutdown checkpoint?
If
Simon Riggs wrote:
On Tue, 2010-04-06 at 10:22 +0100, Simon Riggs wrote:
Initial patch. I will be testing over next day. No commit before at
least midday on Wed 7 Apr.
Various previous discussions sidelined a very important point: what
exactly does it mean to start recovery from a
On Thu, 2010-04-08 at 13:33 +0300, Heikki Linnakangas wrote:
If standby_mode is enabled and there is no source of WAL, then we get a
stream of messages saying
LOG: record with zero length at 0/C88
...
but most importantly we never get to the main recovery loop, so Hot
On Thu, Apr 8, 2010 at 6:16 AM, Simon Riggs si...@2ndquadrant.com wrote:
If standby_mode is enabled and there is no source of WAL, then we get a
stream of messages saying
LOG: record with zero length at 0/C88
...
but most importantly we never get to the main recovery loop, so Hot
Simon Riggs wrote:
In StartupXlog() when we get to the point where we Find the first
record that logically follows the checkpoint, in the current code
ReadRecord() loops forever, spitting out
LOG: record with zero length at 0/C88
...
That prevents us from going further down
On Thu, 2010-04-08 at 18:35 +0300, Heikki Linnakangas wrote:
So I have introduced the new mode (snapshot mode) to enter hot standby
anyway. That avoids us having to screw around with the loop logic for
redo. I don't see any need to support the case where we have no WAL
source
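
A toy control-flow sketch of the problem, not the real StartupXLOG; the
functions are hypothetical and the "fix" shown is only the general shape of
what is being proposed (open for read-only connections once consistency is
reached, instead of waiting for a first record that may never arrive).

#include <stdbool.h>
#include <stdio.h>

static int attempts = 0;

/* Hypothetical WAL read: fails a few times (no WAL source yet), then succeeds. */
static bool read_next_record(void)
{
    return ++attempts > 3;
}

/* Assumed true for the sketch: we already replayed up to a consistent point. */
static bool reached_consistent_state(void)
{
    return true;
}

int main(void)
{
    bool open_for_queries = false;

    for (;;)
    {
        /* Don't wait for the first post-checkpoint record before letting
         * read-only connections in, as long as we are consistent. */
        if (!open_for_queries && reached_consistent_state())
        {
            open_for_queries = true;
            printf("hot standby now accepting read-only connections\n");
        }

        if (read_next_record())
            break;                      /* got a record; redo would apply it */

        printf("LOG: no WAL available yet, retrying\n");
    }

    printf("reached the main redo loop after %d read attempts\n", attempts);
    return 0;
}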
Simon Riggs wrote:
OK, that seems better. I'm happy with that instead.
Have you tested this? Is it ready to commit?
Only very briefly. I think the code is ready, but please review and test
to see I didn't miss anything.
--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com
On Thu, 2010-04-08 at 19:02 +0300, Heikki Linnakangas wrote:
Simon Riggs wrote:
OK, that seems better. I'm happy with that instead.
Have you tested this? Is it ready to commit?
Only very briefly. I think the code is ready, but please review and test
to see I didn't miss anything.
I'm
On 3/2/10 12:47 PM, Marc Munro wrote:
To take it further still, if vacuum on the master could be prevented
from touching records that are less than max_standby_delay seconds old,
it would be safe to apply WAL from the very latest vacuum. I guess HOT
could be handled similarly though that may
On Tue, 2010-03-02 at 12:47 -0800, Marc Munro wrote:
IIUC this is only a problem for WAL from HOT updates and vacuums. If no
vacuums or HOT updates have been performed, there is no risk of
returning bad data. So WAL that does not contain HOT updates or vacuums
could be applied on the
On Fri, 2010-02-26 at 16:44 -0500, Tom Lane wrote:
Greg Stark gsst...@mit.edu writes:
On Fri, Feb 26, 2010 at 9:19 PM, Tom Lane t...@sss.pgh.pa.us wrote:
There's *definitely* not going to be enough information in the WAL
stream coming from a master that doesn't think it has HS slaves.
We
On Fri, 2010-02-26 at 03:33 -0500, Greg Smith wrote:
I really hope this discussion can stay focused on whether and how it's
possible to improve this area, with the goal being to deliver a product
everyone can be proud of with the full feature set that makes this next
release a killer one. The
Dimitri Fontaine wrote:
Bruce Momjian br...@momjian.us writes:
Doesn't the system already adjust the delay based on the length of slave
transactions, e.g. max_standby_delay. It seems there is no need for a
user switch --- just max_standby_delay really high.
Well that GUC looks like it
Heikki Linnakangas wrote:
Dimitri Fontaine wrote:
Bruce Momjian br...@momjian.us writes:
Doesn't the system already adjust the delay based on the length of slave
transactions, e.g. max_standby_delay. It seems there is no need for a
user switch --- just max_standby_delay really high.
Well
Greg Smith wrote:
Joshua D. Drake wrote:
On Sat, 27 Feb 2010 00:43:48 +, Greg Stark gsst...@mit.edu wrote:
I want my ability to run large batch queries without any performance
or reliability impact on the primary server.
+1
I can use any number of other technologies
Greg,
If you think of it in those terms, the idea that you need to run PITR
backup/archive recovery to not get that behavior isn't an important
distinction anymore. If you run SR with the option enabled you could
get it, any other setup and you won't.
+1.
I always expected that we'd get
Josh Berkus wrote:
Now that I think about it, the xmin thing really doesn't seem
conceptually difficult. If the slave just opens a 2nd, special query
connection back to the master and publishes its oldest xmin there, as
far as the master is concerned, it's just another query backend.
Could it
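
A minimal sketch of what publishing the standby's oldest xmin would mean on
the master side, assuming the reported value is simply folded into the horizon
vacuum already computes from local backends. Illustrative only; the names are
hypothetical.

#include <stdint.h>
#include <stdio.h>

typedef uint32_t TransactionId;
#define InvalidTransactionId ((TransactionId) 0)

/* Hypothetical: the horizon below which vacuum may remove dead tuples. */
static TransactionId
cleanup_horizon(const TransactionId *local_xmins, int n, TransactionId standby_xmin)
{
    TransactionId horizon = standby_xmin;          /* reported by the standby */

    for (int i = 0; i < n; i++)
        if (local_xmins[i] != InvalidTransactionId &&
            (horizon == InvalidTransactionId || local_xmins[i] < horizon))
            horizon = local_xmins[i];

    return horizon;
}

int main(void)
{
    TransactionId local[] = { 2048, 2100 };

    /* A long report on the standby reported xmin 1500: cleanup on the master
     * is held back to 1500 instead of 2048, just as for a local backend. */
    printf("horizon with standby feedback: %u\n",
           (unsigned) cleanup_horizon(local, 2, 1500));
    printf("horizon without feedback:      %u\n",
           (unsigned) cleanup_horizon(local, 2, InvalidTransactionId));
    return 0;
}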
The part I still don't have good visibility on is how much of the
necessary SR infrastructure needed to support this communications
channel is already available in some form. I had though the walsender
on the master was already receiving messages sometimes from the
walreceiver on the
Thank you for combining a small personal attack with a selfish
commentary about how yours is the only valid viewpoint. Saves me a lot
of trouble replying to your messages, can just ignore them instead if
this is how you're going to act.
Hey, take it easy! I read Stark's post as
Josh Berkus wrote:
Hey, take it easy! I read Stark's post as tongue-in-cheek, which I
think it was.
Yeah, I didn't get that. We've already exchanged mutual off-list
apologies for the misunderstanding in both directions, I stopped just
short of sending flowers.
I did kick off this
Greg Smith wrote:
Josh Berkus wrote:
Now that I think about it, the xmin thing really doesn't seem
conceptually difficult. If the slave just opens a 2nd, special query
connection back to the master and publishes its oldest xmin there, as
far as the master is concerned, it's just
Bruce Momjian wrote:
The first option is to connect to the primary server and keep a query
active for as long as needed to run queries on the standby. This
guarantees that a WAL cleanup record is never generated and query
conflicts do not occur, as described above. This could be done using
Greg Stark wrote:
On Sun, Feb 28, 2010 at 5:28 AM, Greg Smith g...@2ndquadrant.com wrote:
The idea of the workaround is that if you have a single long-running query
to execute, and you want to make sure it doesn't get canceled because of a
vacuum cleanup, you just have it connect back to the
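
The workaround being described can be sketched with libpq: before starting
the long report on the standby, open a transaction on the primary that keeps
a snapshot registered, so the primary's vacuum cannot generate a cleanup
record the report would conflict with. A sketch only; the connection string
is a placeholder and error handling is minimal.

#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    /* Placeholder connection string for the primary. */
    PGconn *primary = PQconnectdb("host=primary.example dbname=postgres");

    if (PQstatus(primary) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(primary));
        PQfinish(primary);
        return 1;
    }

    /* A serializable (snapshot-isolation) transaction keeps its snapshot for
     * its whole lifetime, so one executed statement is enough to hold xmin. */
    PQclear(PQexec(primary, "BEGIN ISOLATION LEVEL SERIALIZABLE"));
    PQclear(PQexec(primary, "SELECT txid_current_snapshot()"));

    /*
     * ... run the long report against the standby here, on a separate
     * connection; rows it may still need to see cannot be vacuumed away
     * on the primary while this transaction is open ...
     */

    PQclear(PQexec(primary, "COMMIT"));
    PQfinish(primary);
    return 0;
}

(Typically built with something like: cc report_guard.c -lpq)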
Greg Smith wrote:
Attached is a tar file with some test case demo scripts that demonstrate
the worst of the problems here IMHO.
Thanks for that! We've been discussing this for ages, so it's nice to
have a concrete example.
I don't want to belittle that work because it's been important to make
On 26/02/10 08:33, Greg Smith wrote:
There are a number of HS
tunables that interact with one another, and depending on your priorities a
few ways you can try to optimize the configuration for what I expect to
be common use cases for this feature.
I've written a blog entry at
On 26/02/10 14:10, Heikki Linnakangas wrote:
Ideally the standby would stash away the old pages or tuples somewhere
so that it can still access them even after replaying the WAL records
that remove them from the main storage. I realize that's not going to
happen any time soon because it's hard
Richard Huxton wrote:
On 26/02/10 08:33, Greg Smith wrote:
I'm not sure what you might be expecting from the above combination, but
what actually happens is that many of the SELECT statements on the table
*that isn't even being updated* are canceled. You see this in the logs:
Hmm - this I'd
On 26/02/10 14:45, Heikki Linnakangas wrote:
Richard Huxton wrote:
On 26/02/10 08:33, Greg Smith wrote:
I'm not sure what you might be expecting from the above combination, but
what actually happens is that many of the SELECT statements on the table
*that isn't even being updated* are
Richard Huxton wrote:
Can we not wait to cancel the transaction until *any* new lock is
attempted though? That should protect all the single-statement
long-running transactions that are already underway. Aggregates etc.
Hmm, that's an interesting thought. You'll still need to somehow tell
the
Replying to my own post - first sign of madness...
Let's see if I've got the concepts clear here, and hopefully my thinking
it through will help others reading the archives.
There are two queues:
1. Cleanup on the master
2. Replay on the slave
Running write queries on the master adds to both
On Fri, Feb 26, 2010 at 10:21 AM, Heikki Linnakangas
heikki.linnakan...@enterprisedb.com wrote:
Richard Huxton wrote:
Can we not wait to cancel the transaction until *any* new lock is
attempted though? That should protect all the single-statement
long-running transactions that are already
Heikki Linnakangas wrote:
How to handle situations where the standby goes away for a while,
such as a network outage, so that it doesn't block the master from ever
cleaning up dead tuples is a concern.
Yeah, that's another issue that needs to be dealt with. You'd probably
need some kind
On 2/26/10 6:57 AM, Richard Huxton wrote:
Can we not wait to cancel the transaction until *any* new lock is
attempted though? That should protect all the single-statement
long-running transactions that are already underway. Aggregates etc.
I like this approach. Is it fragile in some
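
A toy model of the "wait until the next new lock" idea; not PostgreSQL
source, all names are hypothetical.

#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

struct backend
{
    bool conflict_pending;      /* set by the startup process on conflict */
};

/* Startup-process side: flag the conflict instead of cancelling at once. */
static void report_conflict(struct backend *b)
{
    b->conflict_pending = true;
}

/* Query side: every new lock acquisition first checks for a pending conflict. */
static void acquire_lock(struct backend *b, const char *what)
{
    if (b->conflict_pending)
    {
        fprintf(stderr, "canceling statement: conflict with recovery (at %s)\n", what);
        exit(1);
    }
    printf("lock acquired: %s\n", what);
}

int main(void)
{
    struct backend me = { false };

    acquire_lock(&me, "table being aggregated");   /* taken before the conflict */
    report_conflict(&me);                          /* WAL replay hits a conflict */

    /* The single-statement aggregate keeps working on data it already has
     * locks and pins on ... */
    printf("... aggregate continues undisturbed ...\n");

    acquire_lock(&me, "next relation");            /* first new lock: cancel here */
    return 0;
}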
Greg Stark gsst...@mit.edu writes:
On Fri, Feb 26, 2010 at 7:16 PM, Tom Lane t...@sss.pgh.pa.us wrote:
I don't see a substantial additional burden there. What I would
imagine is needed is that the slave transmits a single number back
--- its current oldest xmin --- and the walsender process
Well, as Heikki said, a stop-and-go WAL management approach could deal
with that use-case. What I'm concerned about here is the complexity,
reliability, maintainability of trying to interlock WAL application with
slave queries in any sort of fine-grained fashion.
This sounds a bit
Greg Stark gsst...@mit.edu writes:
Why shouldn't it have any queries at walreceiver startup? It has any
xlog segments that were copied from the master and any it can find in
the archive; it could easily reach a consistent point long before it
needs to connect to the master. If you really want
bruce wrote:
4 The standby waiting longer than max_standby_delay to acquire a
...
#4 can be controlled by max_standby_delay, where a large value only
delays playback during crash recovery --- again, a rare occurrence.
One interesting feature is that max_standby_delay will _only_ delay
Tom Lane t...@sss.pgh.pa.us writes:
Well, as Heikki said, a stop-and-go WAL management approach could deal
with that use-case. What I'm concerned about here is the complexity,
reliability, maintainability of trying to interlock WAL application with
slave queries in any sort of fine-grained
Greg Stark gsst...@mit.edu writes:
On Fri, Feb 26, 2010 at 9:19 PM, Tom Lane t...@sss.pgh.pa.us wrote:
There's *definitely* not going to be enough information in the WAL
stream coming from a master that doesn't think it has HS slaves.
We can't afford to record all that extra stuff in
Dimitri Fontaine wrote:
Tom Lane t...@sss.pgh.pa.us writes:
Well, as Heikki said, a stop-and-go WAL management approach could deal
with that use-case. What I'm concerned about here is the complexity,
reliability, maintainability of trying to interlock WAL application with
slave queries
Bruce Momjian br...@momjian.us writes:
Doesn't the system already adjust the delay based on the length of slave
transactions, e.g. max_standby_delay. It seems there is no need for a
user switch --- just max_standby_delay really high.
Well that GUC looks like it allows you to set a compromise
Dimitri Fontaine wrote:
Bruce Momjian br...@momjian.us writes:
Doesn't the system already adjust the delay based on the length of slave
transactions, e.g. max_standby_delay. It seems there is no need for a
user switch --- just max_standby_delay really high.
Well that GUC looks like it
Bruce Momjian wrote:
Doesn't the system already adjust the delay based on the length of slave
transactions, e.g. max_standby_delay. It seems there is no need for a
user switch --- just max_standby_delay really high.
The first issue is that you're basically saying I don't care about high
Bruce Momjian wrote:
5 Early cleanup of data still visible to the current query's
snapshot
#5 could be handled by using vacuum_defer_cleanup_age on the master.
Why is vacuum_defer_cleanup_age not listed in postgresql.conf?
I noticed that myself and fired off a
Greg Smith wrote:
Bruce Momjian wrote:
Doesn't the system already adjust the delay based on the length of slave
transactions, e.g. max_standby_delay. It seems there is no need for a
user switch --- just max_standby_delay really high.
The first issue is that you're basically saying
On Fri, Feb 26, 2010 at 11:56 PM, Greg Smith g...@2ndquadrant.com wrote:
This is also the reason why the whole pause recovery idea is a fruitless
path to wander down. The whole point of this feature is that people have a
secondary server available for high-availability, *first and foremost*,
Greg Smith wrote:
You can think of the idea of passing an xmin back from the standby as
being like an auto-tuning vacuum_defer_cleanup_age. It's 0 when no
standby queries are running, but grows in size to match longer ones. And
you don't have to know anything to set it correctly;
Greg Stark wrote:
Well you can go sit in the same corner as Simon with your high
availability servers.
I want my ability to run large batch queries without any performance
or reliability impact on the primary server.
Thank you for combining a small personal attack with a selfish
On Sat, Feb 27, 2010 at 1:53 AM, Greg Smith g...@2ndquadrant.com wrote:
Greg Stark wrote:
Well you can go sit in the same corner as Simon with your high
availability servers.
I want my ability to run large batch queries without any performance
or reliability impact on the primary server.
Bruce Momjian wrote:
Well, I think the choice is either you delay vacuum on the master for 8
hours or pile up 8 hours of WAL files on the slave, and delay
application, and make recovery much slower. It is not clear to me which
option a user would prefer because the bloat on the master might be
Heikki Linnakangas wrote:
One such landmine is that the keepalives need to flow from client to
server while the WAL records are flowing from server to client. We'll
have to crack that problem for synchronous replication too, but I think
that alone is a big enough problem to make this 9.1
Greg Smith wrote:
Heikki Linnakangas wrote:
One such landmine is that the keepalives need to flow from client to
server while the WAL records are flowing from server to client. We'll
have to crack that problem for synchronous replication too, but I think
that alone is a big enough problem
Greg Stark wrote:
Eh? That's not what I meant at all. Actually it's kind of the exact
opposite of what I meant.
Sorry about that--I think we just hit one of those language usage drift
bits of confusion. "Sit in the corner" has a very negative tone to it
in US English and I interpreted your
On Sat, Feb 27, 2010 at 2:43 AM, Greg Smith g...@2ndquadrant.com wrote:
But if you're running the 8 hour report on the master right now, aren't you
already exposed to a similar pile of bloat issues while it's going? If I
have the choice between sometimes queries will get canceled vs.
On Sat, 27 Feb 2010 00:43:48 +, Greg Stark gsst...@mit.edu wrote:
On Fri, Feb 26, 2010 at 11:56 PM, Greg Smith g...@2ndquadrant.com
wrote:
This is also the reason why the whole pause recovery idea is a fruitless
path to wander down. The whole point of this feature is that people
have a
Joshua D. Drake wrote:
On Sat, 27 Feb 2010 00:43:48 +, Greg Stark gsst...@mit.edu wrote:
I want my ability to run large batch queries without any performance
or reliability impact on the primary server.
+1
I can use any number of other technologies for high availability.
Greg Stark wrote:
But if they move from having a plain old PITR warm standby to having
one they can run queries on they might well assume that the big
advantage of having the standby to play with is precisely that they
can do things there that they have never been able to do on the master
* Greg Smith g...@2ndquadrant.com [100226 23:39]:
Just not having the actual query running on the master is such a
reduction in damage that I think it's delivering the essence of what
people are looking for regardless. That it might be possible in some
cases to additionally avoid the
Aidan Van Dyk wrote:
Would we (ya, the royal we) be willing to say that if you want the
benefit of removing the MVCC overhead of long-running queries you need
to run PITR backup/archive recovery, and if you want SR, you get a
closed-loop master-follows-slave-xmin behaviour?
To turn that
Markus Wanner wrote:
Bruce,
Bruce Momjian wrote:
Ah, I now realize it only mentions warm standby, not hot, so I just
updated the documentation to reflect that; you can see it here:
Maybe the table below also needs an update, because unlike Warm Standby
using PITR, a hot standby
On Mon, Feb 8, 2010 at 10:34 PM, Bruce Momjian br...@momjian.us wrote:
Ahh, good point. I had not considered the table would change. What I
did was to mark "Slaves accept read-only queries" as "Hot only".
Can the warm standby still reside in v9.0? If not, the mark of
"Hot only" seems odd to me.
Fujii Masao wrote:
On Mon, Feb 8, 2010 at 10:34 PM, Bruce Momjian br...@momjian.us wrote:
Ahh, good point. I had not considered the table would change. What I
did was to mark "Slaves accept read-only queries" as "Hot only".
Can the warm standby still reside in v9.0? If not, the mark of
Hot
Bruce,
Bruce Momjian wrote:
Ah, I now realize it only mentions warm standby, not hot, so I just
updated the documentation to reflect that; you can see it here:
Maybe the table below also needs an update, because unlike Warm Standby
using PITR, a hot standby accepts read-only queries and can
On Sun, Feb 7, 2010 at 4:41 AM, Markus Wanner mar...@bluegap.ch wrote:
Bruce Momjian wrote:
Do we want to call the feature hot standby? Is a read-only standby a
standby or a slave?
I think hot standby is pretty much the term, now.
See here for the previous iteration of this discussion:
On Sat, 2010-02-06 at 17:32 +0100, Andres Freund wrote:
So it seems at least the behavior is quite different from what the
docs state. Am I missing something here?
It's a small bug/typo in standby.c:ResolveRecoveryConflictWithDatabase
The line:
CancelDBBackends(dbid,
I've always thought this feature was misnamed and nothing has happened
to change my mind, but it's not clear whether I'm in the majority.
I'm afraid force of habit is more powerful than correctness on this one.
It's going to be HS/SR whether that's perfectly correct or not.
--Josh Berkus
On Feb 7, 2010, at 12:35 PM, Josh Berkus wrote:
I've always thought this feature was misnamed and nothing has happened
to change my mind, but it's not clear whether I'm in the majority.
I'm afraid force of habit is more powerful than correctness on this one.
It's going to be HS/SR whether
On Saturday 06 February 2010 02:25:33 Tatsuo Ishii wrote:
Hi,
While testing Hot Standby, I have encountered strange behavior with
DROP DATABASE command.
1) connect to test database at standby via psql
2) issue DROP DATABASE test command to primary
3) session #1 works fine
4) close
On Saturday 06 February 2010 17:32:43 Andres Freund wrote:
On Saturday 06 February 2010 02:25:33 Tatsuo Ishii wrote:
Hi,
While testing Hot Standby, I have encountered strange behavior with
DROP DATABASE command.
1) connect to test database at standby via psql
2) issue DROP
Joshua Tolley wrote:
Having concluded I really need to start playing with hot standby, I started
looking for documentation on the subject. I found what I was looking for; I
also found this page[1], which, it seems, ought to mention hot standby.
Comments?
[1]
On Tue, 2010-02-02 at 20:27 +0200, Heikki Linnakangas wrote:
I'd appreciate it if you could review the relation-specific conflict
patch, 'cos it's still important.
One fundamental gripe I have about that approach is that it's hard to
predict when you will be saved by the cache and when
I wrote:
* We can not change the toast rel OID of a shared catalog -- there's no
way to propagate that into the other copies of pg_class. So we need to
rejigger the logic for heap rewriting a little bit. Toast rel swapping
has to be handled by swapping their relfilenodes not their OIDs.
Tom Lane wrote:
I've been playing around with different alternatives for solving the
problem of toast-pointer OIDs, but I keep coming back to the above as
being the least invasive and most robust answer. There are two basic
ways that we could do it: pass the OID to use to the toast logic,
On Wed, 2010-02-03 at 11:50 -0500, Tom Lane wrote:
I've concluded that that's too large a change to undertake for 9.0
The purpose of this was to make the big changes in 9.0. If we aren't
going to do that it seems like we shouldn't bother at all.
So why not flip back to the easier approach of
Simon Riggs si...@2ndquadrant.com writes:
On Wed, 2010-02-03 at 11:50 -0500, Tom Lane wrote:
I've concluded that that's too large a change to undertake for 9.0
The purpose of this was to make the big changes in 9.0. If we aren't
going to do that it seems like we shouldn't bother at all.
No,
Simon Riggs wrote:
On Fri, 2010-01-29 at 15:01 +, Simon Riggs wrote:
Putting it back takes time and
given enough of that rare cloth, it will eventually be put back.
Looks like I'll have time to add the starts-at-shutdown-checkpoint item
back in after all.
Great! Thank you, much
Simon Riggs wrote:
The way this would work is if Startup waits on a buffer pin we
immediately send out a request to all backends to cancel themselves if
they are holding the buffer pin required while waiting on a lock. We then
sleep until max_standby_delay. When max_standby_delay = -1 we only sleep
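
A rough sketch of that wait logic with the timing and signalling simulated;
the function names are hypothetical and max_standby_delay is only an example
value here.

#include <stdbool.h>
#include <stdio.h>

static int pin_holders = 2;

/* Hypothetical: ask backends that hold the needed buffer pin *and* are
 * themselves waiting on a lock to cancel themselves (the deadlock case). */
static void request_self_cancel_of_blocked_pin_holders(void)
{
    pin_holders -= 1;               /* pretend one such backend existed */
}

static bool pin_released(void)
{
    return pin_holders == 0;
}

int main(void)
{
    const int max_standby_delay_ms = 30000;    /* example setting */
    int waited_ms = 0;

    request_self_cancel_of_blocked_pin_holders();

    /* Wait for the pin, but only up to the max_standby_delay deadline;
     * a real implementation would sleep between checks. */
    while (!pin_released() && waited_ms < max_standby_delay_ms)
        waited_ms += 100;

    if (pin_released())
        printf("pin released after %d ms, replay continues\n", waited_ms);
    else
        printf("deadline reached after %d ms: cancel remaining pin holders\n", waited_ms);

    return 0;
}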
Tom Lane wrote:
Hm ... do we want an LWLock per map file, or is one lock to rule them all
sufficient? LWLock per database seems problematic. With an HW lock there
wouldn't be a problem with that. HW lock would allow concurrent updates of
the map files of different DBs, but is that worth the
On Sun, 2010-01-31 at 22:49 -0500, Tom Lane wrote:
Simon Riggs si...@2ndquadrant.com writes:
I'll do a little work towards step (1) just so we can take a more
informed view once you've had a better look at just what (2) involves.
I spent a couple of hours reading code and believe that I've
On Mon, Feb 1, 2010 at 8:54 AM, Simon Riggs si...@2ndquadrant.com wrote:
Disallow catalog relocation inside subtransactions, to avoid having
to handle subxact abort effects on the local-map-changes state.
This could be implemented if desired, but doesn't seem worth it
at least in first pass.
On Mon, 2010-02-01 at 09:40 +0200, Heikki Linnakangas wrote:
Simon Riggs wrote:
The way this would work is if Startup waits on a buffer pin we
immediately send out a request to all backends to cancel themselves if
they are holding the buffer pin required while waiting on a lock. We then
sleep
Greg Stark gsst...@mit.edu writes:
On Mon, Feb 1, 2010 at 8:54 AM, Simon Riggs si...@2ndquadrant.com wrote:
Disallow catalog relocation inside subtransactions, to avoid having
to handle subxact abort effects on the local-map-changes state.
This could be implemented if desired, but doesn't seem
On Mon, 2010-02-01 at 10:06 -0500, Tom Lane wrote:
the assumption that the file is less than one disk block,
it should be just as atomic as pg_control updates are.
IIRC there were 173 relations affected by this. At 4 bytes each we would
have more than 512 bytes.
ISTM you need to treat some of
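
For reference, the arithmetic behind that size concern, taking the figures
quoted above at face value:

#include <stdio.h>

int main(void)
{
    int relations   = 173;   /* relations said to be affected */
    int entry_bytes = 4;     /* per-entry size quoted above */

    /* 173 * 4 = 692 bytes, already more than the 512 bytes mentioned, so the
     * "less than one disk block" single-sector atomicity assumption fails. */
    printf("map size: %d bytes\n", relations * entry_bytes);
    return 0;
}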
Simon Riggs si...@2ndquadrant.com writes:
On Mon, 2010-02-01 at 10:06 -0500, Tom Lane wrote:
the assumption that the file is less than one disk block,
it should be just as atomic as pg_control updates are.
IIRC there were 173 relations affected by this. At 4 bytes each we would
have more than
Tom Lane wrote:
That seems too fragile to me, as I don't find it a stretch at all to
think that writing the map file might fail --- just think Windows
antivirus code :-(. Now, once we have written the WAL record for
the mapfile change, we can't really afford a failure in my approach
either.