Re: [HACKERS] standby registration (was: is sync rep stalled?)

2010-10-08 Thread Robert Haas
On Fri, Oct 8, 2010 at 12:29 PM, Josh Berkus  wrote:
> On 10/07/2010 06:38 PM, Robert Haas wrote:
>>
>> Yes, let's please just implement something simple and get it
>> committed.  k = 1.  Two GUCs (synchronous_standbys = name, name, name
>> and synchronous_waitfor = none|recv|fsync|apply), SUSET so you can
>> change it per txn.  Done.  We can revise it *the day after it's
>> committed* if we agree on how.  And if we *don't* agree, then we can
>> ship it and we still win.
>
> If we have all this code, and it appears that we do, +1 to commit it now so
> that we can start testing.

To the best of my knowledge we don't have exactly that thing, but it
seems like either of the two patches on the table could probably be
beaten into that shape with a large mallet in fairly short order, and
I think we should pick one of them and do just that.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] standby registration (was: is sync rep stalled?)

2010-10-08 Thread Josh Berkus

On 10/07/2010 06:38 PM, Robert Haas wrote:

Yes, let's please just implement something simple and get it
committed.  k = 1.  Two GUCs (synchronous_standbys = name, name, name
and synchronous_waitfor = none|recv|fsync|apply), SUSET so you can
change it per txn.  Done.  We can revise it *the day after it's
committed* if we agree on how.  And if we *don't* agree, then we can
ship it and we still win.


If we have all this code, and it appears that we do, +1 to commit it now 
so that we can start testing.


--
  -- Josh Berkus
 PostgreSQL Experts Inc.
 http://www.pgexperts.com

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] standby registration (was: is sync rep stalled?)

2010-10-08 Thread Robert Haas
On Fri, Oct 8, 2010 at 4:29 AM, Yeb Havinga  wrote:
> Robert Haas wrote:
>>
>> Yes, let's please just implement something simple and get it
>> committed.  k = 1.  Two GUCs (synchronous_standbys = name, name, name
>> and synchronous_waitfor = none|recv|fsync|apply), SUSET so you can
>> change it per txn.  Done.  We can revise it *the day after it's
>> committed* if we agree on how.  And if we *don't* agree, then we can
>> ship it and we still win.
>>
>
> I like the idea of something simple committed first, and am trying to
> understand what's said above.
>
> k = 1 : wait for only one ack
> two gucs: does this mean configurable in postgresql.conf at the master, and
> changeable with SET commands on the master depending on options? Are both
> gucs mutable?
> synchronous_standbys: I'm wondering if this registration is necessary in
> this simple setup. What are the names used for? Could they be removed?
> Should they also be configured at each standby?
> synchronous_waitfor: If configured on the master, how is it updated to the
> standbys? What does being able to configure 'none' mean? k = 0? I smell a
> POLA violation here.

Well, there's got to be some way to turn synchronous replication off.
The obvious methods are to allow synchronous_standbys to be set to
empty or to allow synchronous_waitfor to be set to none.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] standby registration (was: is sync rep stalled?)

2010-10-08 Thread Yeb Havinga

Robert Haas wrote:

Yes, let's please just implement something simple and get it
committed.  k = 1.  Two GUCs (synchronous_standbys = name, name, name
and synchronous_waitfor = none|recv|fsync|apply), SUSET so you can
change it per txn.  Done.  We can revise it *the day after it's
committed* if we agree on how.  And if we *don't* agree, then we can
ship it and we still win.
  
I like the idea of something simple committed first, and am trying to 
understand what's said above.


k = 1 : wait for only one ack
two gucs: does this mean configurable in postgresql.conf at the master, 
and changeable with SET commands on the master depending on options? Are 
both gucs mutable?
synchronous_standbys: I'm wondering if this registration is necessary in 
this simple setup. What are the names used for? Could they be removed? 
Should they also be configured at each standby?
synchronous_waitfor: If configured on the master, how is it updated to 
the standbys? What does being able to configure 'none' mean? k = 0? I 
smell a POLA violation here.


regards
Yeb Havinga


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] standby registration (was: is sync rep stalled?)

2010-10-07 Thread Fujii Masao
On Fri, Oct 8, 2010 at 10:38 AM, Robert Haas  wrote:
> Yes, let's please just implement something simple and get it
> committed.  k = 1.  Two GUCs (synchronous_standbys = name, name, name
> and synchronous_waitfor = none|recv|fsync|apply)

For my cases, I'm OK with this as the first commit, for now.

Regards,

-- 
Fujii Masao
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] standby registration (was: is sync rep stalled?)

2010-10-07 Thread Heikki Linnakangas

On 07.10.2010 21:33, Josh Berkus wrote:

1) This version of Standby Registration seems to add One More Damn Place
You Need To Configure Standby (OMDPYNTCS) without adding any
functionality you couldn't get *without* having a list on the master.
Can someone explain to me what functionality is added by this approach
vs. not having a list on the master at all?


It's just one GUC. Without the list, there would have to be at least a 
boolean option to enable/disable it.


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] standby registration (was: is sync rep stalled?)

2010-10-07 Thread Heikki Linnakangas

On 07.10.2010 23:56, Greg Stark wrote:

On Thu, Oct 7, 2010 at 10:27 AM, Heikki Linnakangas
  wrote:

The standby name is a GUC in the standby's configuration file:

standby_name='bostonserver'



Fwiw I was hoping it would be possible to set every machine up with an
identical postgresql.conf file.


This proposal allows that, at least assuming you have a simple setup of 
one master and N standbys, and you're happy with a reply from any 
standby, as opposed to all standbys. You just set both the standby_name 
and synchronous_standbys GUCs to 'foo' on all servers, and you're done.


You'll still need to point each standby's primary_conninfo setting at the 
current master, but that's no different from the situation today 
with asynchronous replication. Presumably you'll have a virtual IP 
address or host name that always points to the current master, also used 
by the actual applications connecting to the database.
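
As a minimal sketch of that setup, using the parameter names proposed in 
this thread (not committed syntax) and a placeholder host name, every node 
could carry the same fragment:

    # postgresql.conf, identical on every node
    standby_name = 'foo'           # only meaningful while running as a standby
    synchronous_standbys = 'foo'   # only meaningful while running as the master

    # recovery.conf on each standby, pointing at a virtual host name that
    # always resolves to the current master
    standby_mode = 'on'
    primary_conninfo = 'host=db-master.example.com port=5432'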


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] standby registration (was: is sync rep stalled?)

2010-10-07 Thread Robert Haas
On Thu, Oct 7, 2010 at 7:15 PM, Greg Smith  wrote:
> Josh Berkus wrote:
>>
>> This version of Standby Registration seems to add One More Damn Place
>> You Need To Configure Standby (OMDPYNTCS) without adding any
>> functionality you couldn't get *without* having a list on the master.
>> Can someone explain to me what functionality is added by this approach
>> vs. not having a list on the master at all?
>>
>
> That little design outline I threw out there wasn't intended to be a plan
> for the right way to proceed here.  What I was trying to do was point out
> the minimum needed that would actually work for the use cases people want
> the most, to shift discussion back toward simpler rather than more complex
> configurations.  If a more dynamic standby registration procedure that's
> superior to that can get developed on schedule, great.  I think it really
> doesn't have to offer anything beyond automating what I outlined to be
> considered good enough initially, though.
>
> And if the choice is between the stupid simple OMDPYNTCS idea I threw out
> and demanding a design too complicated to deliver in 9.1, I'm quite sure I'd
> rather have the hard to configure version that ships.  Things like keeping
> the master from having a hard-coded list of nodes and making it easy for
> every node to have an identical postgresql.conf are all great goals, but are
> also completely optional things for a first release from where I'm standing.
>  If a patch without any complicated registration stuff got committed
> tomorrow, and promises to add better registration on top of it in the next
> CommitFest didn't deliver, the project would still be able to announce "Sync
> Rep is here in 9.1" in a way people could and would use.  We wouldn't be
> proud of the UI, but that's normal in a "release early, release often"
> world.
>
> The parts that scare me about sync rep are not how to configure it, but how
> it will break in completely unexpected ways related to the communications
> protocol.  And to even begin exploring that fully, something simple has to
> actually get committed, so that there's a solid target to kick off
> organized testing against.  That's the point I'm concerned about reaching
> as soon as feasible.  And if it takes massive cuts in flexibility or ease
> of configuration to get there quickly, so long as it doesn't actually
> hamper the core operating set here, I would consider that a good trade.

Yes, let's please just implement something simple and get it
committed.  k = 1.  Two GUCs (synchronous_standbys = name, name, name
and synchronous_waitfor = none|recv|fsync|apply), SUSET so you can
change it per txn.  Done.  We can revise it *the day after it's
committed* if we agree on how.  And if we *don't* agree, then we can
ship it and we still win.
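
For concreteness, a rough sketch of what that would look like, using the 
proposed (not committed) parameter names and placeholder standby names:

    # postgresql.conf on the master; k = 1, so any one listed standby may ack
    synchronous_standbys = 'boston1, boston2, oxford1'
    synchronous_waitfor = 'fsync'    # none | recv | fsync | apply
                                     # 'none' effectively turns sync rep off

    -- and since both GUCs are SUSET, a superuser could adjust the guarantee
    -- for a single transaction:
    BEGIN;
    SET LOCAL synchronous_waitfor = 'apply';
    -- ... writes that need the stronger guarantee ...
    COMMIT;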

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] standby registration (was: is sync rep stalled?)

2010-10-07 Thread Greg Smith

Josh Berkus wrote:

This version of Standby Registration seems to add One More Damn Place
You Need To Configure Standby (OMDPYNTCS) without adding any
functionality you couldn't get *without* having a list on the master.
Can someone explain to me what functionality is added by this approach
vs. not having a list on the master at all?
  


That little design outline I threw out there wasn't intended to be a 
plan for the right way to proceed here.  What I was trying to do was 
point out the minimum needed that would actually work for the use cases 
people want the most, to shift discussion back toward simpler rather 
than more complex configurations.  If a more dynamic standby 
registration procedure that's superior to that can get developed on 
schedule, great.  I think it really doesn't have to offer anything 
beyond automating what I outlined to be considered good enough 
initially, though.


And if the choice is between the stupid simple OMDPYNTCS idea I threw 
out and demanding a design too complicated to deliver in 9.1, I'm quite 
sure I'd rather have the hard to configure version that ships.  Things 
like keeping the master from having a hard-coded list of nodes and 
making it easy for every node to have an identical postgresql.conf are 
all great goals, but are also completely optional things for a first 
release from where I'm standing.  If a patch without any complicated 
registration stuff got committed tomorrow, and promises to add better 
registration on top of it in the next CommitFest didn't deliver, the 
project would still be able to announce "Sync Rep is here in 9.1" in a 
way people could and would use.  We wouldn't be proud of the UI, but 
that's normal in a "release early, release often" world.


The parts that scare me about sync rep are not how to configure it, but 
how it will break in completely unexpected ways related to the 
communications protocol.  And to even begin exploring that fully, 
something simple has to actually get committed, so that there's a solid 
target to kick off organized testing against.  That's the point I'm 
concerned about reaching as soon as feasible.  And if it takes massive 
cuts in flexibility or ease of configuration to get there quickly, so 
long as it doesn't actually hamper the core operating set here, I would 
consider that a good trade.


--
Greg Smith, 2ndQuadrant US g...@2ndquadrant.com Baltimore, MD
PostgreSQL Training, Services and Support  www.2ndQuadrant.us



--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] standby registration (was: is sync rep stalled?)

2010-10-07 Thread Greg Stark
On Thu, Oct 7, 2010 at 10:27 AM, Heikki Linnakangas
 wrote:
> The standby name is a GUC in the standby's configuration file:
>
> standby_name='bostonserver'
>

Fwiw I was hoping it would be possible to set every machine up with an
identical postgresql.conf file. That doesn't preclude this idea since
you could start up your server with a script that sets the GUC on the
command-line and that script could use whatever it wants to look up
its name such as using its hardware info to look it up in a database.
But just something to keep in mind.

In particular I would want to be able to configure everything
identically and then have each node run some kind of program which
determines its name and position in the replication structure. This
implies that each node given its identity and the total view of the
structure can figure out what it should be doing including whether to
be read-only or read-write, whom to contact as its master, and whether
to listen for connections from slaves.

If every node needs a configuration file specifying multiple
interdependent variables which are all different from server to server,
it'll be too hard to keep them all in sync. I would rather tell every
node, "here's how to push to the archive, here's how to pull, here's
the whole master-slave structure, even the parts you don't need to know
about, and the redundant entry for yourself -- now here's your name; go
figure out whether to push or pull, and from where".

-- 
greg

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] standby registration (was: is sync rep stalled?)

2010-10-07 Thread Robert Haas
On Thu, Oct 7, 2010 at 2:33 PM, Josh Berkus  wrote:
>> I think they work together fine.  Greg's idea is that you list the
>> important standbys, and a synchronization guarantee that you'd like to
>> have for at least one of them.  Simon's idea - at least at 10,000 feet
>> - is that you can take a pass on that guarantee for transactions that
>> don't need it.  I don't see why you can't have both.
>
> So, two things:
>
> 1) This version of Standby Registration seems to add One More Damn Place
> You Need To Configure Standby (OMDPYNTCS) without adding any
> functionality you couldn't get *without* having a list on the master.
> Can someone explain to me what functionality is added by this approach
> vs. not having a list on the master at all?

Well, then you couldn't have one strictly synchronous standby and one
asynchronous standby.

> 2) I see Simon's approach, where you can designate not just synch/asynch
> but the synch *mode* per session, as valuable.  I can imagine having
> transactions I just want to "ack" vs. transactions I want to "apply"
> according to application logic (e.g. customer personal information vs.
> financial transactions).  This approach would still seem to remove that
> functionality.  Does it?

I'm not totally sure.  I think we could probably avoid removing that
with careful detailed design.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] standby registration (was: is sync rep stalled?)

2010-10-07 Thread Josh Berkus

> I think they work together fine.  Greg's idea is that you list the
> important standbys, and a synchronization guarantee that you'd like to
> have for at least one of them.  Simon's idea - at least at 10,000 feet
> - is that you can take a pass on that guarantee for transactions that
> don't need it.  I don't see why you can't have both.

So, two things:

1) This version of Standby Registration seems to add One More Damn Place
You Need To Configure Standby (OMDPYNTCS) without adding any
functionality you couldn't get *without* having a list on the master.
Can someone explain to me what functionality is added by this approach
vs. not having a list on the master at all?

2) I see Simon's approach, where you can designate not just synch/asynch
but the synch *mode* per session, as valuable.  I can imagine having
transactions I just want to "ack" vs. transactions I want to "apply"
according to application logic (e.g. customer personal information vs.
financial transactions).  This approach would still seem to remove that
functionality.  Does it?

-- 
  -- Josh Berkus
 PostgreSQL Experts Inc.
 http://www.pgexperts.com

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] standby registration (was: is sync rep stalled?)

2010-10-07 Thread Aidan Van Dyk
On Thu, Oct 7, 2010 at 1:27 PM, Heikki Linnakangas
 wrote:

> Let me check that I got this right, and add some details to make it more
> concrete: Each standby is given a name. It can be something like "boston1"
> or "testserver". It does *not* have to be unique across all standby servers.
> In the master, you have a list of important, synchronous, nodes that must
> acknowledge each commit before it is acknowledged to the client.
>
> The standby name is a GUC in the standby's configuration file:
>
> standby_name='bostonserver'
>
> The list of important nodes is also a GUC, in the master's configuration
> file:
>
> synchronous_standbys='bostonserver, oxfordserver'

+1.

It definitely covers the scenarios I want.

And even allows the ones I don't want, and don't understand either ;-)

And personally, I'd *love* it if the streaming replication protocol
were adjusted so that every streaming WAL client reported back its
role and receive/fsync/replay positions as part of the protocol
(allowing role and positions to be something "NULL"able/empty/0).  I
think Simon demonstrated that the overhead of reporting it isn't high.
Again, in the deployments I'm interested in, the "slave" isn't a PG
server but something like Magnus's stream-to-archive, so I can't query
the slave to see how far behind it is.

a.

-- 
Aidan Van Dyk                                             Create like a god,
ai...@highrise.ca                                       command like a king,
http://www.highrise.ca/                                   work like a slave.

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] standby registration (was: is sync rep stalled?)

2010-10-07 Thread Robert Haas
On Thu, Oct 7, 2010 at 1:45 PM, Josh Berkus  wrote:
> On 10/7/10 10:27 AM, Heikki Linnakangas wrote:
>> The standby name is a GUC in the standby's configuration file:
>>
>> standby_name='bostonserver'
>>
>> The list of important nodes is also a GUC, in the master's configuration
>> file:
>>
>> synchronous_standbys='bostonserver, oxfordserver'
>
> This seems to abandon Simon's concept of per-transaction synchronization
> control.  That seems like such a potentially useful feature that I'm
> reluctant to abandon it just for administrative elegance.
>
> Does this work together with that in some way I can't see?

I think they work together fine.  Greg's idea is that you list the
important standbys, and a synchronization guarantee that you'd like to
have for at least one of them.  Simon's idea - at least at 10,000 feet
- is that you can take a pass on that guarantee for transactions that
don't need it.  I don't see why you can't have both.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] standby registration (was: is sync rep stalled?)

2010-10-07 Thread Robert Haas
On Thu, Oct 7, 2010 at 1:39 PM, Dave Page  wrote:
> On 10/7/10, Heikki Linnakangas  wrote:
>> On 06.10.2010 19:26, Greg Smith wrote:
>>> Now, the more relevant question, what I actually need in order for a
>>> Sync Rep feature in 9.1 to be useful to the people who want it most I
>>> talk to. That would be a simple to configure setup where I list a subset
>>> of "important" nodes, and the appropriate acknowledgement level I want
>>> to hear from one of them. And when one of those nodes gives that
>>> acknowledgement, commit on the master happens too. That's it. For use
>>> cases like the commonly discussed "two local/two remote" situation, the
>>> two remote ones would be listed as the important ones.
>>
>> This feels like the best way forward to me. It gives some flexibility,
>> and doesn't need a new config file.
>>
>> Let me check that I got this right, and add some details to make it more
>> concrete: Each standby is given a name. It can be something like
>> "boston1" or "testserver". It does *not* have to be unique across all
>> standby servers. In the master, you have a list of important,
>> synchronous, nodes that must acknowledge each commit before it is
>> acknowledged to the client.
>>
>> The standby name is a GUC in the standby's configuration file:
>>
>> standby_name='bostonserver'
>>
>> The list of important nodes is also a GUC, in the master's configuration
>> file:
>>
>> synchronous_standbys='bostonserver, oxfordserver'
>>
>> To configure for a simple setup with a master and one synchronous
>> standby (which is not a very good setup from an availability point of view,
>> as discussed to death), you give the standby a name, and put the same
>> name in synchronous_standbys in the master.
>>
>> To configure a setup with a master and two standbys, so that a commit is
>> acknowledged to client as soon as either one of the standbys acknowledge
>> it, you give both standbys the same name, and the same name in
>> synchronous_standbys list in the master. This is the configuration that
>> gives zero data loss in case one server fails, but also caters for
>> availability because you don't need to halt the master if one standby fails.
>>
>> To configure a setup with a master and two standbys, so that a commit is
>> acknowledged to client after *both* standbys acknowledge it, you give
>> both standbys a different name, and list both names in
>> the synchronous_standbys list in the master.
>>
>> I believe this will bend to most real life scenarios people have.
>
> +1. I think this would have met any needs of mine in my past life as a
> sysadmin/dba.

Before we get too far down the garden path here, this is actually
substantially more complicated than what Greg proposed.  Greg was
proposing, as have some other folks I think, to focus only on the k=1
case - in other words, only one acknowledgment would ever be required
for any given commit.  I think he's right to focus on that case,
because the multiple-ACKs-required solutions are quite a bit hairier.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] standby registration (was: is sync rep stalled?)

2010-10-07 Thread Josh Berkus
On 10/7/10 10:27 AM, Heikki Linnakangas wrote:
> The standby name is a GUC in the standby's configuration file:
> 
> standby_name='bostonserver'
> 
> The list of important nodes is also a GUC, in the master's configuration
> file:
> 
> synchronous_standbys='bostonserver, oxfordserver'

This seems to abandon Simon's concept of per-transaction synchronization
control.  That seems like such a potentially useful feature that I'm
reluctant to abandon it just for administrative elegance.

Does this work together with that in some way I can't see?

-- 
  -- Josh Berkus
 PostgreSQL Experts Inc.
 http://www.pgexperts.com

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] standby registration (was: is sync rep stalled?)

2010-10-07 Thread Dave Page
On 10/7/10, Heikki Linnakangas  wrote:
> On 06.10.2010 19:26, Greg Smith wrote:
>> Now, the more relevant question, what I actually need in order for a
>> Sync Rep feature in 9.1 to be useful to the people who want it most I
>> talk to. That would be a simple to configure setup where I list a subset
>> of "important" nodes, and the appropriate acknowledgement level I want
>> to hear from one of them. And when one of those nodes gives that
>> acknowledgement, commit on the master happens too. That's it. For use
>> cases like the commonly discussed "two local/two remote" situation, the
>> two remote ones would be listed as the important ones.
>
> This feels like the best way forward to me. It gives some flexibility,
> and doesn't need a new config file.
>
> Let me check that I got this right, and add some details to make it more
> concrete: Each standby is given a name. It can be something like
> "boston1" or "testserver". It does *not* have to be unique across all
> standby servers. In the master, you have a list of important,
> synchronous, nodes that must acknowledge each commit before it is
> acknowledged to the client.
>
> The standby name is a GUC in the standby's configuration file:
>
> standby_name='bostonserver'
>
> The list of important nodes is also a GUC, in the master's configuration
> file:
>
> synchronous_standbys='bostonserver, oxfordserver'
>
> To configure for a simple setup with a master and one synchronous
> standby (which is not a very good setup from an availability point of view,
> as discussed to death), you give the standby a name, and put the same
> name in synchronous_standbys in the master.
>
> To configure a setup with a master and two standbys, so that a commit is
> acknowledged to client as soon as either one of the standbys acknowledge
> it, you give both standbys the same name, and the same name in
> synchronous_standbys list in the master. This is the configuration that
> gives zero data loss in case one server fails, but also caters for
> availability because you don't need to halt the master if one standby fails.
>
> To configure a setup with a master and two standbys, so that a commit is
> acknowledged to client after *both* standbys acknowledge it, you give
> both standbys a different name, and list both names in
> the synchronous_standbys list in the master.
>
> I believe this will bend to most real life scenarios people have.

+1. I think this would have met any needs of mine in my past life as a
sysadmin/dba.

>
> Now, the other big fight is over "wait forever" vs "timeout".
> Personally, I stand firmly in the "wait forever" camp - you're nuts if
> you want a timeout. However, I can see that not everyone agrees :-).
> Fortunately, once we have robust "wait forever" behavior, it shouldn't
> be hard at all to add a timeout option on top of that, for those who
> want it. We should be able to have both options in 9.1.
>

I disagree that you're nuts if you want this feature fwiw. +1 on your
suggested plan though :-)

/D

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] standby registration (was: is sync rep stalled?)

2010-10-07 Thread Heikki Linnakangas

On 06.10.2010 19:26, Greg Smith wrote:

Now, the more relevant question, what I actually need in order for a
Sync Rep feature in 9.1 to be useful to the people who want it most I
talk to. That would be a simple to configure setup where I list a subset
of "important" nodes, and the appropriate acknowledgement level I want
to hear from one of them. And when one of those nodes gives that
acknowledgement, commit on the master happens too. That's it. For use
cases like the commonly discussed "two local/two remote" situation, the
two remote ones would be listed as the important ones.


This feels like the best way forward to me. It gives some flexibility, 
and doesn't need a new config file.


Let me check that I got this right, and add some details to make it more 
concrete: Each standby is given a name. It can be something like 
"boston1" or "testserver". It does *not* have to be unique across all 
standby servers. In the master, you have a list of important, 
synchronous, nodes that must acknowledge each commit before it is 
acknowledged to the client.


The standby name is a GUC in the standby's configuration file:

standby_name='bostonserver'

The list of important nodes is also a GUC, in the master's configuration 
file:


synchronous_standbys='bostonserver, oxfordserver'

To configure for a simple setup with a master and one synchronous 
standby (which is not a very good setup from an availability point of view, 
as discussed to death), you give the standby a name, and put the same 
name in synchronous_standbys in the master.


To configure a setup with a master and two standbys, so that a commit is 
acknowledged to client as soon as either one of the standbys acknowledge 
it, you give both standbys the same name, and the same name in 
synchronous_standbys list in the master. This is the configuration that 
gives zero data loss in case one server fails, but also caters for 
availability because you don't need to halt the master if one standby fails.


To configure a setup with a master and two standbys, so that a commit is 
acknowledged to client after *both* standbys acknowledge it, you give 
both standbys a different name, and list both names in the 
synchronous_standbys list in the master.


I believe this will bend to most real life scenarios people have.
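
A compact sketch of the last two setups, again using the proposed 
(uncommitted) parameter names and placeholder server names:

    # commit acknowledged once either of two standbys has acknowledged it
    #   both standbys:  standby_name = 'bostongroup'
    #   master:         synchronous_standbys = 'bostongroup'

    # commit acknowledged only after *both* standbys have acknowledged it
    #   standby 1:      standby_name = 'boston1'
    #   standby 2:      standby_name = 'boston2'
    #   master:         synchronous_standbys = 'boston1, boston2'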


Now, the other big fight is over "wait forever" vs "timeout". 
Personally, I stand firmly in the "wait forever" camp - you're nuts if 
you want a timeout. However, I can see that not everyone agrees :-). 
Fortunately, once we have robust "wait forever" behavior, it shouldn't 
be hard at all to add a timeout option on top of that, for those who 
want it. We should be able to have both options in 9.1.


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] standby registration (was: is sync rep stalled?)

2010-10-06 Thread Robert Haas
On Wed, Oct 6, 2010 at 12:26 PM, Greg Smith  wrote:
> Now, the more relevant question, what I actually need in order for a Sync
> Rep feature in 9.1 to be useful to the people who want it most I talk to.
>  That would be a simple to configure setup where I list a subset of
> "important" nodes, and the appropriate acknowledgement level I want to hear
> from one of them.  And when one of those nodes gives that acknowledgement,
> commit on the master happens too.  That's it.  For use cases like the
> commonly discussed "two local/two remote" situation, the two remote ones
> would be listed as the important ones.

That sounds fine to me.  How do the details work?  Each slave
publishes a name to the master via a recovery.conf parameter, and the
master has a GUC listing the names of the important slaves?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] standby registration (was: is sync rep stalled?)

2010-10-06 Thread Greg Smith

Josh Berkus wrote:
However, I think we're getting way the heck away from how far we 
really want to go for 9.1.  Can I point out to people that synch rep 
is going to involve a fair bit of testing and debugging, and that 
maybe we don't want to try to implement The World's Most Configurable 
Standby Spec as a first step?


I recently came up with the following initial spec for the Most 
Configurable Standby Setup Ever:


- The state of all available standby systems is exposed via a table-like 
interface, probably an SRF.
- As each standby reports back a result, its entry in the table is 
updated with what level of commit it has accomplished (recv, fsync, etc.)
- The table-like list of standby states is then passed to a function 
that you could write in SQL or whatever else makes you happy.  The 
function returns a boolean for whether sufficient commit guarantees have 
been met yet.  You can make the conditions required as complicated as 
you like.
- Once that function returns true, commit on the master.  Otherwise 
return to waiting for standby responses.
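
Purely to illustrate the shape of that (deliberately over-general) idea, a 
policy function might look something like the sketch below; 
pg_standby_status() and its columns are invented for the example, not an 
existing or proposed API:

    -- hypothetical SRF exposing each standby's progress as rows of
    -- (standby_name text, commit_level text), commit_level being one of
    -- 'recv', 'fsync', 'apply'; the master would re-evaluate the policy
    -- function as acknowledgements arrive and commit once it returns true
    CREATE FUNCTION commit_guarantee_met() RETURNS boolean AS $$
      SELECT count(*) >= 1
      FROM pg_standby_status()
      WHERE standby_name IN ('boston1', 'oxford1')
        AND commit_level IN ('fsync', 'apply');
    $$ LANGUAGE sql;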


So that's what I actually want here, because all subsets of it proposed 
so far are way too boring.  If you cannot express every possible standby 
situation that anyone will ever think of via an arbitrary function hook, 
obviously it's not worth building at all.


Now, the more relevant question, what I actually need in order for a 
Sync Rep feature in 9.1 to be useful to the people who want it most I 
talk to.  That would be a simple to configure setup where I list a 
subset of "important" nodes, and the appropriate acknowledgement level I 
want to hear from one of them.  And when one of those nodes gives that 
acknowledgement, commit on the master happens too.  That's it.  For use 
cases like the commonly discussed "two local/two remote" situation, the 
two remote ones would be listed as the important ones.


Until something that simple is committed, tested, debugged, and has had 
some run-ins with the real world, I have minimal faith that an attempt 
at anything more complicated has sufficient information to succeed.  And 
complete faith that even trying will fail to deliver something for 9.1.  
The scope creep that seems to be happening here in the name of "this 
will be hard to change so it must be right in the first version" boggles 
my mind.


--
Greg Smith, 2ndQuadrant US g...@2ndquadrant.com Baltimore, MD
PostgreSQL Training, Services and Support  www.2ndQuadrant.us



--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] standby registration (was: is sync rep stalled?)

2010-10-05 Thread Robert Haas
On Tue, Oct 5, 2010 at 2:30 PM, Simon Riggs  wrote:
> On Tue, 2010-10-05 at 10:41 -0400, Robert Haas wrote:
>> Much of the engineering we are doing centers around use cases that are
>> considerably more complex than what most people will do in real life.
>
> Why are we doing it then?

Because some people will, and whatever architecture we pick now will
be with us for a very long time.  We needn't implement everything in
the first version, but we should try to avoid inextensible design
choices.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] standby registration (was: is sync rep stalled?)

2010-10-05 Thread Simon Riggs
On Tue, 2010-10-05 at 10:41 -0400, Robert Haas wrote:

> Much of the engineering we are doing centers around use cases that are
> considerably more complex than what most people will do in real life.

Why are we doing it then?

What I have proposed behaves identically to Oracle's Maximum Availability
mode, though I have extended it with per-transaction settings and have
been able to achieve that with fewer parameters as well. Most
importantly, those settings need not change following failover.

The proposed "standby.conf" registration scheme is *stricter* than
Oracle's Maximum Availability mode, yet uses an almost identical
parameter framework. The behaviour is not useful for the majority of
production databases.

Requesting sync against *all* standbys is stricter even than the highest
level of Oracle: Maximum Protection. Why do we think we need a level of
strictness higher than Oracle's maximum level? And in the first release?

-- 
 Simon Riggs   www.2ndQuadrant.com
 PostgreSQL Development, 24x7 Support, Training and Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] standby registration (was: is sync rep stalled?)

2010-10-05 Thread Robert Haas
On Tue, Oct 5, 2010 at 12:40 PM, Simon Riggs  wrote:
>> Well, you only need to have the file at all on nodes you want to fail
>> over to.  And aren't you going to end up rejiggering the config when
>> you fail over anyway, based on what happened?  I mean, suppose you
>> have three servers and you require sync rep to 2 slaves.  If the
>> master falls over and dies, it seems likely you're going to want to
>> relax that restriction.  Or suppose you have three servers and you
>> require sync rep to 1 slave.  The first time you fail over, you're
>> going to probably want to leave that config as-is, but if you fail
>> over again, you're very likely going to want to change it.
>
> Single failovers are common. Multiple failovers aren't. For me, the key
> question is about what is the common case, not edge cases.

Hmm.  But even in the single failover cases, it's very possible that
you might want to make a change.  If you have two machines replicating
synchronously to each other in wait-forever and one of them goes down,
you're probably going to want to bring the other one up in
don't-wait-forever mode.  Or to take a slightly more complex example,
suppose you have two fast machines and a slow machine.  As long as
both fast machines are up, one will be the master and the other its
synchronous slave; the slow machine will be a reporting server.  But
if one of the fast machines dies, we might then want to make the slow
machine a synchronous slave just to make sure that our data remains
absolutely safe, even though it costs us some performance.

Using quorum_commit as a way to allow failover to happen and things to
keep humming along without configuration changes is a pretty clever
idea, but I think it only works in fairly specific cases.  For
example, the "three equal machines, sync me to one of the other two"
case is pretty slick, at least so long as you don't have more than one
failure.  I really can't improve on your design for that case; I'm not
sure there's any improvement to be had.  But I don't think your design
fits nearly as well in cases where the slaves aren't all equal, which
I actually think will be more common than not.

>> But since
>> that seems impossible to me, I'm arguing for centralizing the
>> configuration file for ease of management.
>
> You can't "centralize" something in 5 different places, at least not in
> my understanding of the word.

Every design we're talking about involves at least some configuration
on every machine in the cluster, AFAICS.  The no registration / quorum
commit solution sets the synchronization level and # of votes for each
standby on that standby, at least AIUI.  The registration solution
sets that stuff (and maybe other things, like a per-standby
wal_keep_segments) on the master, and the standby just provides a
name.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] standby registration (was: is sync rep stalled?)

2010-10-05 Thread Simon Riggs
On Tue, 2010-10-05 at 11:46 -0400, Robert Haas wrote:
> On Tue, Oct 5, 2010 at 10:46 AM, Simon Riggs  wrote:
> > On Tue, 2010-10-05 at 10:41 -0400, Robert Haas wrote:
> >> >>
> >> >> When you have one server functioning at each site you'll block until
> >> >> you get a third machine back, rather than replicating to both sites
> >> >> and remaining functional.
> >> >
> >> > And that is so important a consideration that you would like to move
> >> > from one parameter in one file to a whole set of parameters, set
> >> > differently in 5 separate files?
> >>
> >> I don't accept that this is the trade-off being proposed.  You seem
> >> convinced that having the config all in one place on the master is
> >> going to make things much more complicated, but I can't see why.
> >
> > But it is not "all in one place" because the file needs to be different
> > on 5 separate nodes. Which *does* make it more complicated than the
> alternative, which is a single parameter, set the same everywhere.
> 
> Well, you only need to have the file at all on nodes you want to fail
> over to.  And aren't you going to end up rejiggering the config when
> you fail over anyway, based on what happened?  I mean, suppose you
> have three servers and you require sync rep to 2 slaves.  If the
> master falls over and dies, it seems likely you're going to want to
> relax that restriction.  Or suppose you have three servers and you
> require sync rep to 1 slave.  The first time you fail over, you're
> going to probably want to leave that config as-is, but if you fail
> over again, you're very likely going to want to change it.

Single failovers are common. Multiple failovers aren't. For me, the key
question is about what is the common case, not edge cases.

> This is really the key question for me.  If distributing the
> configuration throughout the cluster meant that we could just fail
> over and keep on trucking, that would be, well, really neat, and a
> very compelling argument for the design you are proposing.  

Good, thanks.

The important thing is that in the minutes and hours immediately after
failover it will all still work; there is no need to change to a
different, and very likely untested, config.

If you configure N+1 or N+2 redundancy, we should assume that if you
lose a node you will be striving to quickly replace it rather than shrug
and say "you lose some".  And note as well, that when you do add that
other node back in, you won't need to change the config back again
afterwards. It all just works and keeps working, so the DBA can spend
their time investigating the issue and seeing if they can get the original
master back up, not keeping one eye on the config files of the remaining
servers.

> But since
> that seems impossible to me, I'm arguing for centralizing the
> configuration file for ease of management.

You can't "centralize" something in 5 different places, at least not in
my understanding of the word.

-- 
 Simon Riggs   www.2ndQuadrant.com
 PostgreSQL Development, 24x7 Support, Training and Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] standby registration (was: is sync rep stalled?)

2010-10-05 Thread Robert Haas
On Tue, Oct 5, 2010 at 10:46 AM, Simon Riggs  wrote:
> On Tue, 2010-10-05 at 10:41 -0400, Robert Haas wrote:
>> >>
>> >> When you have one server functioning at each site you'll block until
>> >> you get a third machine back, rather than replicating to both sites
>> >> and remaining functional.
>> >
>> > And that is so important a consideration that you would like to move
>> > from one parameter in one file to a whole set of parameters, set
>> > differently in 5 separate files?
>>
>> I don't accept that this is the trade-off being proposed.  You seem
>> convinced that having the config all in one place on the master is
>> going to make things much more complicated, but I can't see why.
>
> But it is not "all in one place" because the file needs to be different
> on 5 separate nodes. Which *does* make it more complicated than the
> alternative, which is a single parameter, set the same everywhere.

Well, you only need to have the file at all on nodes you want to fail
over to.  And aren't you going to end up rejiggering the config when
you fail over anyway, based on what happened?  I mean, suppose you
have three servers and you require sync rep to 2 slaves.  If the
master falls over and dies, it seems likely you're going to want to
relax that restriction.  Or suppose you have three servers and you
require sync rep to 1 slave.  The first time you fail over, you're
going to probably want to leave that config as-is, but if you fail
over again, you're very likely going to want to change it.

This is really the key question for me.  If distributing the
configuration throughout the cluster meant that we could just fail
over and keep on trucking, that would be, well, really neat, and a
very compelling argument for the design you are proposing.  But since
that seems impossible to me, I'm arguing for centralizing the
configuration file for ease of management.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] standby registration (was: is sync rep stalled?)

2010-10-05 Thread Simon Riggs
On Tue, 2010-10-05 at 09:56 -0500, Kevin Grittner wrote:
> Simon Riggs  wrote:
>  
> > Is it a common use case that people have more than 3 separate
> > servers for one application, which is where the difference shows
> > itself.
>  
> I don't know how common it is, but we replicate circuit court data
> to two machines each at two sites.  That way a disaster which took
> out one building would leave us with the ability to run from the
> other building and still take a machine out of the production mix
> for scheduled maintenance or to survive a single-server failure at
> the other site.  Of course, there's no way we would make that
> replication synchronous, and we're replicating from dozens of source
> machines -- so I don't know if you can even count our configuration.
>  
> Still, the fact that we're replicating to two machines each at two
> sites, and that this is the same example which came to mind for Robert,
> suggests that perhaps it isn't *that* bizarre.

I hope you mean "bizarre" in the sense of "less common". I don't find
Robert's example in any way strange, and I respect his viewpoint.

I am looking for ways to simplify the specification so that we aren't
burdened with a level of complexity we can avoid in the majority if
cases. If we only need complex configuration to support a small minority
of cases, then I'd say we don't need that (yet). Adding that support
later will make it clearer what the additional cost/benefit is.

-- 
 Simon Riggs   www.2ndQuadrant.com
 PostgreSQL Development, 24x7 Support, Training and Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] standby registration (was: is sync rep stalled?)

2010-10-05 Thread Josh Berkus



Another check: does specifying replication by server in such detail mean
we can't specify robustness at the transaction level? If we gave up that
feature, it would be a great loss for performance tuning.


It's orthogonal.  The kinds of configurations we're talking about simply 
define what it will mean when you commit a transaction "with synch".


However, I think we're getting way the heck away from how far we really 
want to go for 9.1.  Can I point out to people that synch rep is going 
to involve a fair bit of testing and debugging, and that maybe we don't 
want to try to implement The World's Most Configurable Standby Spec as a 
first step?



--
  -- Josh Berkus
 PostgreSQL Experts Inc.
 http://www.pgexperts.com

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] standby registration (was: is sync rep stalled?)

2010-10-05 Thread Kevin Grittner
Simon Riggs  wrote:
 
> Is it a common use case that people have more than 3 separate
> servers for one application, which is where the difference shows
> itself.
 
I don't know how common it is, but we replicate circuit court data
to two machines each at two sites.  That way a disaster which took
out one building would leave us with the ability to run from the
other building and still take a machine out of the production mix
for scheduled maintenance or to survive a single-server failure at
the other site.  Of course, there's no way we would make that
replication synchronous, and we're replicating from dozens of source
machines -- so I don't know if you can even count our configuration.
 
Still, the fact that we're replicating to two machines each at two
sites, and that this is the same example which came to mind for Robert,
suggests that perhaps it isn't *that* bizarre.
 
-Kevin

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] standby registration (was: is sync rep stalled?)

2010-10-05 Thread Simon Riggs
On Tue, 2010-10-05 at 10:41 -0400, Robert Haas wrote:
> >>
> >> When you have one server functioning at each site you'll block until
> >> you get a third machine back, rather than replicating to both sites
> >> and remaining functional.
> >
> > And that is so important a consideration that you would like to move
> > from one parameter in one file to a whole set of parameters, set
> > differently in 5 separate files?
> 
> I don't accept that this is the trade-off being proposed.  You seem
> convinced that having the config all in one place on the master is
> going to make things much more complicated, but I can't see why.

But it is not "all in one place" because the file needs to be different
on 5 separate nodes. Which *does* make it more complicated than the
alternative, which is a single parameter, set the same everywhere.

-- 
 Simon Riggs   www.2ndQuadrant.com
 PostgreSQL Development, 24x7 Support, Training and Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] standby registration (was: is sync rep stalled?)

2010-10-05 Thread Robert Haas
On Tue, Oct 5, 2010 at 10:33 AM, Simon Riggs  wrote:
> On Tue, 2010-10-05 at 09:07 -0500, Kevin Grittner wrote:
>> Simon Riggs  wrote:
>> > Robert Haas wrote:
>> >> Simon Riggs  wrote:
>> >>> Josh Berkus wrote:
>> >>> Quorum commit, even with configurable vote weights, can't
>> >>> handle a requirement that a particular commit be replicated
>> >>> to (A || B) && (C || D).
>> >> Good point.
>> >>>
>> >>> Asking for quorum_commit = 3 would cover that requirement.
>> >>>
>> >>> Not exactly as requested,
>>
>> >> That's just not the same thing.
>> >
>> > In what important ways does it differ?
>>
>> When you have one server functioning at each site you'll block until
>> you get a third machine back, rather than replicating to both sites
>> and remaining functional.
>
> And that is so important a consideration that you would like to move
> from one parameter in one file to a whole set of parameters, set
> differently in 5 separate files?

I don't accept that this is the trade-off being proposed.  You seem
convinced that having the config all in one place on the master is
going to make things much more complicated, but I can't see why.

> Is it a common use case that people
> have more than 3 separate servers for one application, which is where
> the difference shows itself.

Much of the engineering we are doing centers around use cases that are
considerably more complex than what most people will do in real life.

> Another check: does specifying replication by server in such detail mean
> we can't specify robustness at the transaction level? If we gave up that
> feature, it would be a great loss for performance tuning.

No, I don't think it means that.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] standby registration (was: is sync rep stalled?)

2010-10-05 Thread Simon Riggs
On Tue, 2010-10-05 at 09:07 -0500, Kevin Grittner wrote:
> Simon Riggs  wrote:
> > Robert Haas wrote:
> >> Simon Riggs  wrote:
> >>> Josh Berkus wrote:
> >>> Quorum commit, even with configurable vote weights, can't
> >>> handle a requirement that a particular commit be replicated
> >>> to (A || B) && (C || D).
> >> Good point.
> >>>
> >>> Asking for quorum_commit = 3 would cover that requirement.
> >>>
> >>> Not exactly as requested,
>  
> >> That's just not the same thing.
> > 
> > In what important ways does it differ?
>  
> When you have one server functioning at each site you'll block until
> you get a third machine back, rather than replicating to both sites
> and remaining functional.

And that is so important a consideration that you would like to move
from one parameter in one file to a whole set of parameters, set
differently in 5 separate files? Is it a common use case that people
have more than 3 separate servers for one application, which is where
the difference shows itself.

Another check: does specifying replication by server in such detail mean
we can't specify robustness at the transaction level? If we gave up that
feature, it would be a great loss for performance tuning.

-- 
 Simon Riggs   www.2ndQuadrant.com
 PostgreSQL Development, 24x7 Support, Training and Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] standby registration (was: is sync rep stalled?)

2010-10-05 Thread Markus Wanner
On 10/05/2010 04:07 PM, Kevin Grittner wrote:
> When you have one server functioning at each site you'll block until
> you get a third machine back, rather than replicating to both sites
> and remaining functional.

That's not a very likely failure scenario, but yes.

What if the admin wants to add a standby in Berlin, but still wants one
ack from each location? None of the current proposals make that simple
enough to not require adjustment in configuration.

Maybe defining something like: at least one from Berlin and at least one
from Tokyo (where Berlin and Tokyo could be defined by CIDR notation).
IMO that's closer to the admin's reality than a plain quorum but still
not as verbose as a full standby registration.

But maybe we should really defer that discussion...

Regards

Markus Wanner

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] standby registration (was: is sync rep stalled?)

2010-10-05 Thread Kevin Grittner
Simon Riggs  wrote:
> Robert Haas wrote:
>> Simon Riggs  wrote:
>>> Josh Berkus wrote:
>>> Quorum commit, even with configurable vote weights, can't
>>> handle a requirement that a particular commit be replicated
>>> to (A || B) && (C || D).
>> Good point.
>>>
>>> Asking for quorum_commit = 3 would cover that requirement.
>>>
>>> Not exactly as requested,
 
>> That's just not the same thing.
> 
> In what important ways does it differ?
 
When you have one server functioning at each site you'll block until
you get a third machine back, rather than replicating to both sites
and remaining functional.
 
-Kevin

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] standby registration (was: is sync rep stalled?)

2010-10-05 Thread Simon Riggs
On Tue, 2010-10-05 at 08:57 -0400, Robert Haas wrote:
> On Tue, Oct 5, 2010 at 8:34 AM, Simon Riggs  wrote:
> > On Mon, 2010-10-04 at 12:45 -0700, Josh Berkus wrote:
> >> >>> Quorum commit, even with configurable vote weights, can't handle a
> >> >>> requirement that a particular commit be replicated to (A || B) && (C
> >> >>> || D).
> >> >> Good point.
> >
> > Asking for quorum_commit = 3 would cover that requirement.
> >
> > Not exactly as requested, but in a way that is both simpler to express
> > and requires no changes to configuration after failover. ISTM better to
> > have a single parameter than 5 separate configuration files, with
> > behaviour that the community would not easily be able to validate.
> 
> That's just not the same thing.

In what important ways does it differ? In both cases, no reply will be
received until both sites have confirmed receipt.

-- 
 Simon Riggs   www.2ndQuadrant.com
 PostgreSQL Development, 24x7 Support, Training and Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] standby registration (was: is sync rep stalled?)

2010-10-05 Thread Robert Haas
On Tue, Oct 5, 2010 at 8:34 AM, Simon Riggs  wrote:
> On Mon, 2010-10-04 at 12:45 -0700, Josh Berkus wrote:
>> >>> Quorum commit, even with configurable vote weights, can't handle a
>> >>> requirement that a particular commit be replicated to (A || B) && (C
>> >>> || D).
>> >> Good point.
>
> Asking for quorum_commit = 3 would cover that requirement.
>
> Not exactly as requested, but in a way that is both simpler to express
> and requires no changes to configuration after failover. ISTM better to
> have a single parameter than 5 separate configuration files, with
> behaviour that the community would not easily be able to validate.

That's just not the same thing.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] standby registration (was: is sync rep stalled?)

2010-10-05 Thread Simon Riggs
On Mon, 2010-10-04 at 12:45 -0700, Josh Berkus wrote:
> >>> Quorum commit, even with configurable vote weights, can't handle a
> >>> requirement that a particular commit be replicated to (A || B) && (C
> >>> || D).
> >> Good point.

Asking for quorum_commit = 3 would cover that requirement.

Not exactly as requested, but in a way that is both simpler to express
and requires no changes to configuration after failover. ISTM better to
have a single parameter than 5 separate configuration files, with
behaviour that the community would not easily be able to validate.

> If this is the only feature which standby registration is needed for,
> has anyone written the code for it yet?  Is anyone planning to?

(Not me)

> If not, it seems like standby registration is not *required* for 9.1.  I
> still tend to think it would be nice to have from a DBA perspective, but
> we should separate required from "nice to have".

Agreed.

-- 
 Simon Riggs   www.2ndQuadrant.com
 PostgreSQL Development, 24x7 Support, Training and Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] standby registration (was: is sync rep stalled?)

2010-10-05 Thread Simon Riggs
On Mon, 2010-10-04 at 14:25 -0500, David Christensen wrote:

> Is there any benefit to be had from having standby roles instead of
> individual names?  For instance, you could integrate this into quorum
> commit to express 3 of 5 "reporting" standbys, 1 "berlin" standby and
> 1 "tokyo" standby from a group of multiple per data center, or even
> just utilize role sizes of 1 if you wanted individual standbys to be
> "named" in this fashion.  This role could be provided on connect of
> the standby is more-or-less tangential to the specific registration
> issue.

There is substantial benefit in that config.

If we want to do relaying and path minimization, as is possible with
Slony, we would want to do

M -> S1 -> S2 where M is in London, S1 and S2 are in Berlin.

so that the master sends data only once to Berlin.

If we send to a group, we can also allow things to continue working if
S1 goes down, since S2 might then know it could connect to M directly.

That's complex and not something for the first release, IMHO.

-- 
 Simon Riggs   www.2ndQuadrant.com
 PostgreSQL Development, 24x7 Support, Training and Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] standby registration

2010-10-05 Thread Dimitri Fontaine
Josh Berkus  writes:
 Quorum commit, even with configurable vote weights, can't handle a
 requirement that a particular commit be replicated to (A || B) && (C
 || D).
>>> Good point.

So I've been trying to come up with something manually and failed. I
blame the fever — without it maybe I wouldn't have tried…

Now, if you want this level of precision in the setup, all we seem to be
missing from the quorum facility as currently proposed would be to have
a quorum list instead (or a max, but that's not helping the "easy" side).

Given the weights A3 B2 C4 D4, you can ask for a quorum of 6 and
you're covered for your case, except that C && D reaches the quorum
without giving you what you asked for. Have the quorum input accept [6,7]
and it's easy to set up. Do we want that?
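
For what it's worth, a quick brute-force check of that example (Python,
purely illustrative):

from itertools import combinations

# Weighted quorum from the example above vs. the (A || B) && (C || D)
# requirement, checked over every possible set of acks.
weights = {"A": 3, "B": 2, "C": 4, "D": 4}
quorum = 6

def rule(acks):
    return ("A" in acks or "B" in acks) and ("C" in acks or "D" in acks)

for n in range(len(weights) + 1):
    for acks in combinations(weights, n):
        reached = sum(weights[s] for s in acks) >= quorum
        if reached != rule(acks):
            print(acks, "reaches quorum:", reached, "satisfies rule:", rule(acks))

# Prints only ('C', 'D'): weight 8 reaches the quorum, but the rule fails.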

> If not, it seems like standby registration is not *required* for 9.1.  I
> still tend to think it would be nice to have from a DBA perspective, but
> we should separate required from "nice to have".

+1.
-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] standby registration (was: is sync rep stalled?)

2010-10-04 Thread Markus Wanner
On 10/04/2010 11:32 PM, Robert Haas wrote:
> I think in the end
> this is not much different from standby registration; you still have
> registrations, they just represent groups of machines instead of
> single machines.

Such groups are often easy to represent in CIDR notation, which would
reduce the need for registering every single standby.

Anyway, I'm really with Josh on this. It's a configuration debate that
doesn't have much influence on the real implementation. As long as we
keep the 'what nodes and how long does the master wait' decision
flexible enough.

Regards

Markus Wanner

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] standby registration (was: is sync rep stalled?)

2010-10-04 Thread Robert Haas
On Mon, Oct 4, 2010 at 3:25 PM, David Christensen  wrote:
>
> On Oct 4, 2010, at 2:02 PM, Robert Haas wrote:
>
>> On Mon, Oct 4, 2010 at 1:57 PM, Markus Wanner  wrote:
>>> On 10/04/2010 05:20 PM, Robert Haas wrote:
 Quorum commit, even with configurable vote weights, can't handle a
 requirement that a particular commit be replicated to (A || B) && (C
 || D).
>>>
>>> Good point.
>>>
>>> Can the proposed standby registration configuration format cover such a
>>> requirement?
>>
>> Well, if you can name the standbys, there's no reason there couldn't
>> be a parameter that takes a string that looks pretty much like the
>> above.  There are, of course, some situations that could be handled
>> more elegantly by quorum commit ("any 3 of 5 available standbys") but
>> the above is more general and not unreasonably longwinded for
>> reasonable numbers of standbys.
>
>
> Is there any benefit to be had from having standby roles instead of 
> individual names?  For instance, you could integrate this into quorum commit 
> to express 3 of 5 "reporting" standbys, 1 "berlin" standby and 1 "tokyo" 
> standby from a group of multiple per data center, or even just utilize role 
> sizes of 1 if you wanted individual standbys to be "named" in this fashion.  
> This role could be provided when the standby connects, which is more-or-less 
> tangential to the specific registration issue.

It's possible to construct a commit rule that is sufficiently complex
that this can't handle it, but it has to be pretty hairy.  The
simplest example I can think of is A || ((B || C) && (D || E)).  And
you could even handle that if you allow standbys to belong to multiple
roles; in fact, I think you can handle arbitrary Boolean formulas that
way by converting to conjunctive normal form.  The use cases for such
complex formulas are fairly thin, though, so I'm not sure that's a
very compelling argument one way or the other.   I think in the end
this is not much different from standby registration; you still have
registrations, they just represent groups of machines instead of
single machines.

I think from a reporting point of view it's a little nicer to have
individual registrations rather than group registrations.  For
example, you might ask the master which slaves are connected and where
they are in the WAL stream, or something like that, and with
individual standby names that's a bit easier to puzzle out.  Of
course, you could have individual standby names (that are only for
identification) and use groups for everything else.  That's maybe a
bit more complicated (each slave needs to send the master a
name-for-identification and a group) but it's certainly workable.  We
might also in the future have cases where you want to group standbys
in one way for the commit-rule and another way for some other setting,
but I can't think of exactly what other setting you'd be likely to
want to set in a fashion orthogonal to the commit rule, and even if we
did think of one, allowing standbys to be members of multiple groups
would solve that problem, too.  That feels a bit more complex to me,
but it's not that likely to happen in practice, so it would probably
be OK.  So I guess I think individual registrations are a bit cleaner
and likely to lead to slightly fewer knobs over the long term, but
group registrations seem like they could be made to work, too.
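
To illustrate the CNF idea with the example above (a Python sketch only, not
a proposal for any actual syntax):

# The rule A || ((B || C) && (D || E)) in conjunctive normal form is
# (A || B || C) && (A || D || E): two groups, A a member of both, and the
# commit rule "at least one ack from every group".
groups = [
    {"A", "B", "C"},   # first conjunct
    {"A", "D", "E"},   # second conjunct
]

def commit_ok(acks):
    acks = set(acks)
    return all(acks & g for g in groups)

print(commit_ok({"A"}))        # True: A alone satisfies the original formula
print(commit_ok({"B", "D"}))   # True: (B || C) && (D || E)
print(commit_ok({"B", "C"}))   # False: nothing from the second group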

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] standby registration (was: is sync rep stalled?)

2010-10-04 Thread Josh Berkus

>>> Quorum commit, even with configurable vote weights, can't handle a
>>> requirement that a particular commit be replicated to (A || B) && (C
>>> || D).
>> Good point.

If this is the only feature which standby registration is needed for,
has anyone written the code for it yet?  Is anyone planning to?

If not, it seems like standby registration is not *required* for 9.1.  I
still tend to think it would be nice to have from a DBA perspective, but
we should separate required from "nice to have".


-- 
  -- Josh Berkus
 PostgreSQL Experts Inc.
 http://www.pgexperts.com

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] standby registration (was: is sync rep stalled?)

2010-10-04 Thread Mike Rylander
On Mon, Oct 4, 2010 at 3:25 PM, David Christensen  wrote:
>
> On Oct 4, 2010, at 2:02 PM, Robert Haas wrote:
>
>> On Mon, Oct 4, 2010 at 1:57 PM, Markus Wanner  wrote:
>>> On 10/04/2010 05:20 PM, Robert Haas wrote:
 Quorum commit, even with configurable vote weights, can't handle a
 requirement that a particular commit be replicated to (A || B) && (C
 || D).
>>>
>>> Good point.
>>>
>>> Can the proposed standby registration configuration format cover such a
>>> requirement?
>>
>> Well, if you can name the standbys, there's no reason there couldn't
>> be a parameter that takes a string that looks pretty much like the
>> above.  There are, of course, some situations that could be handled
>> more elegantly by quorum commit ("any 3 of 5 available standbys") but
>> the above is more general and not unreasonably longwinded for
>> reasonable numbers of standbys.
>
>
> Is there any benefit to be had from having standby roles instead of 
> individual names?  For instance, you could integrate this into quorum commit 
> to express 3 of 5 "reporting" standbys, 1 "berlin" standby and 1 "tokyo" 
> standby from a group of multiple per data center, or even just utilize role 
> sizes of 1 if you wanted individual standbys to be "named" in this fashion.  
> This role could be provided when the standby connects, which is more-or-less 
> tangential to the specific registration issue.
>

Big +1 FWIW.

-- 
Mike Rylander

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] standby registration (was: is sync rep stalled?)

2010-10-04 Thread David Christensen

On Oct 4, 2010, at 2:02 PM, Robert Haas wrote:

> On Mon, Oct 4, 2010 at 1:57 PM, Markus Wanner  wrote:
>> On 10/04/2010 05:20 PM, Robert Haas wrote:
>>> Quorum commit, even with configurable vote weights, can't handle a
>>> requirement that a particular commit be replicated to (A || B) && (C
>>> || D).
>> 
>> Good point.
>> 
>> Can the proposed standby registration configuration format cover such a
>> requirement?
> 
> Well, if you can name the standbys, there's no reason there couldn't
> be a parameter that takes a string that looks pretty much like the
> above.  There are, of course, some situations that could be handled
> more elegantly by quorum commit ("any 3 of 5 available standbys") but
> the above is more general and not unreasonably longwinded for
> reasonable numbers of standbys.


Is there any benefit to be had from having standby roles instead of individual 
names?  For instance, you could integrate this into quorum commit to express 3 
of 5 "reporting" standbys, 1 "berlin" standby and 1 "tokyo" standby from a 
group of multiple per data center, or even just utilize role sizes of 1 if you 
wanted individual standbys to be "named" in this fashion.  This role could be 
provided when the standby connects, which is more-or-less tangential to the 
specific registration issue.

Regards,

David
--
David Christensen
End Point Corporation
da...@endpoint.com





-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] standby registration (was: is sync rep stalled?)

2010-10-04 Thread Robert Haas
On Mon, Oct 4, 2010 at 1:57 PM, Markus Wanner  wrote:
> On 10/04/2010 05:20 PM, Robert Haas wrote:
>> Quorum commit, even with configurable vote weights, can't handle a
>> requirement that a particular commit be replicated to (A || B) && (C
>> || D).
>
> Good point.
>
> Can the proposed standby registration configuration format cover such a
> requirement?

Well, if you can name the standbys, there's no reason there couldn't
be a parameter that takes a string that looks pretty much like the
above.  There are, of course, some situations that could be handled
more elegantly by quorum commit ("any 3 of 5 available standbys") but
the above is more general and not unreasonably longwinded for
reasonable numbers of standbys.
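
Just to show the shape such a parameter could take, here is a toy parser and
evaluator (Python, entirely hypothetical -- no such GUC exists, and a real
implementation would of course live in C inside the GUC machinery):

import re

def parse(expr):
    # Tokens: parens, ||, &&, and standby names.
    tokens = re.findall(r'\(|\)|\|\||&&|[A-Za-z0-9_]+', expr)

    def parse_or(pos):
        node, pos = parse_and(pos)
        while pos < len(tokens) and tokens[pos] == '||':
            rhs, pos = parse_and(pos + 1)
            node = ('or', node, rhs)
        return node, pos

    def parse_and(pos):
        node, pos = parse_atom(pos)
        while pos < len(tokens) and tokens[pos] == '&&':
            rhs, pos = parse_atom(pos + 1)
            node = ('and', node, rhs)
        return node, pos

    def parse_atom(pos):
        if tokens[pos] == '(':
            node, pos = parse_or(pos + 1)
            return node, pos + 1          # skip the closing ')'
        return ('name', tokens[pos]), pos + 1

    node, _ = parse_or(0)
    return node

def satisfied(node, acks):
    kind = node[0]
    if kind == 'name':
        return node[1] in acks
    left, right = satisfied(node[1], acks), satisfied(node[2], acks)
    return (left and right) if kind == 'and' else (left or right)

rule = parse("(A || B) && (C || D)")
print(satisfied(rule, {"A", "C"}))   # True
print(satisfied(rule, {"C", "D"}))   # False: neither A nor B has acked

The point is only that such a string is cheap to parse and evaluate against
the set of standbys that have acknowledged a given commit.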

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] standby registration (was: is sync rep stalled?)

2010-10-04 Thread Markus Wanner
On 10/04/2010 05:20 PM, Robert Haas wrote:
> Quorum commit, even with configurable vote weights, can't handle a
> requirement that a particular commit be replicated to (A || B) && (C
> || D).

Good point.

Can the proposed standby registration configuration format cover such a
requirement?

Regards

Markus Wanner

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] standby registration (was: is sync rep stalled?)

2010-10-04 Thread Robert Haas
On Mon, Oct 4, 2010 at 3:08 AM, Markus Wanner  wrote:
> On 10/01/2010 05:06 PM, Dimitri Fontaine wrote:
>> Wait forever can be done without standby registration, with quorum commit.
>
> Yeah, I also think the only reason for standby registration is ease of
> configuration (if at all). There's no technical requirement for standby
> registration, AFAICS. Or does anybody know of a realistic use case
> that's possible with standby registration, but not with quorum commit?

Quorum commit, even with configurable vote weights, can't handle a
requirement that a particular commit be replicated to (A || B) && (C
|| D).

The use case is something like "we want to make sure we've replicated
to at least one of the two servers in the Berlin datacenter and at
least one of the two servers in the Hong Kong datacenter".

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Standby registration

2010-10-01 Thread Fujii Masao
On Thu, Sep 30, 2010 at 11:32 PM, Heikki Linnakangas
 wrote:
> The standby can already use restore_command to fetch WAL files from the
> archive. I don't see why the master should be involved in that.

To make the standby use restore_command for that, you have to specify
something like scp in archive_command or set up a shared directory
(e.g., using an NFS server). But I don't want to use either, because they make
the installation complicated (e.g., I don't want to register a passwordless
ssh key just to scp the WAL files from the master to the standby, and I don't
want to purchase an extra server for a shared directory and set up an NFS
server). I believe that's the same reason you implemented the
streaming backup tool.

Regards,

-- 
Fujii Masao
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Standby registration

2010-09-30 Thread Heikki Linnakangas

On 29.09.2010 11:46, Fujii Masao wrote:

Aside from standby registration itself, I have another thought for C). Keeping
many WAL files in the master's pg_xlog is not a good design in the first place.
I cannot believe that pg_xlog on most systems has enough capacity to store many
WAL files for the standby.

Usually the place where many WAL files can be stored is the archive. So I've
been thinking of making walsender send the archived WAL files to the standby.
That is, when a WAL file required by the standby is not found in pg_xlog,
walsender restores it from the archive by executing the restore_command that
the user specified. Then walsender reads the WAL file and sends it.

Currently, if pg_xlog is not large enough on your system, you have to struggle
with setting up a warm-standby environment on top of streaming replication, to
prevent the WAL files still required by the standby from being deleted before
shipping. Many people would be disappointed by that fact.

The archived-log-shipping approach cuts out the need to set up warm standby
and wal_keep_segments, so it would make streaming replication easier to use.
Thoughts?


The standby can already use restore_command to fetch WAL files from the 
archive. I don't see why the master should be involved in that.


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Standby registration

2010-09-29 Thread Fujii Masao
On Thu, Sep 23, 2010 at 6:49 PM, Dimitri Fontaine
 wrote:
> Automatic registration is a good answer to both your points A)
> monitoring and C) wal_keep_segments, but needs some more thinking wrt
> security and authentication.

Aside from standby registration itself, I have another thought for C). Keeping
many WAL files in the master's pg_xlog is not a good design in the first place.
I cannot believe that pg_xlog on most systems has enough capacity to store many
WAL files for the standby.

Usually the place where many WAL files can be stored is the archive. So I've
been thinking of making walsender send the archived WAL files to the standby.
That is, when a WAL file required by the standby is not found in pg_xlog,
walsender restores it from the archive by executing the restore_command that
the user specified. Then walsender reads the WAL file and sends it.

Currently, if pg_xlog is not large enough on your system, you have to struggle
with setting up a warm-standby environment on top of streaming replication, to
prevent the WAL files still required by the standby from being deleted before
shipping. Many people would be disappointed by that fact.

The archived-log-shipping approach cuts out the need to set up warm standby
and wal_keep_segments, so it would make streaming replication easier to use.
Thoughts?
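
Roughly, the idea looks like this (a Python sketch of the control flow only;
walsender is C code and nothing below is actual server code):

import os, subprocess

def locate_wal_segment(segment, pg_xlog, restore_command, staging_dir):
    """Look in pg_xlog first; if the segment is gone, pull it back from the
    archive via the user's restore_command, then stream whichever copy we
    found."""
    local = os.path.join(pg_xlog, segment)
    if os.path.exists(local):
        return local
    target = os.path.join(staging_dir, segment)
    # restore_command uses the usual %f / %p placeholders, e.g.
    #   cp /mnt/archive/%f "%p"
    cmd = restore_command.replace("%f", segment).replace("%p", target)
    subprocess.check_call(cmd, shell=True)
    return target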

Regards,

-- 
Fujii Masao
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Standby registration

2010-09-24 Thread Dimitri Fontaine
Heikki Linnakangas  writes:
> There's two separate concepts here:
>
> 1. Automatic registration. When a standby connects, its information gets
> permanently added to standby.conf file
>
> 2. Unregistered standbys. A standby connects, and its information is not in
> standby.conf. It's let in anyway, and standby.conf is unchanged.
>
> We'll need to support unregistered standbys, at least in asynchronous
> mode. It's also possible for synchronous standbys, but you can't have the
> "if the standby is disconnected, don't finish any commits until it
> reconnects and catches up" behavior without registration.

I don't see why we need to support unregistered standbys if we have
automatic registration. I've been thinking about that on and off and took
time to answer, but I still fail to see why you're saying that.

What I think we need is an easy way to manually unregister the standby
on the master; that would be part of the maintenance routine for
disconnecting a standby. It seems like an admin function would do, and that
happens to be how it works with PGQ / londiste.

> I'm inclined to not do automatic registration, not for now at
> least. Registering a synchronous standby should not be taken lightly. If the
> standby gets accidentally added to standby.conf, the master will have to
> keep more WAL around and might delay all commits, depending on the options
> used.

For this reason I think we need an easy-to-use facility for checking
system health, one that shows how many WAL files are currently kept
and which registered standby still needs them. If you happen to
have forgotten to unregister a standby, it's time to call that admin
function from above.

Regards,
-- 
dim

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Standby registration

2010-09-23 Thread Heikki Linnakangas

On 23/09/10 12:49, Dimitri Fontaine wrote:

Heikki Linnakangas  writes:

The consensus seems to be to use a configuration file called
standby.conf. Let's use the "ini file format" for now [1].


What about automatic registration of standbys? That's not going to fly
with the unique global configuration file idea, but that's good news.

Automatic registration is a good answer to both your points A)
monitoring and C) wal_keep_segments, but needs some more thinking wrt
security and authentication.

What about having a new GRANT privilege for replication, so that any
standby can connect with a non-superuser role as soon as the master's
setup GRANTS replication to the role? You still need the HBA setup to
accept the slave, too, of course.


There's two separate concepts here:

1. Automatic registration. When a standby connects, its information gets 
permanently added to standby.conf file


2. Unregistered standbys. A standby connects, and its information is not 
in standby.conf. It's let in anyway, and standby.conf is unchanged.


We'll need to support unregistered standbys, at least in asynchronous 
mode. It's also possible for synchronous standbys, but you can't have 
the "if the standby is disconnected, don't finish any commits until it 
reconnects and catches up" behavior without registration.


I'm inclined to not do automatic registration, not for now at least. 
Registering a synchronous standby should not be taken lightly. If the 
standby gets accidentally added to standby.conf, the master will have to 
keep more WAL around and might delay all commits, depending on the 
options used.


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Standby registration

2010-09-23 Thread Dimitri Fontaine
Heikki Linnakangas  writes:
> Having mulled through all the recent discussions on synchronous replication,
> ISTM there is pretty wide consensus that having a registry of all standbys
> in the master is a good idea. Even those who don't think it's *necessary*
> for synchronous replication seem to agree that it's nevertheless a pretty
> intuitive way to configure it. And it has some benefits even if we never get
> synchronous replication.

Yeah, it's nice to have, but I disagree that it's a nice way to
configure this. I still think that in the long run it's more hassle to
maintain than a distributed setup.

> The consensus seems to be to use a configuration file called
> standby.conf. Let's use the "ini file format" for now [1].

What about automatic registration of standbys? That's not going to fly
with the unique global configuration file idea, but that's good news.

Automatic registration is a good answer to both your points A)
monitoring and C) wal_keep_segments, but needs some more thinking wrt
security and authentication.

What about having a new GRANT privilege for replication, so that any
standby can connect with a non-superuser role as soon as the master's
setup GRANTS replication to the role? You still need the HBA setup to
accept the slave, too, of course.

Regards,
-- 
dim

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Standby registration

2010-09-23 Thread Heikki Linnakangas

On 23/09/10 12:32, Dimitri Fontaine wrote:

Heikki Linnakangas  writes:

Hmm, that situation can arise if there's a network glitch which leads the
standby to disconnect, but the master still considers the connection as
alive. When the standby reconnects, the master will see two simultaneous
connections from the same standby. In that scenario, you clearly want to
disconnect the old connection in favor of the new one. Is there a scenario
where you'd want to keep the old connection instead and refuse the new
one?


Protection against spoofing? If connecting with the right IP is all it takes…


You also need to authenticate with a valid username and password, of 
course. As the patch stands, that needs to be a superuser, but we should 
aim for smarter authorization than that.


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Standby registration

2010-09-23 Thread Dimitri Fontaine
Heikki Linnakangas  writes:
> Hmm, that situation can arise if there's a network glitch which leads the
> standby to disconnect, but the master still considers the connection as
> alive. When the standby reconnects, the master will see two simultaneous
> connections from the same standby. In that scenario, you clearly want to
> disconnect the old connection in favor of the new one. Is there a scenario
> where you'd want to keep the old connection instead and refuse the new
> one?

Protection against spoofing? If connecting with the right IP is all it takes…

Regards,
-- 
dim

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Standby registration

2010-09-22 Thread Bruce Momjian
Heikki Linnakangas wrote:
> (starting yet another thread to stay focused)
> 
> Having mulled through all the recent discussions on synchronous 
> replication, ISTM there is pretty wide consensus that having a registry 
> of all standbys in the master is a good idea. Even those who don't think 
> it's *necessary* for synchronous replication seem to agree that it's 
> nevertheless a pretty intuitive way to configure it. And it has some 
> benefits even if we never get synchronous replication.
> 
> So let's put synchronous replication aside for now, and focus on standby 
> registration first. Once we have that, the synchronous replication patch 
> will be much smaller and easier to review.
> 
> The consensus seems to be to use a configuration file called standby.conf. 
> Let's use the "ini file format" for now [1].
> 
> Aside from synchronous replication, there are three nice things we can 
> do with a standby registry:
> 
> A) Make monitoring easier. Let's have a system view to show the status 
> of all standbys [2].

It would be interesting if we could fire triggers on changes to that
status view.  I can see that solving many user management needs.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Standby registration

2010-09-22 Thread Aidan Van Dyk
On Wed, Sep 22, 2010 at 10:19 AM, Heikki Linnakangas
 wrote:

>>> Should we allow multiple standbys with the same name to connect to
>>> the master?
>>
>> No.  The point of naming them is to uniquely identify them.
>
> Hmm, that situation can arise if there's a network glitch which leads the
> standby to disconnect, but the master still considers the connection as
> alive. When the standby reconnects, the master will see two simultaneous
> connections from the same standby. In that scenario, you clearly want to
> disconnect the old connection in favor of the new one. Is there a scenario
> where you'd want to keep the old connection instead and refuse the new one?

$Bob restores a backup image of the slave to test some new stuff
in a dev environment, and it automatically connects.  Thanks to IPv4
and the NAT that is often necessary, they both *appear* to the real master as
the same IP address, even though, on the remote campus, they are on two
separate "networks", all NATed through the one IP address...

Now, that's not (likely) to happen in a "sync rep" situation, but for
an async setup, with standby registration automatically being able to
keep WAL, etc., satellite offices with occasional network hiccups (and
the above-mentioned developer VMs) make registration (and
centralized monitoring of the slaves) very interesting...

a.

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Standby registration

2010-09-22 Thread Robert Haas
On Wed, Sep 22, 2010 at 10:19 AM, Heikki Linnakangas
 wrote:
>> No.  The point of naming them is to uniquely identify them.
>
> Hmm, that situation can arise if there's a network glitch which leads the
> standby to disconnect, but the master still considers the connection as
> alive. When the standby reconnects, the master will see two simultaneous
> connections from the same standby. In that scenario, you clearly want to
> disconnect the old connection in favor of the new one.

+1 for making that the behavior.

> Is there a scenario
> where you'd want to keep the old connection instead and refuse the new one?

I doubt it.

> Perhaps that should be made configurable, so that you wouldn't need to list
> all standbys in the config file if you don't want to. Then you don't get any
> of the benefits of standby registration, though.

I think it's fine to have async slaves that don't want any special
features (like sync rep, or tracking how far behind they are in the
xlog stream) not mentioned in the config file.  But allowing multiple
slaves with the same name seems like complexity without any attendant
benefit.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Standby registration

2010-09-22 Thread Heikki Linnakangas

On 22/09/10 16:54, Robert Haas wrote:

On Wed, Sep 22, 2010 at 8:21 AM, Fujii Masao  wrote:

What if the number of standby entries in standby.conf is more than
max_wal_senders? This situation is allowed if we treat standby.conf
as just an access control list like pg_hba.conf. But if we have to ensure
that all the registered standbys can connect to the master, we should
emit an error in that case.


I don't think a cross-check between these settings makes much sense.
We should either get rid of max_wal_senders and make it always equal
to the number of defined standbys, or we should treat them as
independent settings.


Even with registration, we will continue to support anonymous 
asynchronous standbys that just connect and start streaming. We need 
some headroom for those.



But what if the
reloaded standby.conf has no entry for an already-connected standby?


We kick him out?


Sounds reasonable.


Should we allow multiple standbys with the same name to connect to
the master?


No.  The point of naming them is to uniquely identify them.


Hmm, that situation can arise if there's a network glitch which leads 
the standby to disconnect, but the master still considers the connection 
as alive. When the standby reconnects, the master will see two 
simultaneous connections from the same standby. In that scenario, you 
clearly want to disconnect the old connection in favor of the new one. Is 
there a scenario where you'd want to keep the old connection instead and 
refuse the new one?


Perhaps that should be made configurable, so that you wouldn't need to 
list all standbys in the config file if you don't want to. Then you 
don't get any of the benefits of standby registration, though.


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Standby registration

2010-09-22 Thread Robert Haas
On Wed, Sep 22, 2010 at 8:21 AM, Fujii Masao  wrote:
> What if the number of standby entries in standby.conf is more than
> max_wal_senders? This situation is allowed if we treat standby.conf
> as just an access control list like pg_hba.conf. But if we have to ensure
> that all the registered standbys can connect to the master, we should
> emit an error in that case.

I don't think a cross-check between these settings makes much sense.
We should either get rid of max_wal_senders and make it always equal
to the number of defined standbys, or we should treat them as
independent settings.

> Should we allow standby.conf to be changed and reloaded while the
> server is running?

Yes.

> But what if the
> reloaded standby.conf has no entry for an already-connected standby?

We kick him out?

> Should we allow multiple standbys with the same name to connect to
> the master?

No.  The point of naming them is to uniquely identify them.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Standby registration

2010-09-22 Thread Fujii Masao
On Wed, Sep 22, 2010 at 5:43 PM, Heikki Linnakangas
 wrote:
> So let's put synchronous replication aside for now, and focus on standby
> registration first. Once we have that, the synchronous replication patch
> will be much smaller and easier to review.

Though I agree with standby registration, I'm still not clear on what
standby registration actually is ;)

What if the number of standby entries in standby.conf is more than
max_wal_senders? This situation is allowed if we treat standby.conf
as just an access control list like pg_hba.conf. But if we have to ensure
that all the registered standbys can connect to the master, we should
emit an error in that case.

Should we allow standby.conf to be changed and reloaded while the
server is running? This seems to be required if we use standby.conf
as a replacement for wal_keep_segments, because we need to register
the backup starting location as the last receive location of an upcoming
standby when taking a base backup for it. But what if the
reloaded standby.conf has no entry for an already-connected standby?

If we treat standby.conf as just an access control list, we can easily
allow it to be reloaded, just as pg_hba.conf is. Otherwise, we
would need a careful design.

Should we allow multiple standbys with the same name to connect to
the master? That is, should an entry in standby.conf and a real standby be
in a one-to-one relationship? Or should we add a new parameter specifying
the number of standbys with that name?

> Any volunteers on implementing that? Fujii-san?

I'm willing to implement that, but I'll be busy for a few days
because of a presentation at LinuxCon and so on. So please feel
free to try it if time allows.

Regards,

-- 
Fujii Masao
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Standby registration

2010-09-22 Thread Heikki Linnakangas

(starting yet another thread to stay focused)

Having mulled through all the recent discussions on synchronous 
replication, ISTM there is pretty wide consensus that having a registry 
of all standbys in the master is a good idea. Even those who don't think 
it's *necessary* for synchronous replication seem to agree that it's 
nevertheless a pretty intuitive way to configure it. And it has some 
benefits even if we never get synchronous replication.


So let's put synchronous replication aside for now, and focus on standby 
registration first. Once we have that, the synchronous replication patch 
will be much smaller and easier to review.


The consensus seems to be to use a configuration file called standby.conf. 
Let's use the "ini file format" for now [1].
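
Just to visualize it, one possible shape of such a file, with made-up section
and key names (nothing here has been agreed on), parsed with Python's
configparser only to show how trivial the format is to read:

from configparser import ConfigParser

# Hypothetical standby.conf: one section per registered standby.
# The keys "synchronous" and "wal_keep_limit" are invented for the sketch.
sample = """
[berlin1]
synchronous = yes

[reporting1]
synchronous = no
wal_keep_limit = 256MB
"""

conf = ConfigParser()
conf.read_string(sample)
for name in conf.sections():
    print(name, dict(conf[name]))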


Aside from synchronous replication, there are three nice things we can 
do with a standby registry:


A) Make monitoring easier. Let's have a system view to show the status 
of all standbys [2].


B) Improve authorization. At the moment, we require superuser rights to 
connect in replication mode. That's pretty ugly, because 
superuser rights imply that you can do anything; you'd typically want to 
restrict access from the standby to do replication only and nothing 
else. You can lock it down in pg_hba.conf to allow the superuser to only 
connect in replication mode, but it still gives me the creeps. When each 
standby has a name, we can associate standbys with roles, so that you 
have to be user X to replicate as standby Y.


C) Smarter replacement for wal_keep_segments. Instead of always keeping 
wal_keep_segments WAL files around, once we know how far each standby 
has replicated, we can keep just the right amount. I think we'll still 
want a global upper limit to avoid running out of disk space in the 
master in case of emergency though.
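
A sketch of that retention rule (Python for illustration, using abstract
segment numbers instead of real WAL positions; max_keep stands in for
whatever global limit we pick):

def oldest_segment_to_keep(current_segment, standby_segments, max_keep):
    """Keep everything the slowest registered standby still needs, but never
    more than max_keep segments behind the current write position."""
    needed = min(standby_segments.values(), default=current_segment)
    return max(needed, current_segment - max_keep)

standbys = {"berlin1": 112, "tokyo1": 118}
print(oldest_segment_to_keep(120, standbys, max_keep=5))   # 115: the cap wins
print(oldest_segment_to_keep(120, standbys, max_keep=50))  # 112: slowest standby wins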



Any volunteers on implementing that? Fujii-san?

[1] http://archives.postgresql.org/pgsql-hackers/2010-09/msg01195.php
[2] http://archives.postgresql.org/pgsql-hackers/2010-09/msg00932.php

--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers