On Friday 09 February 2007, Jan Wieck wrote:
I am not sure; I would have to look at what exactly that hook provides.
The key to a Lamport timestamp is that it is advancing in commit order
(plus some other things ... of course). If the hook can guarantee that
the calls are made always in commit
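The Lamport rule being discussed (advance on every local commit, jump ahead of any remote timestamp seen) can be sketched in a few lines. This is a minimal illustration with hypothetical names, not the actual proposed implementation:

```python
class LamportClock:
    """Counter that advances in commit order and on receipt of remote
    timestamps, so commits get a total order across nodes."""

    def __init__(self, node_id):
        self.node_id = node_id
        self.counter = 0

    def commit(self):
        # Each local commit advances the clock, preserving commit order.
        self.counter += 1
        return (self.counter, self.node_id)

    def receive(self, remote_counter):
        # On seeing a remote timestamp, jump past it so causally later
        # events never get smaller timestamps.
        self.counter = max(self.counter, remote_counter) + 1
```

The `(counter, node_id)` pair breaks ties between nodes that happen to issue the same counter value.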
On 2/8/2007 11:41 PM, Richard Troy wrote:
On Thu, 8 Feb 2007, Joshua D. Drake wrote:
Well how deep are we talking here? My understanding of what Jan wants to
do is simple.
Be able to declare which triggers are fired depending on the state of
the cluster.
In Jan's terms, the Origin or
On 2/7/2007 7:13 AM, José Orlando Pereira wrote:
On Saturday 03 February 2007, Bruce Momjian wrote:
Jan Wieck wrote:
I don't have any such paper and the proof of concept will be the
implementation of the system. I do however see enough resistance against
this proposal to withdraw the commit
Richard Troy wrote:
It'd be great if Jan considers the blending of replication; any given DB
instance shouldn't be only a master/originator or only a slave/subscriber.
A solution that lets you blend replication strategies in a single db is,
from my point of view, very important.
Perhaps if
On Feb 8, 2007, at 8:41 PM, Richard Troy wrote:
It'd be great if Jan considers the blending of replication; any given DB
instance shouldn't be only a master/originator or only a slave/subscriber.
A solution that lets you blend replication strategies in a single db is,
from my point of
On Fri, 9 Feb 2007, Jan Wieck wrote:
[ I wrote ]
It'd be great if Jan considers the blending of replication;
Please elaborate. I would really like to get all you can contribute.
Thanks Jan,
prefaced that I really haven't read everything you've written on this (or
what other people are
On Feb 7, 8:12 pm, [EMAIL PROTECTED] (Bruce Momjian) wrote:
Jan Wieck wrote:
On 2/7/2007 10:35 PM, Bruce Momjian wrote:
I find the term logical proof of its correctness too restrictive. It
sounds like some formal academic process that really doesn't work well
for us.
Thank you.
Richard Troy wrote:
In more specific terms, and I'm just brainstorming in public here, perhaps
we can use the power of Schemas within a database to manage such
divisions; commands which pertain to replication can/would include a
schema specifier and elements within the schema can be replicated
On 2/9/2007 2:27 PM, Richard Troy wrote:
In general terms, blending of replication [techniques] means to me that
one can have a single database instance serve as a master and as a slave
(to use only one set of terminology), and as a multi-master, too, all
simultaneously, letting the DBA /
On Fri, 9 Feb 2007, Andrew Dunstan wrote:
Richard Troy wrote:
In more specific terms, and I'm just brainstorming in public here, perhaps
we can use the power of Schemas within a database to manage such
divisions; commands which pertain to replication can/would include a
schema specifier
On Fri, 9 Feb 2007, Jan Wieck wrote:
No matter how many different models you have in parallel, one single
transaction will be either a master, a slave or an isolated local thing.
The proposed changes allow telling the session which of these three
roles it is playing and the triggers and
On 2/9/2007 2:19 PM, Andrew Hammond wrote:
On Feb 7, 8:12 pm, [EMAIL PROTECTED] (Bruce Momjian) wrote:
Jan Wieck wrote:
On 2/7/2007 10:35 PM, Bruce Momjian wrote:
I find the term logical proof of its correctness too restrictive. It
sounds like some formal academic process that really
Richard Troy wrote:
On Fri, 9 Feb 2007, Andrew Dunstan wrote:
Richard Troy wrote:
In more specific terms, and I'm just brainstorming in public here, perhaps
we can use the power of Schemas within a database to manage such
divisions; commands which pertain to replication can/would
I wrote:
We do now have schema nesting, for this or any other purpose.
s/now/not/ (of course)
cheers
andrew
On 2/9/2007 3:25 PM, Andrew Dunstan wrote:
Richard Troy wrote:
On Fri, 9 Feb 2007, Andrew Dunstan wrote:
Richard Troy wrote:
In more specific terms, and I'm just brainstorming in public here, perhaps
we can use the power of Schemas within a database to manage such
divisions; commands
Yes, yes, and yes ... but aside from the problem that you use the very
ambiguous word timestamp (which somehow suggests using a clock of
some sort), isn't the begin timestamp of a long running transaction
worse than the commit timestamp, when all
imho a begin timestamp is near useless
On 2/7/2007 11:12 PM, Bruce Momjian wrote:
Jan Wieck wrote:
On 2/7/2007 10:35 PM, Bruce Momjian wrote:
I find the term logical proof of its correctness too restrictive. It
sounds like some formal academic process that really doesn't work well
for us.
Thank you.
Also, I saw the trigger
Jan Wieck wrote:
On 2/7/2007 11:12 PM, Bruce Momjian wrote:
Jan Wieck wrote:
On 2/7/2007 10:35 PM, Bruce Momjian wrote:
I find the term logical proof of its correctness too restrictive. It
sounds like some formal academic process that really doesn't work well
for us.
Thank
Bruce Momjian wrote:
Jan Wieck wrote:
On 2/7/2007 11:12 PM, Bruce Momjian wrote:
Jan Wieck wrote:
On 2/7/2007 10:35 PM, Bruce Momjian wrote:
Also, I saw the trigger patch with no explanation of why it was
important or who would use it --- that also isn't going to fly well.
Alvaro Herrera wrote:
Is this a new policy that after discussion, all patches must be
resubmitted with a summary and conclusions of the discussion? I can
certainly do that for you, but just tell me if you are going to ask the
same from everyone.
No, I am asking only this time
On 2/8/2007 3:32 PM, Bruce Momjian wrote:
Alvaro Herrera wrote:
Is this a new policy that after discussion, all patches must be
resubmitted with a summary and conclusions of the discussion? I can
certainly do that for you, but just tell me if you are going to ask the
same from
Jan Wieck wrote:
On 2/8/2007 3:32 PM, Bruce Momjian wrote:
Alvaro Herrera wrote:
Is this a new policy that after discussion, all patches must be
resubmitted with a summary and conclusions of the discussion? I can
certainly do that for you, but just tell me if you are going to ask
Jan Wieck wrote:
On 2/8/2007 3:32 PM, Bruce Momjian wrote:
Alvaro Herrera wrote:
Is this a new policy that after discussion, all patches must be
resubmitted with a summary and conclusions of the discussion? I can
certainly do that for you, but just tell me if you are going to
ask the
I just want an outline of what each option is supposed to control. If
that information is in a documentation patch, then fine, he can just
post that and tell people to read the patch documentation.
---
Joshua D. Drake
On Thu, 8 Feb 2007, Joshua D. Drake wrote:
Well how deep are we talking here? My understanding of what Jan wants to
do is simple.
Be able to declare which triggers are fired depending on the state of
the cluster.
In Jan's terms, the Origin or Subscriber. In Replicator terms the Master
or
On Saturday 03 February 2007, Bruce Momjian wrote:
Jan Wieck wrote:
I don't have any such paper and the proof of concept will be the
implementation of the system. I do however see enough resistance against
this proposal to withdraw the commit timestamp at this time. The new
replication
On 2/7/2007 2:37 AM, Markus Schiltknecht wrote:
Hi,
Jan Wieck wrote:
Whatever strategy one will use, in an async multimaster there are always
cases that can be resolved by rules (last update being one of them), and
some that I can't even imagine solving so far. I guess some of the cases
will
Hi,
Jan Wieck wrote:
Are we still discussing if the Postgres backend may provide support for
a commit timestamp, that follows the rules for Lamport timestamps in a
multi-node cluster?
No. And I think you know my opinion about that by now. ;-)
It seems more like we are drifting into what
Jan Wieck wrote:
Are we still discussing if the Postgres backend may provide support for
a commit timestamp, that follows the rules for Lamport timestamps in a
multi-node cluster?
...I thought you said in this thread that you haven't and weren't going to
work on any kind of logical proof
On 2/7/2007 12:54 PM, Markus Schiltknecht wrote:
Hi,
Jan Wieck wrote:
Are we still discussing if the Postgres backend may provide support for
a commit timestamp, that follows the rules for Lamport timestamps in a
multi-node cluster?
No. And I think you know my opinion about that by now. ;-)
Hi,
Jan Wieck wrote:
Then let me give you a little puzzle just for the fun of it.
A database containing customer contact information (among other things)
is a two node multimaster system. One is serving the customer web
portal, the other is used by the company staff including the call
On 2/7/2007 2:15 PM, Richard Troy wrote:
Jan Wieck wrote:
Are we still discussing if the Postgres backend may provide support for
a commit timestamp, that follows the rules for Lamport timestamps in a
multi-node cluster?
...I thought you said in this thread that you haven't and weren't
On 2/7/2007 9:27 PM, Markus Schiltknecht wrote:
Hi,
Jan Wieck wrote:
Then let me give you a little puzzle just for the fun of it.
A database containing customer contact information (among other things)
is a two node multimaster system. One is serving the customer web
portal, the other is
I find the term logical proof of its correctness too restrictive. It
sounds like some formal academic process that really doesn't work well
for us.
What I did want to hear is a layout of how the system would work, and an
exchange of ideas until almost everyone was happy.
Also, I saw the
On 2/7/2007 10:35 PM, Bruce Momjian wrote:
I find the term logical proof of its correctness too restrictive. It
sounds like some formal academic process that really doesn't work well
for us.
Thank you.
Also, I saw the trigger patch with no explanation of why it was
important or who would
Jan Wieck wrote:
On 2/7/2007 10:35 PM, Bruce Momjian wrote:
I find the term logical proof of its correctness too restrictive. It
sounds like some formal academic process that really doesn't work well
for us.
Thank you.
Also, I saw the trigger patch with no explanation of why it
What I'm more concerned about, with Jan's proposal, is the assumption
that you always want to resolve conflicts by time (except for balances,
for which we don't have much information, yet). I'd rather
Um, I think the proposal was only for beneficial backend functionality
for replication
Hi,
Zeugswetter Andreas ADI SD wrote:
And time based
is surely one of the important conflict resolution methods for async MM
replication.
That's what I'm questioning. Wouldn't any other deterministic, but
seemingly random abort decision be as clever as time based conflict
resolution? It
On 2/6/2007 11:44 AM, Markus Schiltknecht wrote:
Hi,
Zeugswetter Andreas ADI SD wrote:
And time based
is surely one of the important conflict resolution methods for async MM
replication.
That's what I'm questioning. Wouldn't any other deterministic, but
seemingly random abort decision be as
Something worth noting... the only places I've actually seen MM
replication implemented, each master was in fact still responsible
for its own set of data. It was essentially something that you could
really do with Slony, if you could tolerate the extreme complexity
that would be
Hi,
Jan Wieck wrote:
Whatever strategy one will use, in an async multimaster there are always
cases that can be resolved by rules (last update being one of them), and
some that I can't even imagine solving so far. I guess some of the cases
will simply boil down to the application has to make
One concept is to have a universal clock that ticks forward (like
every second) and each node orders all their transactions inside the
second-granular tick. Then each commit would be like: {node,
clocksecond, txn#} and each time the clock ticks forward, txn# is
reset to zero. This
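One possible reading of that tick scheme, as a sketch. The names, and the choice to order the fields as (second, txn#, node) so that tuples sort chronologically, are my assumptions; the post leaves the details open:

```python
import time

class TickClock:
    """Order commits as (clocksecond, txn_no, node_id): a second-granular
    tick, a per-tick transaction counter that resets each time the clock
    ticks forward, and the node id as tie-breaker."""

    def __init__(self, node_id, now=time.time):
        self.node_id = node_id
        self.now = now          # injectable clock, for testing
        self.last_tick = None
        self.txn_no = 0

    def next_commit_id(self):
        tick = int(self.now())
        if tick != self.last_tick:
            self.last_tick = tick
            self.txn_no = 0     # counter resets on every new tick
        self.txn_no += 1
        return (tick, self.txn_no, self.node_id)
```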
On Sun, Feb 04, 2007 at 01:36:03PM -0500, Jan Wieck wrote:
For the fourth time, the clock is in the mix to allow nodes to continue during
a network outage. All your arguments seem to assume 100% network uptime.
There will be no clusterwide clock or clusterwide increment when you
lose connection.
Hi,
Theo Schlossnagle wrote:
On Feb 4, 2007, at 1:36 PM, Jan Wieck wrote:
Obviously the counters will immediately drift apart based on the
transaction load of the nodes as soon as the network goes down. And in
order to avoid this clock confusion and wrong expectation, you'd
rather have a
Jan Wieck wrote:
This is all that is needed for last update wins resolution. And as
said before, the only reason the clock is involved in this is so that
nodes can continue autonomously when they lose connection without
conflict resolution going crazy later on, which it would do if they
were
On 2/4/2007 3:16 AM, Peter Eisentraut wrote:
Jan Wieck wrote:
This is all that is needed for last update wins resolution. And as
said before, the only reason the clock is involved in this is so that
nodes can continue autonomously when they lose connection without
conflict resolution going
On Feb 4, 2007, at 10:06 AM, Jan Wieck wrote:
On 2/4/2007 3:16 AM, Peter Eisentraut wrote:
Jan Wieck wrote:
This is all that is needed for last update wins resolution. And as
said before, the only reason the clock is involved in this is so that
nodes can continue autonomously when they
Theo Schlossnagle [EMAIL PROTECTED] writes:
As the clock must be incremented clusterwide, the need for it to be in sync
with the system clock (on any or all of the systems) is obviated. In fact,
since you can't guarantee the synchronicity, it can be confusing -- one
expects a
On 2/4/2007 10:53 AM, Theo Schlossnagle wrote:
As the clock must be incremented clusterwide, the need for it to be
in sync with the system clock (on any or all of the systems) is
obviated. In fact, since you can't guarantee the synchronicity, it can
be confusing -- one expects a
On Feb 4, 2007, at 1:36 PM, Jan Wieck wrote:
On 2/4/2007 10:53 AM, Theo Schlossnagle wrote:
As the clock must be incremented clusterwide, the need for it to be
in sync with the system clock (on any or all of the systems) is
obviated. In fact, as you can't guarantee the synchronicity
On 2/1/2007 11:23 PM, Jim Nasby wrote:
On Jan 25, 2007, at 6:16 PM, Jan Wieck wrote:
If a per database configurable tslog_priority is given, the
timestamp will be truncated to milliseconds and the increment logic
is done on milliseconds. The priority is added to the timestamp.
This
On Feb 3, 2007, at 3:52 PM, Jan Wieck wrote:
On 2/1/2007 11:23 PM, Jim Nasby wrote:
On Jan 25, 2007, at 6:16 PM, Jan Wieck wrote:
If a per database configurable tslog_priority is given, the
timestamp will be truncated to milliseconds and the increment
logic is done on milliseconds. The
On 2/3/2007 4:05 PM, Theo Schlossnagle wrote:
On Feb 3, 2007, at 3:52 PM, Jan Wieck wrote:
On 2/1/2007 11:23 PM, Jim Nasby wrote:
On Jan 25, 2007, at 6:16 PM, Jan Wieck wrote:
If a per database configurable tslog_priority is given, the
timestamp will be truncated to milliseconds and the
On Feb 3, 2007, at 4:38 PM, Jan Wieck wrote:
On 2/3/2007 4:05 PM, Theo Schlossnagle wrote:
On Feb 3, 2007, at 3:52 PM, Jan Wieck wrote:
On 2/1/2007 11:23 PM, Jim Nasby wrote:
On Jan 25, 2007, at 6:16 PM, Jan Wieck wrote:
If a per database configurable tslog_priority is given, the
On 2/3/2007 4:58 PM, Theo Schlossnagle wrote:
On Feb 3, 2007, at 4:38 PM, Jan Wieck wrote:
On 2/3/2007 4:05 PM, Theo Schlossnagle wrote:
On Feb 3, 2007, at 3:52 PM, Jan Wieck wrote:
On 2/1/2007 11:23 PM, Jim Nasby wrote:
On Jan 25, 2007, at 6:16 PM, Jan Wieck wrote:
If a per database
On Feb 3, 2007, at 5:09 PM, Jan Wieck wrote:
On 2/3/2007 4:58 PM, Theo Schlossnagle wrote:
I don't have any such paper and the proof of concept will be the
implementation of the system. I do however see enough resistance
against this proposal to withdraw the commit timestamp at this
Jan Wieck wrote:
I don't have any such paper and the proof of concept will be the
implementation of the system. I do however see enough resistance against
this proposal to withdraw the commit timestamp at this time. The new
replication system will therefore require the installation of a
On 2/3/2007 5:20 PM, Bruce Momjian wrote:
Jan Wieck wrote:
I don't have any such paper and the proof of concept will be the
implementation of the system. I do however see enough resistance against
this proposal to withdraw the commit timestamp at this time. The new
replication system will
On Jan 25, 2007, at 6:16 PM, Jan Wieck wrote:
If a per database configurable tslog_priority is given, the
timestamp will be truncated to milliseconds and the increment logic
is done on milliseconds. The priority is added to the timestamp.
This guarantees that no two timestamps for commits
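The truncate-and-increment rule described here can be sketched as follows. Packing the per-database priority into the microsecond digits below the millisecond granularity is my assumption, not necessarily the proposed encoding:

```python
class PriorityTimestamp:
    """Commit timestamps truncated to milliseconds; if the system clock
    has not advanced past the last issued value, increment by one
    millisecond instead, so values are strictly monotonic. A per-database
    priority occupies the sub-millisecond digits and breaks ties
    between databases (hypothetical packing)."""

    def __init__(self, priority_us, now_ms):
        self.priority_us = priority_us  # per-database tslog_priority
        self.now_ms = now_ms            # injectable ms clock, for testing
        self.last_ms = 0

    def commit_ts(self):
        # Increment logic is done on milliseconds: never reissue or
        # go backwards, even if the system clock stalls or jumps back.
        ms = max(self.now_ms(), self.last_ms + 1)
        self.last_ms = ms
        # Final value in microseconds: ms-granular base plus priority.
        return ms * 1000 + self.priority_us
```

Note how the third call below still moves forward even though the injected system clock went backwards.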
Jan Wieck [EMAIL PROTECTED] writes:
I think the system I described is a slightly modified Lamport generator. The
maximum timestamp of any row updated in this transaction you can consider
as the counters received from other nodes. Then I make sure that the next
counter (timestamp) is
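Under that reading, the modified generator reduces to one rule. A sketch, with hypothetical parameter names:

```python
def next_commit_ts(wall_clock_ts, max_row_ts_touched, last_issued):
    """Modified Lamport rule: treat the maximum timestamp on any row
    updated by this transaction as the 'received' remote counter, and
    issue a commit timestamp strictly greater than both it and anything
    issued locally before, falling back to wall-clock time when that
    is already ahead."""
    return max(wall_clock_ts, max_row_ts_touched + 1, last_issued + 1)
```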
On 1/27/2007 7:26 AM, Gregory Stark wrote:
Jan Wieck [EMAIL PROTECTED] writes:
I think the system I described is a slightly modified Lamport generator. The
maximum timestamp of any row updated in this transaction you can consider as
the counters received from other nodes. Then I make sure
I would be *very* concerned that system time is not a guaranteed
monotonic entity. Surely a counter or other internally managed mechanism
would be a better solution.
Furthermore, what would be the ramifications of master and slave system
times being out of sync?
Finally what if system time
Hi,
Jan Wieck wrote:
The replication system I have in mind will have another field type of
the balance nature, where it will never communicate the current value
but only deltas that get applied regardless of the two timestamps.
I'd favor a more generally usable conflict resolution function
Jan Wieck wrote:
But it is a datum that needs to be collected at the moment where
basically the clog entry is made ... I don't think any external module
can do that ever.
How atomic does it need to be? External modules can register callbacks
that get called right after the clog update and
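The suggested hook shape is the familiar callback-registry pattern. A hypothetical sketch, not the actual backend API:

```python
# Modules register callbacks that are invoked right after the commit
# record (the clog update) is made, in commit order.

_commit_hooks = []

def register_commit_hook(fn):
    """Register a callable invoked as fn(xid) after each commit."""
    _commit_hooks.append(fn)

def record_commit(xid):
    # ... the clog update itself would happen here ...
    for fn in _commit_hooks:
        fn(xid)   # e.g. a replication module stamps xid with its clock
```

Whether such a callback fires atomically enough with the clog update to preserve commit order is exactly the open question in the thread.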
Jan, et al.,
On Jan 26, 2007, at 2:37 AM, Naz Gassiep wrote:
I would be *very* concerned that system time is not a guaranteed
monotonic entity. Surely a counter or other internally managed
mechanism would be a better solution.
As you should be concerned. Looking on my desk through the
On 1/26/2007 2:37 AM, Naz Gassiep wrote:
I would be *very* concerned that system time is not a guaranteed
monotonic entity. Surely a counter or other internally managed mechanism
would be a better solution.
Such a counter has only local relevance. How do you plan to compare
the two separate
On Thu, 2007-01-25 at 18:16 -0500, Jan Wieck wrote:
To provide this data, I would like to add another log directory,
pg_tslog. The files in this directory will be similar to the clog, but
contain arrays of timestamptz values. On commit, the current system time
will be taken. As long as
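A clog-style timestamp log is just fixed-width slots indexed by transaction id. An in-memory sketch (the slot layout is hypothetical, not the proposed pg_tslog on-disk format):

```python
import struct

SLOT = struct.Struct("<q")  # one 8-byte timestamp (microseconds) per xid

class TsLog:
    """Fixed-width slots indexed by xid, so the commit timestamp of any
    transaction is a single offset computation away, just like looking
    up a commit status bit in the clog."""

    def __init__(self):
        self.buf = bytearray()

    def record_commit(self, xid, ts_us):
        off = xid * SLOT.size
        if len(self.buf) < off + SLOT.size:
            # Grow the log; untouched slots read back as zero.
            self.buf.extend(b"\x00" * (off + SLOT.size - len(self.buf)))
        SLOT.pack_into(self.buf, off, ts_us)

    def commit_ts(self, xid):
        return SLOT.unpack_from(self.buf, xid * SLOT.size)[0]
```

Tom's overhead objection maps directly onto this layout: every transaction pays 8 bytes here, versus 2 bits per xid in the clog.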
On 1/26/2007 8:26 AM, Simon Riggs wrote:
On Thu, 2007-01-25 at 18:16 -0500, Jan Wieck wrote:
To provide this data, I would like to add another log directory,
pg_tslog. The files in this directory will be similar to the clog, but
contain arrays of timestamptz values. On commit, the current
* Jan Wieck ([EMAIL PROTECTED]) wrote:
On 1/26/2007 2:37 AM, Naz Gassiep wrote:
I would be *very* concerned that system time is not a guaranteed
monotonic entity. Surely a counter or other internally managed mechanism
would be a better solution.
Such a counter has only local relevance.
Stephen Frost wrote:
I'd also suggest you look into Lamport timestamps... Trusting the
system clock just isn't practical, even with NTP. I've developed
(albeit relatively small) systems using Lamport timestamps and would be
happy to talk about it offlist. I've probably got some code I could
On 1/26/2007 9:38 AM, Stephen Frost wrote:
* Jan Wieck ([EMAIL PROTECTED]) wrote:
On 1/26/2007 2:37 AM, Naz Gassiep wrote:
I would be *very* concerned that system time is not a guaranteed
monotonic entity. Surely a counter or other internally managed mechanism
would be a better solution.
For a future multimaster replication system, I will need a couple of
features in the PostgreSQL server itself. I will submit separate
proposals per feature so that discussions can be kept focused on one
feature per thread.
For conflict resolution purposes in an asynchronous multimaster
On Thu, 2007-01-25 at 18:16 -0500, Jan Wieck wrote:
For conflict resolution purposes in an asynchronous multimaster system,
the last update definition often comes into play. For this to work,
the system must provide a monotonically increasing timestamp taken at
the commit of a transaction.
Jan Wieck [EMAIL PROTECTED] writes:
To provide this data, I would like to add another log directory,
pg_tslog. The files in this directory will be similar to the clog, but
contain arrays of timestamptz values.
Why should everybody be made to pay this overhead?
The COMMIT syntax will get
On 1/25/2007 6:47 PM, Neil Conway wrote:
On Thu, 2007-01-25 at 18:16 -0500, Jan Wieck wrote:
For conflict resolution purposes in an asynchronous multimaster system,
the last update definition often comes into play. For this to work,
the system must provide a monotonically increasing timestamp
Jan Wieck [EMAIL PROTECTED] writes:
On 1/25/2007 6:47 PM, Neil Conway wrote:
Would this feature have any use beyond the specific project/algorithm
you have in mind?
The tablelog project on pgfoundry currently uses the transaction's start
time but would be very delighted to have the commit
On 1/25/2007 6:49 PM, Tom Lane wrote:
Jan Wieck [EMAIL PROTECTED] writes:
To provide this data, I would like to add another log directory,
pg_tslog. The files in this directory will be similar to the clog, but
contain arrays of timestamptz values.
Why should everybody be made to pay this
On 1/25/2007 7:41 PM, Tom Lane wrote:
Jan Wieck [EMAIL PROTECTED] writes:
On 1/25/2007 6:47 PM, Neil Conway wrote:
Would this feature have any use beyond the specific project/algorithm
you have in mind?
The tablelog project on pgfoundry currently uses the transaction's start
time but would
On Thu, 25 Jan 2007, Jan Wieck wrote:
For a future multimaster replication system, I will need a couple of
features in the PostgreSQL server itself. I will submit separate
proposals per feature so that discussions can be kept focused on one
feature per thread.
Hmm... will need ... Have you
On 1/25/2007 8:42 PM, Richard Troy wrote:
On Thu, 25 Jan 2007, Jan Wieck wrote:
For a future multimaster replication system, I will need a couple of
features in the PostgreSQL server itself. I will submit separate
proposals per feature so that discussions can be kept focused on one
feature per
Jan Wieck wrote:
On 1/25/2007 6:49 PM, Tom Lane wrote:
Jan Wieck [EMAIL PROTECTED] writes:
To provide this data, I would like to add another log directory,
pg_tslog. The files in this directory will be similar to the clog, but
contain arrays of timestamptz values.
Why should
On 1/25/2007 11:41 PM, Bruce Momjian wrote:
Jan Wieck wrote:
On 1/25/2007 6:49 PM, Tom Lane wrote:
Jan Wieck [EMAIL PROTECTED] writes:
To provide this data, I would like to add another log directory,
pg_tslog. The files in this directory will be similar to the clog, but
contain arrays of