On Sun, 2010-05-23 at 16:21 -0400, Jan Wieck wrote:
In some systems (data warehousing, replication), the order of commits is
important, since that is the order in which changes have become visible.
This information could theoretically be extracted from the WAL, but
scanning the entire WAL
Excerpts from Marko Kreen's message of Thu Jun 10 18:10:50 -0400 2010:
Jan's proposal of storing small struct into segmented files
sounds like it could work. Can't say anything more because
I can't imagine it as well as Jan. Would need to play with
working implementation to say more...
We
On 6/4/10, Robert Haas robertmh...@gmail.com wrote:
On Fri, Jun 4, 2010 at 10:44 AM, Greg Stark gsst...@mit.edu wrote:
A function which takes a starting xid and a number of transactions to
return seems very tied to one particular application. I could easily
see other systems such as a
On Fri, Jun 4, 2010 at 2:32 AM, Robert Haas robertmh...@gmail.com wrote:
I find the skeptical attitude on this thread altogether unwarranted.
Jan made his case and, at least IMHO, presented it pretty clearly.
Just to be clear I think the idea of exposing commit order is a
no-brainer. The
On Fri, Jun 4, 2010 at 10:44 AM, Greg Stark gsst...@mit.edu wrote:
On Fri, Jun 4, 2010 at 2:32 AM, Robert Haas robertmh...@gmail.com wrote:
I find the skeptical attitude on this thread altogether unwarranted.
Jan made his case and, at least IMHO, presented it pretty clearly.
Just to be clear
Excerpts from Jan Wieck's message of Thu Jun 03 19:52:19 -0400 2010:
On 6/3/2010 7:11 PM, Alvaro Herrera wrote:
Why not send separate numbers of tuple inserts/updates/deletes, which we
already have from pgstats?
We only have them for the entire database. The purpose of this is just a
On 6/4/2010 10:44 AM, Greg Stark wrote:
On Fri, Jun 4, 2010 at 2:32 AM, Robert Haas robertmh...@gmail.com wrote:
I find the skeptical attitude on this thread altogether unwarranted.
Jan made his case and, at least IMHO, presented it pretty clearly.
Just to be clear I think the idea of
On 6/4/2010 12:52 PM, Alvaro Herrera wrote:
Excerpts from Jan Wieck's message of Thu Jun 03 19:52:19 -0400 2010:
On 6/3/2010 7:11 PM, Alvaro Herrera wrote:
Why not send separate numbers of tuple inserts/updates/deletes, which we
already have from pgstats?
We only have them for the entire
On 6/2/2010 7:49 PM, Greg Stark wrote:
On Wed, Jun 2, 2010 at 6:45 PM, Chris Browne cbbro...@acm.org wrote:
It would make it easy to conclude:
This next transaction did 8328194 updates. Maybe we should do
some kind of checkpoint (e.g. - commit transaction or such) before
working on it.
Jan Wieck wrote:
The point is not that we don't have that information now. The point is
having a hint BEFORE wading through possibly gigabytes of WAL or log data.
If getting that information requires reading all the log data twice or
the need to read gigabytes of otherwise useless WAL data
Jan Wieck janwi...@yahoo.com wrote:
I actually have a hard time understanding why people are so
opposed to a feature that has zero impact at all unless a DBA
actually turns it ON. What is the problem with exposing the
commit order of transactions?
FWIW, once I came to understand the use
On 6/3/2010 4:04 PM, Bruce Momjian wrote:
If you want to fork Postgres and add it, go ahead, but if the community
has to maintain the code and document it, we care.
That comment was rather unprofessional. I think the rest of us still try
to find the best solution for the problem, not kill the
br...@momjian.us (Bruce Momjian) writes:
Jan Wieck wrote:
The point is not that we don't have that information now. The point is
having a hint BEFORE wading through possibly gigabytes of WAL or log data.
If getting that information requires reading all the log data twice or
the need to
gsst...@mit.edu (Greg Stark) writes:
On Wed, Jun 2, 2010 at 6:45 PM, Chris Browne cbbro...@acm.org wrote:
It would make it easy to conclude:
This next transaction did 8328194 updates. Maybe we should do
some kind of checkpoint (e.g. - commit transaction or such) before
working on it.
On Thu, Jun 3, 2010 at 8:50 PM, Jan Wieck janwi...@yahoo.com wrote:
I'm puzzled how you would define this value. How do you add 7 inserts,
7 deletes, and 7 updates? Is that 21 rows modified?
I actually have a hard time understanding why people are so opposed to a
feature that has zero impact
Greg Stark gsst...@mit.edu wrote:
what kind of interface you need
For the potential uses I can see, it would be great to have a SRF
which took two parameters: xid of last known commit and a limit how
many commits past that to return. Perhaps a negative number could
move earlier in time, if
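The interface Kevin sketches here (an SRF taking the xid of the last known commit plus a row limit) behaves like a cursor over an ordered commit log. A minimal Python sketch of that lookup against an in-memory list standing in for the log; all names below are mine for illustration, not from any patch:

```python
from dataclasses import dataclass

@dataclass
class CommitEntry:
    xid: int        # committing transaction's id
    commit_ts: int  # commit timestamp (stand-in units)

def next_commits(log, last_xid, limit):
    """Return up to `limit` entries committed after `last_xid`,
    mimicking the proposed SRF(last-known-commit, limit).
    Pass last_xid=None to read from the start of the log."""
    start = 0
    if last_xid is not None:
        for i, entry in enumerate(log):
            if entry.xid == last_xid:
                start = i + 1
                break
    return log[start:start + limit]

# Commit order, not xid order: xid 12 committed before xid 11 here.
log = [CommitEntry(10, 100), CommitEntry(12, 101),
       CommitEntry(11, 102), CommitEntry(13, 103)]
print([e.xid for e in next_commits(log, last_xid=12, limit=2)])  # [11, 13]
```

Because the result is ordered by commit, a consumer can poll repeatedly, feeding back the last xid it saw, and never miss or reorder a transaction.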
Kevin Grittner kevin.gritt...@wicourts.gov wrote:
Greg Stark gsst...@mit.edu wrote:
what kind of interface you need
For the potential uses I can see, it would be great to have a SRF
which took two parameters: xid of last known commit and a limit
how many commits past that to return.
Jan Wieck wrote:
On 6/3/2010 4:04 PM, Bruce Momjian wrote:
If you want to fork Postgres and add it, go ahead, but if the community
has to maintain the code and document it, we care.
That comment was rather unprofessional. I think the rest of us still try
to find the best solution for the
Bruce Momjian wrote:
Jan Wieck wrote:
On 6/3/2010 4:04 PM, Bruce Momjian wrote:
If you want to fork Postgres and add it, go ahead, but if the community
has to maintain the code and document it, we care.
That comment was rather unprofessional. I think the rest of us still try
to
Excerpts from Chris Browne's message of Thu Jun 03 16:21:35 -0400 2010:
What this offers is *SOME* idea of how much updating work a particular
transaction did. It's a bit worse than you suggest:
- If replication triggers have captured tuples, those would get
counted.
- TOAST
On 6/3/2010 5:58 PM, Greg Stark wrote:
On Thu, Jun 3, 2010 at 8:50 PM, Jan Wieck janwi...@yahoo.com wrote:
I'm puzzled how you would define this value. How do you add 7 inserts,
7 deletes, and 7 updates? Is that 21 rows modified?
I actually have a hard time understanding why people are so
On 6/3/2010 6:24 PM, Kevin Grittner wrote:
Apologies for not looking back to the start of the thread before
that last post. It was all laid out right at the start.
No need to apologize. Happens.
Jan
--
Anyone who trades liberty for security deserves neither
liberty nor security. --
On 6/3/2010 7:11 PM, Alvaro Herrera wrote:
Excerpts from Chris Browne's message of Thu Jun 03 16:21:35 -0400 2010:
What this offers is *SOME* idea of how much updating work a particular
transaction did. It's a bit worse than you suggest:
- If replication triggers have captured tuples, those
On Thu, Jun 3, 2010 at 6:29 PM, Bruce Momjian br...@momjian.us wrote:
Jan Wieck wrote:
On 6/3/2010 4:04 PM, Bruce Momjian wrote:
If you want to fork Postgres and add it, go ahead, but if the community
has to maintain the code and document it, we care.
That comment was rather
Bruce Momjian wrote:
I care about cleaning up more of the mistakes made in the original
development of Slony, namely using hacks and kludges to implement
details not supported by the current version of PostgreSQL. Londiste and
Slony made a good leap on that with the txid data type. Slony
d...@csail.mit.edu (Dan Ports) writes:
I'm not clear on why the total rowcount is useful, but perhaps I'm
missing something obvious.
It would make it easy to conclude:
This next transaction did 8328194 updates. Maybe we should do
some kind of checkpoint (e.g. - commit transaction or
heikki.linnakan...@enterprisedb.com (Heikki Linnakangas) writes:
On 24/05/10 19:51, Kevin Grittner wrote:
The only thing I'm confused about is what benefit anyone expects to
get from looking at data between commits in some way other than our
current snapshot mechanism. Can someone explain a
On Wed, Jun 2, 2010 at 6:45 PM, Chris Browne cbbro...@acm.org wrote:
It would make it easy to conclude:
This next transaction did 8328194 updates. Maybe we should do
some kind of checkpoint (e.g. - commit transaction or such) before
working on it.
versus
This transaction we're
On 5/28/2010 7:19 PM, Bruce Momjian wrote:
Jan Wieck wrote:
Reading the entire WAL just to find all COMMIT records, then go back to
the origin database to get the actual replication log you're looking for
is simpler and more efficient? I don't think so.
Agreed, but I think I've not
Jan Wieck wrote:
I see no problem with integrating that into core, technically or
philosophically.
Which means that if I want to allow a consumer of that commit order data
to go offline for three days or so to replicate the 5 requested, low
volume tables, the origin needs to
On 6/1/2010 11:09 AM, Bruce Momjian wrote:
Jan Wieck wrote:
I see no problem with integrating that into core, technically or
philosophically.
Which means that if I want to allow a consumer of that commit order data
to go offline for three days or so to replicate the 5 requested, low
Jan Wieck wrote:
Reading the entire WAL just to find all COMMIT records, then go back to
the origin database to get the actual replication log you're looking for
is simpler and more efficient? I don't think so.
Agreed, but I think I've not explained myself well enough.
I proposed
On May 28, 2010, at 7:19 PM, Bruce Momjian br...@momjian.us wrote:
Jan Wieck wrote:
Reading the entire WAL just to find all COMMIT records, then go
back to
the origin database to get the actual replication log you're
looking for
is simpler and more efficient? I don't think so.
Agreed, but
Jan Wieck janwi...@yahoo.com wrote:
On 5/26/2010 4:34 PM, Kevin Grittner wrote:
My latest idea for handling this in WAL-based replication
involves WAL-logging information about the transaction through
which a the committing transaction makes it safe to view. There
are a few options here at
On Wed, May 26, 2010 at 5:38 PM, Greg Stark gsst...@mit.edu wrote:
How about just API generalities? Like, where do you need this data, on
the master or on the slave? Would PGXC like it on the transaction
coordinator?
What question do you need to answer, do you need to pull out sets of
On 5/27/2010 9:59 AM, Greg Stark wrote:
On Wed, May 26, 2010 at 5:38 PM, Greg Stark gsst...@mit.edu wrote:
How about just API generalities? Like, where do you need this data, on
the master or on the slave? Would PGXC like it on the transaction
coordinator?
What question do you need to answer,
On 5/27/2010 12:01 PM, Jan Wieck wrote:
On 5/27/2010 9:59 AM, Greg Stark wrote:
This thread has been hard to follow for me. Were any of these
questions answered?
Yes.
The thing missing is any sort of answer to that problem description.
Jan
--
Anyone who trades liberty for security
On 5/25/2010 3:18 PM, Kevin Grittner wrote:
Jan Wieck janwi...@yahoo.com wrote:
Have you ever looked at one of those queries, that Londiste or
Slony issue against the provider DB in order to get all the log
data that has been committed between two snapshots? Is that really
the best you can
On 5/26/2010 7:03 AM, Jan Wieck wrote:
To replicate from one consistent state to the next, the replication
system now selects all log rows between two snapshots. "Between" here
means it simulates MVCC visibility in the sense that the writing
transaction was in progress when the first snapshot
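The "between two snapshots" selection rule Jan describes can be written down concretely. A hedged Python sketch using the (xmin, xmax, xip) shape of PostgreSQL's txid_snapshot type; the function names are mine:

```python
def visible_in(xid, snap):
    """True if transaction `xid` had committed before snapshot `snap`
    was taken. snap is (xmin, xmax, xip), as in the txid_snapshot type:
    everything below xmin has finished; xmax and above had not started;
    xip lists transactions still in progress at snapshot time."""
    xmin, xmax, xip = snap
    if xid < xmin:
        return True
    if xid >= xmax:
        return False
    return xid not in xip

def committed_between(xid, snap1, snap2):
    """The selection rule described above: a writer's rows belong to
    this replication round if the writer was still invisible at snap1
    but has become visible by snap2."""
    return (not visible_in(xid, snap1)) and visible_in(xid, snap2)

snap1 = (100, 105, {101, 103})   # 101 and 103 in progress at snap1
snap2 = (106, 110, set())        # everything below 106 finished
print(committed_between(103, snap1, snap2))  # True: in progress, now done
print(committed_between(100, snap1, snap2))  # False: already seen at snap1
```

This is the check that the Londiste/Slony log queries effectively perform in SQL, which is what makes them expensive and motivates exposing the commit order directly.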
On Sun, May 23, 2010 at 9:21 PM, Jan Wieck janwi...@yahoo.com wrote:
Each record of the Transaction Commit Info consists of
txid xci_transaction_id
timestamptz xci_begin_timestamp
timestamptz xci_commit_timestamp
int64 xci_total_rowcount
So I think
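The four fields quoted above, rendered as a small Python record purely for illustration. The proposal's types txid, timestamptz and int64 are modeled with plain ints; nothing here comes from an actual patch:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TransactionCommitInfo:
    xci_transaction_id: int    # txid
    xci_begin_timestamp: int   # timestamptz, as microseconds since epoch
    xci_commit_timestamp: int  # timestamptz, likewise
    xci_total_rowcount: int    # int64: rows touched by the transaction

rec = TransactionCommitInfo(
    xci_transaction_id=12345,
    xci_begin_timestamp=1_000_000,
    xci_commit_timestamp=2_500_000,
    xci_total_rowcount=8328194,
)
# Small fixed-size records like this are what make the segmented-file
# storage discussed earlier in the thread cheap to append and scan.
print(rec.xci_total_rowcount)  # 8328194
```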
On 5/26/2010 10:04 AM, Greg Stark wrote:
On Sun, May 23, 2010 at 9:21 PM, Jan Wieck janwi...@yahoo.com wrote:
Each record of the Transaction Commit Info consists of
txid xci_transaction_id
timestamptz xci_begin_timestamp
timestamptz xci_commit_timestamp
int64
Jan Wieck wrote:
Since the actual row level change information and other event data is
found inside of regular tables, identified by TXID and sequence number,
I am pretty sure I want that data in a server-side query. What you are
proposing is to read the xid's and timestamps with an external
On Wed, May 26, 2010 at 11:43 AM, Steve Singer ssin...@ca.afilias.info wrote:
Jan Wieck wrote:
Since the actual row level change information and other event data is
found inside of regular tables, identified by TXID and sequence number, I am
pretty sure I want that data in a server-side
On 5/26/2010 10:04 AM, Greg Stark wrote:
Instead of discussing implementation I think you should start with the
API the replication system needs.
... but to answer that request, actually I don't even think we should be
discussing API specifics.
During PGCon, Marko Kreen, Jim Nasby and I
On Wed, May 26, 2010 at 5:10 PM, Jan Wieck janwi...@yahoo.com wrote:
... but to answer that request, actually I don't even think we should be
discussing API specifics.
How about just API generalities? Like, where do you need this data, on
the master or on the slave? Would PGXC like it on the
Could you generate the commit-order log by simply registering a commit
hook (RegisterXactCallback(XACT_EVENT_COMMIT)) that writes such a log
somewhere in the data directory? That would work with older versions
too, no server changes required.
It would not get called during recovery, but I
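Heikki's suggestion can be pictured with a toy callback registry. This is a Python simulation only; the real API is the backend's C function RegisterXactCallback with the XACT_EVENT_COMMIT event, and, as he notes, the hook would not fire during recovery:

```python
# Simulated commit-hook machinery; names echo the backend API but
# nothing here is the real C interface.
XACT_EVENT_COMMIT = "commit"
_xact_callbacks = []

def register_xact_callback(fn):
    """Stand-in for the backend's RegisterXactCallback()."""
    _xact_callbacks.append(fn)

def fire_xact_event(event, xid):
    """Invoked by the (simulated) transaction machinery at commit."""
    for fn in _xact_callbacks:
        fn(event, xid)

commit_order_log = []

def record_commit(event, xid):
    # The hook appends one entry per commit, yielding commit order
    # without any WAL scanning.
    if event == XACT_EVENT_COMMIT:
        commit_order_log.append(xid)

register_xact_callback(record_commit)

for xid in (42, 40, 41):   # commit order need not match xid order
    fire_xact_event(XACT_EVENT_COMMIT, xid)
print(commit_order_log)    # [42, 40, 41]
```

The recovery gap is the weak point: commits replayed from WAL after a crash would never pass through the hook, which is why the thread discusses batching those as if they committed simultaneously.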
On 5/26/2010 1:17 PM, Heikki Linnakangas wrote:
Could you generate the commit-order log by simply registering a commit
hook (RegisterXactCallback(XACT_EVENT_COMMIT)) that writes such a log
somewhere in the data directory? That would work with older versions
too, no server changes required.
On 26/05/10 21:43, Jan Wieck wrote:
On 5/26/2010 1:17 PM, Heikki Linnakangas wrote:
It would not get called during recovery, but I believe that would be
sufficient for Slony. You could always batch commits that you don't
know when they committed as if they committed simultaneously.
Here you
Heikki Linnakangas heikki.linnakan...@enterprisedb.com writes:
Perhaps I'm missing something, but I thought that Slony currently uses a
heartbeat, and all transactions committed between two beats are banged
together and committed as one in the slave so that their relative commit
order doesn't
On Wed, May 26, 2010 at 4:11 PM, Dimitri Fontaine
dfonta...@hi-media.com wrote:
Heikki Linnakangas heikki.linnakan...@enterprisedb.com writes:
Perhaps I'm missing something, but I thought that Slony currently uses a
heartbeat, and all transactions committed between two beats are banged
On 5/26/2010 3:16 PM, Heikki Linnakangas wrote:
On 26/05/10 21:43, Jan Wieck wrote:
On 5/26/2010 1:17 PM, Heikki Linnakangas wrote:
It would not get called during recovery, but I believe that would be
sufficient for Slony. You could always batch commits that you don't
know when they committed
Jan Wieck janwi...@yahoo.com wrote:
Without this logic, the replication system could not combine
multiple origin sessions into one replication session without
risking never finding a state in which it can commit.
My latest idea for handling this in WAL-based replication involves
On 5/26/2010 4:11 PM, Dimitri Fontaine wrote:
So even ordering the txid and txid_snapshots with respect to WAL commit
time (LSN) won't be the whole story, for any given transaction
containing more than one event we also need to have them in order. I
know Jan didn't forget about it so it must
On 26/05/10 23:32, Jan Wieck wrote:
Consider two transactions A and B that due to transaction batching
between snapshots get applied together. Let the order of actions be
1. A starts
2. B starts
3. B selects a row for update, then updates the row
4. A tries to do the same and blocks
5. B
On 5/26/2010 12:38 PM, Greg Stark wrote:
On Wed, May 26, 2010 at 5:10 PM, Jan Wieck janwi...@yahoo.com wrote:
... but to answer that request, actually I don't even think we should be
discussing API specifics.
How about just API generalities? Like, where do you need this data, on
the master
On 26/05/10 23:45, Heikki Linnakangas wrote:
On 26/05/10 23:32, Jan Wieck wrote:
Consider two transactions A and B that due to transaction batching
between snapshots get applied together. Let the order of actions be
1. A starts
2. B starts
3. B selects a row for update, then updates the row
4.
On 26/05/10 23:49, Jan Wieck wrote:
In this implementation it wouldn't even matter if a transaction that was
recorded actually never made it because it crashed before the WAL flush.
It would be reported by this commit order feature, but there would be
no traces of whatever it did to be found
On 5/26/2010 4:52 PM, Heikki Linnakangas wrote:
Ok, I think I understand it now. The commit order is enough, because
replaying the actions in the order all actions of B, then all actions
of A yields the same result.
Precisely.
Jan
--
Anyone who trades liberty for security deserves neither
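The equivalence Heikki and Jan settle on here can be checked with a tiny replay model: apply each transaction's actions in full, ordered by commit. With the row-lock scenario from earlier in the thread (B updated the row first, A updated it after B committed), commit order reproduces the origin's final state while begin order does not. All names are illustrative:

```python
def replay(initial, transactions):
    """Apply each transaction's updates in full, in list order."""
    row = initial
    for tx in transactions:
        for new_value in tx:
            row = new_value
    return row

B = ["set-by-B"]   # B took the row lock first and committed first
A = ["set-by-A"]   # A blocked on B's lock, updated after B committed

print(replay(None, [B, A]))  # set-by-A: commit order matches the origin
print(replay(None, [A, B]))  # set-by-B: begin order gives the wrong state
```

Row locking on the origin guarantees that conflicting writers commit in a serial order, so replaying whole transactions in that order is sufficient.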
On 5/26/2010 5:12 PM, Heikki Linnakangas wrote:
On 26/05/10 23:49, Jan Wieck wrote:
In this implementation it wouldn't even matter if a transaction that was
recorded actually never made it because it crashed before the WAL flush.
It would be reported by this commit order feature, but there
On 5/26/2010 4:34 PM, Kevin Grittner wrote:
Jan Wieck janwi...@yahoo.com wrote:
Without this logic, the replication system could not combine
multiple origin sessions into one replication session without
risking never finding a state in which it can commit.
My latest idea for handling this
On May 25, 2010, at 3:21 , Tom Lane wrote:
Florian Pflug f...@phlo.org writes:
The subtle point here is whether you consider the view from the outside
(in the sense of what a read-only transaction started at an arbitrary time
can or cannot observe), or from the inside (what updating
2010/5/25 Dan Ports d...@csail.mit.edu:
On Mon, May 24, 2010 at 10:24:07AM -0500, Kevin Grittner wrote:
Replicating or recreating the whole predicate locking and conflict
detection on slaves is not feasible for performance reasons. (I
won't elaborate unless someone feels that's not
On Sun, 2010-05-23 at 16:21 -0400, Jan Wieck wrote:
In some systems (data warehousing, replication), the order of commits is
important, since that is the order in which changes have become visible.
This information could theoretically be extracted from the WAL, but
scanning the entire WAL
On Tue, May 25, 2010 at 02:00:42PM +0200, Nicolas Barbier wrote:
I don't understand the problem. According to me, in the context of
SSI, a read-only slave can just map SERIALIZABLE to the technical
implementation of REPEATABLE READ (i.e., the currently-existing
SERIALIZABLE). The union of the
On May 25, 2010, at 20:18 , Dan Ports wrote:
On Tue, May 25, 2010 at 02:00:42PM +0200, Nicolas Barbier wrote:
I don't understand the problem. According to me, in the context of
SSI, a read-only slave can just map SERIALIZABLE to the technical
implementation of REPEATABLE READ (i.e., the
Florian Pflug f...@phlo.org wrote:
Hm, so in fact SSI sometimes allows the database to be
inconsistent, but only as long as nobody tries to observe it?
Not exactly. The eventually-persisted state is always consistent,
but there can be a transitory committed state which would violate
On Tue, May 25, 2010 at 08:35:44PM +0200, Florian Pflug wrote:
Hm, so in fact SSI sometimes allows the database to be inconsistent, but only
as long as nobody tries to observe it?
Yes. Note that even while it's in an inconsistent state, you can still
perform any query that doesn't observe the
Jan Wieck janwi...@yahoo.com wrote:
Have you ever looked at one of those queries, that Londiste or
Slony issue against the provider DB in order to get all the log
data that has been committed between two snapshots? Is that really
the best you can think of?
No, I admit I haven't. In fact,
2010/5/25 Dan Ports d...@csail.mit.edu:
On Tue, May 25, 2010 at 02:00:42PM +0200, Nicolas Barbier wrote:
I don't understand the problem. According to me, in the context of
SSI, a read-only slave can just map SERIALIZABLE to the technical
implementation of REPEATABLE READ (i.e., the
On May 25, 2010, at 20:48 , Dan Ports wrote:
On Tue, May 25, 2010 at 08:35:44PM +0200, Florian Pflug wrote:
Hm, so in fact SSI sometimes allows the database to be inconsistent, but
only as long as nobody tries to observe it?
Yes. Note that even while it's in an inconsistent state, you can
2010/5/25 Florian Pflug f...@phlo.org:
On May 25, 2010, at 20:18 , Dan Ports wrote:
T3, which is a read-only transaction, sees the incremented date and an
empty list of receipts. But T1 later commits a new entry in the
receipts table with the old date. No serializable ordering allows this.
2010/5/25 Florian Pflug f...@phlo.org:
Hm, but for there to be an actual problem (and not a false positive), an
actual dangerous circle has to exist in the dependency graph. The
existence of a dangerous structure is just a necessary (but not
sufficient) and easily checked-for condition for
Florian Pflug f...@phlo.org wrote:
Hm, but for there to be an actual problem (and not a false
positive), an actual dangerous circle has to exist in the
dependency graph. The existence of a dangerous structure is just a
necessary (but not sufficient) and easily checked-for condition
for
On 5/24/2010 9:30 AM, Greg Sabino Mullane wrote:
In light of the proposed purging scheme, how would it be able to distinguish
between those two cases (nothing there yet vs. was there but purged)?
There is a difference between an empty
Jan Wieck janwi...@yahoo.com writes:
No, I meant how will the *function* know, if a superuser and/or some
background process can purge records at any time?
The data contains timestamps which are supposedly taken in commit order.
You can *not* rely on the commit timestamps to be in exact
Robert Haas robertmh...@gmail.com wrote:
maybe we should get serializable working and committed on one
node first and then worry about how to distribute it. I think
there might be other approaches to this problem
Well, I've got two or three other ideas on how we can manage this
for HS, but
On 5/25/2010 12:03 PM, Simon Riggs wrote:
On Sun, 2010-05-23 at 16:21 -0400, Jan Wieck wrote:
In some systems (data warehousing, replication), the order of commits is
important, since that is the order in which changes have become visible.
This information could theoretically be extracted from
On Tue, 2010-05-25 at 16:41 -0400, Jan Wieck wrote:
On 5/25/2010 12:03 PM, Simon Riggs wrote:
On Sun, 2010-05-23 at 16:21 -0400, Jan Wieck wrote:
In some systems (data warehousing, replication), the order of commits is
important, since that is the order in which changes have become
On 5/25/2010 4:50 PM, Simon Riggs wrote:
On Tue, 2010-05-25 at 16:41 -0400, Jan Wieck wrote:
On 5/25/2010 12:03 PM, Simon Riggs wrote:
On Sun, 2010-05-23 at 16:21 -0400, Jan Wieck wrote:
In some systems (data warehousing, replication), the order of commits is
important, since that is the
On 5/25/2010 4:16 PM, Tom Lane wrote:
Jan Wieck janwi...@yahoo.com writes:
No, I meant how will the *function* know, if a superuser and/or some
background process can purge records at any time?
The data contains timestamps which are supposedly taken in commit order.
You can *not* rely on
In light of the proposed purging scheme, how would it be able to distinguish
between those two cases (nothing there yet vs. was there but purged)?
There is a difference between an empty result set and an exception.
No, I meant how will
On Sun, May 23, 2010 at 9:44 PM, Jan Wieck janwi...@yahoo.com wrote:
I'm not sure the retention policies of the shared buffer cache, the WAL
buffers, CLOG buffers and every other thing we try to cache are that easy to
fold into one single set of logic. But I'm all ears.
I'm not sure either,
Jan Wieck wrote:
In some systems (data warehousing, replication), the order of
commits is important, since that is the order in which changes
have become visible.
This issue intersects with the serializable work I've been doing.
While in database transactions using S2PL the above is true,
On Mon, May 24, 2010 at 11:24 AM, Kevin Grittner
kevin.gritt...@wicourts.gov wrote:
Jan Wieck wrote:
In some systems (data warehousing, replication), the order of
commits is important, since that is the order in which changes
have become visible.
This issue intersects with the serializable
Robert Haas wrote:
I think you're confusing two subtly different things.
The only thing I'm confused about is what benefit anyone expects to
get from looking at data between commits in some way other than our
current snapshot mechanism. Can someone explain a use case where
what Jan is
On 24/05/10 19:51, Kevin Grittner wrote:
The only thing I'm confused about is what benefit anyone expects to
get from looking at data between commits in some way other than our
current snapshot mechanism. Can someone explain a use case where
what Jan is proposing is better than snapshot
On Mon, May 24, 2010 at 12:51 PM, Kevin Grittner
kevin.gritt...@wicourts.gov wrote:
Robert Haas wrote:
I think you're confusing two subtly different things.
The only thing I'm confused about is what benefit anyone expects to
get from looking at data between commits in some way other than our
Excerpts from Robert Haas's message of Sun May 23 20:38:14 -0400 2010:
On Sun, May 23, 2010 at 4:21 PM, Jan Wieck janwi...@yahoo.com wrote:
The system will have postgresql.conf options for enabling/disabling the
whole shebang, how many shared buffers to allocate for managing access
to the
On Mon, May 24, 2010 at 3:07 PM, Alvaro Herrera alvhe...@alvh.no-ip.org wrote:
It'd be nice to have as a side effect, but if not, IMHO this proposal
could simply use a fixed buffer pool like all other slru.c callers until
someone gets around to fixing that. Adding more GUC switches for this
On Sun, May 23, 2010 at 04:21:58PM -0400, Jan Wieck wrote:
In some systems (data warehousing, replication), the order of commits is
important, since that is the order in which changes have become visible.
This information could theoretically be extracted from the WAL, but
scanning the entire
Robert Haas wrote:
It's a tool for replication solutions to use.
I was thrown by the original post referencing data warehousing.
For replication I definitely see that it would be good to provide
some facility to grab a coherent snapshot out of the transaction
stream, but I'm still not clear
On Mon, May 24, 2010 at 4:03 PM, Kevin Grittner
kevin.gritt...@wicourts.gov wrote:
Robert Haas wrote:
It's a tool for replication solutions to use.
I was thrown by the original post referencing data warehousing.
For replication I definitely see that it would be good to provide
some
On 5/24/2010 12:51 PM, Kevin Grittner wrote:
Robert Haas wrote:
I think you're confusing two subtly different things.
The only thing I'm confused about is what benefit anyone expects to
get from looking at data between commits in some way other than our
current snapshot mechanism. Can
On 5/24/2010 3:10 PM, Dan Ports wrote:
I'm not clear on why the total rowcount is useful, but perhaps I'm
missing something obvious.
It is a glimpse into the future. Several years of pain doing replication
work has taught me that knowing approximately how much work the next
chunk will be
On Mon, May 24, 2010 at 10:24:07AM -0500, Kevin Grittner wrote:
Jan Wieck wrote:
In some systems (data warehousing, replication), the order of
commits is important, since that is the order in which changes
have become visible.
This issue intersects with the serializable work I've
On May 25, 2010, at 0:42 , Dan Ports wrote:
On Mon, May 24, 2010 at 10:24:07AM -0500, Kevin Grittner wrote:
Jan Wieck wrote:
In some systems (data warehousing, replication), the order of
commits is important, since that is the order in which changes
have become visible.
This issue
Florian Pflug f...@phlo.org writes:
The subtle point here is whether you consider the view from the outside (in
the sense of what a read-only transaction started at an arbitrary time can or
cannot observe), or from the inside (what updating transactions can observe
and might base their
In some systems (data warehousing, replication), the order of commits is
important, since that is the order in which changes have become visible.
This information could theoretically be extracted from the WAL, but
scanning the entire WAL just to extract this tidbit of information would
be
Exposing the data will be done via a set returning function. The SRF
takes two arguments. The maximum number of rows to return and the last
serial number processed by the reader. The advantage of such SRF is that
the result can be used in a
On Sun, May 23, 2010 at 4:21 PM, Jan Wieck janwi...@yahoo.com wrote:
The system will have postgresql.conf options for enabling/disabling the
whole shebang, how many shared buffers to allocate for managing access
to the data and to define the retention period of the data based on data
volume
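The knobs described in that last excerpt might look roughly like this in postgresql.conf. Every setting name below is hypothetical (none of them exists in PostgreSQL); they only illustrate the three controls the post names: an on/off switch, a shared-buffer count, and a volume-based retention policy.

```
# Hypothetical settings, shapes only -- not real PostgreSQL GUCs:
commit_info = on                # enable the whole shebang
commit_info_buffers = 16        # shared buffers managing access to the data
commit_info_retention = '1GB'   # purge old records based on data volume
```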