On Fri, Dec 24, 2010 at 06:37:26PM -0500, Andrew Dunstan wrote:
On 12/24/2010 06:26 PM, Aidan Van Dyk wrote:
On Fri, Dec 24, 2010 at 2:48 PM, Joshua D. Drake j...@commandprompt.com
wrote:
I would have to agree here. The idea that we have to search email
is bad enough (issue/bug/feature
On Mon, Dec 6, 2010 at 7:22 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Josh Berkus j...@agliodbs.com writes:
However, if you were doing something like parallel pg_dump you could
just run the parent and child instances all against the slave, so the
pg_dump scenario doesn't seem to offer much of
Robert Haas wrote:
I actually think that the phrase "this has been discussed before and
rejected" should be permanently removed from our list of excuses for
rejecting a patch. Or if we must use that excuse, then I think a link
to the relevant discussion is a must, and the relevant discussion
Agreed. Perhaps we need an anti-TODO that lists things we don't want in
more detail. The TODO has that for a few items, but scaling things up
there will be cumbersome.
Well there is a problem with this too. A good example is hints. A lot of
the community wants hints. A lot
On Fri, Dec 24, 2010 at 2:48 PM, Joshua D. Drake j...@commandprompt.com wrote:
I would have to agree here. The idea that we have to search email is bad
enough (issue/bug/feature tracker, anyone?) but to have someone say,
"search the archives"? That is just plain rude and anti-community.
Saying
On 12/24/2010 06:26 PM, Aidan Van Dyk wrote:
On Fri, Dec 24, 2010 at 2:48 PM, Joshua D. Drake j...@commandprompt.com wrote:
I would have to agree here. The idea that we have to search email is bad
enough (issue/bug/feature tracker, anyone?) but to have someone say,
"search the archives"? That
On Fri, 2010-12-24 at 18:26 -0500, Aidan Van Dyk wrote:
On Fri, Dec 24, 2010 at 2:48 PM, Joshua D. Drake j...@commandprompt.com
wrote:
I would have to agree here. The idea that we have to search email is bad
enough (issue/bug/feature tracker anyone?) but to have someone say,
search the
On Dec 24, 2010, at 10:52 AM, Bruce Momjian br...@momjian.us wrote:
Agreed. Perhaps we need an anti-TODO that lists things we don't want in
more detail. The TODO has that for a few items, but scaling things up
there will be cumbersome.
I don't really think that'd be much better. What might
On Tue, Dec 7, 2010 at 3:23 AM, Koichi Suzuki koichi@gmail.com wrote:
This is what Postgres-XC is doing between a coordinator and a
datanode. Coordinator may correspond to poolers/loadbalancers.
Does anyone think it makes sense to extract XC implementation of
snapshot shipping to
Robert;
Thank you very much for your advice. Indeed, I'm considering changing
the license to PostgreSQL's. It may take a bit more time,
though...
--
Koichi Suzuki
2010/12/15 Robert Haas robertmh...@gmail.com:
On Tue, Dec 7, 2010 at 3:23 AM, Koichi Suzuki koichi@gmail.com wrote:
On Tue, Dec 14, 2010 at 7:06 PM, Koichi Suzuki koichi@gmail.com wrote:
Thank you very much for your advice. Indeed, I'm considering changing
the license to PostgreSQL's. It may take a bit more time,
though...
You wouldn't necessarily need to relicense all of Postgres-XC
(although that
This is what Postgres-XC is doing between a coordinator and a
datanode. Coordinator may correspond to poolers/loadbalancers.
Does anyone think it makes sense to extract XC implementation of
snapshot shipping to PostgreSQL itself?
Cheers;
--
Koichi Suzuki
2010/12/7 Stefan
On 12/07/2010 09:23 AM, Koichi Suzuki wrote:
This is what Postgres-XC is doing between a coordinator and a
datanode. Coordinator may correspond to poolers/loadbalancers.
Does anyone think it makes sense to extract XC implementation of
snapshot shipping to PostgreSQL itself?
well if there
On Mon, Dec 6, 2010 at 2:29 AM, Heikki Linnakangas
heikki.linnakan...@enterprisedb.com wrote:
On 06.12.2010 02:55, Robert Haas wrote:
On Sun, Dec 5, 2010 at 1:28 PM, Tom Lane t...@sss.pgh.pa.us wrote:
I'm wondering if we should reconsider the pass-it-through-the-client
approach, because if
On 06.12.2010 14:57, Robert Haas wrote:
On Mon, Dec 6, 2010 at 2:29 AM, Heikki Linnakangas
heikki.linnakan...@enterprisedb.com wrote:
The client doesn't need to know anything about the snapshot blob that the
server gives it. It just needs to pass it back to the server through the
other
On Mon, Dec 6, 2010 at 9:45 AM, Heikki Linnakangas
heikki.linnakan...@enterprisedb.com wrote:
On 06.12.2010 14:57, Robert Haas wrote:
On Mon, Dec 6, 2010 at 2:29 AM, Heikki Linnakangas
heikki.linnakan...@enterprisedb.com wrote:
The client doesn't need to know anything about the snapshot
On 06.12.2010 15:53, Robert Haas wrote:
I guess. It still seems far too much like exposing the server's guts
for my taste. It might not be as bad as the expression tree stuff,
but there's nothing particularly good about it either.
Note that we already have txid_current_snapshot() function,
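The snapshot text that txid_current_snapshot() returns has the shape xmin:xmax:xip_list. As a rough illustration of what that blob encodes, here is a simplified sketch in Python; it is not PostgreSQL's visibility code, and it assumes the xid being tested belongs to a committed transaction:

```python
# Simplified sketch of the "xmin:xmax:xip_list" snapshot text and the
# visibility rule it supports. Illustration only, not PostgreSQL's
# implementation; assumes the tested xid is from a committed transaction.

def parse_snapshot(text):
    """Split 'xmin:xmax:xip1,xip2,...' into (xmin, xmax, in-progress set)."""
    xmin_s, xmax_s, xip_s = text.split(":")
    xip = {int(x) for x in xip_s.split(",")} if xip_s else set()
    return int(xmin_s), int(xmax_s), xip

def xid_visible(snapshot, xid):
    """A committed xid's effects are visible iff it was neither still in
    progress nor yet to start when the snapshot was taken."""
    xmin, xmax, xip = snapshot
    if xid >= xmax:           # started after the snapshot: invisible
        return False
    return xid not in xip     # in progress at snapshot time: invisible

snap = parse_snapshot("100:104:100,102")
print(xid_visible(snap, 99))   # True: committed before the snapshot
print(xid_visible(snap, 102))  # False: in progress at snapshot time
print(xid_visible(snap, 104))  # False: not yet started
```

Shipping exactly this kind of opaque text between connections is what the pass-it-through-the-client idea amounts to.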
On Mon, Dec 6, 2010 at 9:58 AM, Heikki Linnakangas
heikki.linnakan...@enterprisedb.com wrote:
On 06.12.2010 15:53, Robert Haas wrote:
I guess. It still seems far too much like exposing the server's guts
for my taste. It might not be as bad as the expression tree stuff,
but there's nothing
On 12/06/2010 10:22 AM, Robert Haas wrote:
On Mon, Dec 6, 2010 at 9:58 AM, Heikki Linnakangas
heikki.linnakan...@enterprisedb.com wrote:
On 06.12.2010 15:53, Robert Haas wrote:
I guess. It still seems far too much like exposing the server's guts
for my taste. It might not be as bad as the
On Mon, Dec 6, 2010 at 10:35 AM, Andrew Dunstan and...@dunslane.net wrote:
On 12/06/2010 10:22 AM, Robert Haas wrote:
On Mon, Dec 6, 2010 at 9:58 AM, Heikki Linnakangas
heikki.linnakan...@enterprisedb.com wrote:
On 06.12.2010 15:53, Robert Haas wrote:
I guess. It still seems far too much
Robert Haas robertmh...@gmail.com writes:
On Mon, Dec 6, 2010 at 9:45 AM, Heikki Linnakangas
heikki.linnakan...@enterprisedb.com wrote:
Well, then you need some sort of cross-backend communication, which is
always a bit clumsy.
A temp file seems quite sufficient, and not at all difficult.
On Mon, Dec 6, 2010 at 10:40 AM, Tom Lane t...@sss.pgh.pa.us wrote:
Robert Haas robertmh...@gmail.com writes:
On Mon, Dec 6, 2010 at 9:45 AM, Heikki Linnakangas
heikki.linnakan...@enterprisedb.com wrote:
Well, then you need some sort of cross-backend communication, which is
always a bit
On 12/06/2010 10:40 AM, Tom Lane wrote:
Robert Haas robertmh...@gmail.com writes:
On Mon, Dec 6, 2010 at 9:45 AM, Heikki Linnakangas
heikki.linnakan...@enterprisedb.com wrote:
Well, then you need some sort of cross-backend communication, which is
always a bit clumsy.
A temp file seems
Andrew Dunstan and...@dunslane.net writes:
Yeah. I'm still not convinced that using shared memory is a bad way to
pass these around. Surely we're not talking about large numbers of them.
What am I missing here?
They're not of a very predictable size.
Robert's idea of publish() returning a
Andrew Dunstan and...@dunslane.net writes:
Why not just say give me the snapshot currently held by process ?
There's not a unique snapshot held by a particular process. Also, we
don't want to expend the overhead to fully publish every snapshot.
I think it's really necessary that the sending
On 12/06/2010 12:28 PM, Tom Lane wrote:
Andrew Dunstan and...@dunslane.net writes:
Yeah. I'm still not convinced that using shared memory is a bad way to
pass these around. Surely we're not talking about large numbers of them.
What am I missing here?
They're not of a very predictable size.
Tom Lane t...@sss.pgh.pa.us wrote:
I'm still not convinced that using shared memory is a bad way to
pass these around. Surely we're not talking about large numbers
of them. What am I missing here?
They're not of a very predictable size.
Surely you can predict that any snapshot is no
Kevin Grittner kevin.gritt...@wicourts.gov writes:
Tom Lane t...@sss.pgh.pa.us wrote:
I'm still not convinced that using shared memory is a bad way to
pass these around. Surely we're not talking about large numbers
of them. What am I missing here?
They're not of a very predictable size.
Tom Lane t...@sss.pgh.pa.us wrote:
Kevin Grittner kevin.gritt...@wicourts.gov writes:
Surely you can predict that any snapshot is no larger than a fairly
small fixed portion plus sizeof(TransactionId) * MaxBackends?
No. See subtransactions.
Subtransactions are included in snapshots?
Kevin Grittner kevin.gritt...@wicourts.gov writes:
Tom Lane t...@sss.pgh.pa.us wrote:
No. See subtransactions.
Subtransactions are included in snapshots?
Sure, see GetSnapshotData(). You could avoid it by setting
suboverflowed, but that comes at a nontrivial performance cost.
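To put rough numbers on the size question: a worked example, where the 4-byte xid, the hypothetical 100-backend cluster, and the 64-entry cached-subxid limit per backend (cf. PGPROC_MAX_CACHED_SUBXIDS) are all illustrative assumptions:

```python
# Back-of-envelope numbers for the snapshot-size exchange above. All the
# constants are assumptions for illustration: 4-byte TransactionIds, a
# hypothetical 100-backend cluster, and 64 cached subtransaction XIDs per
# backend (cf. PGPROC_MAX_CACHED_SUBXIDS).

SIZEOF_XID = 4            # bytes per TransactionId
MAX_BACKENDS = 100        # hypothetical max_connections
SUBXIDS_PER_BACKEND = 64  # cached subxids each backend may contribute

top_level_only = SIZEOF_XID * MAX_BACKENDS
with_subxacts = SIZEOF_XID * MAX_BACKENDS * (1 + SUBXIDS_PER_BACKEND)

print(top_level_only)  # 400 bytes for the xip array alone
print(with_subxacts)   # 26000 bytes once subxids are counted too
```

So the subxid contribution, not the top-level xip array, dominates any worst-case bound.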
Tom Lane t...@sss.pgh.pa.us wrote:
Kevin Grittner kevin.gritt...@wicourts.gov writes:
Tom Lane t...@sss.pgh.pa.us wrote:
No. See subtransactions.
Subtransactions are included in snapshots?
Sure, see GetSnapshotData(). You could avoid it by setting
suboverflowed, but that comes at a
On Sun, Dec 5, 2010 at 7:28 PM, Tom Lane t...@sss.pgh.pa.us wrote:
IIRC, in old discussions of this problem we first considered allowing
clients to pull down an explicit representation of their snapshot (which
actually is an existing feature now, txid_current_snapshot()) and then
upload that
On 06.12.2010 21:48, marcin mank wrote:
On Sun, Dec 5, 2010 at 7:28 PM, Tom Lane t...@sss.pgh.pa.us wrote:
IIRC, in old discussions of this problem we first considered allowing
clients to pull down an explicit representation of their snapshot (which
actually is an existing feature now,
marcin mank marcin.m...@gmail.com writes:
On Sun, Dec 5, 2010 at 7:28 PM, Tom Lane t...@sss.pgh.pa.us wrote:
IIRC, in old discussions of this problem we first considered allowing
clients to pull down an explicit representation of their snapshot (which
actually is an existing feature now,
However, if you were doing something like parallel pg_dump you could
just run the parent and child instances all against the slave, so the
pg_dump scenario doesn't seem to offer much of a supporting use-case for
worrying about this. When would you really need to be able to do it?
If you had
Josh Berkus j...@agliodbs.com writes:
However, if you were doing something like parallel pg_dump you could
just run the parent and child instances all against the slave, so the
pg_dump scenario doesn't seem to offer much of a supporting use-case for
worrying about this. When would you really
We may need other means to ensure that the snapshot is available on
the slave. It could be too early to use the snapshot on the slave,
depending on the WAL replay delay.
--
Koichi Suzuki
2010/12/7 Tom Lane t...@sss.pgh.pa.us:
marcin mank marcin.m...@gmail.com writes:
On
On 12/07/2010 01:22 AM, Tom Lane wrote:
Josh Berkus j...@agliodbs.com writes:
However, if you were doing something like parallel pg_dump you could
just run the parent and child instances all against the slave, so the
pg_dump scenario doesn't seem to offer much of a supporting use-case for
On 12/07/2010 01:22 AM, Tom Lane wrote:
Josh Berkus j...@agliodbs.com writes:
However, if you were doing something like parallel pg_dump you could
just run the parent and child instances all against the slave, so the
pg_dump scenario doesn't seem to offer much of a supporting use-case for
Joachim Wieland wrote:
Regarding snapshot cloning and dump consistency, I brought this up
already several months ago and asked if the feature is considered
useful even without snapshot cloning.
In addition, Joachim submitted a synchronized snapshot patch that looks
to me like it slipped
Greg Smith g...@2ndquadrant.com writes:
In addition, Joachim submitted a synchronized snapshot patch that looks
to me like it slipped through the cracks without being fully explored.
...
The way I read that thread, there were two objections:
1) This mechanism isn't general enough for all
On Sun, Dec 5, 2010 at 1:28 PM, Tom Lane t...@sss.pgh.pa.us wrote:
I'm wondering if we should reconsider the pass-it-through-the-client
approach, because if we could make that work it would be more general and
it wouldn't need any special privileges. The trick seems to be to apply
sufficient
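One way the pass-it-through-the-client idea could look, sketched with invented names: the first backend serializes its snapshot into an opaque token, the client ferries the token to a second connection, and the second backend validates it before adopting it. Everything here (the token format, the keyed hash, the helper names) is an assumption for illustration, not a proposed wire format:

```python
# Sketch of the "pass it through the client" flow with invented helpers.
# The server serializes a snapshot to an opaque token with a keyed tag, so
# a later backend can reject a token the client has tampered with.
import hashlib
import json

SERVER_KEY = b"per-cluster secret"  # assumption: known to all backends


def export_snapshot(snapshot: dict) -> str:
    """Serialize a snapshot and append a keyed-hash tag for validation."""
    body = json.dumps(snapshot, sort_keys=True)
    tag = hashlib.sha256(SERVER_KEY + body.encode()).hexdigest()
    return body + "." + tag


def import_snapshot(token: str) -> dict:
    """Re-check the tag before trusting the client-supplied snapshot."""
    body, tag = token.rsplit(".", 1)
    if hashlib.sha256(SERVER_KEY + body.encode()).hexdigest() != tag:
        raise ValueError("snapshot token failed validation")
    return json.loads(body)


token = export_snapshot({"xmin": 100, "xmax": 104, "xip": [100, 102]})
print(import_snapshot(token)["xmax"])  # 104: round-trips intact
```

The validation step is the crux: once the snapshot travels through the client, the server can no longer assume it is trustworthy.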
On 12/05/2010 08:55 PM, Robert Haas wrote:
On Sun, Dec 5, 2010 at 1:28 PM, Tom Lane t...@sss.pgh.pa.us wrote:
I'm wondering if we should reconsider the pass-it-through-the-client
approach, because if we could make that work it would be more general and
it wouldn't need any special privileges.
On Sun, Dec 5, 2010 at 9:04 PM, Andrew Dunstan and...@dunslane.net wrote:
Why not just say give me the snapshot currently held by process ?
And please, not temp files if possible.
As far as I'm aware, the full snapshot doesn't normally exist in
shared memory, hence the need for publication
On Sun, Dec 5, 2010 at 9:27 PM, Robert Haas robertmh...@gmail.com wrote:
On Sun, Dec 5, 2010 at 9:04 PM, Andrew Dunstan and...@dunslane.net wrote:
Why not just say give me the snapshot currently held by process ?
And please, not temp files if possible.
As far as I'm aware, the full
Thank you Joachim;
Yes, and the current patch requires that the original (publisher)
transaction stay alive to prevent RecentXmin from being updated.
I hope this restriction is acceptable if publishing/subscribing is
provided via functions, not statements.
Cheers;
--
Koichi Suzuki
2010/12/6 Joachim
On 06.12.2010 02:55, Robert Haas wrote:
On Sun, Dec 5, 2010 at 1:28 PM, Tom Lane t...@sss.pgh.pa.us wrote:
I'm wondering if we should reconsider the pass-it-through-the-client
approach, because if we could make that work it would be more general and
it wouldn't need any special privileges. The
On Thu, Dec 2, 2010 at 9:33 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Andrew Dunstan and...@dunslane.net writes:
Umm, nobody has attributed ridiculousness to anyone. Please don't put
words in my mouth. But I think this is a perfectly reasonable discussion
to have. Nobody gets to come along and
On 12/02/2010 11:44 PM, Joachim Wieland wrote:
On Thu, Dec 2, 2010 at 9:33 PM, Tom Lane t...@sss.pgh.pa.us wrote:
In particular, this issue *has* been discussed before, and there was a
consensus that preserving dump consistency was a requirement. I don't
think that Joachim gets to bypass
On Fri, Dec 3, 2010 at 8:02 AM, Andrew Dunstan and...@dunslane.net wrote:
I think Josh Berkus' comments in the thread you mentioned are correct:
Actually, I'd say that there's a broad set of cases of people who want
to do a parallel pg_dump while their system is active. Parallel pg_dump
on a
On 12/03/2010 11:23 AM, Robert Haas wrote:
On Fri, Dec 3, 2010 at 8:02 AM, Andrew Dunstan and...@dunslane.net wrote:
I think Josh Berkus' comments in the thread you mentioned are correct:
Actually, I'd say that there's a broad set of cases of people who want
to do a parallel pg_dump while
On Fri, Dec 3, 2010 at 11:40 AM, Andrew Dunstan and...@dunslane.net wrote:
On 12/03/2010 11:23 AM, Robert Haas wrote:
On Fri, Dec 3, 2010 at 8:02 AM, Andrew Dunstan and...@dunslane.net
wrote:
I think Josh Berkus' comments in the thread you mentioned are correct:
Actually, I'd say that
Excerpts from Robert Haas's message of Fri Dec 03 13:56:32 -0300 2010:
I know the use cases are limited, but I think it's still useful on its own.
I don't understand what's so difficult about starting with the snapshot
cloning patch. AFAIR it's already been written anyway, no?
--
Álvaro
On 12/03/2010 12:17 PM, Alvaro Herrera wrote:
Excerpts from Robert Haas's message of Fri Dec 03 13:56:32 -0300 2010:
I know the use cases are limited, but I think it's still useful on its own.
I don't understand what's so difficult about starting with the snapshot
cloning patch. AFAIR it's
On 02.12.2010 07:39, Joachim Wieland wrote:
On Sun, Nov 14, 2010 at 6:52 PM, Joachim Wieland j...@mcknight.de wrote:
You would add a regular parallel dump with
$ pg_dump -j 4 -Fd -f out.dir dbname
So this is an updated series of patches for my parallel pg_dump WIP
patch. Most importantly it
Heikki Linnakangas heikki.linnakan...@enterprisedb.com writes:
I don't see the point of the sort-by-relpages code. The order the objects
are dumped should be irrelevant, as long as you obey the restrictions
dictated by dependencies. Or is it only needed for the multiple-target-dirs
feature?
On Thu, Dec 2, 2010 at 6:19 AM, Heikki Linnakangas
heikki.linnakan...@enterprisedb.com wrote:
I don't see the point of the sort-by-relpages code. The order the objects
are dumped should be irrelevant, as long as you obey the restrictions
dictated by dependencies. Or is it only needed for the
Joachim Wieland j...@mcknight.de writes:
A guy called Dimitri Fontaine actually proposed the
several-directories feature here and other people liked the idea.
Hehe :)
Reading that now, it could be that I didn't know at the time that given
a powerful enough disk subsystem there's no way to
On 12/02/2010 05:50 AM, Dimitri Fontaine wrote:
So the use case of parallel dump in a bunch of user-given locations
would be to use different mount points (disk subsystems) at the same
time. Not sure how relevant it is.
I think it will complicate this feature unnecessarily for 9.1.
On 12/02/2010 12:56 PM, Josh Berkus wrote:
On 12/02/2010 05:50 AM, Dimitri Fontaine wrote:
So the use case of parallel dump in a bunch of user-given locations
would be to use different mount points (disk subsystems) at the same
time. Not sure how relevant it is.
I think it will complicate
Now, if only I could think of some way to write a parallel dump to a
set of pipes, I'd be in heaven.
The only way I can see that working sanely would be to have a program
gathering stuff at the other end of the pipes, and ensuring it was all
coherent. That would be a huge growth in scope
On Thu, Dec 2, 2010 at 12:56 PM, Josh Berkus j...@agliodbs.com wrote:
Now, if only I could think of some way to write a parallel dump to a set of
pipes, I'd be in heaven.
What exactly are you trying to accomplish with the pipes?
Joachim
--
Sent via pgsql-hackers mailing list
Heikki Linnakangas heikki.linnakan...@enterprisedb.com writes:
That's a big patch..
Not nearly big enough :-(
In the past, proposals for this have always been rejected on the grounds
that it's impossible to assure a consistent dump if different
connections are used to read different tables. I
On 12/02/2010 05:01 PM, Tom Lane wrote:
Heikki Linnakangas heikki.linnakan...@enterprisedb.com writes:
That's a big patch..
Not nearly big enough :-(
In the past, proposals for this have always been rejected on the grounds
that it's impossible to assure a consistent dump if different
Andrew Dunstan and...@dunslane.net writes:
On 12/02/2010 05:01 PM, Tom Lane wrote:
In the past, proposals for this have always been rejected on the grounds
that it's impossible to assure a consistent dump if different
connections are used to read different tables. I fail to understand
why
Dimitri Fontaine wrote:
Heikki Linnakangas heikki.linnakan...@enterprisedb.com writes:
I don't see the point of the sort-by-relpages code. The order the objects
are dumped should be irrelevant, as long as you obey the restrictions
dictated by dependencies. Or is it only needed for the
On 12/02/2010 05:32 PM, Tom Lane wrote:
Andrew Dunstan and...@dunslane.net writes:
On 12/02/2010 05:01 PM, Tom Lane wrote:
In the past, proposals for this have always been rejected on the grounds
that it's impossible to assure a consistent dump if different
connections are used to read
On Thu, Dec 2, 2010 at 5:32 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Andrew Dunstan and...@dunslane.net writes:
On 12/02/2010 05:01 PM, Tom Lane wrote:
In the past, proposals for this have always been rejected on the grounds
that it's impossible to assure a consistent dump if different
On 12/02/2010 07:13 PM, Robert Haas wrote:
On Thu, Dec 2, 2010 at 5:32 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Andrew Dunstan and...@dunslane.net writes:
On 12/02/2010 05:01 PM, Tom Lane wrote:
In the past, proposals for this have always been rejected on the grounds
that it's impossible to
On Thu, Dec 2, 2010 at 7:21 PM, Andrew Dunstan and...@dunslane.net wrote:
In the past, proposals for this have always been rejected on the
grounds
that it's impossible to assure a consistent dump if different
connections are used to read different tables. I fail to understand
why that
On 12/02/2010 07:48 PM, Robert Haas wrote:
On Thu, Dec 2, 2010 at 7:21 PM, Andrew Dunstan and...@dunslane.net wrote:
In the past, proposals for this have always been rejected on the
grounds
that it's impossible to assure a consistent dump if different
connections are used to read different
On Dec 2, 2010, at 8:11 PM, Andrew Dunstan and...@dunslane.net wrote:
Umm, nobody has attributed ridiculousness to anyone. Please don't put words
in my mouth. But I think this is a perfectly reasonable discussion to have.
Nobody gets to come along and get the features they want without some
Andrew Dunstan and...@dunslane.net writes:
On 12/02/2010 05:32 PM, Tom Lane wrote:
(I'm not actually convinced that snapshot cloning is the only problem
here; locking could be an issue too, if there are concurrent processes
trying to take locks that will conflict with pg_dump's. But the
On 12/02/2010 09:09 PM, Tom Lane wrote:
Andrew Dunstan and...@dunslane.net writes:
On 12/02/2010 05:32 PM, Tom Lane wrote:
(I'm not actually convinced that snapshot cloning is the only problem
here; locking could be an issue too, if there are concurrent processes
trying to take locks that
Andrew Dunstan and...@dunslane.net writes:
Umm, nobody has attributed ridiculousness to anyone. Please don't put
words in my mouth. But I think this is a perfectly reasonable discussion
to have. Nobody gets to come along and get the features they want
without some sort of consensus, not me,
Andrew Dunstan and...@dunslane.net writes:
On 12/02/2010 09:09 PM, Tom Lane wrote:
Now, process 3 is blocked behind process 2, which is blocked behind
process 1, which is waiting for 3 to complete. Can you say undetectable deadlock?
Hmm. Yeah. Maybe we could get around it if we prefork the workers
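The scenario is a cycle in a wait-for graph where one edge lives in the pg_dump parent rather than in the lock manager, so the server-side deadlock detector sees no cycle at all. A small sketch (illustrative Python, not PostgreSQL's lock manager):

```python
# Illustration (not PostgreSQL code) of why the three-process deadlock is
# invisible to the server: one wait edge is client-side, so the wait-for
# graph the deadlock detector can see is acyclic.

def find_cycle(edges):
    """Return True iff the directed wait-for graph contains a cycle."""
    graph = {}
    for waiter, holder in edges:
        graph.setdefault(waiter, []).append(holder)

    def walk(node, path):
        if node in path:
            return True
        return any(walk(n, path | {node}) for n in graph.get(node, []))

    return any(walk(n, set()) for n in graph)

server_edges = [("p3", "p2"), ("p2", "p1")]  # lock-manager waits only
client_edge = [("p1", "p3")]                 # pg_dump parent waiting on worker 3

print(find_cycle(server_edges))                # False: detector sees no deadlock
print(find_cycle(server_edges + client_edge))  # True: the real cycle
```

Preforking the workers, as suggested above, avoids creating the client-side edge after locks are already held.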
On 12/02/2010 09:41 PM, Tom Lane wrote:
Andrew Dunstan and...@dunslane.net writes:
On 12/02/2010 09:09 PM, Tom Lane wrote:
Now, process 3 is blocked behind process 2, which is blocked behind
process 1, which is waiting for 3 to complete. Can you say undetectable deadlock?
Hmm. Yeah. Maybe we could
On Thu, Dec 2, 2010 at 9:33 PM, Tom Lane t...@sss.pgh.pa.us wrote:
In particular, this issue *has* been discussed before, and there was a
consensus that preserving dump consistency was a requirement. I don't
think that Joachim gets to bypass that decision just by submitting a
patch that