On Fri, Jun 17, 2011 at 3:45 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Robert Haas robertmh...@gmail.com writes:
Department of second thoughts: I think I see a problem.
Um, yeah, so that doesn't really work any better than my idea.
On further reflection, there's a problem at a higher level than ...

Excerpts from Tom Lane's message of Fri Jun 17 13:22:40 -0400 2011:
With this approach, we would have no serialization anomalies from single
transactions committing while a scan is in progress. There could be
anomalies resulting from considering an earlier XID to be in-progress
while a later ...

Robert Haas robertmh...@gmail.com writes:
I like this feature a lot, but it's hard to imagine that any of the
fixes anyone has so far suggested can be implemented without
collateral damage. Nor is there any certainty that this is the last
bug.
And in fact, here's something else to worry ...

Alvaro Herrera alvhe...@commandprompt.com writes:
Hmm, would there be a problem if a scan on catalog A yields results from
supposedly-running transaction X but another scan on catalog B yields
result from transaction Y? (X != Y) For example, a scan on pg_class
says that there are N triggers ...

Excerpts from Tom Lane's message of Fri Jun 17 17:08:25 -0400 2011:
Alvaro Herrera alvhe...@commandprompt.com writes:
Hmm, would there be a problem if a scan on catalog A yields results from
supposedly-running transaction X but another scan on catalog B yields
result from transaction Y? (X ...

On Fri, Jun 17, 2011 at 5:02 PM, Tom Lane t...@sss.pgh.pa.us wrote:
As far as I can see, the only simple way to return pg_dump to its
previous level of safety while retaining this patch is to make it take
ShareUpdateExclusiveLocks, so that it will still block all forms of
ALTER TABLE. This is ...
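The ShareUpdateExclusiveLock suggestion can be read off PostgreSQL's table-level lock conflict matrix. As an illustration only, here is a Python sketch of a hand-abridged subset of that matrix (the mode names are real; the table below is copied by hand from the documentation and covers only the modes under discussion, not all eight):

```python
# Abridged table-level lock conflict matrix (subset, hand-copied from the
# PostgreSQL docs -- not generated from the server, and not exhaustive).
CONFLICTS = {
    "AccessShare":          {"AccessExclusive"},
    "ShareUpdateExclusive": {"ShareUpdateExclusive", "Share",
                             "ShareRowExclusive", "Exclusive",
                             "AccessExclusive"},
    "AccessExclusive":      {"AccessShare", "RowShare", "RowExclusive",
                             "ShareUpdateExclusive", "Share",
                             "ShareRowExclusive", "Exclusive",
                             "AccessExclusive"},
}

def conflicts(a, b):
    """Lock conflicts are symmetric; check both directions of the table."""
    return b in CONFLICTS.get(a, set()) or a in CONFLICTS.get(b, set())

# pg_dump's historical AccessShareLock does not block ALTER TABLE forms
# that take only ShareUpdateExclusiveLock, e.g. SET (fillfactor = ...):
assert not conflicts("AccessShare", "ShareUpdateExclusive")
# ...whereas if pg_dump itself took ShareUpdateExclusiveLock, it would
# block those forms, since the mode conflicts with itself:
assert conflicts("ShareUpdateExclusive", "ShareUpdateExclusive")
```

This is why taking ShareUpdateExclusiveLock in pg_dump would restore blocking of all ALTER TABLE forms: every ALTER TABLE variant takes at least that mode, and it self-conflicts.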
Robert Haas robertmh...@gmail.com writes:
I have been thinking for a while now that it would be sensible to make
vacuum use a different lock type, much as we do for relation
extension.
Hmm. I had just been toying with the idea of introducing a new
user-visible locking level to allow ...

On Thu, Jun 16, 2011 at 6:54 PM, Tom Lane t...@sss.pgh.pa.us wrote:
4. Backend #2 visits the new, about-to-be-committed version of
pgbench_accounts' pg_class row just before backend #3 commits.
It sees the row as not good and keeps scanning. By the time it
reaches the previous version of the ...
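The race in step 4 can be sketched as a toy simulation. The visibility check below is a deliberate, invented simplification of SnapshotNow semantics (real tuple visibility involves hint bits, subtransactions, and much more); it only shows how re-evaluating "is this committed?" at each row visit lets both versions of a row be rejected:

```python
# Toy model of the step-4 race: a SnapshotNow-style scan, which checks
# commit status afresh at every row it visits, can miss a row entirely
# when another backend commits a catalog update mid-scan.
committed = set()          # transaction IDs that have committed so far

def visible(row):
    """Crude stand-in for SnapshotNow, evaluated at the moment of the visit."""
    if row["xmin"] not in committed:
        return False       # inserting transaction still in progress
    if row["xmax"] is not None and row["xmax"] in committed:
        return False       # updating transaction has already committed
    return True

X = 100                                   # backend #3's updating transaction
old_version = {"xmin": 50, "xmax": X}     # xid 50 committed long ago
new_version = {"xmin": X, "xmax": None}
committed.add(50)

# Backend #2 happens to visit the new version first, while X is in progress:
saw_new = visible(new_version)            # False: xmin X not yet committed
committed.add(X)                          # backend #3 commits mid-scan
saw_old = visible(old_version)            # False: xmax X has now committed

assert not saw_new and not saw_old        # the pg_class row vanished entirely
```

A true MVCC snapshot would have frozen the answer to "is X committed?" at scan start, making exactly one version visible.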
Robert Haas robertmh...@gmail.com writes:
On Thu, Jun 16, 2011 at 6:54 PM, Tom Lane t...@sss.pgh.pa.us wrote:
4. Backend #2 visits the new, about-to-be-committed version of
pgbench_accounts' pg_class row just before backend #3 commits.
It sees the row as not good and keeps scanning. By the ...

If you set up a pgbench test case that hits the database with a lot of
concurrent selects and non-exclusive-locking ALTER TABLEs, 9.1 soon
falls over. For example:
$ cat foo.script
alter table pgbench_accounts set (fillfactor = 100);
SELECT abalance FROM pgbench_accounts WHERE aid = 525212;
$
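A script like this can be driven with pgbench; the client counts and flags below are illustrative assumptions, not quoted from the message (the excerpt cuts off before the invocation):

```shell
# Recreate the two-statement script shown above.
cat > foo.script <<'EOF'
alter table pgbench_accounts set (fillfactor = 100);
SELECT abalance FROM pgbench_accounts WHERE aid = 525212;
EOF

# Then hammer an initialized database (pgbench -i) with many clients, e.g.:
#   pgbench -n -f foo.script -c 50 -j 10 -T 60
# -n skips vacuuming, -f supplies the custom script, -c/-j set client and
# thread counts, -T is the run time in seconds. The narrow race window is
# why a high client count is needed to hit the failure readily.
```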
On Thu, Jun 16, 2011 at 11:54 PM, Tom Lane t...@sss.pgh.pa.us wrote:
In typical cases where both versions of the row are on the same page,
the window for the concurrent commit to happen is very narrow --- that's
why you need so many clients to make it happen easily. With enough
clients ...

On Thu, Jun 16, 2011 at 6:54 PM, Tom Lane t...@sss.pgh.pa.us wrote:
I believe that this is fundamentally unavoidable so long as we use
SnapshotNow to read catalogs --- which is something we've talked about
changing, but it will require a pretty major R&D effort to make it
happen. In the ...