In a BEFORE UPDATE trigger I need to know whether the row was previously
modified by this transaction. Is it safe to use xmin and txid_current()
for this purpose (xmin is of the 32-bit xid type, but txid_current() returns
a 64-bit bigint)?
CREATE FUNCTION test_trigger()
RETURNS TRIGGER AS $$
BEGIN
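A minimal sketch of such a check (the function and message are examples, not the poster's actual code) compares the row's 32-bit xmin against the low 32 bits of the 64-bit txid_current() value; as discussed later in the thread, this deliberately ignores epoch wraparound:

```sql
-- Hedged sketch: OLD.xmin (xid) is converted via text to bigint so it
-- can be compared with txid_current() reduced modulo 2^32. Epoch
-- wraparound is NOT handled here -- that is the caveat the thread raises.
CREATE OR REPLACE FUNCTION test_trigger()
RETURNS TRIGGER AS $$
BEGIN
    IF OLD.xmin::text::bigint = txid_current() % (2^32)::bigint THEN
        RAISE NOTICE 'row was already modified by this transaction';
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;
```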
Hi All,
I briefly mentioned this idea in one of the other threads, but am starting
a new thread to highlight the point. Today, we set the visibility map
*only* in the first phase of vacuum. This works when the page has no
dead tuples. But the vacuum itself is removing one or more dead tuples
from the
Tom Lane t...@sss.pgh.pa.us writes:
CREATE TEMPLATE yadda;
ALTER TEMPLATE yadda ADD FILE 'yadda--1.0.sql' CONTENT $$...$$;
FWIW, the more I think about it the more I like the notion of treating
extension templates as a separate kind of object. I do see value in
storing them inside the
* Tom Lane wrote:
We really need to scare up another buildfarm member to run with
-DCLOBBER_CACHE_ALWAYS, now that jaguar has stopped doing so.
I would be happy to do that on jaguarundi, in exchange for dialing down
the build frequency from hourly to something a bit less ambitious. That
On Thursday, November 22, 2012 3:00 AM Greg Smith wrote:
On 11/16/12 9:03 AM, Merlin Moncure wrote:
Atri ran some quick n dirty tests to see if there
were any regressions. He benched a large scan followed by vacuum. So
far, results are inconclusive so better testing methodologies will
On Mon, Sep 10, 2012 at 10:00 PM, Pavan Deolasee
pavan.deola...@gmail.com wrote:
On Fri, Sep 7, 2012 at 6:06 PM, Kevin Grittner
kevin.gritt...@wicourts.gov wrote:
That makes sense to me. The reason I didn't make that change when I
added the serializable special case to pg_dump was that it
On Thursday, December 06, 2012 9:35 AM Kyotaro HORIGUCHI wrote:
Hello, I have a problem with PostgreSQL 9.2 with Pacemaker.
HA standby sometimes fails to start under normal operation.
Testing with a bare replication pair showed that the standby fails
startup recovery under the operation
On 6 December 2012 00:43, Tom Lane t...@sss.pgh.pa.us wrote:
Robert Haas robertmh...@gmail.com writes:
On Tue, Dec 4, 2012 at 3:38 PM, Jeff Davis pg...@j-davis.com wrote:
After reading that thread, I still don't understand why it's unsafe to
set HEAP_XMIN_COMMITTED in those conditions. Even if
On 4 December 2012 20:38, Jeff Davis pg...@j-davis.com wrote:
The simple case of BEGIN; CREATE TABLE ...; COPY ... WITH (FREEZE);
doesn't meet the pre-conditions. It only meets the conditions if
preceded by a TRUNCATE, which all of the tests do. I looked into it, and
I think the test:
...
Hi,
On 2012-12-03 17:34:01 -0800, Jeff Davis wrote:
http://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=8de72b66a2edcf12c812de0a73bd50b6b7d81d62
On the subject of that patch. I am not a big fan of only emitting a NOTICE if
FREEZE wasn't properly used:
+ if (cstate->freeze
On Thursday, December 06, 2012 12:53 AM Heikki Linnakangas wrote:
On 05.12.2012 14:32, Amit Kapila wrote:
On Tuesday, December 04, 2012 10:01 PM Heikki Linnakangas wrote:
After some diversions to fix bugs and refactor existing code, I've
committed a couple of small parts of this patch,
On 6 December 2012 13:12, Andres Freund and...@2ndquadrant.com wrote:
Hi,
On 2012-12-03 17:34:01 -0800, Jeff Davis wrote:
http://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=8de72b66a2edcf12c812de0a73bd50b6b7d81d62
On the subject of that patch. I am not a big fan of only
On 2012-12-06 14:07:32 +, Simon Riggs wrote:
On 6 December 2012 13:12, Andres Freund and...@2ndquadrant.com wrote:
Hi,
On 2012-12-03 17:34:01 -0800, Jeff Davis wrote:
http://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=8de72b66a2edcf12c812de0a73bd50b6b7d81d62
On the
On 6 December 2012 14:12, Andres Freund and...@2ndquadrant.com wrote:
I remain unconvinced by that argument, but if I am alone with this
ok. Could we at least make it a WARNING? Nobody ever reads NOTICEs
because they contain so much noise. And this isn't noise. It's a bug
on the client side.
Hello,
Although this may have to be posted to pgsql-bugs or pgsql-general, let me
ask you here because the problem probably needs PostgreSQL's code fix.
[Problem]
I'm using PostgreSQL 9.1.6 on Linux. I encountered a serious problem that
media recovery failed showing the following message:
On 12/06/2012 04:41 AM, Christian Ullrich wrote:
* Tom Lane wrote:
We really need to scare up another buildfarm member to run with
-DCLOBBER_CACHE_ALWAYS, now that jaguar has stopped doing so.
I would be happy to do that on jaguarundi, in exchange for dialing
down the build frequency from
On Thu, Dec 6, 2012 at 3:59 AM, Amit Kapila amit.kap...@huawei.com wrote:
On Thursday, November 22, 2012 3:00 AM Greg Smith wrote:
On 11/16/12 9:03 AM, Merlin Moncure wrote:
Atri ran some quick n dirty tests to see if there
were any regressions. He benched a large scan followed by vacuum.
David Rowley wrote:
If we wanted to see the sales per product we could write
something like this:
SELECT p.product_code,SUM(s.quantity)
FROM products p
INNER JOIN bigsalestable s ON p.productid = s.productid
GROUP BY p.product_code;
Though this plan might not be quite as optimal as it
Christian Ullrich ch...@chrullrich.net writes:
* Tom Lane wrote:
We really need to scare up another buildfarm member to run with
-DCLOBBER_CACHE_ALWAYS, now that jaguar has stopped doing so.
I would be happy to do that on jaguarundi, in exchange for dialing down
the build frequency from
Robert Haas wrote:
Jeff Davis pg...@j-davis.com wrote:
Or, I could write up a test framework in ruby or python, using
the appropriate pg driver, and some not-so-portable shell
commands to start and stop the server. Then, I can publish that
on this list, and that would at least make it easier
Hi,
I tried to address most (all?) your comments in the version from
http://archives.postgresql.org/message-id/20121204175212.GB12055%40awork2.anarazel.de
.
On 2012-11-15 11:31:55 -0500, Peter Eisentraut wrote:
+xlogdump: $(OBJS) $(shell find ../../backend ../../timezone -name
On 5 December 2012 23:37, David Rowley dgrowle...@gmail.com wrote:
Though this plan might not be quite as optimal as it could be as it performs
the grouping after the join.
PostgreSQL always calculates aggregation as the last step.
It's a well known optimisation to push-down GROUP BY clauses
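For illustration, the pushed-down form Simon alludes to would aggregate before joining, something like the following (a sketch using the thread's table names, and assuming productid functionally determines product_code):

```sql
-- Aggregate bigsalestable first, then join the much smaller result
-- to products, instead of joining first and grouping last.
SELECT p.product_code, s.total_quantity
FROM products p
INNER JOIN (
    SELECT productid, SUM(quantity) AS total_quantity
    FROM bigsalestable
    GROUP BY productid
) s ON p.productid = s.productid;
```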
MauMau wrote:
[Problem]
I'm using PostgreSQL 9.1.6 on Linux. I encountered a serious
problem that media recovery failed showing the following message:
FATAL: archive file 000100800028 has wrong size:
7340032 instead of 16777216
I'm using normal cp command to archive WAL
* Robert Haas (robertmh...@gmail.com) wrote:
I haven't looked at the committed patch - which seemed a bit
Disclaimer- neither have I, but..
When I last recall this discussion (likely in some bar in Europe), the
problem was also that an independent session would be able to:
a) see that the
* Simon Riggs (si...@2ndquadrant.com) wrote:
It's not a bug. Requesting a useful, but not critical optimisation is
just a hint. The preconditions are not easy to understand, so I see no
reason to punish people that misunderstand, or cause programs to fail
in ways that need detailed
Simon Riggs si...@2ndquadrant.com writes:
On 5 December 2012 23:37, David Rowley dgrowle...@gmail.com wrote:
Though this plan might not be quite as optimal as it could be as it performs
the grouping after the join.
PostgreSQL always calculates aggregation as the last step.
It's a well known
MauMau maumau...@gmail.com writes:
I'm using PostgreSQL 9.1.6 on Linux. I encountered a serious problem that
media recovery failed showing the following message:
FATAL: archive file 000100800028 has wrong size: 7340032
instead of 16777216
Well, that's unfortunate, but it's not
On Wed, Dec 5, 2012 at 10:04 PM, Bruce Momjian br...@momjian.us wrote:
Pg_upgrade displays file names during copy and database names during
dump/restore. Andrew Dunstan identified three bugs:
* long file names were being truncated to 60 _leading_ characters, which
often do not change for
On Wed, Dec 5, 2012 at 10:16 PM, Jan Wieck janwi...@yahoo.com wrote:
On 12/5/2012 2:00 PM, Robert Haas wrote:
Maybe it'd be sensible to relate the retry time to the time spent
vacuuming the table. Say, if the amount of time spent retrying
exceeds 10% of the time spent vacuuming the table,
On Thu, Dec 6, 2012 at 12:31 AM, Pavel Stehule pavel.steh...@gmail.com wrote:
2012/12/5 Tom Lane t...@sss.pgh.pa.us:
Asif Rehman asifr.reh...@gmail.com writes:
Here is the updated patch. I overlooked the loop, checking to free the
conversions map. Here are the results now.
I looked at this
On Thu, Dec 6, 2012 at 3:58 AM, Vlad Arkhipov arhi...@dc.baikal.ru wrote:
In a BEFORE UPDATE trigger I need to know whether the row was previously
modified by this transaction. Is it safe to use xmin and txid_current() for
this purpose (xmin is 32-bit txid type but txid_current() returns 64-bit
Vlad Arkhipov arhi...@dc.baikal.ru writes:
In a BEFORE UPDATE trigger I need to know whether the row was previously
modified by this transaction. Is it safe to use xmin and txid_current()
for this purpose (xmin is 32-bit txid type but txid_current() returns
64-bit bigint).
IF OLD.xmin
Robert Haas escribió:
On Wed, Dec 5, 2012 at 10:04 PM, Bruce Momjian br...@momjian.us wrote:
Pg_upgrade displays file names during copy and database names during
dump/restore. Andrew Dunstan identified three bugs:
* long file names were being truncated to 60 _leading_ characters, which
On Thu, 2012-12-06 at 11:55 -0500, Stephen Frost wrote:
* Robert Haas (robertmh...@gmail.com) wrote:
When I last recall this discussion (likely in some bar in Europe), the
problem was also that an independent session would be able to:
a) see that the table exists (due to SnapshotNow being
On 6 December 2012 17:21, Tom Lane t...@sss.pgh.pa.us wrote:
Simon Riggs si...@2ndquadrant.com writes:
On 5 December 2012 23:37, David Rowley dgrowle...@gmail.com wrote:
Though this plan might not be quite as optimal as it could be as it performs
the grouping after the join.
PostgreSQL
Tom Lane wrote:
In the case being presented here, it's not apparent to me that
there's any advantage to be had at all.
The OP reported a different plan which was twice as fast, although
showing EXPLAIN ANALYZE results for both would be nice confirmation
of that.
You still need to aggregate
Pavan Deolasee pavan.deola...@gmail.com writes:
So the idea that the patch implements is this. When we scan pages in
the first phase of vacuum, if we find a page that has all-visible
tuples but also has one or more dead tuples that we know the second
phase of vacuum will remove, we mark such
Tom Lane wrote:
Vlad Arkhipov arhi...@dc.baikal.ru writes:
In a BEFORE UPDATE trigger I need to know whether the row was previously
modified by this transaction. Is it safe to use xmin and txid_current()
for this purpose (xmin is 32-bit txid type but txid_current() returns
64-bit
On 6 December 2012 17:02, Stephen Frost sfr...@snowman.net wrote:
* Simon Riggs (si...@2ndquadrant.com) wrote:
It's not a bug. Requesting a useful, but not critical optimisation is
just a hint. The preconditions are not easy to understand, so I see no
reason to punish people that
Dimitri Fontaine dimi...@2ndquadrant.fr writes:
Tom Lane t...@sss.pgh.pa.us writes:
I think a separate kind of extension template object would make a lot
more sense.
I'm on board now. We still have some questions to answer, and here's a
worked out design proposal for implementing my
On 6 December 2012 00:46, Robert Haas robertmh...@gmail.com wrote:
On Wed, Dec 5, 2012 at 6:45 PM, Simon Riggs si...@2ndquadrant.com wrote:
Yes, but it is also the trigger writer's problem.
Maybe to some degree. I don't think that a server crash or something
like a block-read error is ever
Alvaro Herrera alvhe...@2ndquadrant.com writes:
I think we should be setting the initial epoch to something other than
zero. That way, some quick testing would have revealed this problem
immediately.
Yeah, having initdb start the epoch at 1 doesn't seem unreasonable.
On 2012-12-06 18:21:09 +, Simon Riggs wrote:
On 6 December 2012 00:46, Robert Haas robertmh...@gmail.com wrote:
On Wed, Dec 5, 2012 at 6:45 PM, Simon Riggs si...@2ndquadrant.com wrote:
Yes, but it is also the trigger writer's problem.
Maybe to some degree. I don't think that a server
Kevin and Robert are well aware of most of the below. I just want to put
this out here so other people, who haven't followed the discussion too
closely, may chime in.
Some details on the problem:
First of all, there is a minimum number of 1000 pages that the vacuum
scan must detect as
On Thu, Dec 6, 2012 at 1:01 PM, Tom Lane t...@sss.pgh.pa.us wrote:
I think taking a second whack at setting the visibility bit is a fine
idea, but let's drop all the rest of this premature optimization.
+1.
If there's any optimization needed here, we should try to do it by
remembering relevant
Robert Haas robertmh...@gmail.com writes:
On Thu, Dec 6, 2012 at 12:31 AM, Pavel Stehule pavel.steh...@gmail.com
wrote:
but we can limit implicit coercion in tupconvert via a new parameter -
because we want plpgsql's behaviour to be driven only from this direction.
Then when this parameter - maybe
On 2012-12-06 15:08:51 -0300, Alvaro Herrera wrote:
Tom Lane wrote:
Vlad Arkhipov arhi...@dc.baikal.ru writes:
In a BEFORE UPDATE trigger I need to know whether the row was previously
modified by this transaction. Is it safe to use xmin and txid_current()
for this purpose (xmin is
On 6 December 2012 18:31, Andres Freund and...@2ndquadrant.com wrote:
On 2012-12-06 18:21:09 +, Simon Riggs wrote:
On 6 December 2012 00:46, Robert Haas robertmh...@gmail.com wrote:
On Wed, Dec 5, 2012 at 6:45 PM, Simon Riggs si...@2ndquadrant.com wrote:
Yes, but it is also the trigger
On Thu, Dec 6, 2012 at 12:43:53PM -0500, Robert Haas wrote:
On Wed, Dec 5, 2012 at 10:04 PM, Bruce Momjian br...@momjian.us wrote:
Pg_upgrade displays file names during copy and database names during
dump/restore. Andrew Dunstan identified three bugs:
* long file names were being
On Thu, Dec 6, 2012 at 02:53:44PM -0300, Alvaro Herrera wrote:
Robert Haas escribió:
On Wed, Dec 5, 2012 at 10:04 PM, Bruce Momjian br...@momjian.us wrote:
Pg_upgrade displays file names during copy and database names during
dump/restore. Andrew Dunstan identified three bugs:
*
Robert Haas robertmh...@gmail.com writes:
One other thought: I'm wondering if we shouldn't try to push the work
of setting the all-visible bit into heap_page_prune().
Hm, maybe ...
But it seems to me that a page can't be all-visible unless there are
no dead line pointers and no HOT chains
Andres Freund and...@2ndquadrant.com writes:
On 2012-12-06 15:08:51 -0300, Alvaro Herrera wrote:
Vlad Arkhipov arhi...@dc.baikal.ru writes:
In a BEFORE UPDATE trigger I need to know whether the row was previously
modified by this transaction. Is it safe to use xmin and txid_current()
for this
On 2012-12-06 18:42:22 +, Simon Riggs wrote:
On 6 December 2012 18:31, Andres Freund and...@2ndquadrant.com wrote:
On 2012-12-06 18:21:09 +, Simon Riggs wrote:
On 6 December 2012 00:46, Robert Haas robertmh...@gmail.com wrote:
On Wed, Dec 5, 2012 at 6:45 PM, Simon Riggs
On Thu, Dec 6, 2012 at 1:36 PM, Tom Lane t...@sss.pgh.pa.us wrote:
I'm against putting I/O coercion semantics into tupconvert, period. Ever.
If plpgsql wants that behavior rather than something more consistent
with the rest of the system, it needs to implement it for itself.
I'm sure that can
On 2012-12-06 13:59:32 -0500, Tom Lane wrote:
Andres Freund and...@2ndquadrant.com writes:
On 2012-12-06 15:08:51 -0300, Alvaro Herrera wrote:
Vlad Arkhipov arhi...@dc.baikal.ru writes:
In a BEFORE UPDATE trigger I need to know whether the row was previously
modified by this transaction.
On Thu, 2012-12-06 at 18:16 +, Simon Riggs wrote:
I tend to agree with Andres on this one. This feels a bit like
accepting a command but then not actually following-through on it
if it turns out we can't actually do it. If it's truly an optimization
(and I suspect my other
Jeff,
* Jeff Davis (pg...@j-davis.com) wrote:
That isn't a problem, because the other session won't see the tuple in
pg_class until the creating transaction commits, at which point the rows
have committed, too (because this would only kick in when the rows are
loaded in the same transaction
-Original Message-
From: pgsql-hackers-ow...@postgresql.org [mailto:pgsql-hackers-
ow...@postgresql.org] On Behalf Of Andres Freund
Sent: 06 December 2012 20:04
To: Simon Riggs
Cc: Robert Haas; Dimitri Fontaine; Josh Berkus;
pgsql-hackers@postgresql.org
Subject: Re: [HACKERS] ALTER
On 2012-12-06 20:27:33 +0100, Petr Jelinek wrote:
-Original Message-
From: pgsql-hackers-ow...@postgresql.org [mailto:pgsql-hackers-
ow...@postgresql.org] On Behalf Of Andres Freund
Sent: 06 December 2012 20:04
To: Simon Riggs
Cc: Robert Haas; Dimitri Fontaine; Josh Berkus;
-Original Message-
From: pgsql-hackers-ow...@postgresql.org [mailto:pgsql-hackers-
ow...@postgresql.org] On Behalf Of Andres Freund
Sent: 06 December 2012 20:44
To: Petr Jelinek
Cc: 'Simon Riggs'; 'Robert Haas'; 'Dimitri Fontaine'; 'Josh Berkus';
pgsql-
hack...@postgresql.org
Robert Haas robertmh...@gmail.com writes:
Are you going to commit a stripped-down version of the patch?
I set it back to waiting on author --- don't know if he wants to
produce a stripped-down version with no type coercions, or try to use
cast-based coercions.
regards,
Tom Lane t...@sss.pgh.pa.us writes:
Only for these new-style thingies. I am not suggesting breaking the
existing file-based implementation, only offering a parallel
catalog-based implementation too. We'd have to think about what to do
for name collisions --- probably having the catalog entry
Andres Freund and...@2ndquadrant.com writes:
On 2012-12-06 18:42:22 +, Simon Riggs wrote:
in-between state means what? And what danger do you see?
For example during table rewrites we have a temporary pg_class entry
that's a full copy of the table, with a separate oid, relfilenode and
Hi,
I have attached the stripped-down version. I will leave the type coercions
support for a separate patch.
Regards,
--Asif
On Fri, Dec 7, 2012 at 1:14 AM, Tom Lane t...@sss.pgh.pa.us wrote:
Robert Haas robertmh...@gmail.com writes:
Are you going to commit a stripped-down version of the
Andres Freund and...@2ndquadrant.com writes:
On 2012-12-06 13:59:32 -0500, Tom Lane wrote:
Andres Freund and...@2ndquadrant.com writes:
On 2012-12-06 15:08:51 -0300, Alvaro Herrera wrote:
Vlad Arkhipov arhi...@dc.baikal.ru writes:
In a BEFORE UPDATE trigger I need to know whether the row was
Alvaro Herrera wrote:
Background worker processes
I had tested this on EXEC_BACKEND some time ago, and it worked fine, but
I had neglected since then, and now I find it fails with a pretty
strange message on startup. Also, Andres and I have been talking about
other possible problems in that
Asif Rehman asifr.reh...@gmail.com writes:
I have attached the stripped-down version. I will leave the type coercions
support for a separate patch.
OK, I'll take a look at this one.
regards, tom lane
From: Kevin Grittner kgri...@mail.com
If you are attempting a PITR-style recovery and you want to include
WAL entries from the partially-copied file, pad a copy of it with
NUL bytes to the expected length.
I'm afraid this is unacceptably difficult, or almost impossible, for many PG
users.
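For reference, the padding Kevin describes could be scripted roughly like this (a sketch assuming GNU coreutils; the segment name and the 7340032-byte partial copy are simulated for demonstration):

```shell
# Demonstration: pad a partially-copied WAL segment with NUL bytes up
# to the expected 16MB length. A real partial segment would come from
# the interrupted archive copy; here we simulate one.
SEG=partial_wal_segment
head -c 7340032 /dev/zero > "$SEG"   # simulate the 7340032-byte partial copy
SIZE=16777216                        # standard 16MB WAL segment size
cp "$SEG" "$SEG.padded"
truncate -s "$SIZE" "$SEG.padded"    # extend the copy; new bytes read as NUL
wc -c < "$SEG.padded"                # now reports the full segment size
```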
On 6.12.2012 05:47, Shigeru Hanada wrote:
On Mon, Nov 12, 2012 at 4:36 AM, Tomas Vondra t...@fuzzy.cz wrote:
Hi,
I've prepared a slightly updated patch, based on the previous review.
See it attached.
All changes in v3 patch seem good, however I found some places which require
cosmetic
From: Tom Lane t...@sss.pgh.pa.us
Well, that's unfortunate, but it's not clear that automatic recovery is
possible. The only way out of it would be if an undamaged copy of the
segment was in pg_xlog/ ... but if I recall the logic correctly, we'd
not even be trying to fetch from the archive if
On 2012-12-06 16:26:50 -0500, Tom Lane wrote:
Andres Freund and...@2ndquadrant.com writes:
On 2012-12-06 13:59:32 -0500, Tom Lane wrote:
Andres Freund and...@2ndquadrant.com writes:
On 2012-12-06 15:08:51 -0300, Alvaro Herrera wrote:
Vlad Arkhipov arhi...@dc.baikal.ru writes:
In a
On a new buildfarm member friarbird
http://www.pgbuildfarm.org/cgi-bin/show_log.pl?nm=friarbird&dt=2012-12-06%2020%3A55%3A31,
configured with -DCLOBBER_CACHE_ALWAYS:
BEGIN;
TRUNCATE vistest;
COPY vistest FROM stdin CSV FREEZE;
+ NOTICE: FREEZE option specified but
Hi,
On 6.12.2012 23:45, MauMau wrote:
From: Tom Lane t...@sss.pgh.pa.us
Well, that's unfortunate, but it's not clear that automatic recovery is
possible. The only way out of it would be if an undamaged copy of the
segment was in pg_xlog/ ... but if I recall the logic correctly, we'd
not
On 6 December 2012 23:28, Andrew Dunstan and...@dunslane.net wrote:
On a new buildfarm member friarbird
http://www.pgbuildfarm.org/cgi-bin/show_log.pl?nm=friarbird&dt=2012-12-06%2020%3A55%3A31,
configured with -DCLOBBER_CACHE_ALWAYS:
BEGIN;
TRUNCATE vistest;
COPY vistest
On 2012-12-06 23:59:06 +, Simon Riggs wrote:
On 6 December 2012 23:28, Andrew Dunstan and...@dunslane.net wrote:
On a new buildfarm member friarbird
http://www.pgbuildfarm.org/cgi-bin/show_log.pl?nm=friarbird&dt=2012-12-06%2020%3A55%3A31,
configured with -DCLOBBER_CACHE_ALWAYS:
Andrew Dunstan and...@dunslane.net writes:
On a new buildfarm member friarbird
http://www.pgbuildfarm.org/cgi-bin/show_log.pl?nm=friarbird&dt=2012-12-06%2020%3A55%3A31,
configured with -DCLOBBER_CACHE_ALWAYS:
BEGIN;
TRUNCATE vistest;
COPY vistest FROM stdin CSV
On 7 December 2012 00:06, Andres Freund and...@2ndquadrant.com wrote:
Apparently the magic to preserve those values across cache resets isn't
strong enough for this. Seems bad, because that seems to mean a sinval
overflow leads to this and related optimizations being lost?
Which seems to back
On Thu, Dec 6, 2012 at 9:33 AM, Tom Lane t...@sss.pgh.pa.us wrote:
MauMau maumau...@gmail.com writes:
I'm using PostgreSQL 9.1.6 on Linux. I encountered a serious problem that
media recovery failed showing the following message:
FATAL: archive file 000100800028 has wrong size:
I got a report today on the IRC channel about a pg_upgrade problem with
upgrading clusters with indexes that exist but are invalid.
For example, if you use CREATE INDEX CONCURRENTLY, then shut down the
server while it is running, the index will be left as INVALID; from our
CREATE INDEX docs:
There are a few possible fixes. The first would be to have pg_upgrade
throw an error on any invalid index in the old cluster. Another option
would be to preserve the invalid state in pg_dump --binary-upgrade.
Or to not dump invalid indexes at all in --binary-upgrade mode.
I also need help
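As an aside, invalid indexes in the old cluster can be listed with a catalog query along these lines (a sketch; pg_index.indisvalid is the flag a failed CREATE INDEX CONCURRENTLY leaves unset):

```sql
-- List indexes left INVALID, e.g. by an interrupted CREATE INDEX CONCURRENTLY.
SELECT n.nspname AS schema, c.relname AS index_name
FROM pg_index i
JOIN pg_class c ON c.oid = i.indexrelid
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE NOT i.indisvalid;
```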
Bruce Momjian wrote:
There are a few possible fixes. The first would be to have pg_upgrade
throw an error on any invalid index in the old cluster. Another option
would be to preserve the invalid state in pg_dump --binary-upgrade.
Yet another option would be for pg_dump --binary-upgrade to
On Thu, Dec 6, 2012 at 09:35:19PM -0300, Alvaro Herrera wrote:
Bruce Momjian wrote:
There are a few possible fixes. The first would be to have pg_upgrade
throw an error on any invalid index in the old cluster. Another option
would be to preserve the invalid state in pg_dump
On Thu, 2012-12-06 at 14:18 -0500, Stephen Frost wrote:
begin;
You need to do a SELECT here to actually get a snapshot.
session b
-
begin;
Bruce Momjian br...@momjian.us writes:
Yes, I thought of not dumping it. The problem is that we don't delete
the index when it fails, so I assumed we didn't want to lose the index
creation information. I need to understand why we did that.
Because CREATE INDEX CONCURRENTLY can't drop the
On Thu, 2012-12-06 at 16:31 -0800, Josh Berkus wrote:
There are a few possible fixes. The first would be to have pg_upgrade
throw an error on any invalid index in the old cluster. Another option
would be to preserve the invalid state in pg_dump --binary-upgrade.
Or to not dump invalid
On 12/06/2012 07:58 PM, Jeff Davis wrote:
On Thu, 2012-12-06 at 16:31 -0800, Josh Berkus wrote:
There are a few possible fixes. The first would be to have pg_upgrade
throw an error on any invalid index in the old cluster. Another option
would be to preserve the invalid state in pg_dump
* Jeff Davis (pg...@j-davis.com) wrote:
However, the rows will *not* be visible, because the serializable
snapshot doesn't contain the inserting transaction.
That's what we've got now and what would be expected, however...
Think about the current behavior: right after the commit, another
On Thu, Dec 6, 2012 at 07:53:57PM -0500, Tom Lane wrote:
Bruce Momjian br...@momjian.us writes:
Yes, I thought of not dumping it. The problem is that we don't delete
the index when it fails, so I assumed we didn't want to lose the index
creation information. I need to understand why we
On Thu, 2012-12-06 at 20:12 -0500, Stephen Frost wrote:
The command is 'FREEZE', which sounded to me like the transaction ID
would be set to FrozenXID, meaning that we wouldn't be able to tell if
the inserting transaction was before or after ours...
Freezing does lose information, but I
On Thu, 2012-12-06 at 20:05 -0500, Andrew Dunstan wrote:
I think I prefer the first suggestion. If they are trying to upgrade
when there's an invalid index presumably they aren't aware of the
invalidity (or they would have done something about it). It would be
better to fail and make them
Jeff,
* Jeff Davis (pg...@j-davis.com) wrote:
On Thu, 2012-12-06 at 20:12 -0500, Stephen Frost wrote:
The command is 'FREEZE', which sounded to me like the transaction ID
would be set to FrozenXID, meaning that we wouldn't be able to tell if
the inserting transaction was before or after
On Thu, Dec 6, 2012 at 12:29 AM, Phil Sorber p...@omniti.com wrote:
On Wed, Dec 5, 2012 at 8:53 AM, Alvaro Herrera alvhe...@2ndquadrant.com
wrote:
No, I think it is the reference docs on the returned value that must be
fixed. That is, instead of saying that the return value correspond to
Bruce Momjian br...@momjian.us writes:
On Thu, Dec 6, 2012 at 07:53:57PM -0500, Tom Lane wrote:
Because CREATE INDEX CONCURRENTLY can't drop the index if it's already
failed. It's not because we want to do that, it's an implementation
restriction of the horrid kluge that is CREATE/DROP INDEX
On Thu, Dec 6, 2012 at 09:10:21PM -0500, Tom Lane wrote:
Bruce Momjian br...@momjian.us writes:
On Thu, Dec 6, 2012 at 07:53:57PM -0500, Tom Lane wrote:
Because CREATE INDEX CONCURRENTLY can't drop the index if it's already
failed. It's not because we want to do that, it's an
On 12/07/2012 02:53 AM, Tom Lane wrote:
Vlad Arkhipov arhi...@dc.baikal.ru writes:
In a BEFORE UPDATE trigger I need to know whether the row was previously
modified by this transaction. Is it safe to use xmin and txid_current()
for this purpose (xmin is 32-bit txid type but txid_current()
On 12/06/2012 09:23 PM, Bruce Momjian wrote:
On Thu, Dec 6, 2012 at 09:10:21PM -0500, Tom Lane wrote:
Bruce Momjian br...@momjian.us writes:
On Thu, Dec 6, 2012 at 07:53:57PM -0500, Tom Lane wrote:
Because CREATE INDEX CONCURRENTLY can't drop the index if it's already
failed. It's not
* Andrew Dunstan (and...@dunslane.net) wrote:
So we'll lose the index definition and leave some files behind? This
sounds a bit messy to say the least.
Agreed.
Making the user fix it seems much more sensible to me. Otherwise I
suspect we'll find users who get strangely surprised when they
Stephen Frost sfr...@snowman.net writes:
* Andrew Dunstan (and...@dunslane.net) wrote:
Making the user fix it seems much more sensible to me. Otherwise I
suspect we'll find users who get strangely surprised when they can
no longer find any trace of an expected index in their upgraded
On Thu, Dec 6, 2012 at 09:41:00PM -0500, Andrew Dunstan wrote:
On 12/06/2012 09:23 PM, Bruce Momjian wrote:
On Thu, Dec 6, 2012 at 09:10:21PM -0500, Tom Lane wrote:
Bruce Momjian br...@momjian.us writes:
On Thu, Dec 6, 2012 at 07:53:57PM -0500, Tom Lane wrote:
Because CREATE INDEX
On Thu, Dec 6, 2012 at 09:45:11PM -0500, Stephen Frost wrote:
* Andrew Dunstan (and...@dunslane.net) wrote:
So we'll lose the index definition and leave some files behind? This
sounds a bit messy to say the least.
Agreed.
Making the user fix it seems much more sensible to me.