Re: Fast default stuff versus pg_upgrade

2018-06-22 Thread Andrew Dunstan




On 06/21/2018 04:41 PM, Tom Lane wrote:

Andrew Dunstan  writes:

I left that for a separate exercise. I think this does things the way
you want. I must admit to being a bit surprised, I was expecting you to
have more to say about the upgrade function than the pg_dump changes :-)

Well, the upgrade function is certainly a bit ugly (I'm generally allergic
to patches that result in a large increase in the number of #includes in
a file --- it suggests that the code was put in an inappropriate place).
But I don't see a good way to make it better, at least not without heavy
refactoring of StoreAttrDefault so that some code could be shared.

I think this is probably OK now, except for one thing: you're likely
to have issues if a dropped column has an attmissingval, because often
the value won't agree with the bogus "integer" type we use for those;
worse, the array might refer to a type that no longer exists.
One way to improve that is to change

   "CASE WHEN a.atthasmissing THEN a.attmissingval "
   "ELSE null END AS attmissingval "

to

   "CASE WHEN a.atthasmissing AND NOT a.attisdropped 
THEN a.attmissingval "
   "ELSE null END AS attmissingval "

However, this might be another reason to reconsider the decision to use
anyarray: it could easily lead to situations where harmless-looking
queries like "select * from pg_attribute" fail.  Or maybe we should tweak
DROP COLUMN so that it forcibly resets atthasmissing/attmissingval along
with setting attisdropped, so that the case can't arise.

Like Andres, I haven't actually tested, just eyeballed it.





I moved the bulk of the code into a function in heap.c where it seemed 
right at home.


I added removal of missing values from dropped columns, as well as 
qualifying the test as above.
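
Roughly speaking, the reset follows the same heap_modify_tuple pattern as
the support function (a sketch only, not the exact hunk; it assumes the
column's pg_attribute tuple "atttup" and pg_attribute itself "attrrel" are
already open and locked, as in that function):

	Datum		valuesAtt[Natts_pg_attribute];
	bool		nullsAtt[Natts_pg_attribute];
	bool		replacesAtt[Natts_pg_attribute];
	HeapTuple	newtup;

	MemSet(valuesAtt, 0, sizeof(valuesAtt));
	MemSet(nullsAtt, false, sizeof(nullsAtt));
	MemSet(replacesAtt, false, sizeof(replacesAtt));

	/* clear atthasmissing and null out attmissingval for the dropped column */
	valuesAtt[Anum_pg_attribute_atthasmissing - 1] = BoolGetDatum(false);
	replacesAtt[Anum_pg_attribute_atthasmissing - 1] = true;
	nullsAtt[Anum_pg_attribute_attmissingval - 1] = true;
	replacesAtt[Anum_pg_attribute_attmissingval - 1] = true;

	newtup = heap_modify_tuple(atttup, RelationGetDescr(attrrel),
							   valuesAtt, nullsAtt, replacesAtt);
	CatalogTupleUpdate(attrrel, &newtup->t_self, newtup);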


The indent issue raised by Alvaro is also fixed. I included your patch 
fixing the regression tests.


cheers

andrew

--
Andrew Dunstan  https://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: Fast default stuff versus pg_upgrade

2018-06-21 Thread Tom Lane
Alvaro Herrera  writes:
> In terms of pgindent, I'm surprised about these lines:
> +missingval = OidFunctionCall3(
> +  F_ARRAY_IN,

> Why did you put a newline there?  In ancient times there was a reason
> for that in some cases, because pgindent would move the argument to the
> left of the open parens, but it doesn't do that anymore and IMO it's
> just ugly.  We have quite a few leftovers from this ancient practice,
> I've been thinking about removing these ...

I think some people feel this is good style, but I agree with you
about not liking it.  A related practice I could do without is eating
an extra line for an argument-closing paren, as in this example in
tsquery_op.c:

Datum
tsq_mcontained(PG_FUNCTION_ARGS)
{
PG_RETURN_DATUM(
DirectFunctionCall2(
tsq_mcontains,
PG_GETARG_DATUM(1),
PG_GETARG_DATUM(0)
)
);
}

Aside from the waste of vertical space, it's never very clear to me
(nor, evidently, to pgindent) how such a paren ought to be indented.
So to my eye this could be four lines shorter and look better.
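
For illustration, the collapsed form I have in mind would look about like
this (a sketch, not a proposed patch):

Datum
tsq_mcontained(PG_FUNCTION_ARGS)
{
	PG_RETURN_DATUM(DirectFunctionCall2(tsq_mcontains,
										PG_GETARG_DATUM(1),
										PG_GETARG_DATUM(0)));
}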

regards, tom lane



Re: Fast default stuff versus pg_upgrade

2018-06-21 Thread Alvaro Herrera
In terms of pgindent, I'm surprised about these lines:

+missingval = OidFunctionCall3(
+  F_ARRAY_IN,

Why did you put a newline there?  In ancient times there was a reason
for that in some cases, because pgindent would move the argument to the
left of the open parens, but it doesn't do that anymore and IMO it's
just ugly.  We have quite a few leftovers from this ancient practice,
I've been thinking about removing these ...

-- 
Álvaro Herrera  https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: Fast default stuff versus pg_upgrade

2018-06-21 Thread Tom Lane
Andrew Dunstan  writes:
> I left that for a separate exercise. I think this does things the way 
> you want. I must admit to being a bit surprised, I was expecting you to 
> have more to say about the upgrade function than the pg_dump changes :-)

Well, the upgrade function is certainly a bit ugly (I'm generally allergic
to patches that result in a large increase in the number of #includes in
a file --- it suggests that the code was put in an inappropriate place).
But I don't see a good way to make it better, at least not without heavy
refactoring of StoreAttrDefault so that some code could be shared.

I think this is probably OK now, except for one thing: you're likely
to have issues if a dropped column has an attmissingval, because often
the value won't agree with the bogus "integer" type we use for those;
worse, the array might refer to a type that no longer exists.
One way to improve that is to change

  "CASE WHEN a.atthasmissing THEN a.attmissingval "
  "ELSE null END AS attmissingval "

to

  "CASE WHEN a.atthasmissing AND NOT a.attisdropped 
THEN a.attmissingval "
  "ELSE null END AS attmissingval "

However, this might be another reason to reconsider the decision to use
anyarray: it could easily lead to situations where harmless-looking
queries like "select * from pg_attribute" fail.  Or maybe we should tweak
DROP COLUMN so that it forcibly resets atthasmissing/attmissingval along
with setting attisdropped, so that the case can't arise.

Like Andres, I haven't actually tested, just eyeballed it.

regards, tom lane



Re: Fast default stuff versus pg_upgrade

2018-06-21 Thread Andrew Dunstan



On 06/21/2018 01:53 PM, Tom Lane wrote:

Andrew Dunstan  writes:

On 06/21/2018 01:44 PM, Tom Lane wrote:

So I'm thinking that the attidentity code is just wrong, and you should
change that too while you're at it.

That should be backpatched if changed, no? I don't think we'd want this
to get out of sync between the branches. It would make later
backpatching more difficult for one thing.

If you feel like it.  But if there's attmissingval code right next to it
as of v11, then backpatches wouldn't apply cleanly anyway due to lack of
context match, so I doubt there's really much gain to be had.






I left that for a separate exercise. I think this does things the way 
you want. I must admit to being a bit surprised, I was expecting you to 
have more to say about the upgrade function than the pg_dump changes :-)


cheers

andrew

--
Andrew Dunstan  https://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

diff --git a/src/backend/utils/adt/pg_upgrade_support.c b/src/backend/utils/adt/pg_upgrade_support.c
index 0c54b02..7a2f785 100644
--- a/src/backend/utils/adt/pg_upgrade_support.c
+++ b/src/backend/utils/adt/pg_upgrade_support.c
@@ -11,13 +11,20 @@
 
 #include "postgres.h"
 
+#include "access/heapam.h"
+#include "access/htup_details.h"
 #include "catalog/binary_upgrade.h"
+#include "catalog/indexing.h"
 #include "catalog/namespace.h"
 #include "catalog/pg_type.h"
 #include "commands/extension.h"
 #include "miscadmin.h"
 #include "utils/array.h"
 #include "utils/builtins.h"
+#include "utils/fmgroids.h"
+#include "utils/lsyscache.h"
+#include "utils/rel.h"
+#include "utils/syscache.h"
 
 
 #define CHECK_IS_BINARY_UPGRADE	\
@@ -192,3 +199,63 @@ binary_upgrade_set_record_init_privs(PG_FUNCTION_ARGS)
 
 	PG_RETURN_VOID();
 }
+
+Datum
+binary_upgrade_set_missing_value(PG_FUNCTION_ARGS)
+{
+	Oid			table_id = PG_GETARG_OID(0);
+	text	   *attname = PG_GETARG_TEXT_P(1);
+	text	   *value = PG_GETARG_TEXT_P(2);
+	Datum		valuesAtt[Natts_pg_attribute];
+	bool		nullsAtt[Natts_pg_attribute];
+	bool		replacesAtt[Natts_pg_attribute];
+	Datum		missingval;
+	Form_pg_attribute attStruct;
+	Relation	attrrel,
+				tablerel;
+	HeapTuple	atttup,
+				newtup;
+	char	   *cattname = text_to_cstring(attname);
+	char	   *cvalue = text_to_cstring(value);
+
+	CHECK_IS_BINARY_UPGRADE;
+
+	/* lock the table the attribute belongs to */
+	tablerel = heap_open(table_id, AccessExclusiveLock);
+
+	/* Lock the attribute row and get the data */
+	attrrel = heap_open(AttributeRelationId, RowExclusiveLock);
+	atttup = SearchSysCacheAttName(table_id, cattname);
+	if (!HeapTupleIsValid(atttup))
+		elog(ERROR, "cache lookup failed for attribute %s of relation %u",
+			 cattname, table_id);
+	attStruct = (Form_pg_attribute) GETSTRUCT(atttup);
+
+	/* get an array value from the value string */
+	missingval = OidFunctionCall3(
+  F_ARRAY_IN,
+  CStringGetDatum(cvalue),
+  ObjectIdGetDatum(attStruct->atttypid),
+  Int32GetDatum(attStruct->atttypmod));
+
+	/* update the tuple - set atthasmissing and attmissingval */
+	MemSet(valuesAtt, 0, sizeof(valuesAtt));
+	MemSet(nullsAtt, false, sizeof(nullsAtt));
+	MemSet(replacesAtt, false, sizeof(replacesAtt));
+
+	valuesAtt[Anum_pg_attribute_atthasmissing - 1] = BoolGetDatum(true);
+	replacesAtt[Anum_pg_attribute_atthasmissing - 1] = true;
+	valuesAtt[Anum_pg_attribute_attmissingval - 1] = missingval;
+	replacesAtt[Anum_pg_attribute_attmissingval - 1] = true;
+
+	newtup = heap_modify_tuple(atttup, RelationGetDescr(attrrel),
+			   valuesAtt, nullsAtt, replacesAtt);
+	CatalogTupleUpdate(attrrel, &newtup->t_self, newtup);
+
+	/* clean up */
+	ReleaseSysCache(atttup);
+	heap_close(attrrel, RowExclusiveLock);
+	heap_close(tablerel, AccessExclusiveLock);
+
+	PG_RETURN_VOID();
+}
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index ea2f022..40d1eb0 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -8103,6 +8103,7 @@ getTableAttrs(Archive *fout, TableInfo *tblinfo, int numTables)
 	int			i_attoptions;
 	int			i_attcollation;
 	int			i_attfdwoptions;
+	int			i_attmissingval;
 	PGresult   *res;
 	int			ntups;
 	bool		hasdefaults;
@@ -8132,7 +8133,34 @@ getTableAttrs(Archive *fout, TableInfo *tblinfo, int numTables)
 
 		resetPQExpBuffer(q);
 
-		if (fout->remoteVersion >= 10)
+		if (fout->remoteVersion >= 11)
+		{
+			/* atthasmissing and attmissingval are new in 11 */
+			appendPQExpBuffer(q, "SELECT a.attnum, a.attname, a.atttypmod, "
+			  "a.attstattarget, a.attstorage, t.typstorage, "
+			  "a.attnotnull, a.atthasdef, a.attisdropped, "
+			  "a.attlen, a.attalign, a.attislocal, "
+			  "pg_catalog.format_type(t.oid,a.atttypmod) AS atttypname, "
+			  "array_to_string(a.attoptions, ', ') AS attoptions, "
+			  "CASE WHEN a.attcollation <> t.typcollation "
+			  "THEN a.attcollation ELSE 0 END AS attcollation, "
+			  

Re: Fast default stuff versus pg_upgrade

2018-06-21 Thread Tom Lane
Andrew Dunstan  writes:
> On 06/21/2018 01:44 PM, Tom Lane wrote:
>> So I'm thinking that the attidentity code is just wrong, and you should
>> change that too while you're at it.

> That should be backpatched if changed, no? I don't think we'd want this 
> to get out of sync between the branches. It would make later 
> backpatching more difficult for one thing.

If you feel like it.  But if there's attmissingval code right next to it
as of v11, then backpatches wouldn't apply cleanly anyway due to lack of
context match, so I doubt there's really much gain to be had.

regards, tom lane



Re: Fast default stuff versus pg_upgrade

2018-06-21 Thread Tom Lane
Andres Freund  writes:
> On 2018-06-21 13:44:19 -0400, Tom Lane wrote:
>> Actually, now that I think about it, there is a concrete reason for the
>> historical pattern: it provides a cross-check that you did not fat-finger
>> the query, ie misspell the column alias vs the PQfnumber parameter.  This
>> gets more valuable the more per-version variants of the query there are.
>> With the way the attidentity code does it, it would just silently act as
>> though the column has its default value, which you might or might not
>> notice in cursory testing.  Getting visible bleats about column number -1
>> is much more likely to get your attention.

> To me that benefit is far smaller than the increased likelihood of
> screwing something up because you'd have to touch pg_dump support for
> postgres versions one likely doesn't readily have available for testing.

I beg to differ: code that works "by omission" is just as likely to be
wrong.  Anyway, our solution for unmaintainable back branches has been
to drop support, cf where we got rid of the pre-8.0 queries awhile back.
One (admittedly small) problem with the "i_attidentity >= 0 ? ..." coding
is that it's not obvious when you can get rid of it because you dropped
the last old branch that needed it.  After that, it's just a hazard for
hiding mistakes.

regards, tom lane



Re: Fast default stuff versus pg_upgrade

2018-06-21 Thread Andrew Dunstan




On 06/21/2018 01:44 PM, Tom Lane wrote:

Andrew Dunstan  writes:

On 06/21/2018 01:18 PM, Tom Lane wrote:

I might be OK with a patch that converts *all* of pg_dump's cross-version
difference handling code to depend on PQfnumber silently returning -1
rather than failing, but I don't want to see it done like that in just
one or two places.

I don't mind changing it. But please note that I wouldn't have done it
that way unless there were a precedent. I fully expected to add dummy
values to all the previous queries, but when I couldn't find attidentity
in them to put them next to I followed that example.

Actually, now that I think about it, there is a concrete reason for the
historical pattern: it provides a cross-check that you did not fat-finger
the query, ie misspell the column alias vs the PQfnumber parameter.  This
gets more valuable the more per-version variants of the query there are.
With the way the attidentity code does it, it would just silently act as
though the column has its default value, which you might or might not
notice in cursory testing.  Getting visible bleats about column number -1
is much more likely to get your attention.

So I'm thinking that the attidentity code is just wrong, and you should
change that too while you're at it.






That should be backpatched if changed, no? I don't think we'd want this 
to get out of sync between the branches. It would make later 
backpatching more difficult for one thing.


cheers

andrew

--
Andrew Dunstan  https://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: Fast default stuff versus pg_upgrade

2018-06-21 Thread Andres Freund
On 2018-06-21 13:44:19 -0400, Tom Lane wrote:
> Andrew Dunstan  writes:
> > On 06/21/2018 01:18 PM, Tom Lane wrote:
> >> I might be OK with a patch that converts *all* of pg_dump's cross-version
> >> difference handling code to depend on PQfnumber silently returning -1
> >> rather than failing, but I don't want to see it done like that in just
> >> one or two places.
> 
> > I don't mind changing it. But please note that I wouldn't have done it 
> > that way unless there were a precedent. I fully expected to add dummy 
> > values to all the previous queries, but when I couldn't find attidentity 
> > in them to put them next to I followed that example.
> 
> Actually, now that I think about it, there is a concrete reason for the
> historical pattern: it provides a cross-check that you did not fat-finger
> the query, ie misspell the column alias vs the PQfnumber parameter.  This
> gets more valuable the more per-version variants of the query there are.
> With the way the attidentity code does it, it would just silently act as
> though the column has its default value, which you might or might not
> notice in cursory testing.  Getting visible bleats about column number -1
> is much more likely to get your attention.

To me that benefit is far smaller than the increased likelihood of
screwing something up because you'd have to touch pg_dump support for
postgres versions one likely doesn't readily have available for testing.

Greetings,

Andres Freund



Re: Fast default stuff versus pg_upgrade

2018-06-21 Thread Tom Lane
Andrew Dunstan  writes:
> On 06/21/2018 01:18 PM, Tom Lane wrote:
>> I might be OK with a patch that converts *all* of pg_dump's cross-version
>> difference handling code to depend on PQfnumber silently returning -1
>> rather than failing, but I don't want to see it done like that in just
>> one or two places.

> I don't mind changing it. But please note that I wouldn't have done it 
> that way unless there were a precedent. I fully expected to add dummy 
> values to all the previous queries, but when I couldn't find attidentity 
> in them to put them next to I followed that example.

Actually, now that I think about it, there is a concrete reason for the
historical pattern: it provides a cross-check that you did not fat-finger
the query, ie misspell the column alias vs the PQfnumber parameter.  This
gets more valuable the more per-version variants of the query there are.
With the way the attidentity code does it, it would just silently act as
though the column has its default value, which you might or might not
notice in cursory testing.  Getting visible bleats about column number -1
is much more likely to get your attention.
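
To spell that out with simplified, hypothetical snippets (not the actual
pg_dump code): in the historical style every per-version query supplies the
column, with older branches selecting a dummy, so the reading side can
assume the column exists and a misspelled alias surfaces immediately as
libpq's "column number -1" complaints:

	i_attidentity = PQfnumber(res, "attidentity");
	tbinfo->attidentity[j] = *(PQgetvalue(res, j, i_attidentity));

whereas in the attidentity style only the newest query selects the column,
and the reader silently falls back to a default when PQfnumber returns -1,
which hides a misspelled alias just as well as a missing column:

	i_attidentity = PQfnumber(res, "attidentity");
	tbinfo->attidentity[j] = (i_attidentity >= 0) ?
		*(PQgetvalue(res, j, i_attidentity)) : '\0';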

So I'm thinking that the attidentity code is just wrong, and you should
change that too while you're at it.

regards, tom lane



Re: Fast default stuff versus pg_upgrade

2018-06-21 Thread Andrew Dunstan




On 06/21/2018 12:08 PM, Tom Lane wrote:

I wrote:

We could do without the unrelated whitespace changes in dumpDefaultACL,
too (or did pgindent introduce those?  Seems surprising ... oh, looks
like "type" has somehow snuck into typedefs.list.  Time for another
blacklist entry.)

Hmm .. actually, "type" already is in pgindent's blacklist.  Are you
using an up-to-date pgindent'ing setup?






I am now. This was a holdover from before I updated.

cheers

andrew

--
Andrew Dunstan  https://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: Fast default stuff versus pg_upgrade

2018-06-21 Thread Andrew Dunstan




On 06/21/2018 01:18 PM, Tom Lane wrote:

Andrew Dunstan  writes:

On 06/21/2018 12:39 PM, Tom Lane wrote:

Andres Freund  writes:

On June 21, 2018 9:04:28 AM PDT, Tom Lane  wrote:
This isn't really ready to go.  One clear problem is that you broke
pg_dump'ing from any pre-v11 version, because you did not add suitable
null outputs to the pre-v11 query variants in getTableAttrs.
I followed the pattern used for attidentity. But why will it spit out
warnings? It doesn't ask for the missing stuff from an older version.

Oh, I see.  Well, the short answer is that that's not the style we use
in pg_dump, and the attidentity code is inappropriate/wrong too IMO.
When something has been done one way a hundred times before, thinking
you're too smart for that and you'll do it some other way does not lend
itself to code clarity or reviewability.

I might be OK with a patch that converts *all* of pg_dump's cross-version
difference handling code to depend on PQfnumber silently returning -1
rather than failing, but I don't want to see it done like that in just
one or two places.





I don't mind changing it. But please note that I wouldn't have done it 
that way unless there were a precedent. I fully expected to add dummy 
values to all the previous queries, but when I couldn't find attidentity 
in them to put them next to I followed that example.


I don't think it's at all a bad idea, TBH. It means you only have to add 
one query per version rather than having to adjust all of them.


cheers

andrew

--
Andrew Dunstan  https://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: Fast default stuff versus pg_upgrade

2018-06-21 Thread Tom Lane
Andrew Dunstan  writes:
> On 06/21/2018 12:39 PM, Tom Lane wrote:
>> Andres Freund  writes:
> On June 21, 2018 9:04:28 AM PDT, Tom Lane  wrote:
> This isn't really ready to go.  One clear problem is that you broke
> pg_dump'ing from any pre-v11 version, because you did not add suitable
> null outputs to the pre-v11 query variants in getTableAttrs.

> I followed the pattern used for attidentity. But why will it spit out 
> warnings? It doesn't ask for the missing stuff from an older version.

Oh, I see.  Well, the short answer is that that's not the style we use
in pg_dump, and the attidentity code is inappropriate/wrong too IMO.
When something has been done one way a hundred times before, thinking
you're too smart for that and you'll do it some other way does not lend
itself to code clarity or reviewability.

I might be OK with a patch that converts *all* of pg_dump's cross-version
difference handling code to depend on PQfnumber silently returning -1
rather than failing, but I don't want to see it done like that in just
one or two places.

regards, tom lane



Re: Fast default stuff versus pg_upgrade

2018-06-21 Thread Andrew Dunstan




On 06/21/2018 12:39 PM, Tom Lane wrote:

Andres Freund  writes:

On June 21, 2018 9:04:28 AM PDT, Tom Lane  wrote:

This isn't really ready to go.  One clear problem is that you broke
pg_dump'ing from any pre-v11 version, because you did not add suitable
null outputs to the pre-v11 query variants in getTableAttrs.

Thought the same for a bit - think it's OK though, because there's a check for 
PQfnumber being bigger than 0...

It won't actually crash, but it will spit out lots of ugly warnings.





I followed the pattern used for attidentity. But why will it spit out 
warnings? It doesn't ask for the missing stuff from an older version.


Incidentally, I just checked this by applying the patch and then running 
through the buildfarm's cross version upgrade check. All was kosher. 
Upgrade from all live branches to the patched master went without a hitch.


cheers

andrew

--
Andrew Dunstan  https://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: Fast default stuff versus pg_upgrade

2018-06-21 Thread Tom Lane
Andres Freund  writes:
> On June 21, 2018 9:04:28 AM PDT, Tom Lane  wrote:
>> This isn't really ready to go.  One clear problem is that you broke
>> pg_dump'ing from any pre-v11 version, because you did not add suitable
>> null outputs to the pre-v11 query variants in getTableAttrs.

> Thought the same for a bit - think it's OK though, because there's a check for 
> PQfnumber being bigger than 0...

It won't actually crash, but it will spit out lots of ugly warnings.

regards, tom lane



Re: Fast default stuff versus pg_upgrade

2018-06-21 Thread Andres Freund



On June 21, 2018 9:04:28 AM PDT, Tom Lane  wrote:
>Andrew Dunstan  writes:
>> Patch after pgindent attached. This will involve a catversion bump
>since 
>> we're adding a new function.
>
>This isn't really ready to go.  One clear problem is that you broke
>pg_dump'ing from any pre-v11 version, because you did not add suitable
>null outputs to the pre-v11 query variants in getTableAttrs.

Thought the same for a bit - think it's OK though, because there's a check for 
PQfnumber being bigger than 0...

Andres
-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.



Re: Fast default stuff versus pg_upgrade

2018-06-21 Thread Tom Lane
I wrote:
> We could do without the unrelated whitespace changes in dumpDefaultACL,
> too (or did pgindent introduce those?  Seems surprising ... oh, looks
> like "type" has somehow snuck into typedefs.list.  Time for another
> blacklist entry.)

Hmm .. actually, "type" already is in pgindent's blacklist.  Are you
using an up-to-date pgindent'ing setup?

regards, tom lane



Re: Fast default stuff versus pg_upgrade

2018-06-21 Thread Tom Lane
Andrew Dunstan  writes:
> Patch after pgindent attached. This will involve a catversion bump since 
> we're adding a new function.

This isn't really ready to go.  One clear problem is that you broke
pg_dump'ing from any pre-v11 version, because you did not add suitable
null outputs to the pre-v11 query variants in getTableAttrs.

Personally, I'd have made the pg_dump changes simpler and cheaper by
storing only one new value per column not two.  You could have just
stored attmissingval, or if you want to be paranoid you could do

case when atthasmissing then attmissingval else null end

We could do without the unrelated whitespace changes in dumpDefaultACL,
too (or did pgindent introduce those?  Seems surprising ... oh, looks
like "type" has somehow snuck into typedefs.list.  Time for another
blacklist entry.)

In dumpTableSchema, I'd suggest emitting the numeric table OID, rather
than hacking around with regclass.
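
Something along these lines, sketched against the dumpTableSchema hunk in
your patch (hypothetical, untested):

	appendPQExpBufferStr(q, "\n-- set missing value.\n");
	appendPQExpBuffer(q,
					  "SELECT pg_catalog.binary_upgrade_set_missing_value("
					  "%u::pg_catalog.regclass, ",
					  tbinfo->dobj.catId.oid);
	appendStringLiteralAH(q, tbinfo->attnames[j], fout);
	appendPQExpBufferStr(q, ", ");
	appendStringLiteralAH(q, tbinfo->attmissingval[j], fout);
	appendPQExpBufferStr(q, ");\n\n");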

regards, tom lane



Re: Fast default stuff versus pg_upgrade

2018-06-21 Thread Andrew Dunstan



On 06/20/2018 09:04 PM, Andres Freund wrote:


Probably couldn't hurt to run the changed files through
pgindent and fix the damage...





Looks reasonable to me, but I've not tested it.



Thanks

Patch after pgindent attached. This will involve a catversion bump since 
we're adding a new function.


cheers

andrew


--
Andrew Dunstan  https://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

diff --git a/src/backend/utils/adt/pg_upgrade_support.c b/src/backend/utils/adt/pg_upgrade_support.c
index 0c54b02..7a2f785 100644
--- a/src/backend/utils/adt/pg_upgrade_support.c
+++ b/src/backend/utils/adt/pg_upgrade_support.c
@@ -11,13 +11,20 @@
 
 #include "postgres.h"
 
+#include "access/heapam.h"
+#include "access/htup_details.h"
 #include "catalog/binary_upgrade.h"
+#include "catalog/indexing.h"
 #include "catalog/namespace.h"
 #include "catalog/pg_type.h"
 #include "commands/extension.h"
 #include "miscadmin.h"
 #include "utils/array.h"
 #include "utils/builtins.h"
+#include "utils/fmgroids.h"
+#include "utils/lsyscache.h"
+#include "utils/rel.h"
+#include "utils/syscache.h"
 
 
 #define CHECK_IS_BINARY_UPGRADE	\
@@ -192,3 +199,63 @@ binary_upgrade_set_record_init_privs(PG_FUNCTION_ARGS)
 
 	PG_RETURN_VOID();
 }
+
+Datum
+binary_upgrade_set_missing_value(PG_FUNCTION_ARGS)
+{
+	Oid			table_id = PG_GETARG_OID(0);
+	text	   *attname = PG_GETARG_TEXT_P(1);
+	text	   *value = PG_GETARG_TEXT_P(2);
+	Datum		valuesAtt[Natts_pg_attribute];
+	bool		nullsAtt[Natts_pg_attribute];
+	bool		replacesAtt[Natts_pg_attribute];
+	Datum		missingval;
+	Form_pg_attribute attStruct;
+	Relation	attrrel,
+				tablerel;
+	HeapTuple	atttup,
+				newtup;
+	char	   *cattname = text_to_cstring(attname);
+	char	   *cvalue = text_to_cstring(value);
+
+	CHECK_IS_BINARY_UPGRADE;
+
+	/* lock the table the attribute belongs to */
+	tablerel = heap_open(table_id, AccessExclusiveLock);
+
+	/* Lock the attribute row and get the data */
+	attrrel = heap_open(AttributeRelationId, RowExclusiveLock);
+	atttup = SearchSysCacheAttName(table_id, cattname);
+	if (!HeapTupleIsValid(atttup))
+		elog(ERROR, "cache lookup failed for attribute %s of relation %u",
+			 cattname, table_id);
+	attStruct = (Form_pg_attribute) GETSTRUCT(atttup);
+
+	/* get an array value from the value string */
+	missingval = OidFunctionCall3(
+  F_ARRAY_IN,
+  CStringGetDatum(cvalue),
+  ObjectIdGetDatum(attStruct->atttypid),
+  Int32GetDatum(attStruct->atttypmod));
+
+	/* update the tuple - set atthasmissing and attmissingval */
+	MemSet(valuesAtt, 0, sizeof(valuesAtt));
+	MemSet(nullsAtt, false, sizeof(nullsAtt));
+	MemSet(replacesAtt, false, sizeof(replacesAtt));
+
+	valuesAtt[Anum_pg_attribute_atthasmissing - 1] = BoolGetDatum(true);
+	replacesAtt[Anum_pg_attribute_atthasmissing - 1] = true;
+	valuesAtt[Anum_pg_attribute_attmissingval - 1] = missingval;
+	replacesAtt[Anum_pg_attribute_attmissingval - 1] = true;
+
+	newtup = heap_modify_tuple(atttup, RelationGetDescr(attrrel),
+			   valuesAtt, nullsAtt, replacesAtt);
+	CatalogTupleUpdate(attrrel, &newtup->t_self, newtup);
+
+	/* clean up */
+	ReleaseSysCache(atttup);
+	heap_close(attrrel, RowExclusiveLock);
+	heap_close(tablerel, AccessExclusiveLock);
+
+	PG_RETURN_VOID();
+}
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index ea2f022..e06ed60 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -8095,6 +8095,7 @@ getTableAttrs(Archive *fout, TableInfo *tblinfo, int numTables)
 	int			i_typstorage;
 	int			i_attnotnull;
 	int			i_atthasdef;
+	int			i_atthasmissing;
 	int			i_attidentity;
 	int			i_attisdropped;
 	int			i_attlen;
@@ -8103,6 +8104,7 @@ getTableAttrs(Archive *fout, TableInfo *tblinfo, int numTables)
 	int			i_attoptions;
 	int			i_attcollation;
 	int			i_attfdwoptions;
+	int			i_attmissingval;
 	PGresult   *res;
 	int			ntups;
 	bool		hasdefaults;
@@ -8132,7 +8134,33 @@ getTableAttrs(Archive *fout, TableInfo *tblinfo, int numTables)
 
 		resetPQExpBuffer(q);
 
-		if (fout->remoteVersion >= 10)
+		if (fout->remoteVersion >= 11)
+		{
+			/* atthasmissing and attmissingval are new in 11 */
+			appendPQExpBuffer(q, "SELECT a.attnum, a.attname, a.atttypmod, "
+			  "a.attstattarget, a.attstorage, t.typstorage, "
+			  "a.attnotnull, a.atthasdef, a.attisdropped, "
+			  "a.attlen, a.attalign, a.attislocal, "
+			  "pg_catalog.format_type(t.oid,a.atttypmod) AS atttypname, "
+			  "array_to_string(a.attoptions, ', ') AS attoptions, "
+			  "CASE WHEN a.attcollation <> t.typcollation "
+			  "THEN a.attcollation ELSE 0 END AS attcollation, "
+			  "a.atthasmissing, a.attidentity, "
+			  "pg_catalog.array_to_string(ARRAY("
+			  "SELECT pg_catalog.quote_ident(option_name) || "
+			  "' ' || pg_catalog.quote_literal(option_value) "
+			  "FROM pg_catalog.pg_options_to_table(attfdwoptions) "
+			  "ORDER BY option_name"
+			  

Re: Fast default stuff versus pg_upgrade

2018-06-20 Thread Andres Freund
Hi,

On 2018-06-20 20:53:34 -0400, Andrew Dunstan wrote:
> This version adds a lock on the table owning the attribute.

Cool.
>  
>   /*
> +  * in binary upgrade mode, update the catalog with any missing 
> values
> +  * that might be present.
> +  */
> + if (dopt->binary_upgrade)
> + {
> + for (j = 0; j < tbinfo->numatts; j++)
> + {
> + if (tbinfo->atthasmissing[j])
> + {
> + appendPQExpBufferStr(q, "\n-- set 
> missing value.\n");
> + appendPQExpBufferStr(q,
> + 
>  "SELECT pg_catalog.binary_upgrade_set_missing_value(");
> + appendStringLiteralAH(q,qualrelname, 
> fout);

missing space.  Probably couldn't hurt to run the changed files through
pgindent and fix the damage...

> + appendPQExpBufferStr(q, 
> "::pg_catalog.regclass,");
> + appendStringLiteralAH(q, 
> tbinfo->attnames[j], fout);
> + appendPQExpBufferStr(q,",");

same.


> + appendStringLiteralAH(q, 
> tbinfo->attmissingval[j], fout);
> + appendPQExpBufferStr(q,");\n\n");

same.


Looks reasonable to me, but I've not tested it.

Greetings,

Andres Freund



Re: Fast default stuff versus pg_upgrade

2018-06-20 Thread Andrew Dunstan



On 06/20/2018 01:46 PM, Andrew Dunstan wrote:




Attached deals with all those issues except locking.




This version adds a lock on the table owning the attribute.

cheers

andrew

--
Andrew Dunstan  https://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

diff --git a/src/backend/utils/adt/pg_upgrade_support.c b/src/backend/utils/adt/pg_upgrade_support.c
index 0c54b02..2480fa0 100644
--- a/src/backend/utils/adt/pg_upgrade_support.c
+++ b/src/backend/utils/adt/pg_upgrade_support.c
@@ -11,13 +11,20 @@
 
 #include "postgres.h"
 
+#include "access/heapam.h"
+#include "access/htup_details.h"
 #include "catalog/binary_upgrade.h"
+#include "catalog/indexing.h"
 #include "catalog/namespace.h"
 #include "catalog/pg_type.h"
 #include "commands/extension.h"
 #include "miscadmin.h"
 #include "utils/array.h"
 #include "utils/builtins.h"
+#include "utils/fmgroids.h"
+#include "utils/lsyscache.h"
+#include "utils/rel.h"
+#include "utils/syscache.h"
 
 
 #define CHECK_IS_BINARY_UPGRADE	\
@@ -192,3 +199,63 @@ binary_upgrade_set_record_init_privs(PG_FUNCTION_ARGS)
 
 	PG_RETURN_VOID();
 }
+
+Datum
+binary_upgrade_set_missing_value(PG_FUNCTION_ARGS)
+{
+	Oid		table_id = PG_GETARG_OID(0);
+	text	   *attname = PG_GETARG_TEXT_P(1);
+	text	   *value = PG_GETARG_TEXT_P(2);
+	Datum		valuesAtt[Natts_pg_attribute];
+	bool		nullsAtt[Natts_pg_attribute];
+	bool		replacesAtt[Natts_pg_attribute];
+	Datum		missingval;
+	Form_pg_attribute attStruct;
+	Relation	attrrel, tablerel;
+	HeapTuple	atttup, newtup;
+	char	   *cattname = text_to_cstring(attname);
+	char	   *cvalue = text_to_cstring(value);
+
+	CHECK_IS_BINARY_UPGRADE;
+
+	/* lock the table the attribute belongs to */
+	tablerel = heap_open(table_id, AccessExclusiveLock);
+
+	/* Lock the attribute row and get the data */
+	attrrel = heap_open(AttributeRelationId, RowExclusiveLock);
+	atttup = SearchSysCacheAttName(table_id, cattname);
+	if (!HeapTupleIsValid(atttup))
+		elog(ERROR, "cache lookup failed for attribute %s of relation %u",
+			 cattname, table_id);
+	attStruct = (Form_pg_attribute) GETSTRUCT(atttup);
+
+	/* get an array value from the value string */
+
+	/* find the io info for an arbitrary array type to get array_in Oid */
+	missingval = OidFunctionCall3(
+		F_ARRAY_IN,
+		CStringGetDatum(cvalue),
+		ObjectIdGetDatum(attStruct->atttypid),
+		Int32GetDatum(attStruct->atttypmod));
+
+	/* update the tuple - set atthasmissing and attmissingval */
+	MemSet(valuesAtt, 0, sizeof(valuesAtt));
+	MemSet(nullsAtt, false, sizeof(nullsAtt));
+	MemSet(replacesAtt, false, sizeof(replacesAtt));
+
+	valuesAtt[Anum_pg_attribute_atthasmissing - 1] = BoolGetDatum(true);
+	replacesAtt[Anum_pg_attribute_atthasmissing - 1] = true;
+	valuesAtt[Anum_pg_attribute_attmissingval - 1] = missingval;
+	replacesAtt[Anum_pg_attribute_attmissingval - 1] = true;
+
+	newtup = heap_modify_tuple(atttup, RelationGetDescr(attrrel),
+			   valuesAtt, nullsAtt, replacesAtt);
+	CatalogTupleUpdate(attrrel, &newtup->t_self, newtup);
+
+	/* clean up */
+	ReleaseSysCache(atttup);
+	heap_close(attrrel, RowExclusiveLock);
+	heap_close(tablerel, AccessExclusiveLock);
+
+	PG_RETURN_VOID();
+}
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index ea2f022..22be3c2 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -8095,6 +8095,7 @@ getTableAttrs(Archive *fout, TableInfo *tblinfo, int numTables)
 	int			i_typstorage;
 	int			i_attnotnull;
 	int			i_atthasdef;
+	int			i_atthasmissing;
 	int			i_attidentity;
 	int			i_attisdropped;
 	int			i_attlen;
@@ -8103,6 +8104,7 @@ getTableAttrs(Archive *fout, TableInfo *tblinfo, int numTables)
 	int			i_attoptions;
 	int			i_attcollation;
 	int			i_attfdwoptions;
+	int			i_attmissingval;
 	PGresult   *res;
 	int			ntups;
 	bool		hasdefaults;
@@ -8132,7 +8134,33 @@ getTableAttrs(Archive *fout, TableInfo *tblinfo, int numTables)
 
 		resetPQExpBuffer(q);
 
-		if (fout->remoteVersion >= 10)
+		if (fout->remoteVersion >= 11)
+		{
+			/* atthasmissing and attmissingval are new in 11 */
+			appendPQExpBuffer(q, "SELECT a.attnum, a.attname, a.atttypmod, "
+			  "a.attstattarget, a.attstorage, t.typstorage, "
+			  "a.attnotnull, a.atthasdef, a.attisdropped, "
+			  "a.attlen, a.attalign, a.attislocal, "
+			  "pg_catalog.format_type(t.oid,a.atttypmod) AS atttypname, "
+			  "array_to_string(a.attoptions, ', ') AS attoptions, "
+			  "CASE WHEN a.attcollation <> t.typcollation "
+			  "THEN a.attcollation ELSE 0 END AS attcollation, "
+			  "a.atthasmissing, a.attidentity, "
+			  "pg_catalog.array_to_string(ARRAY("
+			  "SELECT pg_catalog.quote_ident(option_name) || "
+			  "' ' || pg_catalog.quote_literal(option_value) "
+			  "FROM pg_catalog.pg_options_to_table(attfdwoptions) "
+			  "ORDER BY option_name"
+			  "), E',\n') AS attfdwoptions ,"
+			  "a.attmissingval "
+			  "FROM 

Re: Fast default stuff versus pg_upgrade

2018-06-20 Thread Andrew Dunstan



On 06/20/2018 12:58 PM, Andres Freund wrote:

Hi,

Just a quick scan-through review:

On 2018-06-20 12:51:07 -0400, Andrew Dunstan wrote:

diff --git a/src/backend/utils/adt/pg_upgrade_support.c 
b/src/backend/utils/adt/pg_upgrade_support.c
index 0c54b02..2666ab2 100644
--- a/src/backend/utils/adt/pg_upgrade_support.c
+++ b/src/backend/utils/adt/pg_upgrade_support.c
@@ -11,14 +11,19 @@
  
  #include "postgres.h"
  
+#include "access/heapam.h"

+#include "access/htup_details.h"
  #include "catalog/binary_upgrade.h"
+#include "catalog/indexing.h"
  #include "catalog/namespace.h"
  #include "catalog/pg_type.h"
  #include "commands/extension.h"
  #include "miscadmin.h"
  #include "utils/array.h"
  #include "utils/builtins.h"
-
+#include "utils/lsyscache.h"
+#include "utils/rel.h"
+#include "utils/syscache.h"


Seems to delete a newline.



Will fix





  #define CHECK_IS_BINARY_UPGRADE   
\
  do {  
\
@@ -192,3 +197,66 @@ binary_upgrade_set_record_init_privs(PG_FUNCTION_ARGS)
  
  	PG_RETURN_VOID();

  }
+
+Datum
+binary_upgrade_set_missing_value(PG_FUNCTION_ARGS)
+{
+   Oid table_id = PG_GETARG_OID(0);
+   text	   *attname = PG_GETARG_TEXT_P(1);
+   text	   *value = PG_GETARG_TEXT_P(2);
+   Datum		valuesAtt[Natts_pg_attribute];
+   bool		nullsAtt[Natts_pg_attribute];
+   bool		replacesAtt[Natts_pg_attribute];
+   Datum		missingval;
+   Form_pg_attribute attStruct;
+   Relation	attrrel;
+   HeapTuple	atttup, newtup;
+   Oid			inputfunc, inputparam;
+   char	   *cattname = text_to_cstring(attname);
+   char	   *cvalue = text_to_cstring(value);
+
+   CHECK_IS_BINARY_UPGRADE;
+
+   /* Lock the attribute row and get the data */
+   attrrel = heap_open(AttributeRelationId, RowExclusiveLock);

Maybe I'm being paranoid, but I feel like the relation should also be
locked here.



I wondered about that. Other opinions?






+   atttup = SearchSysCacheAttName(table_id,cattname);

Missing space before 'cattname'.



Will fix.




+   if (!HeapTupleIsValid(atttup))
+   elog(ERROR, "cache lookup failed for attribute %s of relation 
%u",
+cattname, table_id);
+   attStruct = (Form_pg_attribute) GETSTRUCT(atttup);
+
+   /* get an array value from the value string */
+
+   /* find the io info for an arbitrary array type to get array_in Oid */
+   getTypeInputInfo(INT4ARRAYOID, &inputfunc, &inputparam);

Maybe I'm confused, but why INT4ARRAYOID? If you're just looking for the
oid of array_in, why not use F_ARRAY_IN?



Brain fade? Will fix.





+   missingval = OidFunctionCall3(
+   inputfunc, /* array_in */
+   CStringGetDatum(cvalue),
+   ObjectIdGetDatum(attStruct->atttypid),
+   Int32GetDatum(attStruct->atttypmod));
+
+   /* update the tuple - set atthasmissing and attmissingval */
+   MemSet(valuesAtt, 0, sizeof(valuesAtt));
+   MemSet(nullsAtt, false, sizeof(nullsAtt));
+   MemSet(replacesAtt, false, sizeof(replacesAtt));
+
+   valuesAtt[Anum_pg_attribute_atthasmissing - 1] = BoolGetDatum(true);
+   replacesAtt[Anum_pg_attribute_atthasmissing - 1] = true;
+   valuesAtt[Anum_pg_attribute_attmissingval - 1] = missingval;
+   replacesAtt[Anum_pg_attribute_attmissingval - 1] = true;
+
+   newtup = heap_modify_tuple(atttup, RelationGetDescr(attrrel),
+  valuesAtt, nullsAtt, 
replacesAtt);
+   CatalogTupleUpdate(attrrel, &newtup->t_self, newtup);
+   /* clean up */
+   heap_freetuple(newtup);
+   ReleaseSysCache(atttup);
+   pfree(cattname);
+   pfree(cvalue);
+   pfree(DatumGetPointer(missingval));

Is this worth bothering with (except the ReleaseSysCache)? We'll clean
up via memory context soon, no?





Done from an abundance of caution. I'll remove them.


Thanks for the quick review.

Attached deals with all those issues except locking.

cheers

andrew

--
Andrew Dunstan  https://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

diff --git a/src/backend/utils/adt/pg_upgrade_support.c b/src/backend/utils/adt/pg_upgrade_support.c
index 0c54b02..aed85a3 100644
--- a/src/backend/utils/adt/pg_upgrade_support.c
+++ b/src/backend/utils/adt/pg_upgrade_support.c
@@ -11,13 +11,20 @@
 
 #include "postgres.h"
 
+#include "access/heapam.h"
+#include "access/htup_details.h"
 #include "catalog/binary_upgrade.h"
+#include "catalog/indexing.h"
 #include "catalog/namespace.h"
 #include "catalog/pg_type.h"
 #include "commands/extension.h"
 #include "miscadmin.h"
 #include "utils/array.h"
 #include "utils/builtins.h"
+#include "utils/fmgroids.h"
+#include "utils/lsyscache.h"
+#include "utils/rel.h"
+#include 

Re: Fast default stuff versus pg_upgrade

2018-06-20 Thread Andres Freund


Hi,

Just a quick scan-through review:

On 2018-06-20 12:51:07 -0400, Andrew Dunstan wrote:
> diff --git a/src/backend/utils/adt/pg_upgrade_support.c 
> b/src/backend/utils/adt/pg_upgrade_support.c
> index 0c54b02..2666ab2 100644
> --- a/src/backend/utils/adt/pg_upgrade_support.c
> +++ b/src/backend/utils/adt/pg_upgrade_support.c
> @@ -11,14 +11,19 @@
>  
>  #include "postgres.h"
>  
> +#include "access/heapam.h"
> +#include "access/htup_details.h"
>  #include "catalog/binary_upgrade.h"
> +#include "catalog/indexing.h"
>  #include "catalog/namespace.h"
>  #include "catalog/pg_type.h"
>  #include "commands/extension.h"
>  #include "miscadmin.h"
>  #include "utils/array.h"
>  #include "utils/builtins.h"
> -
> +#include "utils/lsyscache.h"
> +#include "utils/rel.h"
> +#include "utils/syscache.h"
>

Seems to delete a newline.


>  #define CHECK_IS_BINARY_UPGRADE  
> \
>  do { 
> \
> @@ -192,3 +197,66 @@ binary_upgrade_set_record_init_privs(PG_FUNCTION_ARGS)
>  
>   PG_RETURN_VOID();
>  }
> +
> +Datum
> +binary_upgrade_set_missing_value(PG_FUNCTION_ARGS)
> +{
> + Oid table_id = PG_GETARG_OID(0);
> + text	   *attname = PG_GETARG_TEXT_P(1);
> + text	   *value = PG_GETARG_TEXT_P(2);
> + Datum		valuesAtt[Natts_pg_attribute];
> + bool		nullsAtt[Natts_pg_attribute];
> + bool		replacesAtt[Natts_pg_attribute];
> + Datum		missingval;
> + Form_pg_attribute attStruct;
> + Relation	attrrel;
> + HeapTuple	atttup, newtup;
> + Oid			inputfunc, inputparam;
> + char	   *cattname = text_to_cstring(attname);
> + char	   *cvalue = text_to_cstring(value);
> +
> + CHECK_IS_BINARY_UPGRADE;
> +
> + /* Lock the attribute row and get the data */
> + attrrel = heap_open(AttributeRelationId, RowExclusiveLock);

Maybe I'm being paranoid, but I feel like the relation should also be
locked here.


> + atttup = SearchSysCacheAttName(table_id,cattname);

Missing space before 'cattname'.


> + if (!HeapTupleIsValid(atttup))
> + elog(ERROR, "cache lookup failed for attribute %s of relation 
> %u",
> +  cattname, table_id);
> + attStruct = (Form_pg_attribute) GETSTRUCT(atttup);
> +
> + /* get an array value from the value string */
> +
> + /* find the io info for an arbitrary array type to get array_in Oid */
> + getTypeInputInfo(INT4ARRAYOID, &inputfunc, &inputparam);

Maybe I'm confused, but why INT4ARRAYOID? If you're just looking for the
oid of array_in, why not use F_ARRAY_IN?


> + missingval = OidFunctionCall3(
> + inputfunc, /* array_in */
> + CStringGetDatum(cvalue),
> + ObjectIdGetDatum(attStruct->atttypid),
> + Int32GetDatum(attStruct->atttypmod));
> +
> + /* update the tuple - set atthasmissing and attmissingval */
> + MemSet(valuesAtt, 0, sizeof(valuesAtt));
> + MemSet(nullsAtt, false, sizeof(nullsAtt));
> + MemSet(replacesAtt, false, sizeof(replacesAtt));
> +
> + valuesAtt[Anum_pg_attribute_atthasmissing - 1] = BoolGetDatum(true);
> + replacesAtt[Anum_pg_attribute_atthasmissing - 1] = true;
> + valuesAtt[Anum_pg_attribute_attmissingval - 1] = missingval;
> + replacesAtt[Anum_pg_attribute_attmissingval - 1] = true;
> +
> + newtup = heap_modify_tuple(atttup, RelationGetDescr(attrrel),
> +valuesAtt, nullsAtt, 
> replacesAtt);
> + CatalogTupleUpdate(attrrel, &newtup->t_self, newtup);

> + /* clean up */
> + heap_freetuple(newtup); 
> + ReleaseSysCache(atttup);
> + pfree(cattname);
> + pfree(cvalue);
> + pfree(DatumGetPointer(missingval));

Is this worth bothering with (except the ReleaseSysCache)? We'll clean
up via memory context soon, no?

Greetings,

Andres Freund



Re: Fast default stuff versus pg_upgrade

2018-06-20 Thread Andrew Dunstan



On 06/19/2018 10:46 PM, Andres Freund wrote:

On 2018-06-19 22:41:26 -0400, Andrew Dunstan wrote:

This unfortunately crashes and burns if we use DirectFunctionCall3 to call
array_in, because it uses fn_extra. There is the CallerFInfoFunctionCall
stuff, but it only has 1 and 2 arg variants, and array_in takes 3. In
retrospect we should probably have added a 3 arg form - quite a few input
functions take 3 args. Anything else is likely to be rather uglier.

Attaching the failing patch. I'll attack this again in the morning.

Why don't you just use OidFunctionCall3? Or simply an explicit
fmgr_info(), InitFunctionCallInfoData(), FunctionCallInvoke() combo?






Thanks for that. I should not code late at night.

Here's a version that works in my testing, with Tom's patch (the one making 
sure there's a missing value to migrate) applied. Thanks to Alvaro for some 
useful criticism - any errors are of course my responsibility.



cheers

andrew

--
Andrew Dunstan  https://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

diff --git a/src/backend/utils/adt/pg_upgrade_support.c b/src/backend/utils/adt/pg_upgrade_support.c
index 0c54b02..2666ab2 100644
--- a/src/backend/utils/adt/pg_upgrade_support.c
+++ b/src/backend/utils/adt/pg_upgrade_support.c
@@ -11,14 +11,19 @@
 
 #include "postgres.h"
 
+#include "access/heapam.h"
+#include "access/htup_details.h"
 #include "catalog/binary_upgrade.h"
+#include "catalog/indexing.h"
 #include "catalog/namespace.h"
 #include "catalog/pg_type.h"
 #include "commands/extension.h"
 #include "miscadmin.h"
 #include "utils/array.h"
 #include "utils/builtins.h"
-
+#include "utils/lsyscache.h"
+#include "utils/rel.h"
+#include "utils/syscache.h"
 
 #define CHECK_IS_BINARY_UPGRADE	\
 do {			\
@@ -192,3 +197,66 @@ binary_upgrade_set_record_init_privs(PG_FUNCTION_ARGS)
 
 	PG_RETURN_VOID();
 }
+
+Datum
+binary_upgrade_set_missing_value(PG_FUNCTION_ARGS)
+{
+	Oid		table_id = PG_GETARG_OID(0);
+	text	   *attname = PG_GETARG_TEXT_P(1);
+	text	   *value = PG_GETARG_TEXT_P(2);
+	Datum		valuesAtt[Natts_pg_attribute];
+	bool		nullsAtt[Natts_pg_attribute];
+	bool		replacesAtt[Natts_pg_attribute];
+	Datum		missingval;
+	Form_pg_attribute attStruct;
+	Relation	attrrel;
+	HeapTuple	atttup, newtup;
+	Oid			inputfunc, inputparam;
+	char	   *cattname = text_to_cstring(attname);
+	char	   *cvalue = text_to_cstring(value);
+
+	CHECK_IS_BINARY_UPGRADE;
+
+	/* Lock the attribute row and get the data */
+	attrrel = heap_open(AttributeRelationId, RowExclusiveLock);
+	atttup = SearchSysCacheAttName(table_id,cattname);
+	if (!HeapTupleIsValid(atttup))
+		elog(ERROR, "cache lookup failed for attribute %s of relation %u",
+			 cattname, table_id);
+	attStruct = (Form_pg_attribute) GETSTRUCT(atttup);
+
+	/* get an array value from the value string */
+
+	/* find the io info for an arbitrary array type to get array_in Oid */
+	getTypeInputInfo(INT4ARRAYOID, &inputfunc, &inputparam);
+	missingval = OidFunctionCall3(
+		inputfunc, /* array_in */
+		CStringGetDatum(cvalue),
+		ObjectIdGetDatum(attStruct->atttypid),
+		Int32GetDatum(attStruct->atttypmod));
+
+	/* update the tuple - set atthasmissing and attmissingval */
+	MemSet(valuesAtt, 0, sizeof(valuesAtt));
+	MemSet(nullsAtt, false, sizeof(nullsAtt));
+	MemSet(replacesAtt, false, sizeof(replacesAtt));
+
+	valuesAtt[Anum_pg_attribute_atthasmissing - 1] = BoolGetDatum(true);
+	replacesAtt[Anum_pg_attribute_atthasmissing - 1] = true;
+	valuesAtt[Anum_pg_attribute_attmissingval - 1] = missingval;
+	replacesAtt[Anum_pg_attribute_attmissingval - 1] = true;
+
+	newtup = heap_modify_tuple(atttup, RelationGetDescr(attrrel),
+			   valuesAtt, nullsAtt, replacesAtt);
+	CatalogTupleUpdate(attrrel, &newtup->t_self, newtup);
+
+	/* clean up */
+	heap_freetuple(newtup); 
+	ReleaseSysCache(atttup);
+	pfree(cattname);
+	pfree(cvalue);
+	pfree(DatumGetPointer(missingval));
+
+	heap_close(attrrel, RowExclusiveLock);
+
+	PG_RETURN_VOID();
+}
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index ea2f022..22be3c2 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -8095,6 +8095,7 @@ getTableAttrs(Archive *fout, TableInfo *tblinfo, int numTables)
 	int			i_typstorage;
 	int			i_attnotnull;
 	int			i_atthasdef;
+	int			i_atthasmissing;
 	int			i_attidentity;
 	int			i_attisdropped;
 	int			i_attlen;
@@ -8103,6 +8104,7 @@ getTableAttrs(Archive *fout, TableInfo *tblinfo, int numTables)
 	int			i_attoptions;
 	int			i_attcollation;
 	int			i_attfdwoptions;
+	int			i_attmissingval;
 	PGresult   *res;
 	int			ntups;
 	bool		hasdefaults;
@@ -8132,7 +8134,33 @@ getTableAttrs(Archive *fout, TableInfo *tblinfo, int numTables)
 
 		resetPQExpBuffer(q);
 
-		if (fout->remoteVersion >= 10)
+		if (fout->remoteVersion >= 11)
+		{
+			/* atthasmissing and attmissingval are new in 11 */
+			appendPQExpBuffer(q, "SELECT a.attnum, a.attname, a.atttypmod, "
+			  "a.attstattarget, 

Re: Fast default stuff versus pg_upgrade

2018-06-19 Thread Andres Freund
On 2018-06-19 22:41:26 -0400, Andrew Dunstan wrote:
> This unfortunately crashes and burns if we use DirectFunctionCall3 to call
> array_in, because it uses fn_extra. There is the CallerFInfoFunctionCall
> stuff, but it only has 1 and 2 arg variants, and array_in takes 3. In
> retrospect we should probably have added a 3 arg form - quite a few input
> functions take 3 args. Anything else is likely to be rather uglier.
> 
> Attaching the failing patch. I'll attack this again in the morning.

Why don't you just use OidFunctionCall3? Or simply an explicit
fmgr_info(), InitFunctionCallInfoData(), FunctionCallInvoke() combo?
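
For the explicit route, a minimal sketch against the v11 fmgr API (untested;
attStruct is from your function, and cvalue here stands for
text_to_cstring(value)):

	FmgrInfo	flinfo;
	FunctionCallInfoData fcinfo;
	Datum		missingval;

	/* give array_in a real FmgrInfo so it has an fn_extra slot to cache into */
	fmgr_info(F_ARRAY_IN, &flinfo);		/* F_ARRAY_IN is in utils/fmgroids.h */
	InitFunctionCallInfoData(fcinfo, &flinfo, 3, InvalidOid, NULL, NULL);

	fcinfo.arg[0] = CStringGetDatum(cvalue);
	fcinfo.arg[1] = ObjectIdGetDatum(attStruct->atttypid);
	fcinfo.arg[2] = Int32GetDatum(attStruct->atttypmod);
	fcinfo.argnull[0] = fcinfo.argnull[1] = fcinfo.argnull[2] = false;

	missingval = FunctionCallInvoke(&fcinfo);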

Greetings,

Andres Freund



Re: Fast default stuff versus pg_upgrade

2018-06-19 Thread Andrew Dunstan



On 06/19/2018 01:19 PM, Tom Lane wrote:

Andres Freund  writes:

On 2018-06-19 12:37:52 -0400, Tom Lane wrote:

The problem here is that that function does not exist in 11beta1.
Since adding the "incoming" function is certainly going to require
initdb, we have to be able to dump from the server as it now stands,
or we'll be cutting existing beta testers adrift.

It'd probably not be too hard to write a plpgsql replacement for it,
should it come to that. Obviously it'd be nicer to not require users to
create that, but ...

After some thought, I think it's not that hard to get the support function
to accept the anyarray string form.  I was worried about issues like
whether float8 values would restore exactly, but really that's no worse
than a dump/reload today.  Basically, the support function would just need
to extract the target attribute's type and typmod from the pg_attribute
row, then call array_in().



This unfortunately crashes and burns if we use DirectFunctionCall3 to 
call array_in, because it uses fn_extra. There is the 
CallerFInfoFunctionCall stuff, but it only has 1 and 2 arg variants, and 
array_in takes 3. In retrospect we should probably have added a 3 arg 
form - quite a few input functions take 3 args. Anything else is likely 
to be rather uglier.


Attaching the failing patch. I'll attack this again in the morning.

cheers

andrew



--
Andrew Dunstan  https://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

diff --git a/src/backend/utils/adt/pg_upgrade_support.c b/src/backend/utils/adt/pg_upgrade_support.c
index 0c54b02..d9474db 100644
--- a/src/backend/utils/adt/pg_upgrade_support.c
+++ b/src/backend/utils/adt/pg_upgrade_support.c
@@ -11,14 +11,18 @@
 
 #include "postgres.h"
 
+#include "access/heapam.h"
+#include "access/htup_details.h"
 #include "catalog/binary_upgrade.h"
+#include "catalog/indexing.h"
 #include "catalog/namespace.h"
 #include "catalog/pg_type.h"
 #include "commands/extension.h"
 #include "miscadmin.h"
 #include "utils/array.h"
 #include "utils/builtins.h"
-
+#include "utils/rel.h"
+#include "utils/syscache.h"
 
 #define CHECK_IS_BINARY_UPGRADE	\
 do {			\
@@ -192,3 +196,59 @@ binary_upgrade_set_record_init_privs(PG_FUNCTION_ARGS)
 
 	PG_RETURN_VOID();
 }
+
+Datum
+binary_upgrade_set_missing_value(PG_FUNCTION_ARGS)
+{
+	Oid		table_id = PG_GETARG_OID(0);
+	text	   *attname = PG_GETARG_TEXT_P(1);
+	text	   *value = PG_GETARG_TEXT_P(2);
+	Datum		valuesAtt[Natts_pg_attribute];
+	bool		nullsAtt[Natts_pg_attribute];
+	bool		replacesAtt[Natts_pg_attribute];
+	Datum		missingval;
+	Form_pg_attribute attStruct;
+	Relation	attrrel;
+	HeapTuple	atttup;
+
+	CHECK_IS_BINARY_UPGRADE;
+
+	/* Lock the attribute row and get the data */
+	attrrel = heap_open(AttributeRelationId, RowExclusiveLock);
+	atttup = SearchSysCacheAttName(table_id,text_to_cstring(attname));
+	if (!HeapTupleIsValid(atttup))
+		elog(ERROR, "cache lookup failed for attribute %s of relation %u",
+			 text_to_cstring(attname), table_id);
+	attStruct = (Form_pg_attribute) GETSTRUCT(atttup);
+
+
+	/* get an array value from the value string */
+	missingval = DirectFunctionCall3(array_in,
+	 CStringGetDatum(text_to_cstring(value)),
+	 ObjectIdGetDatum(attStruct->atttypid),
+	 Int32GetDatum(attStruct->atttypmod));
+
+	/* update the tuple - set atthasmissing and attmissingval */
+	MemSet(valuesAtt, 0, sizeof(valuesAtt));
+	MemSet(nullsAtt, false, sizeof(nullsAtt));
+	MemSet(replacesAtt, false, sizeof(replacesAtt));
+
+	valuesAtt[Anum_pg_attribute_atthasmissing - 1] = true;
+	replacesAtt[Anum_pg_attribute_atthasmissing - 1] = true;
+	valuesAtt[Anum_pg_attribute_attmissingval - 1] = missingval;
+	replacesAtt[Anum_pg_attribute_attmissingval - 1] = true;
+	nullsAtt[Anum_pg_attribute_attmissingval - 1] = false;
+
+	atttup = heap_modify_tuple(atttup, RelationGetDescr(attrrel),
+			   valuesAtt, nullsAtt, replacesAtt);
+	CatalogTupleUpdate(attrrel, &atttup->t_self, atttup);
+
+
+	/* clean up */
+	pfree(DatumGetPointer(missingval));
+
+	heap_close(attrrel, RowExclusiveLock);
+	heap_freetuple(atttup);
+
+	PG_RETURN_VOID();
+}
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index ea2f022..22be3c2 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -8095,6 +8095,7 @@ getTableAttrs(Archive *fout, TableInfo *tblinfo, int numTables)
 	int			i_typstorage;
 	int			i_attnotnull;
 	int			i_atthasdef;
+	int			i_atthasmissing;
 	int			i_attidentity;
 	int			i_attisdropped;
 	int			i_attlen;
@@ -8103,6 +8104,7 @@ getTableAttrs(Archive *fout, TableInfo *tblinfo, int numTables)
 	int			i_attoptions;
 	int			i_attcollation;
 	int			i_attfdwoptions;
+	int			i_attmissingval;
 	PGresult   *res;
 	int			ntups;
 	bool		hasdefaults;
@@ -8132,7 +8134,33 @@ getTableAttrs(Archive *fout, TableInfo *tblinfo, int numTables)
 
 		resetPQExpBuffer(q);
 
-		if (fout->remoteVersion >= 10)
+		if 

Re: Fast default stuff versus pg_upgrade

2018-06-19 Thread Tom Lane
"David G. Johnston"  writes:
> On Tue, Jun 19, 2018 at 9:37 AM, Tom Lane  wrote:
>> The problem here is that that function does not exist in 11beta1.
>> Since adding the "incoming" function is certainly going to require
>> initdb, we have to be able to dump from the server as it now stands,
>> or we'll be cutting existing beta testers adrift.

> I was under the impression that we don't promise to support a "v10 -> beta
> -> rc -> final" upgrade path; instead, once final is released people would
> be expected to upgrade "v10 -> v11".

Well, we don't *promise* beta testers that their beta databases will be
usable into production, but ever since pg_upgrade became available we've
tried to make it possible to pg_upgrade to the next beta or production
release.  I do not offhand recall any previous case where we failed to do
so.

regards, tom lane



Re: Fast default stuff versus pg_upgrade

2018-06-19 Thread David G. Johnston
On Tue, Jun 19, 2018 at 9:37 AM, Tom Lane  wrote:

> The problem here is that that function does not exist in 11beta1.
> Since adding the "incoming" function is certainly going to require
> initdb, we have to be able to dump from the server as it now stands,
> or we'll be cutting existing beta testers adrift.
>

I was under the impression that we don't promise to support a "v10 -> beta
-> rc -> final" upgrade path; instead, once final is released people would
be expected to upgrade "v10 -> v11".  Under that condition requiring users
to do "v10 -> beta2" instead of "beta1 -> beta2", while annoying, is well
within the realm of possibility and expectation.

David J.


Re: Fast default stuff versus pg_upgrade

2018-06-19 Thread Andrew Dunstan




On 06/19/2018 01:19 PM, Tom Lane wrote:

Andres Freund  writes:

On 2018-06-19 12:37:52 -0400, Tom Lane wrote:

The problem here is that that function does not exist in 11beta1.
Since adding the "incoming" function is certainly going to require
initdb, we have to be able to dump from the server as it now stands,
or we'll be cutting existing beta testers adrift.

It'd probably not be too hard to write a plpgsql replacement for it,
should it come to that. Obviously it'd be nicer to not require users to
create that, but ...

After some thought, I think it's not that hard to get the support function
to accept the anyarray string form.  I was worried about issues like
whether float8 values would restore exactly, but really that's no worse
than a dump/reload today.  Basically, the support function would just need
to extract the target attribute's type and typmod from the pg_attribute
row, then call array_in().

I wonder though whether there are any interesting corner cases along
this line:

1. Create a column with a fast default.

2. Sometime later, alter the column so that the fast default value
is no longer a legal value.  If the fast default isn't in active use
in the table, the ALTER would go through; but if it does not remove
the attmissingval entry, then ...

3. Subsequently, pg_upgrade fails when the support function tries to
pass the attmissingval entry through the type input function.

The kind of case where this might fail is reducing the allowed
max len (typmod) for a varchar column.  I think ALTER TABLE is
smart enough to not rewrite the table for that, so that there
wouldn't be anything causing the fast default to get removed.





My experimentation showed this causing a rewrite. I think it only skips 
the rewrite if you make the allowed length greater, not smaller.
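
A minimal sketch of the scenario, using made-up table and column names;
whether the final ALTER here rewrites the table (and so clears the stored
missing value) is exactly the point in question:

-- hypothetical table; the pre-existing row is what makes the fast default matter
CREATE TABLE fd_corner (id int);
INSERT INTO fd_corner VALUES (1);

-- adds the column without rewriting the table; attmissingval gets populated
ALTER TABLE fd_corner ADD COLUMN label varchar(10) DEFAULT 'abcde';

SELECT pg_relation_filenode('fd_corner');    -- note the filenode

-- leave no live row, and note that dropping the default does not
-- remove the stored missing value
DELETE FROM fd_corner;
ALTER TABLE fd_corner ALTER COLUMN label DROP DEFAULT;

-- shrink the typmod below the length of the stored missing value
ALTER TABLE fd_corner ALTER COLUMN label TYPE varchar(3);

SELECT pg_relation_filenode('fd_corner');    -- changed filenode => table was rewritten

SELECT atthasmissing, attmissingval
FROM pg_attribute
WHERE attrelid = 'fd_corner'::regclass AND attname = 'label';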


cheers

andrew





Re: Fast default stuff versus pg_upgrade

2018-06-19 Thread Peter Geoghegan
On Tue, Jun 19, 2018 at 10:12 AM, Robert Haas  wrote:
> I have to admit that I think this feature is scary. I'm not sure that
> it was adequately reviewed and tested, and I'm worried this may not be
> the only problem it causes. But this particular problem, as Andres
> says, doesn't seem like anything we can't fix with acceptable risk.

I agree with both points.

-- 
Peter Geoghegan



Re: Fast default stuff versus pg_upgrade

2018-06-19 Thread Tom Lane
Andres Freund  writes:
> On 2018-06-19 12:37:52 -0400, Tom Lane wrote:
>> The problem here is that that function does not exist in 11beta1.
>> Since adding the "incoming" function is certainly going to require
>> initdb, we have to be able to dump from the server as it now stands,
>> or we'll be cutting existing beta testers adrift.

> It'd probably not be too hard to write a plpgsql replacement for it,
> should it come to that. Obviously it'd be nicer to not require users to
> create that, but ...

After some thought, I think it's not that hard to get the support function
to accept the anyarray string form.  I was worried about issues like
whether float8 values would restore exactly, but really that's no worse
than a dump/reload today.  Basically, the support function would just need
to extract the target attribute's type and typmod from the pg_attribute
row, then call array_in().
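
For what it's worth, all three pieces that support function would need are
already visible from SQL; a quick look at the hypothetical column from the
sketch above (names are made up, and the row disappears once the stored
value has been cleared):

SELECT a.atttypid::regtype AS element_type,   -- column type = array element type
       a.atttypmod,
       a.attmissingval                        -- anyarray, shown in its textual form
FROM pg_attribute a
WHERE a.attrelid = 'fd_corner'::regclass      -- hypothetical table from the sketch above
  AND a.attname = 'label'
  AND a.atthasmissing;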

I wonder though whether there are any interesting corner cases along
this line:

1. Create a column with a fast default.

2. Sometime later, alter the column so that the fast default value
is no longer a legal value.  If the fast default isn't in active use
in the table, the ALTER would go through; but if it does not remove
the attmissingval entry, then ...

3. Subsequently, pg_upgrade fails when the support function tries to
pass the attmissingval entry through the type input function.

The kind of case where this might fail is reducing the allowed
max len (typmod) for a varchar column.  I think ALTER TABLE is
smart enough to not rewrite the table for that, so that there
wouldn't be anything causing the fast default to get removed.

regards, tom lane



Re: Fast default stuff versus pg_upgrade

2018-06-19 Thread Robert Haas
On Tue, Jun 19, 2018 at 12:37 PM, Tom Lane  wrote:
> The problem here is that that function does not exist in 11beta1.
> Since adding the "incoming" function is certainly going to require
> initdb, we have to be able to dump from the server as it now stands,
> or we'll be cutting existing beta testers adrift.

That would still be less disruptive than ripping the feature out,
which would be cutting those same users adrift, too, unless I'm
missing something.

I have to admit that I think this feature is scary. I'm not sure that
it was adequately reviewed and tested, and I'm worried this may not be
the only problem it causes. But this particular problem, as Andres
says, doesn't seem like anything we can't fix with acceptable risk.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: Fast default stuff versus pg_upgrade

2018-06-19 Thread Andres Freund
Hi,

On 2018-06-19 12:37:52 -0400, Tom Lane wrote:
> Andres Freund  writes:
> > And if the default value bugs us,
> > we can easily add a support function that dumps the value without the
> > anyarray adornment?
> 
> The problem here is that that function does not exist in 11beta1.
> Since adding the "incoming" function is certainly going to require
> initdb, we have to be able to dump from the server as it now stands,
> or we'll be cutting existing beta testers adrift.

It'd probably not be too hard to write a plpgsql replacement for it,
should it come to that. Obviously it'd be nicer to not require users to
create that, but ...

Greetings,

Andres Freund



Re: Fast default stuff versus pg_upgrade

2018-06-19 Thread Andres Freund
On 2018-06-19 12:17:59 -0400, Andrew Dunstan wrote:
> 
> 
> On 06/19/2018 12:05 PM, Andres Freund wrote:
> > Hi,
> > 
> > On 2018-06-19 11:51:16 -0400, Andrew Dunstan wrote:
> > > My initial thought was that as a fallback we should disable pg_upgrade on
> > > databases containing such values, and document the limitation in the docs
> > > and the release notes. The workaround would be to force a table rewrite
> > > which would clear them if necessary.
> > I personally would say that that's not acceptable. People will start
> > using fast defaults - and you can't even do anything against it! - and
> > suddenly pg_upgrade won't work. But they will only notice that years
> > later, after collecting terabytes of data in such tables.
> 
> 
> Umm, barring the case that Tom mentioned, by then it would just work.

Huh?


> It's not the case that if they put in fast default values today they
> will never be able to upgrade.

How? I mean upgrading and losing your default values certainly ain't
ok?  And we can't expect users to rewrite their tables, that's why we
added fast default support and why pg_upgrade is used.


> I'd at least like to see what a solution might look like before ruling it
> out. I suspect I can come up with something in a day or so. The work
> wouldn't be wasted.

I think it'd be unacceptable to release v11 without support, but I also
think it's quite possible to just add the necessary logic for v11 if we
put some effort into it. ISTM we've resolved worse issues during beta
than this.

Greetings,

Andres Freund



Re: Fast default stuff versus pg_upgrade

2018-06-19 Thread Andres Freund
On 2018-06-19 12:17:56 -0400, Tom Lane wrote:
> Andrew Dunstan  writes:
> > I have no idea how large an actual fix might be. I'll at least start 
> > working on it immediately. I agree it's very late in the day.
> 
> On reflection, it seems like there are two moving parts needed:
> 
> * Add a binary-upgrade support function to the backend, which would take,
> say, table oid, column name, and some representation of the default value;
> 
> * Teach pg_dump when operating in binary-upgrade mode to emit a call to
> such a function for each column that has atthasmissing true.
> 
> The hard part here is how exactly are we going to represent the default
> value.  AFAICS, the only thing that pg_dump could readily lay its hands
> on is the "anyarray" textual representation of attmissingval, which maybe
> is okay but it means more work for the support function.

Isn't that just a few lines of code? And if the default value bugs us,
we can easily add a support function that dumps the value without the
anyarray adornment?


> Too bad we did not just store the value in bytea format.

That still seems the right thing to me; not being able to inspect the
default values in the catalog in a reasonable way seems worse.  We could
have added a new non-array pseudo-type as well, but that's a fair bit of
work...

Greetings,

Andres Freund



Re: Fast default stuff versus pg_upgrade

2018-06-19 Thread Andrew Dunstan




On 06/19/2018 12:05 PM, Andres Freund wrote:

Hi,

On 2018-06-19 11:51:16 -0400, Andrew Dunstan wrote:

My initial thought was that as a fallback we should disable pg_upgrade on
databases containing such values, and document the limitation in the docs
and the release notes. The workaround would be to force a table rewrite
which would clear them if necessary.

I personally would say that that's not acceptable. People will start
using fast defaults - and you can't even do anything against it! - and
suddenly pg_upgrade won't work. But they will only notice that years
later, after collecting terabytes of data in such tables.



Umm, barring the case that Tom mentioned, by then it would just work.
It's not the case that if they put in fast default values today they 
will never be able to upgrade.





If we can't fix it properly, then imo we should revert / neuter the
feature.



Have we ever recommended use of pg_upgrade for some manual catalog fix after
release? I don't recall doing so. Certainly it hasn't been common.

No, but why does it matter? Are you arguing we can delay pg_dump support
for fast defaults to v12?




Right now I'm more or less thinking out loud, not arguing anything.

I'd at least like to see what a solution might look like before ruling 
it out. I suspect I can come up with something in a day or so. The work 
wouldn't be wasted.


cheers

andrew

--
Andrew Dunstan  https://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: Fast default stuff versus pg_upgrade

2018-06-19 Thread Tom Lane
Andrew Dunstan  writes:
> I have no idea how large an actual fix might be. I'll at least start 
> working on it immediately. I agree it's very late in the day.

On reflection, it seems like there are two moving parts needed:

* Add a binary-upgrade support function to the backend, which would take,
say, table oid, column name, and some representation of the default value;

* Teach pg_dump when operating in binary-upgrade mode to emit a call to
such a function for each column that has atthasmissing true.

The hard part here is how exactly are we going to represent the default
value.  AFAICS, the only thing that pg_dump could readily lay its hands
on is the "anyarray" textual representation of attmissingval, which maybe
is okay but it means more work for the support function.  Too bad we did
not just store the value in bytea format.

regards, tom lane
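
As a sketch of the second moving part, pg_dump in binary-upgrade mode could
emit one call per column with atthasmissing set; the function name and
signature below are hypothetical, since nothing like it exists in 11beta1:

-- hypothetical support function and objects, passing the anyarray text form
SELECT pg_catalog.binary_upgrade_set_missing_value(
           'public.fd_corner'::pg_catalog.regclass,
           'label',
           '{abcde}');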



Re: Fast default stuff versus pg_upgrade

2018-06-19 Thread Tom Lane
Andres Freund  writes:
> On 2018-06-19 11:51:16 -0400, Andrew Dunstan wrote:
>> Have we ever recommended use of pg_upgrade for some manual catalog fix after
>> release? I don't recall doing so. Certainly it hasn't been common.

> No, but why does it matter?

We absolutely have, as recently as last month:

* Fix incorrect volatility markings on a few built-in functions
  (Thomas Munro, Tom Lane)

... can be fixed by manually updating these functions' pg_proc
entries, for example ALTER FUNCTION pg_catalog.query_to_xml(text,
boolean, boolean, text) VOLATILE. (Note that that will need to be
done in each database of the installation.) Another option is to
pg_upgrade the database to a version containing the corrected
initial data.

regards, tom lane



Re: Fast default stuff versus pg_upgrade

2018-06-19 Thread Andres Freund
Hi,

On 2018-06-19 11:51:16 -0400, Andrew Dunstan wrote:
> My initial thought was that as a fallback we should disable pg_upgrade on
> databases containing such values, and document the limitation in the docs
> and the release notes. The workaround would be to force a table rewrite
> which would clear them if necessary.

I personally would say that that's not acceptable. People will start
using fast defaults - and you can't even do anything against it! - and
suddenly pg_upgrade won't work. But they will only notice that years
later, after collecting terabytes of data in such tables.

If we can't fix it properly, then imo we should revert / neuter the
feature.


> Have we ever recommended use of pg_upgrade for some manual catalog fix after
> release? I don't recall doing so. Certainly it hasn't been common.

No, but why does it matter? Are you arguing we can delay pg_dump support
for fast defaults to v12?

Greetings,

Andres Freund



Re: Fast default stuff versus pg_upgrade

2018-06-19 Thread Andrew Dunstan




On 06/19/2018 10:55 AM, Tom Lane wrote:

AFAICS, the fast-default patch neglected to consider what happens if
a database containing columns with active attmissingval entries is
pg_upgraded.  I do not see any code in either pg_dump or pg_upgrade that
attempts to deal with that situation, which means the effect will be
that the "missing" values will silently revert to nulls: they're still
null in the table storage, and the restored pg_attribute entries won't
have anything saying it should be different.

The pg_upgrade regression test fails to exercise such a case.  There is
only one table in the ending state of the regression database that has
any atthasmissing columns, and it's empty :-(.  If I add a table in
which there actually are active attmissingval entries, say according
to the attached patch, I get a failure in the pg_upgrade test.

This is certainly a stop-ship issue, and in fact it's bad enough
that I think we may need to pull the feature for v11.  Designing
binary-upgrade support for this seems like a rather large task
to be starting post-beta1.  Nor do I think it's okay to wait for
v12 to make it work; what if we have to force an initdb later in
beta, or recommend use of pg_upgrade for some manual catalog fix
after release?



Ouch!

I guess I have to say mea culpa.

My initial thought was that as a fallback we should disable pg_upgrade 
on databases containing such values, and document the limitation in the 
docs and the release notes. The workaround would be to force a table 
rewrite which would clear them if necessary.
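
A sketch of both halves of that fallback, reusing the made-up names from the
sketch earlier on this page, and assuming a forced rewrite does reset
atthasmissing/attmissingval:

-- detect columns that carry a stored missing value
SELECT c.relnamespace::regnamespace AS schema, c.relname, a.attname
FROM pg_attribute a
JOIN pg_class c ON c.oid = a.attrelid
WHERE a.atthasmissing
  AND NOT a.attisdropped;

-- force a rewrite of one such table; a non-trivial USING expression keeps
-- ALTER TABLE from taking the no-rewrite path for the type change
ALTER TABLE fd_corner
  ALTER COLUMN label TYPE varchar(3) USING label || '';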


Have we ever recommended use of pg_upgrade for some manual catalog fix 
after release? I don't recall doing so. Certainly it hasn't been common.


I have no idea how large an actual fix might be. I'll at least start 
working on it immediately. I agree it's very late in the day.


cheers

andrew