Re: [HACKERS] pg_upgrade and relkind filtering

2011-12-31 Thread Noah Misch
On Mon, Dec 05, 2011 at 05:06:37PM -0500, Bruce Momjian wrote:
> Pg_upgrade has the following check to make sure the cluster is safe for
> upgrading:
> 
> res = executeQueryOrDie(conn,
> "SELECT n.nspname, c.relname, a.attname "
> "FROM   pg_catalog.pg_class c, "
> "   pg_catalog.pg_namespace n, "
> "   pg_catalog.pg_attribute a "
> "WHERE  c.oid = a.attrelid AND "
> "   NOT a.attisdropped AND "
> "   a.atttypid IN ( "
>   " 'pg_catalog.regproc'::pg_catalog.regtype, "
>   " 'pg_catalog.regprocedure'::pg_catalog.regtype, "
>   " 'pg_catalog.regoper'::pg_catalog.regtype, "
>   " 'pg_catalog.regoperator'::pg_catalog.regtype, "
> /* regclass.oid is preserved, so 'regclass' is OK */
> /* regtype.oid is preserved, so 'regtype' is OK */
>   " 'pg_catalog.regconfig'::pg_catalog.regtype, "
>   " 'pg_catalog.regdictionary'::pg_catalog.regtype) AND "
>   " c.relnamespace = n.oid AND "
>   " n.nspname != 'pg_catalog' AND "
>   " n.nspname != 'information_schema'");
> 
> Based on a report from EnterpriseDB, I noticed that we check all
> pg_class entries, while there are cases where this is unnecessary
> because there is no data behind the entry, e.g. views.  Here are the
> relkinds supported:
> 
>   #define   RELKIND_RELATION        'r'   /* ordinary table */
>   #define   RELKIND_INDEX           'i'   /* secondary index */
>   #define   RELKIND_SEQUENCE        'S'   /* sequence object */
>   #define   RELKIND_TOASTVALUE      't'   /* for out-of-line values */
>   #define   RELKIND_VIEW            'v'   /* view */
>   #define   RELKIND_COMPOSITE_TYPE  'c'   /* composite type */
>   #define   RELKIND_FOREIGN_TABLE   'f'   /* foreign table */
>   #define   RELKIND_UNCATALOGED     'u'   /* not yet cataloged */
> 
> What types, other than views, can we skip in this query?

RELKIND_UNCATALOGED should never appear on disk, and RELKIND_SEQUENCE and
RELKIND_TOASTVALUE do not allow adding columns or changing column types.  We
might as well keep validating them.  RELKIND_RELATION and RELKIND_INDEX have
storage, so we must check those.

The remaining three relkinds (RELKIND_VIEW, RELKIND_COMPOSITE_TYPE,
RELKIND_FOREIGN_TABLE) have no storage, but all are usable as column types in
other relations that do have storage.  You could skip them iff they're unused
that way, per a check like find_composite_type_dependencies().
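For concreteness, here is a sketch of the relkind filter such a change might add to the pg_upgrade query above (hypothetical; actually exempting views, composite types and foreign tables would also require the dependency check just described):

```sql
-- Keep checking relations whose columns can reach disk; views ('v'),
-- composite types ('c') and foreign tables ('f') could be skipped only
-- after proving no stored relation uses them as a column type, per a
-- check like find_composite_type_dependencies().
SELECT n.nspname, c.relname, a.attname
FROM   pg_catalog.pg_class c
JOIN   pg_catalog.pg_namespace n ON c.relnamespace = n.oid
JOIN   pg_catalog.pg_attribute a ON c.oid = a.attrelid
WHERE  NOT a.attisdropped
AND    a.atttypid IN ('pg_catalog.regproc'::pg_catalog.regtype,
                      'pg_catalog.regconfig'::pg_catalog.regtype)
AND    c.relkind IN ('r', 'i', 'S', 't')  -- storage, or kept for safety
AND    n.nspname NOT IN ('pg_catalog', 'information_schema');
```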

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Add protransform for numeric, varbit, and temporal types

2011-12-31 Thread Noah Misch
Building on commit 8f9fe6edce358f7904e0db119416b4d1080a83aa, this adds
protransform functions to the length coercions for numeric, varbit, timestamp,
timestamptz, time, timetz and interval.  This mostly serves to make more ALTER
TABLE ALTER TYPE operations avoid a rewrite, including numeric(10,2) ->
numeric(12,2), varbit(4) -> varbit(8) and timestamptz(2) -> timestamptz(4).
The rules for varbit are exactly the same as for varchar.  Numeric is slightly
more complex:

 * Flatten calls to our length coercion function that solely represent
 * increases in allowable precision.  Scale changes mutate every datum, so
 * they are unoptimizable.  Some values, e.g. 1E-1001, can only fit into an
 * unconstrained numeric, so a change from an unconstrained numeric to any
 * constrained numeric is also unoptimizable.

time{,stamp}{,tz} are similar to varchar for these purposes, except that, for
example, plain "timestamptz" is equivalent to "timestamptz(6)".  interval has
a vastly different typmod format, but the principles applicable to length
coercion remain the same.
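As a sketch of the numeric rule quoted above (this is not the patch's actual numeric_transform() code; it just restates the rule using the standard ((precision << 16) | scale) + VARHDRSZ numeric typmod encoding), the no-op test reduces to:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define VARHDRSZ 4              /* varlena header size, as in postgres.h */

/* Pack precision and scale the way numerictypmodin() does. */
int32_t
make_numeric_typmod(int precision, int scale)
{
    return (int32_t) (((precision << 16) | scale) + VARHDRSZ);
}

/*
 * Sketch of the flattening rule: the length coercion is a provable no-op
 * only when the source is already constrained, the scale is unchanged,
 * and the allowable precision does not shrink.
 */
bool
numeric_coercion_is_noop(int32_t old_typmod, int32_t new_typmod)
{
    int         old_prec,
                old_scale,
                new_prec,
                new_scale;

    if (old_typmod < (int32_t) VARHDRSZ)
        return false;           /* unconstrained source: must rewrite */

    old_prec = ((old_typmod - VARHDRSZ) >> 16) & 0xffff;
    old_scale = (old_typmod - VARHDRSZ) & 0xffff;
    new_prec = ((new_typmod - VARHDRSZ) >> 16) & 0xffff;
    new_scale = (new_typmod - VARHDRSZ) & 0xffff;

    return new_scale == old_scale && new_prec >= old_prec;
}
```

So numeric(10,2) -> numeric(12,2) flattens, while any scale change, any precision decrease, and any cast from unconstrained numeric forces a rewrite.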

Under --disable-integer-datetimes, I'm not positive that timestamp_scale() is
always a no-op when one would logically expect as much.  Does there exist a
timestamp such that v::timestamp(2) differs from v::timestamp(2)::timestamp(4)
due to floating point rounding?  Even if so, I'm fairly comfortable calling it
a feature rather than a bug to avoid perturbing values that way.
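One way to hunt for a counterexample empirically, on a --disable-integer-datetimes build (a sketch only; the sampled range is arbitrary, and an exhaustive search is of course impossible):

```sql
-- Any row returned would be a timestamp whose (2)-rounded value is
-- perturbed again by a subsequent cast to timestamp(4).
SELECT v
FROM (SELECT timestamp '2000-01-01 00:00:00'
             + n * interval '7.77 milliseconds' AS v
      FROM generate_series(1, 1000000) AS n) AS s
WHERE v::timestamp(2) IS DISTINCT FROM v::timestamp(2)::timestamp(4)
LIMIT 10;
```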

After these patches, the only core length coercion casts not having
protransform functions are those for "bpchar" and "bit".  For those, we could
only optimize trivial cases of no length change.  I'm not planning to do so.

Thanks,
nm
diff --git a/src/backend/utils/adt/varbit.c b/src/backend/utils/adt/varbit.c
index 3fa8117..e5a03b9 100644
*** a/src/backend/utils/adt/varbit.c
--- b/src/backend/utils/adt/varbit.c
***
*** 18,23 
--- 18,25 
  
  #include "access/htup.h"
  #include "libpq/pqformat.h"
+ #include "nodes/nodeFuncs.h"
+ #include "parser/parse_clause.h"
  #include "utils/array.h"
  #include "utils/varbit.h"
  
***
*** 646,651  varbit_send(PG_FUNCTION_ARGS)
--- 648,686 
  }
  
  /*
+  * varbit_transform()
+  * Flatten calls to our length coercion function that leave the new maximum
+  * length >= the previous maximum length.  We ignore the isExplicit argument,
+  * which only affects truncation.
+  */
+ Datum
+ varbit_transform(PG_FUNCTION_ARGS)
+ {
+   FuncExpr   *expr = (FuncExpr *) PG_GETARG_POINTER(0);
+   Node   *typmod;
+   Node   *ret = NULL;
+ 
+   if (!IsA(expr, FuncExpr))
+   PG_RETURN_POINTER(ret);
+ 
+   Assert(list_length(expr->args) == 3);
+   typmod = lsecond(expr->args);
+ 
+   if (IsA(typmod, Const))
+   {
+   Node   *source = linitial(expr->args);
+   int32   new_typmod = DatumGetInt32(((Const *) typmod)->constvalue);
+   int32   old_max = exprTypmod(source);
+   int32   new_max = new_typmod;
+ 
+   if (new_max <= 0 || (old_max >= 0 && old_max <= new_max))
+   ret = relabel_to_typmod(source, new_typmod);
+   }
+ 
+   PG_RETURN_POINTER(ret);
+ }
+ 
+ /*
   * varbit()
   * Converts a varbit() type to a specific internal length.
   * len is the maximum bitlength specified in the column definition.
diff --git a/src/include/catalog/pg_proc.h b/src/include/catalog/pg_proc.h
index c893c3a..2a71f82 100644
*** a/src/include/catalog/pg_proc.h
--- b/src/include/catalog/pg_proc.h
***
*** 2005,2011  DESCR("convert bitstring to int4");
  
  DATA(insert OID = 1685 (  bit        PGNSP PGUID 12 1 0 0 0 f f f t f i 3 0 1560 "1560 23 16" _null_ _null_ _null_ _null_ bit _null_ _null_ _null_ ));
  DESCR("adjust bit() to typmod length");
! DATA(insert OID = 1687 (  varbit     PGNSP PGUID 12 1 0 0 0 f f f t f i 3 0 1562 "1562 23 16" _null_ _null_ _null_ _null_ varbit _null_ _null_ _null_ ));
  DESCR("adjust varbit() to typmod length");
  
  DATA(insert OID = 1698 (  position   PGNSP PGUID 12 1 0 0 0 f f f t f i 2 0 23 "1560 1560" _null_ _null_ _null_ _null_ bitposition _null_ _null_ _null_ ));
--- 2005,2013 
  
  DATA(insert OID = 1685 (  bit        PGNSP PGUID 12 1 0 0 0 f f f t f i 3 0 1560 "1560 23 16" _null_ _null_ _null_ _null_ bit _null_ _null_ _null_ ));
  DESCR("adjust bit() to typmod length");
! DATA(insert OID = 3145 (  varbit_transform  PGNSP PGUID 12 1 0 0 0 f f f t f i 1 0 2281 "2281" _null_ _null_ _null_ _null_ varbit_transform _null_ _null_ _null_ ));
! DESCR("transform a varbit length coercion");
! DATA(insert OID = 1687 (  varbit     PGNSP PGUID 12 1 0 0 3145 f f f t f i 3 0 1562 "1562 23 16" _null_ _null_ _null_ _null_ varbit _null_ _null_ _null_ ));
  DESCR("adjust varbit() to typmod length");
  
  DATA(insert OID = 1698 (  position   PGNSP PGUID 12 1 0 0 0 f f f t f i 2 0 23 "1560 1560" _null_

Re: [HACKERS] alternate psql file locations

2011-12-31 Thread Aidan Van Dyk
On Sat, Dec 31, 2011 at 3:17 PM, Alvaro Herrera wrote:
>
> Excerpts from Andrew Dunstan's message of sáb dic 31 12:52:02 -0300 2011:
>> It's not a big thing, but I just found myself in a shared environment
>> wanting to be able to set alternative locations for the psql startup
>> file and history. I know there's the HISTFILE variable, but I can't
>> easily set that automatically unless I can at least have my own .psqlrc.
>> ISTM it should be a fairly simple thing to provide these, via
>> environment variables. Is there general interest in such a thing?
>
> I wanted such a thing mere two weeks ago ...

Generally when I've wanted these things, I just make a new "$HOME" in
my shared user home dir:

export HOME=$HOME/aidan

It's worked for the things I've wanted; I haven't tried it for psql stuff.
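Made concrete, the trick looks roughly like this (a sketch; the "aidan" subdirectory name is illustrative, though the HISTFILE variable and its :DBNAME substitution are documented psql features):

```shell
# Give psql a private "$HOME" inside a shared account, so it reads its
# own ~/.psqlrc and writes its own history file.
mkdir -p "$HOME/aidan"
export HOME="$HOME/aidan"

# Seed a personal .psqlrc; psql's HISTFILE variable supports a
# per-database history file via the :DBNAME substitution.
printf '%s\n' '\set HISTFILE ~/.psql_history- :DBNAME' > "$HOME/.psqlrc"
```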

a.

-- 
Aidan Van Dyk                                             Create like a god,
ai...@highrise.ca                                       command like a king,
http://www.highrise.ca/                                   work like a slave.

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] alternate psql file locations

2011-12-31 Thread Alvaro Herrera

Excerpts from Andrew Dunstan's message of sáb dic 31 12:52:02 -0300 2011:
> It's not a big thing, but I just found myself in a shared environment 
> wanting to be able to set alternative locations for the psql startup 
> file and history. I know there's the HISTFILE variable, but I can't 
> easily set that automatically unless I can at least have my own .psqlrc. 
> ISTM it should be a fairly simple thing to provide these, via 
> environment variables. Is there general interest in such a thing?

I wanted such a thing mere two weeks ago ...

-- 
Álvaro Herrera 
The PostgreSQL Company - Command Prompt, Inc.
PostgreSQL Replication, Consulting, Custom Development, 24x7 support

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] alternate psql file locations

2011-12-31 Thread Andrew Dunstan
It's not a big thing, but I just found myself in a shared environment 
wanting to be able to set alternative locations for the psql startup 
file and history. I know there's the HISTFILE variable, but I can't 
easily set that automatically unless I can at least have my own .psqlrc. 
ISTM it should be a fairly simple thing to provide these, via 
environment variables. Is there general interest in such a thing?


cheers

andrew


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Should I implement DROP INDEX CONCURRENTLY?

2011-12-31 Thread Simon Riggs
On Fri, Dec 30, 2011 at 10:20 PM, Peter Eisentraut wrote:
> On ons, 2011-08-24 at 11:24 -0700, Daniel Farina wrote:
>> I was poking around at tablecmds and index.c and wonder if a similar
>> two-pass approach as used by CREATE INDEX CONCURRENTLY can be used to
>> create a DROP INDEX CONCURRENTLY, and if there would be any interest
>> in accepting such a patch.
>
> Hmm, it seems I just independently came up with this same concept.  My
> problem is that if a CREATE INDEX CONCURRENTLY fails, you need an
> exclusive lock on the table just to clean that up.  If the table is
> under constant load, you can't easily do that.  So a two-pass DROP INDEX
> CONCURRENTLY might have been helpful for me.

Here's a patch for this. Please review.

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services
diff --git a/doc/src/sgml/ref/drop_index.sgml b/doc/src/sgml/ref/drop_index.sgml
index 7177ef2..a5abb5b 100644
--- a/doc/src/sgml/ref/drop_index.sgml
+++ b/doc/src/sgml/ref/drop_index.sgml
@@ -21,7 +21,9 @@ PostgreSQL documentation
 
  
 
-DROP INDEX [ IF EXISTS ] name [, ...] [ CASCADE | RESTRICT ]
+DROP INDEX
+	[ IF EXISTS ] name [, ...] [ CASCADE | RESTRICT ]
+	CONCURRENTLY name
 
  
 
@@ -50,6 +52,30 @@ DROP INDEX [ IF EXISTS ] name [, ..

 

+CONCURRENTLY
+
+ 
+  When this option is used, PostgreSQL will drop the
+  index without taking any locks that prevent concurrent selects, inserts,
+  updates, or deletes on the table; whereas a standard index drop
+  waits for a lock that locks out everything on the table until it's done.
+  Concurrent index drop is a two-stage process. First, we mark the index
+  invalid and then commit that change. Next, we wait until there are no
+  users locking the table who can still see the invalid index.
+ 
+ 
+  There are several caveats to be aware of when using this option.
+  Only one index name can be specified if the CONCURRENTLY
+  parameter is specified. Only one concurrent index drop can occur on a
+  table at a time, and no modifications on the table are allowed meanwhile.
+  A regular DROP INDEX command can be performed within
+  a transaction block, but DROP INDEX CONCURRENTLY cannot.
+  There is no CASCADE option when dropping an index concurrently.
+ 
+
+   
+
+   
 name
 
  
diff --git a/src/backend/catalog/dependency.c b/src/backend/catalog/dependency.c
index 223c097..656af21 100644
--- a/src/backend/catalog/dependency.c
+++ b/src/backend/catalog/dependency.c
@@ -171,8 +171,9 @@ static void reportDependentObjects(const ObjectAddresses *targetObjects,
 	   DropBehavior behavior,
 	   int msglevel,
 	   const ObjectAddress *origObject);
-static void deleteOneObject(const ObjectAddress *object, Relation depRel);
-static void doDeletion(const ObjectAddress *object);
+static void deleteOneObject(const ObjectAddress *object, Relation depRel,
+			int option_flags);
+static void doDeletion(const ObjectAddress *object, int option_flags);
 static void AcquireDeletionLock(const ObjectAddress *object);
 static void ReleaseDeletionLock(const ObjectAddress *object);
 static bool find_expr_references_walker(Node *node,
@@ -254,7 +255,7 @@ performDeletion(const ObjectAddress *object,
 	{
 		ObjectAddress *thisobj = targetObjects->refs + i;
 
-		deleteOneObject(thisobj, depRel);
+		deleteOneObject(thisobj, depRel, 0);
 	}
 
 	/* And clean up */
@@ -276,6 +277,13 @@ void
 performMultipleDeletions(const ObjectAddresses *objects,
 		 DropBehavior behavior)
 {
+	performMultipleDeletionsWithOptions(objects, behavior, 0);
+}
+
+void
+performMultipleDeletionsWithOptions(const ObjectAddresses *objects,
+		 DropBehavior behavior, int option_flags)
+{
 	Relation	depRel;
 	ObjectAddresses *targetObjects;
 	int			i;
@@ -336,13 +344,17 @@ performMultipleDeletions(const ObjectAddresses *objects,
 	{
 		ObjectAddress *thisobj = targetObjects->refs + i;
 
-		deleteOneObject(thisobj, depRel);
+		deleteOneObject(thisobj, depRel, option_flags);
 	}
 
 	/* And clean up */
 	free_object_addresses(targetObjects);
 
-	heap_close(depRel, RowExclusiveLock);
+	/*
+	 * We closed depRel earlier if doing a drop concurrently
+	 */
+	if ((option_flags & OPT_CONCURRENTLY) != OPT_CONCURRENTLY)
+		heap_close(depRel, RowExclusiveLock);
 }
 
 /*
@@ -407,7 +419,7 @@ deleteWhatDependsOn(const ObjectAddress *object,
 		if (thisextra->flags & DEPFLAG_ORIGINAL)
 			continue;
 
-		deleteOneObject(thisobj, depRel);
+		deleteOneObject(thisobj, depRel, 0);
 	}
 
 	/* And clean up */
@@ -950,7 +962,7 @@ reportDependentObjects(const ObjectAddresses *targetObjects,
  * depRel is the already-open pg_depend relation.
  */
 static void
-deleteOneObject(const ObjectAddress *object, Relation depRel)
+deleteOneObject(const ObjectAddress *object, Relation depRel, int option_flags)
 {
 	ScanKeyData key[3];
 	int			nkeys;
@@ -1001,9 +1013,16 @@ deleteOneObject(const