On Sun, 2004-11-14 at 18:29 -0500, Tom Lane wrote:
> Good analysis.  We can't check earlier than DefineRelation AFAICS,
> because earlier stages don't know about inherited columns.
> 
> On reflection I suspect there are similar issues with SELECTs that have
> more than 64K output columns.  This probably has to be guarded against
> in parser/analyze.c.

You're correct -- we also crash on extremely long SELECT statements.
Another variant of the problem would be a CREATE TABLE that inherits
from, say, 70 relations, each of which has 1,000 columns.
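For reference, the offending statements can be sketched by generating the
SQL text (a hypothetical helper, not part of the patch; all table and
column names are invented, and nothing here connects to a server):

```python
# Hypothetical reproducers for the overflow cases described above.
# These only build SQL text; the relation names are made up and the
# scripts are not part of the attached patch.

def wide_select(ncols):
    """A SELECT whose target list has ncols entries; past 64K entries
    the attribute numbers of the result tuple overflow."""
    return "SELECT " + ", ".join(["1"] * ncols) + ";"

def wide_inheritance(nparents, ncols):
    """nparents parent tables of ncols columns each, plus a child
    inheriting from all of them: each parent is individually legal,
    but the merged schema has nparents * ncols columns."""
    stmts = []
    for p in range(nparents):
        cols = ", ".join("c%d_%d int" % (p, c) for c in range(ncols))
        stmts.append("CREATE TABLE parent%d (%s);" % (p, cols))
    parents = ", ".join("parent%d" % p for p in range(nparents))
    stmts.append("CREATE TABLE child () INHERITS (%s);" % parents)
    return stmts
```

Feeding `wide_select(70000)` or the output of `wide_inheritance(70, 1000)`
to psql exercises the cases the patch rejects.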

Attached is a patch. I'm not entirely sure the checks I added are in
the right places, but at any rate it fixes the three identified
problems for me.

-Neil

--- src/backend/commands/tablecmds.c
+++ src/backend/commands/tablecmds.c
@@ -681,6 +681,23 @@
 	int			child_attno;
 
 	/*
+	 * Check for and reject tables with too many columns. We perform
+	 * this check relatively early for two reasons: (a) we don't run
+	 * the risk of overflowing an AttrNumber in subsequent code, and
+	 * (b) an O(n^2) algorithm is okay when processing <= 1600
+	 * columns, but could take minutes if the user attempts to
+	 * create a table with hundreds of thousands of columns.
+	 *
+	 * Note that we must also check that we do not exceed this limit
+	 * after including columns from inherited relations.
+	 */
+	if (list_length(schema) > MaxHeapAttributeNumber)
+		ereport(ERROR,
+				(errcode(ERRCODE_TOO_MANY_COLUMNS),
+				 errmsg("tables can have at most %d columns",
+						MaxHeapAttributeNumber)));
+
+	/*
 	 * Check for duplicate names in the explicit list of attributes.
 	 *
 	 * Although we might consider merging such entries in the same way that
@@ -979,6 +996,16 @@
 		}
 
 		schema = inhSchema;
+
+		/*
+		 * Check that we haven't exceeded the legal # of columns after
+		 * merging in inherited columns.
+		 */
+		if (list_length(schema) > MaxHeapAttributeNumber)
+			ereport(ERROR,
+					(errcode(ERRCODE_TOO_MANY_COLUMNS),
+					 errmsg("tables can have at most %d columns",
+							MaxHeapAttributeNumber)));
 	}
 
 	/*
--- src/backend/parser/parse_target.c
+++ src/backend/parser/parse_target.c
@@ -106,6 +106,17 @@
 	List	   *p_target = NIL;
 	ListCell   *o_target;
 
+	/*
+	 * Reject tlists that are too long. Ultimately there is a
+	 * correspondence between tlist entries and attributes in a tuple,
+	 * so use MaxHeapAttributeNumber.
+	 */
+	if (list_length(targetlist) > MaxHeapAttributeNumber)
+		ereport(ERROR,
+				(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
+				 errmsg("target lists can have at most %d entries",
+						MaxHeapAttributeNumber)));
+
 	foreach(o_target, targetlist)
 	{
 		ResTarget  *res = (ResTarget *) lfirst(o_target);