Re: [GENERAL] INSERT ON CONFLICT of "wide" table: target lists can have at most 1664 entries

2016-12-04 Thread Tom Lane
Justin Pryzby  writes:
> I'm implementing unique indices to allow "upsert" (and pg_repack and..), but
> running into a problem when the table has >830 columns (we have some tables
> which are at the 1600 column limit, and have previously worked around that
> limit using arrays or multiple tables).

Yeah, this is a bug, but fortunately the fix is pretty trivial.  See

https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=da05d0ebc637a84ba41a172b32552557ebad199f

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] INSERT ON CONFLICT of "wide" table: target lists can have at most 1664 entries

2016-12-04 Thread Gmail

> On Dec 4, 2016, at 9:32 AM, Justin Pryzby  wrote:
> 
> Our application INSERTs data from external sources, and infrequently UPDATEs
> the previously-inserted data (currently, it first SELECTs to determine whether
> to UPDATE).
> 
> I'm implementing unique indices to allow "upsert" (and pg_repack and..), but
> running into a problem when the table has >830 columns (we have some tables
> which are at the 1600 column limit, and have previously worked around that
> limit using arrays or multiple tables).
> 
> I tried to work around the upsert problem by using pygresql inline=True
> (instead of default PREPAREd statements) but both have the same issue.
> 
> I created a test script which demonstrates the problem (attached).
> 
> It seems to me that there's currently no way to "upsert" such a wide table?

Pardon my intrusion here, but I'm really curious what sort of datum has so many attributes?





[GENERAL] INSERT ON CONFLICT of "wide" table: target lists can have at most 1664 entries

2016-12-04 Thread Justin Pryzby
Our application INSERTs data from external sources, and infrequently UPDATEs
the previously-inserted data (currently, it first SELECTs to determine whether
to UPDATE).

I'm implementing unique indices to allow "upsert" (and pg_repack and..), but
running into a problem when the table has >830 columns (we have some tables
which are at the 1600 column limit, and have previously worked around that
limit using arrays or multiple tables).
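A minimal sketch of the "arrays" workaround mentioned above: many like-typed columns collapse into a single array column, so the statement's target lists stay tiny regardless of how many values a row carries. The table and column names here are illustrative, not from the thread:

```python
# Hedged sketch: with one array column, the upsert touches only two
# attributes no matter how many values are packed into the array.
def array_upsert(table, key, arraycol):
    """Build an upsert that writes all values through one array column."""
    return (
        "INSERT INTO %s (%s, %s) VALUES (%%s, %%s) "
        "ON CONFLICT (%s) DO UPDATE SET %s = EXCLUDED.%s"
        % (table, key, arraycol, key, arraycol, arraycol)
    )

sql = array_upsert("wide_arr", "id", "vals")
```

The trade-off is losing per-column types, defaults, and constraints, which is presumably why the original poster treats it as a workaround rather than a design.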

I tried to work around the upsert problem by using pygresql inline=True
(instead of default PREPAREd statements) but both have the same issue.

I created a test script which demonstrates the problem (attached).

It seems to me that there's currently no way to "upsert" such a wide table?

I see:
./src/include/access/htup_details.h:#define MaxTupleAttributeNumber 1664	/* 8 * 208 */

./src/backend/parser/parse_node.c-	/*
./src/backend/parser/parse_node.c-	 * Check that we did not produce too many resnos; at the very least we
./src/backend/parser/parse_node.c-	 * cannot allow more than 2^16, since that would exceed the range of a
./src/backend/parser/parse_node.c:	 * AttrNumber. It seems safest to use MaxTupleAttributeNumber.
./src/backend/parser/parse_node.c-	 */
./src/backend/parser/parse_node.c:	if (pstate->p_next_resno - 1 > MaxTupleAttributeNumber)
./src/backend/parser/parse_node.c-		ereport(ERROR,
./src/backend/parser/parse_node.c-				(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
./src/backend/parser/parse_node.c:				 errmsg("target lists can have at most %d entries",
./src/backend/parser/parse_node.c:						MaxTupleAttributeNumber)));
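For what it's worth, the numbers line up if the INSERT target list and the ON CONFLICT DO UPDATE SET list were being counted against the limit together. That shared-counter accounting is my assumption from the symptoms, not something the thread states, but it explains why the failure starts just above half of 1664:

```python
# Rough arithmetic behind the failure, assuming the parser's check saw
# the combined length of both target lists before the fix.
MAX_TUPLE_ATTRS = 1664  # MaxTupleAttributeNumber in htup_details.h

def upsert_allowed(ncols):
    """ncols counts every column written, including the key column."""
    return 2 * ncols <= MAX_TUPLE_ATTRS  # INSERT list + DO UPDATE SET list

assert upsert_allowed(832)      # 2 * 832 = 1664: just fits
assert not upsert_allowed(833)  # the attached script's 833 columns fail
```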

Thanks in advance for any suggestions.

Justin
#! /bin/sh
set -e

n=831	# seq -w 0 $n below yields 832 data columns; with "id", 833 total
t=wide_upsert

psql -c "DROP TABLE IF EXISTS $t"

cols='id int'
vals='0'
sets='id=0'
for a in `seq -w 0 $n`
do
cols="$cols, c$a int"
vals="$vals, \$1"
sets="$sets, c$a=\$1"
done

echo $cols
psql -c "CREATE TABLE $t ($cols, PRIMARY KEY (id))"

set -x
psql -c "PREPARE x AS INSERT INTO $t VALUES ($vals) ON CONFLICT (id) DO UPDATE SET $sets; EXECUTE x(0)"


