> On Dec 4, 2016, at 9:32 AM, Justin Pryzby <pry...@telsasoft.com> wrote:
> 
> Our application INSERTs data from external sources, and infrequently UPDATEs
> the previously-inserted data (currently, it first SELECTs to determine whether
> to UPDATE).
> 
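If I follow, that is roughly this pattern (t, id, c1, c2 are hypothetical
names standing in for the real wide table; I reuse them below):

    CREATE TABLE t (id int PRIMARY KEY, c1 int, c2 int);

    -- does the row already exist?
    SELECT 1 FROM t WHERE id = 1;
    -- if a row came back:
    UPDATE t SET c1 = 10, c2 = 20 WHERE id = 1;
    -- otherwise:
    INSERT INTO t (id, c1, c2) VALUES (1, 10, 20);
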
> I'm implementing unique indices to allow "upsert" (and pg_repack and..), but
> running into a problem when the table has >830 columns (we have some tables
> which are at the 1600 column limit, and have previously worked around that
> limit using arrays or multiple tables).
> 
> I tried to work around the upsert problem by using pygresql inline=True
> (instead of default PREPAREd statements) but both have the same issue.
> 
> I created a test script which demonstrates the problem (attached).
> 
> It seems to me that there's currently no way to "upsert" such a wide table?
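
By "upsert" I take it you mean INSERT ... ON CONFLICT DO UPDATE against one of
those unique indices. A minimal sketch, against the same hypothetical t as
above:

    INSERT INTO t (id, c1, c2)
    VALUES (1, 10, 20)
    ON CONFLICT (id) DO UPDATE
        SET c1 = EXCLUDED.c1,
            c2 = EXCLUDED.c2;

With hundreds of columns, that SET clause carries one EXCLUDED.<col>
assignment per column. My guess is that the insert columns plus those
EXCLUDED references together run past the 1664-entry target list limit
somewhere a bit beyond 830 columns, but that is only a guess; the exact
error from your test script would say for sure.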

Pardon my intrusion here, but I'm really curious: what sort of datum has so
many attributes?


