On 11/29/19 8:30 PM, Vladimir Ryabtsev wrote:
I have a query like this:

query = '''
insert into t
select * from unnest (%s)
as t1(c1 timestamp, c2 int)
'''

The reason for this approach is obviously to reduce the number of server round trips when inserting many rows.
Usage:

from datetime import datetime
import psycopg2
db = psycopg2.connect('postgres://postgres:******@host/postgres?sslmode=prefer')
cur = db.cursor()
cur.execute(query, ([(datetime.now(), 1), (datetime.now(), 2)],))
db.commit()
db.close()
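
By the way, the exact statement psycopg2 will send can be checked with cursor.mogrify(), which does the argument binding without executing anything (a small aside, run before the db.close() above, reusing the same query and cursor):

# mogrify() returns the statement exactly as execute() would send it, after argument binding.
print(cur.mogrify(query, ([(datetime.now(), 1), (datetime.now(), 2)],)).decode())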

Recently they needed to extend the column set with a text and a bigint column. Neither of them works:

from datetime import datetime
import psycopg2
query = '''insert into t
select * from unnest (%s)
as t1(c1 timestamp, c2 int, c3 text, c4 bigint)
'''
db = psycopg2.connect('postgres://postgres@localhost/postgres')
cur = db.cursor()
cur.execute(query, ([(datetime.now(), 1, 'abc', 100), (datetime.now(), 2, 'xyz', 200)],))
db.commit()
db.close()

It throws:
psycopg2.errors.DatatypeMismatch: function return row and query-specified return row do not match
DETAIL:  Returned type unknown at ordinal position 3, but query expects text.

The problem is that the library sends the following request to the DBMS:

insert into t
select * from unnest (ARRAY[('2019-11-29T19:48:19.683310'::timestamp, 1, 'abc', 100),('2019-11-29T19:48:19.683320'::timestamp, 2, 'xyz', 200)])
as t1(c1 timestamp, c2 int, c3 text, c4 bigint)

For whatever reason it fails with the above error, but OK, it is related to Postgres, not to psycopg2.
I can make it work by specifying type casts for text and bigint columns:

insert into t
select * from unnest (ARRAY[('2019-11-29T19:48:19.683310'::timestamp, 1, 'abc'::text, 100::bigint),('2019-11-29T19:48:19.683320'::timestamp, 2, 'xyz'::text, 200::bigint)])
as t1(c1 timestamp, c2 int, c3 text, c4 bigint);
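
(For comparison, the casts can also be written into the query text itself, rather than attached to the literals, by passing one array per column to the multi-argument form of unnest, available since Postgres 9.4. A rough sketch reusing the connection from above; query_alt and rows are illustrative names, and the column order is assumed to match t:)

query_alt = '''insert into t
select * from unnest (%s::timestamp[], %s::int[], %s::text[], %s::bigint[])
'''
rows = [(datetime.now(), 1, 'abc', 100), (datetime.now(), 2, 'xyz', 200)]
# psycopg2 adapts Python lists to ARRAY[...]; the explicit casts in the query text
# leave no "unknown" literals, and no extra database objects are needed.
cur.execute(query_alt, [list(col) for col in zip(*rows)])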

On a hunch try changing:

'''insert into t
select * from unnest (%s)
as t1(c1 timestamp, c2 int, c3 text, c4 bigint)'''

to

'''insert into t
select * from unnest (%s)
as t1(c1 timestamp, c2 int, c3 varchar, c4 bigint)'''


But I need a way to make the psycopg2 module do that. Another workaround would be creating a row type for the desired set of columns and casting %s to this type[], but I would rather not create additional objects in the database, because that is pretty much out of my control.

Your help is very appreciated.

P.S. I am aware of other solutions such as execute_batch(), execute_values(), etc. Take this question as a theoretical one; I just want to understand whether the user is able to control this particular aspect of the module.
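
For reference, the execute_values() route would look roughly like this (a sketch only; it switches to a plain multi-row VALUES insert, so the types are resolved against t's columns and no unnest column list is involved):

from psycopg2.extras import execute_values

rows = [(datetime.now(), 1, 'abc', 100), (datetime.now(), 2, 'xyz', 200)]
# %s is replaced by the multi-row VALUES list built from rows.
execute_values(cur, 'insert into t values %s', rows)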

psycopg2-binary==2.8.4
Postgres ~ any


--
Adrian Klaver
adrian.kla...@aklaver.com

