On 30 June 2017 at 06:24, ..: Mark Sloan :.. <mark.a.sl...@gmail.com> wrote:
> Hi all,
>
> Kind of new to a lot things here so if I am way off please correct me.
>
> Using psycopg2 with Postgres / Greenplum / Redshift, it's now pretty
> easy for a single query to have an affected row count higher than it
> seems .rowcount allows for.
>
> I am pretty sure libpq returns the affected row count as a string ("for
> historical reasons", according to the pg mailing threads); however, when
> I run a large update statement (e.g. one affecting several billion rows)
> I seem to get a .rowcount back that isn't correct.
>
>
> Using the psql client, I can't reproduce the incorrect affected row
> count.
>
>
> any ideas or suggestions?

You might have better luck asking questions about psycopg specifically
on the psycopg mailing list (info at
http://initd.org/psycopg/development/).  I don't think the DB-API
itself imposes the restriction you're seeing.

In answer to your question, the relevant code in psycopg is here:

https://github.com/psycopg/psycopg2/blob/master/psycopg/pqpath.c#L1339-L1350

It uses atol() to convert libpq's string row count to a C long int.
If you're running on a 32-bit OS, or on 64-bit Windows (where long is
still 32 bits), the maximum value for a long int is 2,147,483,647,
i.e. about 2 billion.  Unfortunately, the atol() function has no way
to report an error and has undefined behaviour on overflow.
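
To make the failure mode concrete, here's a minimal standalone sketch
(my own, not psycopg code): on a platform where long is 32 bits,
atol() overflows silently on a count like three billion, whereas
strtol() clamps to LONG_MAX and flags the overflow via errno:

    #include <errno.h>
    #include <limits.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        /* A row count too large for a 32-bit long, in the string
         * form that libpq hands back. */
        const char *rowcount = "3000000000";

        /* atol() has no error reporting: where long is 32 bits,
         * this overflows and the behaviour is undefined. */
        long bad = atol(rowcount);

        /* strtol() clamps to LONG_MAX and sets errno to ERANGE,
         * so the overflow can actually be detected. */
        char *end;
        errno = 0;
        long good = strtol(rowcount, &end, 10);
        if (errno == ERANGE || end == rowcount || *end != '\0')
            fprintf(stderr, "row count %s does not fit in a long\n",
                    rowcount);

        printf("atol: %ld, strtol: %ld (LONG_MAX: %ld)\n",
               bad, good, LONG_MAX);
        return 0;
    }

On 64-bit Linux, where long is 64 bits, both calls would happen to
give the right answer for this value, which may be why the problem
goes unnoticed there.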

The fact that it doesn't use an API that can report conversion errors
sounds like a bug in psycopg2.
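
A fix might look something like this (purely a sketch on my part;
parse_rowcount is an invented name, not anything in psycopg):

    /* Hypothetical replacement for the atol() call: parse with
     * strtoll() so overflow is detectable, and keep the count in a
     * 64-bit integer so values above 2**31 survive even where long
     * is 32 bits. */
    #include <errno.h>
    #include <stdint.h>
    #include <stdlib.h>

    static int64_t
    parse_rowcount(const char *s)
    {
        char *end;
        errno = 0;
        long long n = strtoll(s, &end, 10);
        if (errno == ERANGE || end == s || *end != '\0')
            return -1;  /* "unknown", which DB-API permits
                         * rowcount to be */
        return (int64_t)n;
    }

cursor.rowcount could then be built from the 64-bit value, since
Python ints themselves have no 2**31 limit.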

James.