Please open a bug on the psycopg bug tracker: https://github.com/psycopg/psycopg2/issues,
specifying your platform (win/linux/other, 32/64 bit). Please also give an idea of the number you are expecting to see (I think we should be able to parse 2*10^9 without a problem; if not, it's a bug). If possible, compile psycopg in debug mode (see http://initd.org/psycopg/docs/install.html#creating-a-debug-build) and report the debug line that is printed for a failing case (it should say "_read_rowcount: PQcmdTuples..." judging from the link James has kindly provided). Regardless of the report, I'll look into parsing that value without using 'atol()' for the next bugfix release.

-- Daniele

On Thu, Jun 29, 2017 at 11:24 PM, ..: Mark Sloan :.. <mark.a.sl...@gmail.com> wrote:
> Hi all,
>
> Kind of new to a lot of things here, so if I am way off please correct me.
>
> Using psycopg2 with Postgres / Greenplum / Redshift, it's now pretty easy
> for a single query to have an affected row count higher than .rowcount
> seems to allow for.
>
> I am pretty sure libpq returns the affected row count as a string ("for
> historical reasons" according to the pg mailing threads); however, when I
> have a large update statement (e.g. several billion rows), I seem to get a
> .rowcount back that isn't correct.
>
> Using the psql client, I can't reproduce the affected row count being
> incorrect there.
>
> Any ideas or suggestions?
>
> Thanks
>
> -Mark
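
For anyone who wants to reproduce this from Python, below is a minimal sketch of the behaviour Mark describes, plus a possible workaround: read cursor.statusmessage (the raw command tag string that comes from libpq) and parse the count in Python, which has no 32-bit limit. The connection string and table name are made up, and you would need a statement that affects more than 2^31 rows for the suspected truncation to show up at all.

    import psycopg2

    # Made-up DSN and table; any single statement affecting more than
    # 2**31 rows should do to exercise the suspected truncation.
    conn = psycopg2.connect("dbname=example")
    cur = conn.cursor()

    cur.execute("UPDATE big_table SET flag = true")

    # .rowcount may come back wrong if the C extension parsed the count
    # with a 32-bit atol().
    print("rowcount:", cur.rowcount)

    # The raw command tag (e.g. "UPDATE 3000000000") is still a string
    # and can be parsed in Python without any 32-bit limit.
    tag = cur.statusmessage
    print("statusmessage:", tag)
    print("parsed count:", int(tag.split()[-1]))

    conn.commit()
    conn.close()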