Hi, this is slightly off-topic, but it concerns Postgres:

I have a table with 10M rows, and a Python script using psycopg that needs to look at each row of the table. My current strategy in the Python script is:

cursor.execute("select acol from atable")
while True:
    ret = cursor.fetchone()
    if not ret:
        break
    # ... process ret ...

However, if I understand correctly, Postgres will try to return *all* the rows of the table as a single result set, which will take a long time and will probably run out of memory.

Is there a way I can modify the SQL, or do something on the Postgres side, so that I can loop over all the rows without pulling the whole table into memory at once?
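
One thing I came across in the docs is named (server-side) cursors, though I'm not sure if that's the right fit here. A minimal sketch of what I mean (the connection details are placeholders, and I'm assuming psycopg2 rather than psycopg 1):

import psycopg2

conn = psycopg2.connect("dbname=mydb")  # placeholder connection string
# Passing a name makes psycopg2 declare a server-side cursor, so rows
# are streamed from the backend in batches instead of all at once.
cur = conn.cursor(name="atable_scan")
cur.itersize = 10000  # rows fetched per round trip (default is 2000)
cur.execute("select acol from atable")
for (acol,) in cur:
    pass  # process each row here
cur.close()
conn.close()

Would something like that avoid materializing the whole result set on the client, or is there a better way to do this?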

Thanks,

-------------------------------------------------------------------
Rajarshi Guha  <[EMAIL PROTECTED]>
GPG Fingerprint: 0CCA 8EE2 2EEB 25E2 AB04  06F7 1BB9 E634 9B87 56EE
-------------------------------------------------------------------
A bug in the hand is better than one as yet undetected.


