On 14 Mar, 08:51, Malcolm Tredinnick <[EMAIL PROTECTED]> wrote:
>
> I hadn't realised that existing cursors pulled everything over at once
> -- I would have "guessed" (based on no information other than "that's
> the way I would think of doing it") that they would pull a chunk of
> results each time to find a balance between efficiency and memory usage.

In fact, pyPgSQL typically uses server-side cursors automatically, as
far as I recall. In contrast, psycopg2 requires an explicit cursor
name in order to make use of them, as described earlier. Since this
breaks the illusion of DB-API compatibility, I've made a rough patch
to psycopg2 which can be found here:

http://www.initd.org/tracker/psycopg/ticket/158
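
For anyone following along, here is a rough sketch of the difference
(the DSN, table name and batch size are only placeholders, not
anything from my application):

    import psycopg2

    conn = psycopg2.connect("dbname=test")  # placeholder DSN

    # Ordinary cursor: psycopg2 transfers the whole result set to the
    # client as part of execute(), before any fetch* call is made.
    cur = conn.cursor()
    cur.execute("SELECT * FROM big_table")  # hypothetical table

    # Named cursor: psycopg2 declares a cursor on the server instead,
    # so rows only cross the wire as they are fetched.
    named = conn.cursor(name='big_query')
    named.execute("SELECT * FROM big_table")
    first_batch = named.fetchmany(1000)  # pulls only this batch over

    named.close()
    conn.close()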

> I guess there's a bunch of questions to ask about how this performs,
> such as whether destroying the cursor object correctly notifies the
> server that the result can now be freed (otherwise you've transferred
> any memory problems you might have to the server). It sounds interesting
> for some high-volume cases, though.

Transferring the entire result set from the server is highly
undesirable for various parts of an application I am working on. It
was only when I saw my client process using 1.5 GB of memory that I
investigated and discovered these differences between the cursor
policies of the various database modules; that information is
critical if you plan to work with more than a few thousand rows at a
time. My reading of the DB-API was that fetchone and fetchmany do
precisely what their names suggest, rather than a fetchall behind the
scenes, but I'm clearly not in agreement with everyone on that point.
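
What I had in mind is the usual chunked loop, which only keeps a
bounded number of rows in client memory if fetchmany genuinely
fetches incrementally (cursor name, query and chunk size are again
purely illustrative):

    import psycopg2

    def process(row):
        pass  # placeholder for the real per-row work

    conn = psycopg2.connect("dbname=test")  # placeholder DSN
    CHUNK = 5000

    cur = conn.cursor(name='stream_rows')  # server-side cursor
    cur.execute("SELECT id, payload FROM big_table")

    while True:
        rows = cur.fetchmany(CHUNK)  # at most CHUNK rows in memory
        if not rows:
            break
        for row in rows:
            process(row)

    cur.close()
    conn.close()

With a client-side cursor the same loop still "works", but the full
result set has already been materialised by execute(), which is how I
ended up at 1.5 GB.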

Paul

