A few days ago I started working on the PyGreSQL bindings by adding 
support for bind parameters, prepared statements and server-side cursors.

To make a long story short, as I progressed through this task, the 
changes I was making to PyGreSQL became more and more extensive, 
modifying the original's behavior in incompatible ways. I did not solve 
the challenge of preserving PyGreSQL's existing behavior, since the 
native PostgreSQL bind parameter syntax ($1, $2, etc.) is very different 
from what PyGreSQL simulated (python-style %s or %(var)s), although it 
would be possible to bridge the two with some regexp foo.
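
For the curious, here is a minimal sketch of what that regexp foo might 
look like - a hypothetical rewrite() helper, not something my bindings 
actually ship. It assumes a query uses either all-positional %s or 
all-named %(name)s placeholders, and it ignores the problems of %% 
escapes and of placeholders inside quoted string literals:

        import re

        _PLACEHOLDER = re.compile(r'%\((\w+)\)s|%s')

        def rewrite(query, params):
            # Walk the query left to right, replacing each placeholder
            # with PostgreSQL's $1, $2, ... and collecting the arguments
            # in the order they appear.
            args = []
            def repl(match):
                name = match.group(1)
                if name is not None:
                    args.append(params[name])       # named: params is a mapping
                else:
                    args.append(params[len(args)])  # positional: params is a sequence
                return '$%d' % len(args)
            return _PLACEHOLDER.sub(repl, query), args

        # rewrite("select * from foo where id = %s", (2,))
        #     -> ('select * from foo where id = $1', [2])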

I have put the full details about what I did on my page here:

     http://blogs.conary.com/index.php/gafton

The bindings themselves (also including a demo.py that shows how I think 
they should be used) can be downloaded from here:

     http://people.rpath.com/~gafton/pgsql/

I have seen the recent discussions about the DB API 2.0 shortcomings 
when it comes to working with modern databases. I tend to agree that the 
current API serves more as a guideline - it specifies the bare minimum, 
on top of which everybody keeps reinventing the same extensions.

So, here is what I would like to see covered by a future DB API spec:

- bind parameters. The current pythonesque %(name)s specification is not 
ideal. It requires various levels of pain in escaping the arguments passed 
to a query. Most database backends accept bind parameters, albeit with 
different syntaxes. For code portability, I'd rather parse the SQL query 
and rewrite it to use the backend's bind parameter syntax (along the 
lines of the sketch above) than do the crazy escaping of user input that 
is currently being done.

(As a side note, my bindings use PostgreSQL's native support to do things 
like cu.execute("select * from foo where id = $1", x) without having to 
worry about escaping x.)
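
To make the contrast concrete, here is the same hypothetical query in the 
two styles - with the simulated placeholders the binding has to quote and 
splice the value into the SQL text, while with native bind parameters the 
value travels to the server separately from the query:

        # simulated python-style placeholder (value escaped into the SQL):
        cu.execute("select * from foo where id = %(id)s", {'id': x})

        # native PostgreSQL bind parameter (value sent out of band):
        cu.execute("select * from foo where id = $1", x)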

- prepared statements. On large loops this is a real gain. For lack of a 
better specification, I currently use something like:
        prepcu = db.prepare("select * from foo where id = $1")
        prepcu.execute(2)
The prepared statement cursor has the same fetchone(), fetchall() and 
other methods as the regular cursors, except that it only accepts 
parameters to its execute() and executemany() calls.
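
To show where the gain comes from on large loops, here is a hypothetical 
bulk insert, assuming executemany() takes a sequence of parameter tuples 
like the regular cursor method does - the statement is parsed once, then 
executed for every row:

        prepcu = db.prepare("insert into foo (id, name) values ($1, $2)")
        prepcu.executemany([(1, 'one'), (2, 'two'), (3, 'three')])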

- server-side cursors. Currently, most bindings for most databases have 
to decide what to do after a cursor.execute() call - do they automatically 
retrieve all the resulting rows into the client's memory, or do they 
retrieve them row by row, pinging the server before every retrieval to get 
more data (hey, not everybody is using Oracle ;^). The DB API has no 
support for controlling this in a consistent fashion, even though Python 
solved the issue of dict.items() vs dict.iteritems() a long time ago. 
Application writers should have a choice in how their cursors behave.

(Again, in my bindings I have added support for db.iteritems(), which 
returns a cursor that retrieves rows from the server side in chunks 
instead of all at once. I left db.cursor() returning a cursor that 
downloads all results into client memory after an execute - which seems 
to be the prevailing default.)
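
Usage looks something like this (a sketch; it assumes the chunked cursor 
can be iterated over, fetching a new chunk from the server whenever the 
previous one runs out - hugetable and do_something() are made up for the 
example):

        cu = db.iteritems()
        cu.execute("select * from hugetable")
        for row in cu:
            # rows arrive from the server in chunks, not all at once
            do_something(row)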

Cristian
-- 
Cristian Gafton
rPath, Inc.
