I'm writing a DBAPI module for a custom database wrapper. The way I get data from a query is to ask the underlying database for a chunk of rows, at which point it spits out an unpredictable number of rows (it's a black box to me).
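For concreteness, the call I have available behaves roughly like this (get_chunk is a made-up name; the real wrapper's API looks different, but the behavior is the same):

    # one request to the wrapper; I don't get to choose how many rows come back
    chunk = db.get_chunk()   # might be 10 rows, might be 100, might be [] at end of results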
I'm trying to figure out how to implement cursor.fetchmany on top of this, and to make it as efficient as possible. There are two limiting cases:

1. The Python caller asks for the next 10 rows, but the DB gives me the next 100 rows. Obviously, I just cache the extra rows in this case.

2. The Python caller asks for the next 100 rows, but the next call to the DB only gives me 10 rows.

In case (2), PEP-249 seems to be telling me that I should keep calling the DB to get more rows:

    The number of rows to fetch per call is specified by the parameter.
    If it is not given, the cursor's arraysize determines the number of
    rows to be fetched. *The method should try to fetch as many rows as
    indicated by the size parameter.* If this is not possible due to the
    specified number of rows not being available, fewer rows may be
    returned.

However, I'd rather implement it slightly differently: "... fetch as many rows as possible without incurring multiple calls to the underlying DB engine, but no more than indicated by the size parameter." (A rough sketch of what I mean is in the P.S. below.)

Hiding a loop that makes multiple DB calls inside the DBAPI module, rather than in the client code, strikes me as a bad approach because it adds extra, unpredictable latency. A properly written client will have to do some looping anyway:

    # client code to get the next 10k rows from the cursor
    rows = []
    while len(rows) < 10000:
        # PEP-249 says fetchmany should do its best to get as many rows
        # as requested -- but that would mean a similar or identical
        # loop inside the DBAPI module
        more_rows = cur.fetchmany(10000 - len(rows))
        if not more_rows:
            break
        rows += more_rows

What do you think? Is there some good reason to adhere strictly to PEP-249?

Thanks,
Dan
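P.S. For concreteness, here is a rough sketch of the fetchmany semantics I'm proposing -- at most one call to the underlying engine per fetchmany call, with any surplus rows buffered on the cursor. Names like get_chunk are placeholders for the real wrapper API, and arraysize/error handling is omitted:

    # sketch only: fetchmany that never makes more than one underlying DB call
    class Cursor:
        def __init__(self, db):
            self._db = db          # the black-box wrapper
            self._buffer = []      # rows already fetched from the DB but not yet handed out
            self.arraysize = 1

        def fetchmany(self, size=None):
            if size is None:
                size = self.arraysize
            # Only hit the DB if the buffer can't satisfy the request,
            # and only hit it once -- this is where I deviate from PEP-249.
            if len(self._buffer) < size:
                self._buffer.extend(self._db.get_chunk())
            rows, self._buffer = self._buffer[:size], self._buffer[size:]
            return rows

Case (1) falls out of the buffering (extra rows are kept for the next call), and case (2) simply returns fewer rows than requested instead of looping.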