You should listen to Anthony and everybody else: fetching 1M rows in a single
request is not going to be "speedy" no matter what.
We (the developers) are aware that pyDAL isn't convenient for such demands,
and we already have on the TODO list an iterator that will alleviate the
problem.
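Until that iterator lands, the same idea can be approximated today at the DB-API level with fetchmany(), which pulls rows in bounded chunks instead of materializing the whole result set. This is a minimal sketch using the stdlib sqlite3 module as a stand-in driver; the table, row count, and chunk size are hypothetical:

```python
import sqlite3

# Hypothetical stand-in for my_large_table with 1000 rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE my_large_table (id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO my_large_table (id) VALUES (?)",
                 [(i,) for i in range(1, 1001)])

def iter_rows(cursor, chunk_size=100):
    # Generator: fetch fixed-size chunks instead of fetchall(),
    # so memory stays bounded regardless of the result-set size.
    while True:
        chunk = cursor.fetchmany(chunk_size)
        if not chunk:
            break
        for row in chunk:
            yield row

cur = conn.execute("SELECT id FROM my_large_table WHERE id > 0")
total = sum(row[0] for row in iter_rows(cur))
print(total)  # 500500
```

The point is that nothing above ever holds more than chunk_size rows in memory at once, which is exactly what a streaming iterator in pyDAL would give you.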
However, a small recap always comes in handy. Small improvements (in
growing order of "disruption" from the normal pyDAL API) can be made:
a)
db(db.my_large_table.id>0).select(cacheable=True)
cons: the rows don't have row.update_record() or row.delete_record()
b)
def myprocessor(rows, fields, colnames, blob_decode=True, cacheable=False):
    return [dict(zip(colnames, row)) for row in rows]
db(db.my_large_table.id>0).select(processor=myprocessor)
cons: you need to trust the column types returned by the driver
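The processor's core transform can be checked in isolation with plain Python, no database needed. The column names and rows below are hypothetical stand-ins for what the driver would hand to the processor:

```python
# Stand-ins for driver output: colnames as pyDAL passes them,
# rows as raw tuples straight from the DB driver (hypothetical data).
colnames = ["my_large_table.id", "my_large_table.name"]
rows = [(1, "alpha"), (2, "beta")]

def myprocessor(rows, fields, colnames, blob_decode=True, cacheable=False):
    # Skip pyDAL's Row construction entirely: one plain dict per row.
    return [dict(zip(colnames, row)) for row in rows]

result = myprocessor(rows, fields=None, colnames=colnames)
# Each row is a plain dict keyed by "table.field" strings; the values are
# whatever the driver returned -- no type coercion is applied.
print(result[0])  # {'my_large_table.id': 1, 'my_large_table.name': 'alpha'}
```

Building plain dicts this way is much cheaper than constructing Row objects, which is where the speedup comes from.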
c)
db.executesql(db(db.my_large_table.id>0)._select()[:-1], as_dict=True)
cons: you need to trust the column types returned by the driver, and each row
may have different keys than the ones you're accustomed to
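That last caveat (dict keys taken verbatim from the driver, not in pyDAL's usual "table.field" form) can be illustrated with the stdlib sqlite3 module standing in for the real driver; the in-memory table and data are hypothetical:

```python
import sqlite3

# Hypothetical stand-in for my_large_table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE my_large_table (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO my_large_table (id, name) VALUES (?, ?)",
                 [(1, "alpha"), (2, "beta")])

cur = conn.execute("SELECT id, name FROM my_large_table WHERE id > 0")
# Keys come from cursor.description, i.e. whatever the driver reports --
# typically the bare column name, not pyDAL's "table.field" form.
colnames = [d[0] for d in cur.description]
as_dict = [dict(zip(colnames, row)) for row in cur]
print(as_dict[0])  # {'id': 1, 'name': 'alpha'}
```

So after a join, or with aliased columns, the keys you get back depend entirely on what the driver reports, which is why this option is the most "disruptive" of the three.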
--
Resources:
- http://web2py.com
- http://web2py.com/book (Documentation)
- http://github.com/web2py/web2py (Source code)
- https://code.google.com/p/web2py/issues/list (Report Issues)