assigned myself to the ticket https://code.djangoproject.com/ticket/16614
will try to give akaariai's patch a try
On Nov 4, 3:38 am, Marco Paolini wrote:
> > Postgresql:
> > .chunked(): 26716.0kB
> > .iterator(): 46652.0kB
>
> what if you use .chunked().iterator() ?
Quick test shows that the actual memory used by the queryset is around
1.2MB. Using a smaller fetch size than the default 2000 would result in
less memory usage.
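For reference, the kB figures above are process-level numbers. A minimal
sketch of that kind of measurement, assuming Linux and the stdlib
resource module (Entry is a hypothetical model, and this is not the
script actually used in this thread):

import resource
from myapp.models import Entry  # hypothetical app/model

def peak_rss_kb():
    # ru_maxrss is reported in kilobytes on Linux
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss

for obj in Entry.objects.all().iterator():
    pass  # consume the whole result set
print("peak RSS: %skB" % peak_rss_kb())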
On Nov 4, 3:29 am, Marco Paolini wrote:
> where/when do we close() cursors, or we rely on cursor __del__()
> implementation?
I guess we should rely on it going away when it happens to go away
(that is, the __del__ way).
>
> postgres named cursors can't be used in autocommit mode [1]
I don't know if
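If relying on __del__ proves fragile, a deterministic close is easy
enough. A minimal sketch using contextlib.closing and Django's default
connection (the blog_entry table is just an assumed example):

from contextlib import closing
from django.db import connection

# close the cursor deterministically instead of waiting for __del__:
with closing(connection.cursor()) as curs:
    curs.execute("SELECT id FROM blog_entry")
    rows = curs.fetchmany(100)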
On Nov 4, 3:38 am, Marco Paolini wrote:
> what if you use .chunked().iterator() ?
You can't. .chunked() returns a generator. Note that the memory usage
is total memory usage for the process, not for the query. The memory
usage for the query is probably just a small part of the total memory
usage.
On Nov 4, 1:20 am, Marco Paolini wrote:
> time to write some patches, now!
Here is a proof of concept for one way to achieve chunked reads when
using psycopg2. This lacks tests and documentation. I think the
approach is sane, though. It allows different database backends to be
able to decide how
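The patch itself isn't reproduced here, but for context this is roughly
what a server-side (named) cursor looks like in raw psycopg2 - a sketch
under assumed table/column names, not the actual patch:

import psycopg2

conn = psycopg2.connect("dbname=test")
# passing a name makes psycopg2 declare a server-side cursor, so rows
# are streamed from the server instead of being fetched all at once
curs = conn.cursor(name="chunked_read")
curs.itersize = 2000  # rows pulled per network round-trip
curs.execute("SELECT id, headline FROM blog_entry")
for row in curs:
    pass  # handle row; memory stays bounded by itersize
curs.close()
conn.commit()  # named cursors only work inside a transaction,
               # which is why they can't be used in autocommit mode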
...
The SQLite3 shared cache mode seems to suffer from the same problem
as MySQL:
"""
At any one time, a single table may have any number of active read-
locks or a single active write-lock. To read data from a table, a
connection must first obtain a read-lock. To write to a table, a
connection must first obtain a write-lock on that table.
"""
Now, calling the .iterator() directly is not safe on SQLite3. If you
do updates to objects not seen by the iterator yet, you will see those
changes. On MySQL, all the results are fetched into Python in one go,
and the only saving is from not populating the _result_cache. I guess
Oracle will just
On Nov 3, 1:06 am, I wrote:
> I did a little testing. It seems you can get the behavior you want if you
> just do this in PostgreSQL:
> for obj in Model.objects.all().iterator():  # Note the extra .iterator()
>     # handle object here.
> I would sure like verification of this test, I am tired
On Thu, Nov 3, 2011 at 2:14 AM, Javier Guerra Giraldez
wrote:
> this seems to be the case with MyISAM tables; the InnoDB engine docs
> say that SELECT statements don't set any locks, since InnoDB reads
> from a snapshot of the table.
>
> on MyISAM, there are (clumsy) workarounds by forcing the
On Wed, Nov 2, 2011 at 11:33 AM, Tom Evans wrote:
>> other connections in other transactions are locked too?
>
> Yes. The exact wording from the C API:
>
> """
> On the other hand, you shouldn't use mysql_use_result() if you are
> doing a lot of processing for each row on the client side, or if the
> output is sent to a screen on which the user may type a ^S (stop
> scroll).
> """
so, summarizing again:
- mysql supports chunked fetch but will lock the table while fetching is
  in progress (likely causing deadlocks)
- postgresql does not seem to suffer this issue, and chunked fetch seems
  doable (not trivial) using named cursors
- oracle does chunked fetch already (som
.from_tmp()
peace,
Ryan
- Original Message -
From: "Marco Paolini"
To: django-developers@googlegroups.com
Sent: Wednesday, November 2, 2011 12:11:41 PM GMT -06:00 US/Canada Central
Subject: Re: queryset caching note in docs
On Wed, Nov 2, 2011 at 4:22 PM, Marco Paolini wrote:
> On 02/11/2011 17:12, Tom Evans wrote:
>> If you do a database query that quickly returns a lot of rows from the
>> database, and each row returned from the database requires long
>> processing in django, and you use mysql_use_result, then other
>> connections cannot update those tables until the full result set has
>> been retrieved.
On Wed, Nov 2, 2011 at 11:28 AM, Marco Paolini wrote:
> mysql can do chunked row fetching from server, but only one row at a time
>
> curs = connection.cursor(CursorUseResultMixIn)
> curs.fetchmany(100) # fetches 100 rows, one by one
>
> Marco
>
The downsides to mysql_use_result over mysql_store_result
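For context, the standard way to get mysql_use_result behaviour from
MySQLdb is its SSCursor class (built on the CursorUseResultMixIn
mentioned above). A sketch against raw MySQLdb with an assumed table,
not Django's connection wrapper:

import MySQLdb
import MySQLdb.cursors

conn = MySQLdb.connect(db="test")
# SSCursor uses mysql_use_result(): rows are streamed from the server
# one at a time instead of being copied to the client up front, but the
# table stays "in use" until the whole result set has been read
curs = conn.cursor(MySQLdb.cursors.SSCursor)
curs.execute("SELECT id, headline FROM blog_entry")
rows = curs.fetchmany(100)  # 100 rows, fetched one by one
while rows:
    rows = curs.fetchmany(100)
curs.close()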
On 11/02/2011 01:36 PM, Marco Paolini wrote:
maybe we could implement something like:
for obj in qs.all().chunked(100):
    pass
.chunked() will automatically issue LIMITed SELECTs
that should work with all backends
I don't think that will be a performance improvement - this will get rid
of th
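The .chunked() here is a proposed, hypothetical API. A rough generator
sketch of the LIMITed-SELECTs idea:

def chunked(qs, size=100):
    # hypothetical helper, not Django API: slice the queryset into
    # LIMIT/OFFSET pages so only `size` instances are alive at once
    offset = 0
    while True:
        page = list(qs[offset:offset + size])
        for obj in page:
            yield obj
        if len(page) < size:
            return
        offset += size

Note that each page is a separate query, which is what the performance
objection above is about.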
On Wed, Nov 2, 2011 at 5:05 AM, Anssi Kääriäinen
wrote:
> For PostgreSQL this would be a nice feature. Any idea what MySQL and Oracle
> do currently?
If I'm following the thread correctly, the oracle backend already does
chunked reads. The default chunk size is 100 rows, IIRC.
On 11/02/2011 12:47 PM, Marco Paolini wrote:
if that option is true, sqlite should open one connection per cursor
and psycopg2 should use named cursors
The sqlite behavior leads to some problems with transaction management -
different connections, different transactions (or is there some sort of
On 02/11/11 08:48, Marco Paolini wrote:
> thanks for pointing that out to me, do you see this as an issue to be fixed?
>
> If there is some interest, I might give it a try.
>
> Maybe it's not fixable, at least I can investigate a bit
Apparently, the protocol between the Postgres client and server o
On 02/11/11 00:41, Marco Paolini wrote:
> so if you do this:
>
> for obj in Entry.objects.all():
>     pass
>
> django does this:
> - creates a cursor
> - then calls fetchmany(100) until ALL rows are fetched
> - creates a list containing ALL fetched rows
> - passes this list to the queryset instance
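A minimal sketch of the fetch loop being described (the names are
illustrative, not Django's actual internals):

def fetch_all_rows(cursor, chunk_size=100):
    # drain the cursor in fetchmany()-sized chunks, accumulating
    # every row in a single Python list
    rows = []
    while True:
        chunk = cursor.fetchmany(chunk_size)
        if not chunk:
            break
        rows.extend(chunk)
    return rows  # ALL rows end up in memory at once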
On Fri, Oct 28, 2011 at 1:05 PM, Marco Paolini wrote:
> it's a bit more complex: there are basically two phases:
> 1) row fetching from db using cursor.fetchmany()
> 2) model instance creation in queryset
>
> both are delayed as much as possible (queryset laziness)
>
> phase two (model instance
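To make the two phases concrete, a sketch of when each one fires
(_result_cache is a Django internal and may vary by version):

qs = Entry.objects.all()  # no SQL yet: querysets are lazy

it = iter(qs)   # phase 1 starts: the SQL is executed and rows begin
                # arriving via cursor.fetchmany()
obj = next(it)  # phase 2: a row is turned into a model instance,
                # which is also cached on the queryset (_result_cache)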
Hi all,
I'd like to add a small note of warning about queryset caching in the docs,
in the topics/db/queries.txt "Caching and QuerySets" section,
something like:
Keep in mind, when looping through a queryset while altering data that
might be returned by the same queryset, that the instances you get might
not be "fresh"