#31582: Django template backend allocates model cache even when iterator() is used
---------------------------------+--------------------------------------
     Reporter:  Sümer Cip        |                    Owner:  nobody
         Type:  Bug              |                   Status:  closed
    Component:  Template system  |                  Version:  3.0
     Severity:  Normal           |               Resolution:  needsinfo
     Keywords:                   |             Triage Stage:  Unreviewed
    Has patch:  0                |      Needs documentation:  0
  Needs tests:  0                |  Patch needs improvement:  0
Easy pickings:  0                |                    UI/UX:  0
---------------------------------+--------------------------------------

Comment (by Sümer Cip):

 Let me clarify this a bit more.

 I have modified {{{ModelIterable.__iter__}}} as follows:

 {{{
 # Assumes tracemalloc has been imported and tracemalloc.start() was
 # called before the queryset is evaluated.
 class ModelIterable(BaseIterable):
     """Iterable that yields a model instance for each row."""

     def __iter__(self):
         print("Enter ModelIterable.__iter__", tracemalloc.get_traced_memory())
         ...  # original body of ModelIterable.__iter__ left unchanged
         print("Leave ModelIterable.__iter__", tracemalloc.get_traced_memory())
 }}}

 The code above prints the currently traced memory and the peak traced
 memory (the two values returned by {{{tracemalloc.get_traced_memory()}}})
 on entering and on leaving the function. When I run it without an
 {{{iterator()}}} call, I get the following values:

 {{{
 DTL no iterator (7K items)
 Enter ModelIterable.__iter__ (1340851, 1342301)
 Leave ModelIterable.__iter__ (5063404, 5063988)
 }}}

 Now, as you indicated, this seems OK because each {{{Comment}}} object is
 created, its values are initialized, and the results are cached. Let's see
 what happens if we run this with {{{iterator()}}}:

 {{{
 DTL iterator (7K items)
 Enter ModelIterable.__iter__ (92488, 92995)
 Leave ModelIterable.__iter__ (5113261, 5219668)
 }}}

 Now, the current and peak memory start at a much smaller value, yet somehow
 both grow from roughly 92 KB to roughly 5 MB. We confirmed from the previous
 run that 5 MB is the memory needed to hold all of the objects in the result
 cache. So either the objects are being cached somewhere, or we are not
 freeing memory inside this function.

 After returning from this function the current memory drops back to normal,
 but peak memory still grows with the number of items, which should not be
 the case for {{{iterator()}}}, right?
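
 For comparison, outside of any template rendering, this is how I would
 expect the two access patterns to behave on their own (a minimal sketch to
 be run in a Django shell; {{{myapp.models.Comment}}} is a hypothetical
 stand-in for the actual model, and the numbers will of course differ):

 {{{
 import tracemalloc

 from myapp.models import Comment  # hypothetical app path for the model

 tracemalloc.start()
 # Plain iteration: the queryset keeps every instance in its internal
 # result cache (_result_cache), so current and peak memory grow with
 # the number of rows.
 qs = Comment.objects.all()
 for comment in qs:
     pass
 print("no iterator", tracemalloc.get_traced_memory())
 tracemalloc.stop()

 tracemalloc.start()
 # iterator(): rows are streamed and each instance becomes collectable
 # as soon as the loop moves on, so peak memory should stay roughly flat.
 for comment in Comment.objects.all().iterator():
     pass
 print("iterator   ", tracemalloc.get_traced_memory())
 tracemalloc.stop()
 }}}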

 I have verified that the code above works as expected with the Jinja2
 engine. See the results:

 {{{
 Jinja2 no iterator (7K ITEMS)
 Enter ModelIterable.__iter__ (1362882, 1364332)
 Leave ModelIterable.__iter__ (5080351, 5080935)

 Jinja2 iterator (7K ITEMS)
 Enter ModelIterable.__iter__ (115616, 116123)
 Leave ModelIterable.__iter__ (878651, 1343215)
 }}}

 You can see that when {{{iterator()}}} is used, the memory numbers do not
 grow by a factor of ~50; they never come close to the ~5 MB level we see in
 the DTL iterator run.
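
 For context, the comparison was along these lines (a rough sketch of the
 harness, not the exact code; it assumes both backends are configured in
 {{{TEMPLATES}}} under their default aliases, and {{{myapp.models.Comment}}}
 again stands in for the real model):

 {{{
 import tracemalloc

 from django.template import engines

 from myapp.models import Comment  # hypothetical model with ~7K rows

 # The same source works for both backends because this loop syntax is
 # shared between the DTL and Jinja2.
 TEMPLATE = "{% for c in comments %}{{ c.pk }}{% endfor %}"


 def render_and_measure(engine_alias, use_iterator):
     qs = Comment.objects.all()
     comments = qs.iterator() if use_iterator else qs
     template = engines[engine_alias].from_string(TEMPLATE)

     tracemalloc.start()
     template.render({"comments": comments})
     print(engine_alias, "iterator" if use_iterator else "no iterator",
           tracemalloc.get_traced_memory())
     tracemalloc.stop()


 for alias in ("django", "jinja2"):
     for use_iterator in (False, True):
         render_and_measure(alias, use_iterator)
 }}}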


 I suspect that in the DTL case the objects yielded by the loop in
 {{{ModelIterable.__iter__}}} are somehow being kept alive by a reference
 held somewhere else, maybe? (See the sketch after the snippet below.)

 {{{
 for row in compiler.results_iter(results):
     obj = model_cls.from_db(db, init_list,
                             row[model_fields_start:model_fields_end])
     ...
 }}}
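
 If something between the template and the queryset materializes the
 iterator into a list before looping over it, the peak would grow exactly
 like this. A framework-free sketch of that effect (an illustration of the
 hypothesis only, not the actual DTL code):

 {{{
 import tracemalloc


 def consume_streaming(iterable):
     # Each item can be garbage collected as soon as the loop moves on,
     # so peak memory stays roughly flat (the Jinja2-like behaviour).
     for item in iterable:
         pass


 def consume_materialized(iterable):
     # Materialize the iterator first, e.g. to learn its length. Every
     # item now stays referenced until the loop finishes, so peak memory
     # grows with the number of items (the DTL-like behaviour).
     items = list(iterable)
     for item in items:
         pass


 tracemalloc.start()
 consume_streaming(bytes(700) for _ in range(7000))
 print("streaming   ", tracemalloc.get_traced_memory())
 tracemalloc.stop()

 tracemalloc.start()
 consume_materialized(bytes(700) for _ in range(7000))
 print("materialized", tracemalloc.get_traced_memory())
 tracemalloc.stop()
 }}}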

-- 
Ticket URL: <https://code.djangoproject.com/ticket/31582#comment:2>
