Hi,
I was hoping I could cajole someone into helping me out. I've created
a test project that essentially reads records from a large flat file,
assigns the contents of each line in said file to a pair of models,
and saves the models (that's really all it does). The code reads
something like this:
=============
def asl_import():
    from asl.models import Model1 as A
    from asl.models import Model2 as P

    with open('data/A.full', 'r') as infile:
        for line in infile:
            fields = line.split()  # split each record into its fields
            p = P()
            a = A()
            # assign model attributes from the appropriate fields
            p.first = fields[0]
            p.second = fields[1]
            ...
            p.last = fields[n]
            a.first = fields[n + 1]
            a.second = fields[n + 2]
            ...
            a.last = fields[m]
            p.save()
            a.save()
            p.a.add(a)  # there is a many-to-many between p and a
            del p  # I know I shouldn't have to explicitly delete these,
            del a  # but I did this to try to resolve the memory problems
=============
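For concreteness, here is the record-splitting step on its own (the delimiter, field names, and field counts here are placeholders, not my real data; `parse_record` is just an illustrative helper, not part of my actual code):

```python
def parse_record(line, n_p_fields):
    """Split one flat-file record into the fields destined for the
    two models.

    Assumes whitespace-delimited fields; n_p_fields is the number of
    leading fields that belong to P, and the remainder go to A.
    """
    fields = line.split()
    return fields[:n_p_fields], fields[n_p_fields:]

# Made-up three-field record: the first two fields go to P, the rest to A.
p_fields, a_fields = parse_record("alpha beta gamma", 2)
```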
That's really all it does. If you watch it in top, however, Python's
memory consumption rises steadily as the file is processed (the file is
very large, so there are many iterations of the loop). I was wondering
if anyone knew where all this memory is going. Is Django holding on to
objects under the hood somewhere that I need to explicitly release?
Forgive me if this is too simple a question, but I have been unable to
figure it out thus far.
-Alex
--~--~---------~--~----~------------~-------~--~----~
You received this message because you are subscribed to the Google Groups
"Django users" group.
To post to this group, send email to [email protected]
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at
http://groups.google.com/group/django-users?hl=en
-~----------~----~----~----~------~----~------~--~---