My application basically parses log files and presents the results using web2py.

Each flat file is read line by line and inserted into a sqlite file.
I later read the complete sqlite files back through global rows variables,
apply various filters, and present the data.
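The read pattern looks roughly like this (a minimal stdlib sketch with made-up table, column, and function names, not my actual web2py/DAL code):

```python
import sqlite3

# Hypothetical sketch of the pattern described above: parsed log lines sit
# in a sqlite table, and a module-level global accumulates every row.
manager_rows = []  # grows with each log that is read

def read_manager_rows(conn):
    """Pull the whole (made-up) 'manager' table into the global list."""
    cur = conn.execute("SELECT id, line FROM manager")
    manager_rows.extend(cur.fetchall())  # entire result set held in memory

# Demo with an in-memory database standing in for one sqlite log file.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE manager (id INTEGER PRIMARY KEY, line TEXT)")
conn.executemany("INSERT INTO manager (line) VALUES (?)",
                 [("log line %d" % i,) for i in range(1000)])
read_manager_rows(conn)
print(len(manager_rows))
```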
After creating rows as global variables, when I read multiple logs the
amount of memory just keeps growing; nothing is freed back.
So I used exclude() to remove all rows from the globals.
This helped quite a bit, but it seems that it does not release the entire
memory.
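At the Python level, exclude() plus gc.collect() does drop the references, as this stdlib-only sketch shows (data and names made up); what it cannot guarantee is that CPython hands the freed memory back to the OS, which would explain why the resident numbers only partly shrink:

```python
import gc

# The row objects become unreachable and are collected, but CPython's
# allocator may keep the freed memory for reuse rather than returning it
# to the OS, so the resident size reported by memory_profiler can stay high.
manager_rows = [{"id": i, "line": "x" * 100} for i in range(10000)]

del manager_rows[:]         # drop every reference, like excluding all rows
unreachable = gc.collect()  # force a full collection pass

print(len(manager_rows))    # the rows are gone from Python's point of view
```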

I was trying to profile the memory, and one can see the memory utilized by
the rows:

Line #    Mem usage    Increment   Line Contents
================================================
    96     37.5 MiB      0.0 MiB   @profile
    97                             def read_all_rows():
   100     40.3 MiB      2.8 MiB       read_alerts_rows()
   101     56.6 MiB     16.3 MiB       read_manager_rows()
   102    114.7 MiB     58.0 MiB       read_store_rows()
   103    129.9 MiB     15.3 MiB       read_hydra_rows()
   104    146.0 MiB     16.1 MiB       read_kernel_rows()
   105    170.0 MiB     24.0 MiB       read_gossip_rows()

Total rows after reading manager_rows = 9775
Total rows after reading store_rows = 36378
Total rows after reading gossip_rows = 13763

Line #    Mem usage    Increment   Line Contents
================================================
   111    174.3 MiB      0.0 MiB   @profile
   112                             def clear_all_rows():
   116    174.3 MiB      0.0 MiB       if len(manager_rows) > 0:
   117    174.3 MiB      0.0 MiB           for row in alerts_rows.exclude(lambda row: row.id > 0):
   118    174.3 MiB      0.0 MiB               pass
   119    174.3 MiB      0.0 MiB           gc.collect()
   120    174.3 MiB      0.0 MiB           for row in manager_rows.exclude(lambda row: row.id > 0):
   121    174.3 MiB      0.0 MiB               pass
   122    163.3 MiB    -11.0 MiB           gc.collect()
   123    163.8 MiB      0.5 MiB           for row in store_rows.exclude(lambda row: row.id > 0):
   124    163.8 MiB      0.0 MiB               pass
   125    124.4 MiB    -39.4 MiB           gc.collect()
   126    124.5 MiB      0.1 MiB           for row in hydra_rows.exclude(lambda row: row.id > 0):
   127    124.5 MiB      0.0 MiB               pass
   128    114.8 MiB     -9.7 MiB           gc.collect()
   129    114.9 MiB      0.1 MiB           for row in kernel_rows.exclude(lambda row: row.id > 0):
   130    114.9 MiB      0.0 MiB               pass
   131    104.2 MiB    -10.7 MiB           gc.collect()
   132    104.5 MiB      0.3 MiB           for row in gossip_rows.exclude(lambda row: row.id > 0):
   133    104.5 MiB      0.0 MiB               pass
   134     88.0 MiB    -16.5 MiB           gc.collect()

   140     87.7 MiB      0.0 MiB           gc.collect()
   
Total rows after exclude manager_rows = 0
Total rows after exclude store_rows = 0
Total rows after exclude gossip_rows = 0

We can see that not everything that was taken is given back.

This was a small set of logs, but sometimes the logs are larger and total
memory can reach around 1.4 GB. The program crashes if memory goes above
1.6/1.8 GB.

So giving back all of the used memory becomes rather important.
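As a point of comparison, a streaming read with plain sqlite3 (a sketch; the table name and layout are invented, and this is not my web2py code) keeps the working set bounded instead of materialising whole tables into globals:

```python
import sqlite3

def iter_rows(conn, table):
    """Yield rows one at a time so only a small batch is alive at once."""
    # NOTE: the table name is interpolated for this sketch only.
    cur = conn.execute("SELECT id, line FROM %s" % table)
    while True:
        batch = cur.fetchmany(500)  # small, bounded working set
        if not batch:
            break
        for row in batch:
            yield row

# Demo: filter and count without ever building a full in-memory list.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE store (id INTEGER PRIMARY KEY, line TEXT)")
conn.executemany("INSERT INTO store (line) VALUES (?)",
                 [("entry %d" % i,) for i in range(2000)])

matches = sum(1 for _id, line in iter_rows(conn, "store") if "7" in line)
print(matches)
```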

Any ideas or any suggestions would be helpful.
