Come to think of it, the fact that Sezen requests 11k events also
suggests that there is something wrong in the design scope of either
Zeitgeist or Sezen.

Before we lose ourselves in an optimization/cache-pruning spree (which
may very well consume a lot of development time for some very small
gains), I think we should figure out why it needs 11k events, and
whether ZG needs some API additions or Sezen needs fixing.
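One direction such an API addition (or Sezen fix) could take is paging through results instead of asking for everything at once, so neither the daemon nor the client ever has to hold an 11k-event DBus reply in memory. A minimal sketch of the idea, with a hypothetical `fetch_page` callable standing in for a real Zeitgeist query (e.g. a FindEvents call with an offset and a num_events limit) and an arbitrary page size:

```python
def iter_events(fetch_page, page_size=200):
    """Yield events page by page instead of in one huge request.

    fetch_page(offset, limit) is a stand-in for a real query against
    the Zeitgeist daemon; it must return a list of at most `limit`
    events starting at `offset`.
    """
    offset = 0
    while True:
        page = fetch_page(offset, page_size)
        if not page:
            break
        for event in page:
            yield event
        if len(page) < page_size:
            # Short page: we have reached the end of the result set.
            break
        offset += page_size

# Example with a fake backend holding 450 "events":
events = list(range(450))
fake_fetch = lambda offset, limit: events[offset:offset + limit]
assert list(iter_events(fake_fetch)) == events
```

With something like this, each DBus message stays small and the client can stop early once it has enough results, rather than forcing the daemon to serialize the whole result set up front.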

Large requests increase memory usage considerably

Status in Zeitgeist Framework: Confirmed

Bug description:
I'm seeing with standalone Sezen that after running it, memory usage of the
zeitgeist-daemon process goes up from ~13MB to ~40MB. This is understandable:
when Sezen starts, it makes one big query asking for everything grouped by
most recent subjects, and in my case this returns ~11 thousand events, so the
extra 30MB can be explained by allocating memory for the DBus reply.

Still, my question is whether Zeitgeist should be at the mercy of applications,
where nothing prevents them from spiking the memory usage of the core process.
(I've already seen zeitgeist using 80-100MB of memory on my system a couple of
times.) Perhaps there's a way to tell python dbus to free its buffers?
