As far as I remember, the main reason we dropped this iterator idea as
part of our API redesign efforts a year ago was that there is no easy
(and performant) way to do batched database queries. The problem is
that we don't have a 'stable' order of events. Just think of a query
which returns the last inserted events: if new events are inserted
between requesting one page and the next, how should we handle them?
Client-side batching, using FindEventIds, slicing over the result and
getting the actual event objects on demand, seems a much more
reasonable approach.

-- 
Large requests increase memory usage considerably
https://bugs.launchpad.net/bugs/624310

Status in Zeitgeist Framework: Confirmed
Status in Zeitgeist Extensions: New

Bug description:
I'm seeing with standalone Sezen that after running it, memory usage
of the zeitgeist-daemon process goes up from ~13MB to ~40MB. This is
understandable: when Sezen starts, it does one big query asking for
everything grouped by most recent subjects, which in my case returns
~11 thousand events, so the extra ~30MB can be explained by the memory
allocated for the DBus reply.

Still, my question is whether Zeitgeist should be at the mercy of
applications, where nothing prevents them from spiking the memory
usage of the core process. (I have already seen zeitgeist using
80-100MB of memory on my system a couple of times.) Perhaps there's a
way to tell python-dbus to free its buffers?


