I ran some tests tonight on an activity log with 18k random web history events.
The in-memory size of the daemon is about 7MiB on startup.
After running FindEvents(TimeRange.until_now(), [], StorageState.Any, 0, 1)
(the same query Sezen runs *over and over again*),
the daemon's memory consumption grows to ~38MiB.
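For reference, here is a minimal sketch of how figures like the ~7MiB / ~38MiB above can be sampled. It is Linux-only (it assumes a /proc filesystem) and is not part of the original test setup:

```python
# Sketch: sample a process's resident memory by reading VmRSS from
# /proc/<pid>/status. Linux-only; the sampling approach is an assumption,
# not the method actually used in the report above.
import os

def rss_kib(pid):
    """Resident set size of `pid` in KiB, read from /proc/<pid>/status."""
    with open("/proc/%d/status" % pid) as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1])  # /proc reports the value in kB
    return 0

# Sample this process; to watch the daemon, pass zeitgeist-daemon's pid.
print(rss_kib(os.getpid()))
```

Sampling before and after the FindEvents call gives the delta directly.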
After this test I used a pure ZeitgeistEngine instance and ran
engine.find_events(TimeRange.until_now(), [], StorageState.Any, 0, 1)
on the same database (without any DBus interaction),
and I observed the same memory consumption boost.
Conclusion: DBus is doing well and its buffers are freed; it must be something else.
Next step: try to find a way to track sqlite3_memory_used() from Python (or
find some other way to measure how much memory is used by SQLite).
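One possible way to do that step, sketched below, is loading the SQLite shared library with ctypes and calling sqlite3_memory_used() directly. This assumes a platform where Python's sqlite3 module links the same shared libsqlite3 that ctypes loads (typical on Linux distros); otherwise the counter would describe a separate copy of the library:

```python
# Sketch: read sqlite3_memory_used() from Python via ctypes.
# Assumption: libsqlite3 is available as a shared library and is the same
# copy that the sqlite3 module uses in this process.
import ctypes
import ctypes.util
import sqlite3

libname = ctypes.util.find_library("sqlite3") or "libsqlite3.so.0"
libsqlite = ctypes.CDLL(libname)
libsqlite.sqlite3_memory_used.restype = ctypes.c_int64
libsqlite.sqlite3_memory_used.argtypes = []

def sqlite_memory_used():
    """Bytes currently allocated by SQLite in this process."""
    return libsqlite.sqlite3_memory_used()

# Allocate something through SQLite so the counter has something to report.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x)")
conn.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(1000)])
print(sqlite_memory_used())
```

Calling sqlite_memory_used() before and after a big find_events() run would show how much of the growth SQLite itself accounts for.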
Large requests increase memory usage considerably
Status in Zeitgeist Framework: Confirmed
Status in Zeitgeist Extensions: New
With standalone Sezen I'm seeing that after running it, the memory usage of the
zeitgeist-daemon process goes up from ~13MB to ~40MB. This is understandable:
when Sezen starts, it issues one big query asking for everything grouped by
most recent subjects, and in my case this returns ~11 thousand events, so the
extra 30MB can be explained by the memory allocated for the DBus reply.
Still, my question is whether Zeitgeist should be at the mercy of its client
applications, where nothing prevents them from spiking the memory usage of the
core process. (I've already seen zeitgeist using 80-100MB of memory on my
system a couple of times.) Perhaps there's a way to tell python-dbus to free its buffers?
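If the spikes turn out to come from SQLite's page cache rather than from DBus buffers, one knob worth experimenting with is PRAGMA cache_size. A minimal sketch; the 2 MiB limit here is an arbitrary example value, not a recommendation:

```python
# Sketch: cap SQLite's page cache, assuming (unverified) that the cache is
# what grows during large queries. The -2048 value is an arbitrary example.
import sqlite3

conn = sqlite3.connect(":memory:")
# A negative value sets the cache size in KiB instead of pages.
conn.execute("PRAGMA cache_size = -2048")  # cap the page cache at ~2 MiB
limit = conn.execute("PRAGMA cache_size").fetchone()[0]
print(limit)  # -2048
```

This only bounds SQLite's own allocations; it would not help if the growth really is python-dbus holding onto marshalled reply buffers.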