So I created a quick-and-dirty extension to see how much memory is consumed
before things are sent over D-Bus.
The idea is that I ask for ALL events directly from the engine, 10 seconds
after startup. Here are my observations:
find_eventids: starts at 16.8 MB and ends at 25.4 MB
find_events: starts at 16.8 MB and ends at 106.2 MB
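For reference, here is one minimal way to sample a process's memory usage from Python. This is only a sketch of the measurement technique, not necessarily what the attached memory.py does:

```python
import resource

def peak_rss_mb():
    """Return this process's peak resident set size in MB.

    Note: ru_maxrss is reported in kilobytes on Linux
    (and in bytes on macOS), so adjust accordingly.
    """
    kb = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    return kb / 1024.0
```

Sampling this once before the query and once after gives the "starts at / ends at" numbers above.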
I think for the first case we can't do much. For the second case, however, we
need to reduce the memory footprint of the Event and Subject classes in our
data model, maybe by using __slots__.
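To illustrate the __slots__ idea: when a class declares __slots__, its instances store attributes in fixed slots instead of a per-instance __dict__, which adds up when a query materializes thousands of objects. The class below is a hypothetical, simplified stand-in for the real Subject, not the actual datamodel code:

```python
class SubjectDict:
    """Plain class: every instance carries a __dict__."""
    def __init__(self, uri, interpretation):
        self.uri = uri
        self.interpretation = interpretation

class SubjectSlots:
    """Slotted class: attributes live in fixed slots, no __dict__."""
    __slots__ = ("uri", "interpretation")

    def __init__(self, uri, interpretation):
        self.uri = uri
        self.interpretation = interpretation
```

A slotted instance has no __dict__ at all, and trying to set an attribute that is not listed in __slots__ raises AttributeError, so the change also catches typos.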
I also wrote two scripts, one in Python and the other in Vala. Both connect to
the DB and ask for all ids. The Vala version uses 6.6 MB and the Python one
12.9 MB.
Maybe we can write our own cursor around SQLite in C and create bindings for
it, to let us control the memory usage; and instead of returning lists from
fetchall() we could make it return tuples.
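Before going all the way to C bindings, part of the win is available in pure Python: avoid materializing one huge fetchall() list and instead iterate in bounded batches, yielding plain tuples (or single values). A rough sketch of the idea, assuming an `event` table with an `id` column (the table name here is an assumption, not the real schema):

```python
import sqlite3

def iter_event_ids(db_path, batch=1000):
    """Lazily yield event ids instead of building one big
    fetchall() list; only `batch` rows are held in memory at a time."""
    conn = sqlite3.connect(db_path)
    try:
        cur = conn.execute("SELECT id FROM event")
        while True:
            rows = cur.fetchmany(batch)  # small, bounded list of tuples
            if not rows:
                break
            for (event_id,) in rows:
                yield event_id
    finally:
        conn.close()
```

A C cursor could still shave off the per-row tuple overhead, but this keeps the peak allocation proportional to the batch size rather than the full result set.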
** Attachment added: "memory.py"
Large requests increase memory usage considerably
Status in Zeitgeist Framework:
Status in Zeitgeist Extensions:
I'm seeing with standalone Sezen that after running it, memory usage of the
zeitgeist-daemon process goes up from ~13 MB to ~40 MB. This is
understandable: when Sezen starts, it does one big query asking for
everything grouped by most recent subjects, which in my case returns ~11
thousand events, so the extra ~30 MB can be explained by allocating memory
for the D-Bus reply.
Still, my question is whether Zeitgeist should be at the mercy of
applications, since nothing prevents them from spiking the memory usage of
the core process. (I have already seen zeitgeist using 80-100 MB of memory on
my system a couple of times.) Perhaps there's a way to tell python-dbus to
free its buffers?
Mailing list: https://launchpad.net/~zeitgeist