** Branch linked: lp:~thekorn/zeitgeist/db_schema_3
** Branch unlinked: lp:~thekorn/zeitgeist/db_schema_3
--
ZeitgeistEngine.get_events() returns wrong result if there are duplicates in
the ids argument
https://bugs.launchpad.net/bugs/673916
You received this bug notification because you are a
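The pitfall behind this bug can be sketched against a toy schema (table and column names here are hypothetical, not the real Zeitgeist schema): a naive `WHERE id IN (...)` query returns each matching row only once, so duplicates in the ids argument silently collapse; mapping the fetched rows back onto the original argument list preserves them.

```python
import sqlite3

# Toy schema standing in for the real event tables.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE event (id INTEGER PRIMARY KEY, payload TEXT)")
conn.executemany("INSERT INTO event VALUES (?, ?)",
                 [(1, "a"), (2, "b"), (3, "c")])

def get_events_naive(ids):
    # One row per *unique* id comes back, so duplicates are lost.
    marks = ",".join("?" * len(ids))
    rows = conn.execute(
        "SELECT id, payload FROM event WHERE id IN (%s)" % marks, ids)
    return [payload for _, payload in rows]

def get_events_fixed(ids):
    # Fetch each unique id once, then map the results back onto the
    # original argument list so duplicates are preserved.
    unique = list(set(ids))
    marks = ",".join("?" * len(unique))
    by_id = dict(conn.execute(
        "SELECT id, payload FROM event WHERE id IN (%s)" % marks, unique))
    return [by_id[i] for i in ids]

print(get_events_naive([1, 1, 2]))  # only two rows come back
print(get_events_fixed([1, 1, 2]))  # ['a', 'a', 'b']
```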
** Branch linked: lp:~thekorn/zeitgeist/db_schema_3
** Branch unlinked: lp:~thekorn/zeitgeist/faster_origin_queries
** Changed in: zeitgeist
Assignee: (unassigned) => Markus Korn (thekorn)
** Changed in: zeitgeist
Status: Triaged => In Progress
--
Using the subj_origin column
Just for reference, the last usage of the 'row' argument was removed with
https://code.launchpad.net/~zeitgeist/zeitgeist/small-find-events-optimization/+merge/34065
--
https://code.launchpad.net/~seif/zeitgeist/fix-673922/+merge/40734
Your team Zeitgeist Framework Team is requested to review
** Changed in: zeitgeist
Status: Confirmed => Fix Committed
--
ZeitgeistEngine.get_events(rows=x) is broken
https://bugs.launchpad.net/bugs/673922
You received this bug notification because you are a member of Zeitgeist
Framework Team, which is subscribed to Zeitgeist Framework.
Status in
I think we don't want to merge this branch into lp:zeitgeist until we
understand the full consequences of this change.
--
https://code.launchpad.net/~thekorn/zeitgeist/fix-672965-opt_timerange_queries/+merge/40412
Your team Zeitgeist Framework Team is requested to review the proposed merge of
Markus Korn has proposed merging lp:~thekorn/zeitgeist/db_schema_3 into
lp:zeitgeist.
Requested reviews:
Zeitgeist Framework Team (zeitgeist)
Related bugs:
#673394 Queries for subj_uri and subj_origin are using no index
https://bugs.launchpad.net/bugs/673394
#673452 Using the subj_origin
** Changed in: zeitgeist
Status: In Progress => Fix Committed
--
Using the subj_origin column of event_view is slower than it should be
https://bugs.launchpad.net/bugs/673452
You received this bug notification because you are a member of Zeitgeist
Framework Team, which is subscribed to
** Changed in: zeitgeist
Status: In Progress => Fix Committed
--
Queries for subj_uri and subj_origin are using no index
https://bugs.launchpad.net/bugs/673394
You received this bug notification because you are a member of Zeitgeist
Framework Team, which is subscribed to Zeitgeist
I updated my proposal at
http://wiki.zeitgeist-project.com/index.php?title=Aggregation_API
--
Add new aggregate API
https://bugs.launchpad.net/bugs/670358
You received this bug notification because you are a member of Zeitgeist
Framework Team, which is subscribed to Zeitgeist Framework.
Status
@Siegfried: yes, but it is only redundant for FindEventsStats, as
FindEventIdsStats only returns ids and not events. It can't be avoided
in this particular case, as the stats field has to return *something*.
@Michal: as we already clarified on IRC, the mapping is done based on
index, so the first
I don't agree with changing the namespace for dbus names.
There is simply no good reason why we should make this move. After all, the
gnome part is just a random string. And no one on KDE, Windows or whatever
platform should have a problem using this string.
Changing the name, maybe by even using
** Changed in: zeitgeist
Status: New => Triaged
** Changed in: zeitgeist
Importance: Undecided => Critical
** Changed in: zeitgeist
Assignee: (unassigned) => Markus Korn (thekorn)
--
fix for LP: #672965 breaks returns for AJ and Synapse
https://bugs.launchpad.net/bugs/683146
You
** Changed in: zeitgeist
Status: Triaged => Fix Committed
--
fix for LP: #672965 breaks returns for AJ and Synapse
https://bugs.launchpad.net/bugs/683146
You received this bug notification because you are a member of Zeitgeist
Framework Team, which is subscribed to Zeitgeist Framework.
** Changed in: zeitgeist
Status: Confirmed => Fix Committed
--
You received this bug notification because you are a member of Zeitgeist
Framework Team, which is subscribed to Zeitgeist Framework.
https://bugs.launchpad.net/bugs/680360
Title:
Zeitgeist builds man page for
completely agree, 'officially' adding a 'dbus://'-scheme would also
solve the issue of the telepathy dataprovider where we (until now) are
not sure which value to use in the actor field.
** Changed in: zeitgeist
Importance: Undecided => Wishlist
** Changed in: zeitgeist
Status: New =>
Ok, first of all, thanks Harald for the input and your offer; as much as I
appreciate it, I don't think we really want to move to some random other build
system.
Installing gnome-common shouldn't be a problem for anyone, because this package
has no GNOME-specific dependencies.
A downside of
Ohh? Is KDE (or the distro you are using) not using xdg, or why is
~/.cache not there? And if KDE uses something else for this kind of
`local log files`, we should use that location instead of creating
XDG_CACHE with a big hammer.
--
You received this bug notification because you are a member of
this is the set of scripts which helped me to create the benchmarks
mentioned in the attached merge proposal
** Attachment added: "cache_benchmark.tar.gz"
https://bugs.launchpad.net/zeitgeist/+bug/686732/+attachment/1759472/+files/cache_benchmark.tar.gz
--
You received this bug notification
Public bug reported:
I'm talking about
    def _register_data_source_python_api(self, *args):
        mainloop = gobject.MainLoop()
        self.client.register_data_source(*args,
            reply_handler=lambda: mainloop.quit(),
Public bug reported:
I was wondering today why the timings for inserting events when running
our testsuite (or my benchmark scripts) are looking much better than the
timings in a 'real' daemon instance.
Some Data:
* inserting 500 events at once in my benchmarks: ~0.09 sec
* inserting the
** Attachment added: "test_insert_events.py"
https://bugs.edge.launchpad.net/bugs/695311/+attachment/1778807/+files/test_insert_events.py
--
You received this bug notification because you are a member of Zeitgeist
Framework Team, which is subscribed to Zeitgeist Framework.
*** This bug is a duplicate of bug 598666 ***
https://bugs.launchpad.net/bugs/598666
@Siegfried, what exactly are you benchmarking here? E.g. how many
different (relevant) values are you using? How do the graphs look if you
have some hundred different actors and mimetypes and a wide range of
*** This bug is a duplicate of bug 598666 ***
https://bugs.launchpad.net/bugs/598666
Ok, my concern is: this script only uses one manifestation and one
interpretation, so dict lookup and sql query (including sqlite's internal
caching) might have the same speed.
What I'm curious to know is:
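A micro-benchmark along these lines might look as follows (the schema and the numbers are made up for illustration, not the real Zeitgeist tables); the point is to use many distinct values so sqlite's internal caching cannot mask the per-query overhead:

```python
import sqlite3
import timeit

# Hypothetical lookup table with many distinct values, standing in for
# the interpretation/manifestation/mimetype caches discussed above.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE uri (id INTEGER PRIMARY KEY, value TEXT UNIQUE)")
values = ["uri://item/%d" % i for i in range(1000)]
conn.executemany("INSERT INTO uri (value) VALUES (?)", [(v,) for v in values])

# In-memory cache mirroring the table, as the dict-lookup variant would use.
cache = dict(
    (value, id_) for id_, value in conn.execute("SELECT id, value FROM uri"))

def via_dict():
    for v in values:
        cache[v]

def via_sql():
    for v in values:
        conn.execute("SELECT id FROM uri WHERE value = ?", (v,)).fetchone()

print("dict:", timeit.timeit(via_dict, number=10))
print("sql: ", timeit.timeit(via_sql, number=10))
```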
Review: Needs Information
Siegfried, you added an additional handler for SIGTERM. Can you please
elaborate on when we receive such signals? I mean, we clearly documented
SIGHUP as a tool for distributors, and I would like to see the same
documentation for SIGTERM too.
--
After re-reading this bug report, the discussion it includes and the merge
proposal, I'm still not confident that the API proposed here is good
enough. This is why I started working on a blacklist API spec.
It is still work in progress, and I'm not sure if this points in the right
My first idea related to RegEx based matches was that I thought we need
something very powerful there. But maybe I'm wrong, so I'm happy to
change my proposal there and only allow prefix-search and negation. And
if we need something more powerful later on we can always change it.
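The reduced semantics could be sketched like this (the function name and template syntax are hypothetical illustrations, not the actual spec): templates support only a trailing `*` prefix search and a leading `!` negation.

```python
# Hypothetical matcher for the reduced blacklist-template semantics:
# only prefix search (trailing "*") and negation (leading "!"), no
# full regular expressions.
def template_matches(template, value):
    negated = template.startswith("!")
    if negated:
        template = template[1:]
    if template.endswith("*"):
        # Prefix search: match everything starting with the stem.
        hit = value.startswith(template[:-1])
    else:
        # Plain templates must match exactly.
        hit = (value == template)
    # Negation inverts the result.
    return hit != negated

assert template_matches("file:///tmp/*", "file:///tmp/secret.txt")
assert not template_matches("!file:///tmp/*", "file:///tmp/secret.txt")
assert template_matches("!file:///tmp/*", "file:///home/me/notes.txt")
```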
--
You received
@Siegfried, how do you want to identify a template in the collection of
all templates; for example, what do you propose as an argument for the
RemoveTemplate() method?
Do you want an automatically generated identifier (e.g. auto-integer-id,
maybe in combination with the sender string)? Or do you have a
Seif sent me an activity.log which he managed to create by [...]
randomly clicking in GAJ and synapse[...]. This log is broken and
results in the above mentioned KeyError.
The db has a few broken entries: http://paste.ubuntu.com/553188/
--
You received this bug notification because you are a
While reading the python dbus docs [0] again I found out about
byte_arrays=True; this will solve all our problems immediately.
By using this switch in the @dbus.service.method and @dbus.service.signal
decorators, 'ay' is not exposed as a dbus.Array of dbus.Bytes anymore, but as
dbus.ByteArray.
/main/view/head:/aptdaemon/core.py#L150
** Affects: zeitgeist
Importance: Undecided
Assignee: Markus Korn (thekorn)
Status: New
** Changed in: zeitgeist
Assignee: (unassigned) => Markus Korn (thekorn)
--
You received this bug notification because you are a member of Zeitgeist
I *feel* that our solution could look much simpler than the one from aptdaemon.
Let me just try and see how this can be implemented; we can mark this bug as
won't-fix later on if it turns out to be too complex, slow or hackish compared
to the gain.
--
You received this bug notification because
Markus Korn has proposed merging lp:~thekorn/zeitgeist/dbus-inspect-properties
into lp:zeitgeist.
Requested reviews:
Zeitgeist Framework Team (zeitgeist)
For more details, see:
https://code.launchpad.net/~thekorn/zeitgeist/dbus-inspect-properties/+merge/46890
* Changed dbus introspection
** Branch linked: lp:~thekorn/zeitgeist/dbus-inspect-properties
** Changed in: zeitgeist
Status: New => In Progress
--
You received this bug notification because you are a member of Zeitgeist
Framework Team, which is subscribed to Zeitgeist Framework.
I'm going to change this branch such that we only use one xml parser, minidom
or etree, thanks Mikkel for pointing this out.
--
https://code.launchpad.net/~thekorn/zeitgeist/dbus-inspect-properties/+merge/46890
Your team Zeitgeist Framework Team is requested to review the proposed merge of
Oh yeah, of course. Indeed, good work Siegfried.
While reading the last few comments one possible fix came to mind:
Let's maintain a (temporary) helper table called '_fix_cache' with (table_name
VARCHAR, id INTEGER) and create a 'BEFORE DELETE' trigger on each cached table
(interpretation,
I've added a prototype of my idea as described in my last comment, if we want
to do it this way, we have to bump the sql-schema-version, I guess.
I'm not completely sure if this is the way to go, but given the fact that the
number of entries in the _fix_cache table should be *very* low over time
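A minimal sketch of the `_fix_cache` idea, assuming a simplified `interpretation` table (the real schema and trigger names differ): a `BEFORE DELETE` trigger on each cached table records the deleted row so the in-memory caches can be invalidated later.

```python
import sqlite3

# Simplified stand-in for one of the cached lookup tables, plus the
# proposed helper table and a TEMP trigger recording deletions.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE interpretation (id INTEGER PRIMARY KEY, value TEXT);
    CREATE TABLE _fix_cache (table_name VARCHAR, id INTEGER);
    CREATE TEMP TRIGGER interpretation_deleted
        BEFORE DELETE ON interpretation
    BEGIN
        INSERT INTO _fix_cache VALUES ('interpretation', OLD.id);
    END;
""")

conn.execute("INSERT INTO interpretation VALUES (1, 'foo')")
conn.execute("DELETE FROM interpretation WHERE id = 1")

# The trigger has logged the deleted row for later cache invalidation.
print(conn.execute("SELECT * FROM _fix_cache").fetchall())
# → [('interpretation', 1)]
```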
Public bug reported:
We do not know if the testcases which time out (I currently have two of them)
really pass, so we should not mark them as such.
In newer versions of python we should use SKIP, but I guess FAILED is fine too.
Alternatively we should invest some time to find out why they time
Thanks for the reviews, I addressed them in my last commits to this
branch, I still need to test if the TEMP triggers are working properly.
--
You received this bug notification because you are a member of Zeitgeist
Framework Team, which is subscribed to Zeitgeist Framework.
** Changed in: zeitgeist
Assignee: (unassigned) => Markus Korn (thekorn)
** Changed in: zeitgeist
Status: New => Fix Committed
--
You received this bug notification because you are a member of Zeitgeist
Framework Team, which is subscribed to Zeitgeist Framework.
https
Hi David,
thanks for using zeitgeist, and reporting this bug.
Can you please open a terminal and tell us what the output of the following
command gives you?
$ python -c "import time; print time.gmtime(0)"
this is finally a reproduction of bug 512166, which we closed because
no one was able to reproduce it.
This looks *very* weird to me, I'll investigate later this week.
** Visibility changed to: Public
** Changed in: zeitgeist (Ubuntu)
Status: New => Triaged
** Also affects: zeitgeist
Importance: Undecided
Status: New
** Changed in: zeitgeist
Assignee: (unassigned) => Markus
@Everyone who is affected by this bug,
could you please update your (natty) system and check if you have at least
zeitgeist-datahub 0.7.0-0ubuntu1 installed, all these python and timezone
related issues should be fixed in the new implementation of the datahub.
Please open a new bug if you have
I can reproduce this issue in my natty installation.
@Michal, do you have an idea what's going on? Maybe something wrong in the datahub?
** Also affects: zeitgeist-datahub
Importance: Undecided
Status: New
** Changed in: zeitgeist (Ubuntu)
Status: New => Confirmed
--
You received
Ok, now I know what's going on here: http://paste.ubuntu.com/586129/
We just have to handle the case in the daemon where the datahub exits
immediately on daemon startup because an already running datahub was found. We
can fix this in a few steps; the first is that the datahub should return an exit
** Changed in: zeitgeist
Assignee: (unassigned) => Markus Korn (thekorn)
** Changed in: zeitgeist
Status: Triaged => In Progress
** Changed in: zeitgeist-datahub
Status: Confirmed => Invalid
--
You received this bug notification because you are a member of Zeitgeist
Framework
Public bug reported:
Instead of exposing the RuntimeError to our users we should simply exit
with a status !=0 and log the traceback.
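A minimal sketch of the proposed behaviour (the `main()`/`run()` names and the specific failure are hypothetical): catch the RuntimeError at the top level, log the traceback, and exit non-zero instead of letting the exception reach the user.

```python
import logging
import sys
import traceback

def main():
    # Hypothetical daemon entry point that fails at startup.
    raise RuntimeError("bus name already taken")

def run():
    try:
        main()
    except RuntimeError:
        # Log the full traceback for debugging, but show the user a
        # clean non-zero exit instead of a raw Python traceback.
        logging.error("daemon failed to start:\n%s", traceback.format_exc())
        sys.exit(1)
```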
** Affects: zeitgeist
Importance: Undecided
Status: New
--
You received this bug notification because you are a member of Zeitgeist
Framework
Hi @all,
we discussed this issue in the #zeitgeist IRC channel. Our theory is: it's
related to encrypted homedirs, and that these dirs get unmounted before the
zeitgeist daemon stops, so the daemon is unable to write to $XDG_DATA_HOME.
We'll investigate further and keep you updated.
Thanks for
Markus Korn has proposed merging
lp:~thekorn/zeitgeist/fix-598666-remove_cache_entry into lp:zeitgeist.
Requested reviews:
Zeitgeist Framework Team (zeitgeist)
Related bugs:
Bug #598666 in Zeitgeist Framework: Invalid cache access (was: Error when
trying to fetch items)
https
The proposal to merge lp:~thekorn/zeitgeist/fix-598666-remove_cache_entry into
lp:zeitgeist has been updated.
Description changed to:
This branch adds a fix/workaround for bug 598666, where the caches are not
updated properly if one cached item gets deleted in the database.
I was investigating this bug a bit further. All tracebacks are
looking like:
Traceback (most recent call last):
  File "/usr/bin/zeitgeist-daemon", line 180, in <module>
    handle_exit()
  File "/usr/bin/zeitgeist-daemon", line 126, in handle_exit
    interface.Quit()
  File
revno: 1714
fixes bug(s): https://launchpad.net/bugs/739780
committer: Markus Korn thek...@gmx.de
branch nick: trunk
timestamp: Thu 2011-04-14 09:29:21 +0200
message:
Use glib.spawn_async() to launch the datahub instead
10:17 thekorn RainCT: so the bug is: COUNT(boo) is working, but the additional sorting by timestamp is not?
10:18 RainCT thekorn: Right. So the question is if we want it to work (and if so I'll fix it together with a related bug)
10:19 thekorn RainCT: yes,
I'm not a big fan of the idea of allowing someone other than the daemon
itself to modify config files like datasources.(pickle|json), because this
would require the daemon to watch out for changes to all config files it
and its extensions are using.
libzeitgeist and the datahub should instead defer
I'm with Siegfried when it comes to adding an extra encryption layer on top
of the db; basically I fail to understand why putting the db in an
encrypted filesystem is not good enough.
But what I find interesting is the idea of limiting the ability to
access the activity log to system-wide installed
The problem is not in zeitgeist itself, but in the recent version of
raptor2-utils in ubuntu oneiric.
See http://bugs.librdf.org/mantis/view.php?id=451 for the upstream bug report.
** Bug watch added: librdf Mantis #451
http://bugs.librdf.org/mantis/view.php?id=451
--
You received this bug
Affected version is raptor2-utils = 2.0.2
--
You received this bug notification because you are a member of Zeitgeist
Framework Team, which is subscribed to Zeitgeist Framework.
https://bugs.launchpad.net/bugs/807076
Title:
raptor2 not supported
Status in Zeitgeist Framework:
Triaged
It is not *very* slow, it just seems to be a bit slower than it should be.
From reading the query plan the reason seems to be that two temp b-trees are
created, which is *slow*:
selectid  order  from  detail
--------  -----  ----  ------
1         0      0     SCAN TABLE event USING
This is fixed in the attached branch, please review and merge ;)
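For reference, temp b-trees of this kind can be spotted with EXPLAIN QUERY PLAN; a sketch against a simplified `event` table (not the real schema or query): an index matching the ORDER BY lets sqlite walk rows in sorted order instead of building a temporary b-tree.

```python
import sqlite3

# Simplified stand-in for the event table discussed above.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE event (id INTEGER PRIMARY KEY, timestamp INTEGER)")

def plan(sql):
    # The last column of each EXPLAIN QUERY PLAN row is the detail text.
    return [row[-1] for row in conn.execute("EXPLAIN QUERY PLAN " + sql)]

query = "SELECT * FROM event ORDER BY timestamp"

# Without a suitable index, the plan contains a temp b-tree step
# ("USE TEMP B-TREE FOR ORDER BY").
plan_before = plan(query)

# With an index on the ORDER BY column the temp b-tree disappears.
conn.execute("CREATE INDEX event_timestamp ON event (timestamp)")
plan_after = plan(query)

print(plan_before)
print(plan_after)
```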
** Changed in: gnome-activity-journal
Status: New => In Progress
** Changed in: gnome-activity-journal
Assignee: (unassigned) => Markus Korn (thekorn)
--
Performance issue: CairoHistogram is connecting to expose_event
** Changed in: gnome-activity-journal
Status: In Progress => Fix Released
--
Performance issue: CairoHistogram is connecting to expose_event over and over
again, without disconnecting
https://bugs.launchpad.net/bugs/507377
You received this bug notification because you are a member of
Public bug reported:
User report of GAJ running on gentoo, via #zeitgeist
$ gnome-activity-journal
ImportError: could not import bonobo.ui
Traceback (most recent call last):
  File "/usr/bin/gnome-activity-journal", line 91, in <module>
    from src.main import Portal
  File
of the folder icon
in this case, and also open this branch in `bzr viz` or such.
** Affects: gnome-activity-journal
Importance: Wishlist
Assignee: Markus Korn (thekorn)
Status: Confirmed
--
Support custom icons and handler to launch an object
https://bugs.launchpad.net/bugs/522595
You
I've a good idea how to make this happen, will look at it tonight.
** Changed in: gnome-activity-journal
Importance: Undecided => Wishlist
** Changed in: gnome-activity-journal
Status: New => Confirmed
** Changed in: gnome-activity-journal
Assignee: (unassigned) => Markus Korn