Hi Lionel,

On the Q-side-of-life we have a Live model implemented on top of QSparql (which wraps libtracker-sparql):

https://maemo.gitorious.org/maemo-af/libqtsparql-tracker/blobs/master/src/live/trackerlivequery.h
Here is a very good introduction: http://people.igalia.com/aperez/files/qsparql.pdf

Doing a live model can be very complex if you try to solve all possible corner cases. That one is the best compromise we have found between simplicity and functionality.

Hope it helps.

Regards,
Ivan

On Wed, Mar 30, 2011 at 7:41 PM, Lionel Landwerlin <[email protected]> wrote:
> On Wed, 2011-03-30 at 18:13 +0200, Philip Van Hoof wrote:
>> On Wed, 2011-03-30 at 16:44 +0100, Lionel Landwerlin wrote:
>>
>> Hi,
>>
>> > I'm using tracker from an application to browse/search media files.
>> > To support live updates when searching/browsing, I'm using some kind of
>> > live model. Adding a lot of files in a short period of time leads to
>> > lots of SPARQL requests to keep the models up to date, and that
>> > breaks the whole thing, because at some point:
>> >   * when using the direct backend, I end up with SQLite errors
>> >     telling me that the database is corrupted
>> >   * when using the bus backend, I end up not being able to open
>> >     file descriptors anymore, because the per-process limit has been
>> >     reached
>> >
>> > So, to work around the first and then the second issue, I'm about to
>> > write some "private" (as in: not in tracker) queueing API on top of
>> > libtracker-sparql.
>> >
>> > I'm pretty sure other people might be interested in such a feature/API.
>> > Is there any plan to add such a thing to tracker?
>>
>> Ideally the bus backend of libtracker-sparql would someday be redesigned
>> to reuse the FD instead of creating a new one per request.
>>
>> This would then work the way typical pipelining works:
>>
>> The client just sends requests as they are needed, adding a tag to each
>> request. The service sends tagged replies back. The client reads the tag
>> of each reply it sees on the FD and fires the callback of the request
>> tagged that way.
>>
>> Non-trivial, but in my opinion better than creating a new dup() and a
>> new pipe() for each request (which of course exhausts the maximum number
>> of open file descriptors after having done sufficiently many requests -
>> 1024 on a standard distribution that sets the ulimit per shell, I think).
>>
>> Then no such client-side queue would be needed.
>
> Ok, thanks.
>
> Regards,
>
> --
> Lionel Landwerlin

_______________________________________________
tracker-list mailing list
[email protected]
http://mail.gnome.org/mailman/listinfo/tracker-list
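The tagged-pipelining scheme Philip describes can be sketched roughly as follows. This is a hypothetical illustration, not libtracker-sparql code: the `PipelinedClient` class, the newline-delimited JSON frame format, and the stand-in `echo_service` are all invented here. The point it demonstrates is that every request shares one file descriptor, and replies are matched back to their callbacks by tag rather than by owning a dedicated pipe.

```python
# Sketch of tag-based pipelining over a single fd (names invented).
import json
import socket
import threading


class PipelinedClient:
    def __init__(self, sock):
        self.sock = sock
        self.next_tag = 0
        self.callbacks = {}          # tag -> callback
        self.lock = threading.Lock()
        threading.Thread(target=self._read_loop, daemon=True).start()

    def send(self, query, callback):
        # Tag the request and remember which callback it belongs to.
        with self.lock:
            tag = self.next_tag
            self.next_tag += 1
            self.callbacks[tag] = callback
        frame = json.dumps({"tag": tag, "query": query}).encode() + b"\n"
        self.sock.sendall(frame)     # no new dup()/pipe() per request

    def _read_loop(self):
        # Read tagged replies off the shared fd and fire the matching
        # callback for each tag.
        buf = b""
        while True:
            data = self.sock.recv(4096)
            if not data:
                return
            buf += data
            while b"\n" in buf:
                line, buf = buf.split(b"\n", 1)
                msg = json.loads(line)
                with self.lock:
                    cb = self.callbacks.pop(msg["tag"])
                cb(msg["reply"])


def echo_service(sock):
    """Stand-in for the service side: copies each request's tag into
    its reply so the client can dispatch it."""
    buf = b""
    while True:
        data = sock.recv(4096)
        if not data:
            return
        buf += data
        while b"\n" in buf:
            line, buf = buf.split(b"\n", 1)
            msg = json.loads(line)
            reply = {"tag": msg["tag"], "reply": "rows for " + msg["query"]}
            sock.sendall(json.dumps(reply).encode() + b"\n")


client_end, service_end = socket.socketpair()
threading.Thread(target=echo_service, args=(service_end,),
                 daemon=True).start()

client = PipelinedClient(client_end)
done = threading.Event()
results = []
client.send("SELECT ?u { ?u a nfo:FileDataObject }",
            lambda r: results.append(r))
client.send("SELECT ?u { ?u a nmm:MusicPiece }",
            lambda r: (results.append(r), done.set()))
done.wait(5)
```

Because requests and replies travel in order on the one stream, the callbacks fire in submission order here; a real implementation would also have to handle error replies and connection teardown.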
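Until such a redesign lands, the client-side queueing workaround Lionel mentions amounts to capping the number of in-flight queries so a burst of model updates cannot exhaust file descriptors (bus backend) or overload the store. A minimal sketch, with invented names (`QueryQueue`, `run_query`) standing in for whatever would actually call `tracker_sparql_connection_query_async()`:

```python
# Sketch of a client-side query queue with bounded concurrency.
import queue
import threading

MAX_IN_FLIGHT = 4  # assumed cap, kept well under the fd ulimit


class QueryQueue:
    def __init__(self, run_query, workers=MAX_IN_FLIGHT):
        self.pending = queue.Queue()
        self.run_query = run_query   # the real query function goes here
        for _ in range(workers):
            threading.Thread(target=self._worker, daemon=True).start()

    def submit(self, sparql, callback):
        # Never issues the query directly; it waits for a free worker,
        # so at most `workers` queries are in flight at once.
        self.pending.put((sparql, callback))

    def _worker(self):
        while True:
            sparql, callback = self.pending.get()
            try:
                callback(self.run_query(sparql))
            finally:
                self.pending.task_done()


# Usage with a stand-in for the real query function:
results = []

def fake_run_query(sparql):
    return "cursor for " + sparql

q = QueryQueue(fake_run_query)
for i in range(100):                 # burst of model-update queries
    q.submit("SELECT %d" % i, lambda r: results.append(r))
q.pending.join()                     # wait until the burst drains
```

The design choice is simply back-pressure: excess queries wait in the queue instead of each grabbing its own connection resources.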
