hehe, did it again (dropping the list from cc). I need to stop
using sup :P thanks Austin.

Excerpts from Carl Worth's message of Thu May 26 18:20:21 +0100 2011:
> On Thu, 26 May 2011 09:31:19 +0100, Patrick Totzke 
> <patricktot...@googlemail.com> wrote:
> > Wow. This reads really complicated. All I want to say is:
> > if I change tags in my search-results view, I get Xapian errors :)
> Yes, that's frustrating. I wish that we had a more reliable interface at
> the notmuch library level. But I'm not entirely sure what would be the
> best way to do this.
Actually, I expected something like this. For this reason each sup instance 
locks its index.
At the moment I'm going for custom wrapper classes around notmuch.Thread
and notmuch.Messages that cache the result of the calls relevant for me.
But the real issue seems to be the iterator:
It takes an awful lot of time just to copy the thread ids of all threads 
from a large query result.

I tried the following in ipython:
 time tids = [t.get_thread_id() for t in q.search_threads()]

which results in
CPU times: user 7.64 s, sys: 2.06 s, total: 9.70 s
Wall time: 9.84 s

It would really help if the Query object could return an iterator
of thread-ids that makes this copying unnecessary. Is it possible to
implement this? Or would this require the same amount of copying to happen
at a lower level?
I have not looked into the code for the bindings or the C code so far,
but I guess the Query.search_threads() translates to some 
"SELECT id,morestuff from threads"
where for me a "SELECT id from threads" would totally suffice. Copying 
(in the C code) only the ids to some list that yields an iterator should be 
much cheaper.
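To make the "SELECT id" idea concrete, here is a self-contained Python sketch. FakeThread and FakeIndex are hypothetical stand-ins (not part of the notmuch bindings); they only illustrate why an id-only iterator skips the per-thread object construction that the current path pays for:

```python
class FakeThread:
    """Stand-in for notmuch.Thread: pretend construction is costly
    because extra per-thread data is fetched up front."""
    def __init__(self, tid):
        self.tid = tid
        self.subject = "subject of %s" % tid  # the "morestuff" columns

    def get_thread_id(self):
        return self.tid


class FakeIndex:
    """Stand-in for a notmuch.Query result source."""
    def __init__(self, tids):
        self._tids = tids

    def search_threads(self):
        # current behaviour: full objects, like "SELECT id, morestuff"
        return (FakeThread(tid) for tid in self._tids)

    def search_thread_ids(self):
        # proposed behaviour: ids only, like "SELECT id"
        return iter(self._tids)


index = FakeIndex(["t1", "t2", "t3"])
# both paths yield the same ids; only one builds thread objects
assert list(index.search_thread_ids()) == \
    [t.get_thread_id() for t in index.search_threads()]
```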

> > The question: How do you solve this in the emacs code?
> > do you store all tids of a query? 
> The emacs code does not use the notmuch library interface like your
> python bindings do. Instead, it uses the notmuch command-line tool, (and
> buffers up the text output by it). 
Ahh ok. Thanks for the explanation.

Excerpts from Austin Clements's message of Thu May 26 21:18:53 +0100 2011:
> I proposed a solution to this problem a while ago
> (id:"AANLkTi=kox8atjipkiarfvjehe6zt_jypoasmiiaw...@mail.gmail.com"),
> though I haven't tried implementing it yet.
Sorry, I wasn't on the list back then.

> Though, Patrick, that solution doesn't address your problem.  On the
> other hand, it's not clear to me what concurrent access semantics
> you're actually expecting.  I suspect you don't want the remaining
> iteration to reflect the changes, since your changes could equally
> well have affected earlier iteration results. 
That's right. 
> But if you want a
> consistent view of your query results, something's going to have to
> materialize that iterator, and it might as well be you (or Xapian
> would need more sophisticated concurrency control than it has).  But
> this shouldn't be expensive because all you need to materialize are
> the document ids; you shouldn't need to eagerly fetch the per-thread
> information.  
I thought so, but it seems that Query.search_threads() already
caches more than the id of each item, which is as expected,
because it is designed to return thread objects, not their ids.
As you can see above, this _is_ too expensive for me.

> Have you tried simply calling list() on your thread
> iterator to see how expensive it is?  My bet is that it's quite cheap,
> both memory-wise and CPU-wise.
Funny thing:
 time tlist = list(q.search_threads())
raises a NotmuchError(STATUS.NOT_INITIALIZED) exception. For some reason
the list constructor must read more than once from the iterator.
So this is not an option, but even if it worked, it would show
the same behaviour as my test above.
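For what it's worth, this looks like the classic one-shot-iterator trap: CPython's list() first asks for len() (to presize the list) and then iterates, so if __len__ walks the underlying result set, the second pass finds it already exhausted. A toy reproduction in modern Python 3 syntax (OneShot is my guess at the shape of the problem, not the actual python-notmuch implementation):

```python
class OneShot:
    """Toy model of an iterator whose __len__ walks (and thereby
    exhausts) the underlying result set."""
    def __init__(self, items):
        self._it = iter(items)
        self._spent = False

    def __len__(self):
        # counting the results consumes them
        n = sum(1 for _ in self._it)
        self._spent = True
        return n

    def __iter__(self):
        return self

    def __next__(self):
        if self._spent:
            # stands in for NotmuchError(STATUS.NOT_INITIALIZED)
            raise RuntimeError("iterator already consumed")
        return next(self._it)


# list() calls len() before iterating, so this raises:
try:
    list(OneShot([1, 2, 3]))
except RuntimeError as e:
    print("failed as expected:", e)
```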

Would it be very hard to implement a Query.search_thread_ids()?
The name is a bit off because it would have to be done at a lower level.
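Until something like that exists in the library, I could live with a Python-level wrapper that materializes the ids exactly once per query and reuses them afterwards. A sketch (CachedQuery and its method name are made up; it still pays the full search_threads() cost, but only on the first call):

```python
class CachedQuery:
    """Hypothetical wrapper around a notmuch.Query that caches
    the thread ids after the first (expensive) iteration."""
    def __init__(self, query):
        self._query = query
        self._tids = None

    def thread_ids(self):
        if self._tids is None:
            # one expensive pass; subsequent calls hit the cache
            self._tids = [t.get_thread_id()
                          for t in self._query.search_threads()]
        return self._tids
```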

notmuch mailing list