On 5/18/06, Andi Vajda <[EMAIL PROTECTED]> wrote:

> I don't know that anyone has run timings of these exact operations.
> Since PyLucene got de-swigged, crossing the python<->CNI barrier involves:
> [...]

Thanks.  This makes me think that perhaps I should be using the trunk
version of PyLucene.

> Getting rid of the GIL operations, even in carefully selected places, is a
> recipe for deadlocks or crashes. Keep in mind that the libgcj/gc finalizer
> thread acquires the GIL when it calls into Python for releasing wrapped Python
> object references.

I would not suggest that, either.  I think whatever overhead there is
is somewhat inevitable (compared with a blazing-fast native method
call), which is why I'm curious whether anyone has built a
high-performance system on top of it.
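
For reference, the pattern you describe, where the finalizer thread has
to take the GIL before it touches the wrapped object, would look roughly
like this. This is only a minimal sketch against the standard CPython
GIL-state API; the function name and the bare PyObject* wrapper are
illustrative, not PyLucene's actual internals:

#include <Python.h>

/* Called from the libgcj/gc finalizer thread when a Java-side wrapper
 * is collected and its Python object reference has to be dropped. */
extern "C" void
release_wrapped_pyobject(PyObject *wrapped)
{
    /* The finalizer thread is not a Python-created thread, so it must
     * acquire the GIL before calling any Python C API function. */
    PyGILState_STATE state = PyGILState_Ensure();

    /* Refcount manipulation is not thread-safe without the GIL;
     * skipping the Ensure/Release pair here is exactly the
     * deadlock/crash recipe you mention. */
    Py_XDECREF(wrapped);

    PyGILState_Release(state);
}

Seen that way, I can see how selectively removing the Ensure/Release
calls would blow up as soon as the finalizer runs concurrently with
Python code.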

> If you want speed over anything, write the Java class extensions in Java with
> extension methods declared native, and provide the native C++ methods via CNI.
> If you prefer convenience and the usual python speed is enough, use the
> built-in PyLucene extension points.
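
Just so I understand the first option, the native-method route would
look something like the sketch below. The package, class, and method
names are made up for illustration, not anything that exists in
PyLucene; the Java class would be compiled with gcj and its CNI header
generated with gcjh:

// Java side (illustrative):
//
//   package org.example;
//   public class FastFilter {
//       // hot-path method implemented in C++ via CNI
//       public native int countTokens(String text);
//   }

// C++ side, implementing the native method through CNI.
#include <gcj/cni.h>
#include <java/lang/String.h>
#include <org/example/FastFilter.h>   // generated by gcjh from the Java class

jint
org::example::FastFilter::countTokens(jstring text)
{
    jint len = text->length();
    jchar *chars = JvGetStringChars(text);

    // Trivial whitespace-delimited token count, done entirely on the
    // C++/Java side so no Python call (and no GIL traffic) happens on
    // the hot path.
    jint count = 0;
    bool in_token = false;
    for (jint i = 0; i < len; ++i)
    {
        bool ws = (chars[i] == ' ' || chars[i] == '\t' || chars[i] == '\n');
        if (!ws && !in_token)
            ++count;
        in_token = !ws;
    }
    return count;
}

That keeps the hot loop out of Python entirely, at the cost of a
gcj/gcjh build step, which I take to be the tradeoff against the
built-in extension points.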

> If you do actual timings, let us know. Thanks !

Definitely.  We're still trying to decide whether to use PyLucene.  If
I get to the point of detailed experimentation, I'll definitely feed
the results back to this list.

thanks,
-Mike
_______________________________________________
pylucene-dev mailing list
[email protected]
http://lists.osafoundation.org/mailman/listinfo/pylucene-dev
