On Wed, Nov 9, 2011 at 8:41 PM, Marvin Humphrey <[email protected]> wrote:
>> Correct me if I'm misreading
>> the code, but doesn't that mean that serve() will block until it
>> finishes the current search -- i.e., no concurrent searches?
>
> Yes.
>
> FWIW, Searchers are cheap, because we rely heavily on the OS cache rather than
> dump stuff into process RAM.  That aids with multi-process concurrency,
> because adding another worker is inexpensive.

OK, what's the recommended route to facilitate concurrent searches on
a remote search server? Rewrite LucyX/Remote/SearchServer.pm to fork
off a worker process for each new connection?

Right now off the top of my head I can't think of a way to wrap
LucyX/Remote/SearchServer.pm to achieve this.
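For what it's worth, the fork-per-connection pattern being asked about can be sketched in plain Perl with core modules. This is only an illustration of the accept-then-fork loop, not LucyX::Remote::SearchServer's actual API: handle_client() below is a hypothetical stand-in for the per-connection request handling a real wrapper would delegate to the SearchServer.

```perl
#!/usr/bin/env perl
# Sketch of a fork-per-connection accept loop -- the shape a concurrent
# wrapper around a search server would take.  handle_client() is a
# hypothetical stand-in for real per-connection search handling.
use strict;
use warnings;
use IO::Socket::INET;
use POSIX ":sys_wait_h";

# Reap exited workers as they finish so they don't linger as zombies.
$SIG{CHLD} = sub { 1 while waitpid( -1, WNOHANG ) > 0 };

sub serve_forking {
    my ( $listener, $handler ) = @_;
    while ( my $client = $listener->accept ) {
        my $pid = fork;
        die "fork failed: $!" unless defined $pid;
        if ( $pid == 0 ) {    # child: serve this one connection, then exit
            $listener->close;
            $handler->($client);
            $client->close;
            exit 0;
        }
        $client->close;       # parent: go straight back to accept()
    }
}

# Hypothetical handler: echoes lines back.  A real wrapper would run the
# search request loop here, against a Searcher opened in this child.
sub handle_client {
    my ($client) = @_;
    while ( defined( my $line = <$client> ) ) {
        print {$client} "echo: $line";
    }
}
```

With this shape, a slow search only ties up its own child process, so other clients keep getting served; and since (per Marvin's note above) Searchers lean on the OS cache rather than process RAM, spinning up a worker per connection stays cheap.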
