lance bowler wrote on 12/10/11 10:24 PM:

> I had a gander at the ClusterSearcher code and it indeed only accepts
> 1 search request at a time - unless my perl grokness is failing me.
your grok-fu is working fine afaict. :)

> At a basic level, a server should accept inbound connections and
> fork/thread off and handle each request concurrently (a-la xinetd,
> etc). Even if each request takes only 0.2s to complete, 10 such
> requests (or 50...) would rapidly push that number up to 2s (or
> 10s...) -- or am I missing the mark here?
>
> We have spikes of traffic and concurrent searching would leave tens -
> or hundreds - of users staring at a rotating wheel while the search
> client waits its turn... bad

I'm sorry if I misled you; I didn't mean to suggest that the
ClusterSearcher all by itself was able to handle simultaneous
connections. I was only clarifying the recent changes from serial to
parallel requests to remote shards.

My code wraps the PolySearcher and then uses a preforking library to
handle concurrent requests (like Nate suggests elsewhere in this
thread). See Dezi for an example. The SWISH::Prog::Lucy::Searcher class
that Dezi relies on could be adapted to use a ClusterSearcher instead of
a PolySearcher in a pretty straightforward way.

-- 
Peter Karman . http://peknet.com/ . [email protected]
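[To make the preforking idea above concrete for anyone reading the archive: here is a minimal sketch using Net::Server::PreFork. The SearchServer package name, the My::Searcher class, and the wire protocol (one query per line) are all hypothetical placeholders, not Dezi's actual code.]

```perl
package SearchServer;
use strict;
use warnings;
use base 'Net::Server::PreFork';

# Hypothetical searcher handle; in practice this might be a
# SWISH::Prog::Lucy::Searcher (or a ClusterSearcher) instance.
my $searcher;

sub child_init_hook {
    # Runs once in each preforked child, so every worker gets
    # its own searcher and requests are handled concurrently.
    # $searcher = My::Searcher->new( invindex => '/path/to/index' );
}

sub process_request {
    my $self = shift;
    chomp( my $query = <STDIN> );

    # my $results = $searcher->search($query);
    # print $results->as_string, "\n";
    print "results for: $query\n";    # placeholder response
}

# A pool of workers accepts connections in parallel, so slow
# queries do not serialize behind one another.
SearchServer->run( port => 8080, min_servers => 5, max_servers => 50 );
```

This is just a sketch of the shape of the wrapper; the real single-search bottleneck lives in the searcher object, which is why each child holds its own instance.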
