Doug Cutting wrote:
Shawn Gervais wrote:
I was not able to follow the instructions literally, as my indexes and segments are in DFS, while the documentation presumes a local filesystem installation

Search performance is not good with DFS-based indexes & segments. This is not recommended.

Yeah, I figured - even ignoring network overhead, it seems that it would prevent the OS from caching disk pages, no?

Distributed search is not meant for a single merged index, but rather for searching multiple indexes. With distributed search, each node will typically have (a local copy of) a few segments and either a merged index for just those segments, or separate indexes for each segment.
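
As a rough illustration of the scatter/gather shape that implies (not Nutch's actual API - the node.search(query, k) call below is an assumed stand-in for whatever RPC the search servers expose), a Python sketch:

    import heapq
    from concurrent.futures import ThreadPoolExecutor

    def distributed_search(nodes, query, k=10):
        """Fan the query out to every search node, then merge the
        per-node top-k hit lists into one global top-k by score.
        node.search(query, k) is an assumed interface returning
        (score, doc_id) pairs from that node's local index."""
        with ThreadPoolExecutor(max_workers=len(nodes)) as pool:
            partials = pool.map(lambda n: n.search(query, k), nodes)
        # nlargest over the concatenation keeps the k best hits overall.
        return heapq.nlargest(k, (hit for hits in partials for hit in hits),
                              key=lambda hit: hit[0])

Each node only ranks against its own local segments; the front end just keeps the k best scores overall.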

What is the best way to maintain an operational fetch/index and search cluster? It seems it would help to have a tool that could partition existing segments and indexes and export them to the slave nodes' local filesystems.

Should I coordinate my fetches and indexing so that the resultant segments/indexes are optimal for each of my slave nodes? How do others handle dissimilar search slave nodes?
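
As a sketch of how such a partitioning tool might handle nodes of different capacities (the (segment, doc_count) and (node, weight) shapes below are invented for illustration): hand each segment, largest first, to the node with the lowest capacity-weighted load, so faster boxes end up serving proportionally more docs.

    import heapq

    def assign_segments(segments, nodes):
        """Greedy, capacity-weighted placement. segments is a list of
        (name, doc_count); nodes is a list of (name, weight), where a
        bigger weight means a beefier box."""
        heap = [(0.0, name, weight) for name, weight in nodes]
        heapq.heapify(heap)
        placement = {name: [] for name, _ in nodes}
        for seg, docs in sorted(segments, key=lambda s: -s[1]):
            load, name, weight = heapq.heappop(heap)
            placement[name].append(seg)
            heapq.heappush(heap, (load + docs / weight, name, weight))
        return placement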

I'm not sure exactly what you mean by "dissimilar search slave nodes".

But I think our situation is similar. We have a cluster used for crawling, and a cluster used for distributed searching.

We use scripts to extract groups of segments from the Hadoop DFS to a local drive, merge/index them, then set up a distributed search server. The appropriate size of each segment group depends on the # of docs you want to be serving up from each search server - in our case, I think it's about 10M or so. Obviously this varies depending on the amount of RAM/horsepower you have on the server, and your target query performance.
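
A stripped-down version of that kind of script might look like the following. The hadoop/nutch command lines (dfs -copyToLocal, mergesegs, index) are from memory of the 0.8-era tools, and the paths are placeholders - check both against your installation before relying on this:

    import subprocess

    # Segments to serve from this box, chosen so the group totals
    # roughly the per-server doc budget (~10M in our case).
    GROUP = ["segments/20060901000000", "segments/20060902000000"]
    LOCAL = "/data/search/group1"

    for seg in GROUP:
        # Pull each segment out of DFS onto the server's local disk.
        subprocess.check_call(["bin/hadoop", "dfs", "-copyToLocal",
                               seg, LOCAL + "/segments"])

    # Merge the group into a single segment, then index it locally
    # (assumes crawldb/linkdb were copied down the same way).
    local_segs = [LOCAL + "/segments/" + s.split("/")[-1] for s in GROUP]
    subprocess.check_call(["bin/nutch", "mergesegs", LOCAL + "/merged"]
                          + local_segs)
    subprocess.check_call(["bin/nutch", "index", LOCAL + "/index",
                           LOCAL + "/crawldb", LOCAL + "/linkdb",
                           LOCAL + "/merged"])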

-- Ken
--
Ken Krugler
Krugle, Inc.
+1 530-210-6378
"Find Code, Find Answers"
