Re: NFS Stale handle in a distributed SOLR deployment

2007-09-13 Thread Walter Underwood
The straightforward solution is to not put your indexes on NFS. It is
slow and it causes failures like this.

I'm serious about that. I've seen several different search engines
(not just Solr/Lucene) get very slow and unreliable when the indexes
were on NFS.

wunder

On 9/13/07 10:59 AM, Kasi Sankaralingam [EMAIL PROTECTED] wrote:

 Hi All,
 
 I have multiple Solr instances in read-only mode accessing index files
 on an NFS partition, and I run the indexing/updating from another Solr
 instance. When I run a commit command on the search servers to warm up
 the searchers after an update, I get an 'NFS stale handle' error message.
 
 The core exception comes from the Lucene IndexReader class. Has anyone
 seen this before? What is the solution for this problem?
 
 Thanks,
 
 Kasi




Re: NFS Stale handle in a distributed SOLR deployment

2007-09-13 Thread J.J. Larrea
Sometimes one has to make things work in the environment one is handed (e.g., 
virtualized servers, ALL storage resources resident on a SAN and accessed via 
NFS, read-only mounts on the deployment instances with only the production 
indexers having write access).  While I agree that fast local index storage is 
best, the deleterious effects of network throughput and latency can be reduced 
with enough RAM allocated inside the Solr JVM for adequate caching and enough 
outside the JVM for adequate kernel disk buffering. There are publicly deployed 
Lucene- and Solr-based sites which operate as described above, yet perform 
acceptably.

To avoid the stale-filehandle problem you need to have Solr create a new Lucene 
IndexReader and close the old one after any update that deletes files, e.g. 
whenever the index is optimized.  This is done in Solr by sending a commit 
message to the search server's update handler, as encapsulated in the 
bin/readercycle script.  You can specify in solrconfig.xml that the indexing 
instance should trigger a script upon every commit or optimize, and that script 
can in turn cause the search servers to cycle their readers.
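
As a minimal sketch of that hook, using the stock solr.RunExecutableListener 
and a hypothetical notify-searchers.sh script standing in for whatever 
mechanism makes each search server run bin/readercycle:

  <!-- in the indexing instance's solrconfig.xml, inside <updateHandler> -->
  <listener event="postCommit" class="solr.RunExecutableListener">
    <!-- notify-searchers.sh is a hypothetical stand-in for whatever
         makes each search server run bin/readercycle -->
    <str name="exe">notify-searchers.sh</str>
    <str name="dir">solr/bin</str>
    <bool name="wait">true</bool>
  </listener>
  <listener event="postOptimize" class="solr.RunExecutableListener">
    <str name="exe">notify-searchers.sh</str>
    <str name="dir">solr/bin</str>
    <bool name="wait">true</bool>
  </listener>

Setting wait to true makes the indexer block until the script returns, which 
keeps commits and reader cycling in lockstep.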

Though the Solr Wiki CollectionDistribution and OperationsTools pages are 
written from the standpoint of indexes being efficiently copied to each search 
server directly rather than automagically distributed via NFS, they should 
explain enough about the underlying scripts to get you started:

http://wiki.apache.org/solr/CollectionDistribution
http://wiki.apache.org/solr/SolrOperationsTools
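
For reference, cycling a reader by hand amounts to posting an empty commit to 
each search server's update handler; a rough sketch, where the host names and 
port are assumptions for illustration:

  # hypothetical search hosts; an empty commit makes each Solr
  # instance close its stale IndexReader and open a fresh one
  for host in search1 search2; do
    curl -s "http://$host:8983/solr/update" \
         -H 'Content-Type: text/xml' \
         --data-binary '<commit/>'
  done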

- J.J.




RE: NFS Stale handle in a distributed SOLR deployment

2007-09-13 Thread Kasi Sankaralingam
Thanks. I ran into the problem when I issued the commit command to the
Solr search server.
