I'm trying to write some integration tests against SolrCloud. For these I'm
setting up a Solr instance backed by ZooKeeper and pointing it at an HDFS
namenode (all in memory, using the Hadoop testing utilities and
JettySolrRunner). I'm getting the following error when I try to create a
collection (by the way, the exact same configuration works just fine in dev
with SolrCloud).
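For context, the collections use an HDFS-backed directoryFactory along the lines of the stock HdfsDirectoryFactory configuration from the Solr documentation; the sketch below uses a placeholder HDFS URI, not my actual namenode address:

```xml
<directoryFactory name="DirectoryFactory" class="solr.HdfsDirectoryFactory">
  <!-- Placeholder URI; in the tests this points at the in-memory namenode. -->
  <str name="solr.hdfs.home">hdfs://namenode:8020/solr</str>
  <bool name="solr.hdfs.blockcache.enabled">true</bool>
</directoryFactory>
```

The index lock type is `hdfs`, which matches the HdfsLockFactory visible in the stack trace below.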

    org.apache.lucene.index.IndexNotFoundException: no segments* file found
in NRTCachingDirectory(HdfsDirectory@2ea2a4e4
lockFactory=org.apache.solr.store.hdfs.HdfsLockFactory@4cf0e472;
maxCacheMB=192.0 maxMergeSizeMB=16.0): files: [HdfsDirectory@6bf4fc1c
lockFactory=org.apache.solr.store.hdfs.hdfslockfact...@51115f81-write.lock]

I hit this error while creating a collection (more precisely, when Solr tries
to open a searcher on the new index). There are no segment files in the index
directory on HDFS, so the error itself is expected when opening a searcher on
that index, but I thought the segments file would be created the first time,
i.e. when the collection is created.

After some debugging, I noticed that the IndexWriter is being initialized
explicitly with APPEND mode, overriding the default CREATE_OR_APPEND mode,
which means the segments files won't be created unless at least one already
exists. I'm not sure why this is the case, and I may be going down the wrong
path with this error. Again, this only happens in my in-memory SolrCloud
setup.
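To illustrate the distinction I mean (this is just a java.nio analogy I put together, not Solr code): an APPEND-style open refuses to create a missing target, while a CREATE_OR_APPEND-style open creates it on first use.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.NoSuchFileException;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class OpenModeDemo {
    public static void main(String[] args) throws IOException {
        Path segments = Files.createTempDirectory("idx").resolve("segments_1");

        // APPEND alone refuses to create a missing file -- analogous to
        // Lucene's OpenMode.APPEND throwing IndexNotFoundException when
        // no segments* file exists yet.
        try {
            Files.write(segments, new byte[0], StandardOpenOption.APPEND);
            System.out.println("unexpected: append succeeded");
        } catch (NoSuchFileException e) {
            System.out.println("APPEND failed: no existing file");
        }

        // CREATE + APPEND creates the file on first use -- analogous to
        // OpenMode.CREATE_OR_APPEND, which writes the first segments file
        // for a brand-new index.
        Files.write(segments, new byte[0],
                StandardOpenOption.CREATE, StandardOpenOption.APPEND);
        System.out.println("exists after CREATE+APPEND: " + Files.exists(segments));
    }
}
```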

Can someone help me with this? Thanks.

--
View this message in context: 
http://lucene.472066.n3.nabble.com/Solr-IndexNotFoundException-no-segments-file-HdfsDirectoryFactory-tp4138737.html
Sent from the Solr - User mailing list archive at Nabble.com.