Hoss, I run Solr as a cluster (SolrCloud). The main feature I use is faceting, for some analytics; I also run normal queries for free-text search and to retrieve data using filters.
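To give an idea, an analytics query of the kind I mean looks roughly like this (collection name and field names are invented for illustration; the script just prints the request URL rather than sending it):

```shell
# Sketch of a faceted analytics request of the kind described above.
# "client_0042", "type:sale" and "country" are made-up examples.
COLLECTION="client_0042"
PARAMS="q=*:*&fq=type:sale&facet=true&facet.field=country&rows=0&wt=json"

# Print the request URL that would be sent to Solr's /select handler.
echo "http://localhost:8983/solr/${COLLECTION}/select?${PARAMS}"
```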
I don't use any custom or contributed plugins. At the moment I'm importing my data from MySQL into Solr; I don't use DIH, instead I use a custom mechanism. During this import I don't issue hard or soft commits; I delegate that responsibility to Solr.

I don't know if this is useful, but I see a lot of:

WARNING: [XXX] PERFORMANCE WARNING: Overlapping onDeckSearchers=2

The cluster is formed by about a thousand collections; I have a collection for each client. My solrconfig:

<config>
  <luceneMatchVersion>LUCENE_40</luceneMatchVersion>

  <directoryFactory name="DirectoryFactory"
                    class="${solr.directoryFactory:solr.StandardDirectoryFactory}"/>

  <indexConfig>
    <ramBufferSizeMB>256</ramBufferSizeMB>
    <mergeFactor>20</mergeFactor>
    <mergeScheduler class="org.apache.lucene.index.ConcurrentMergeScheduler"/>
    <lockType>native</lockType>

    <!-- Commit Deletion Policy
         Custom deletion policies can be specified here. The class must
         implement org.apache.lucene.index.IndexDeletionPolicy.
         http://lucene.apache.org/java/3_5_0/api/core/org/apache/lucene/index/IndexDeletionPolicy.html
         The default Solr IndexDeletionPolicy implementation supports
         deleting index commit points on number of commits, age of commit
         point and optimized status. The latest commit point should always
         be preserved regardless of the criteria. -->
    <!-- <deletionPolicy class="solr.SolrDeletionPolicy"> -->
      <!-- The number of commit points to be kept -->
      <!-- <str name="maxCommitsToKeep">1</str> -->
      <!-- The number of optimized commit points to be kept -->
      <!-- <str name="maxOptimizedCommitsToKeep">0</str> -->
      <!-- Delete all commit points once they have reached the given age.
           Supports DateMathParser syntax, e.g. -->
      <!-- <str name="maxCommitAge">30MINUTES</str> -->
      <str name="maxCommitAge">60MINUTES</str>
    <!-- </deletionPolicy> -->

    <!-- Lucene InfoStream
         To aid in advanced debugging, Lucene provides an "InfoStream" of
         detailed information when indexing. Setting the value to true will
         instruct the underlying Lucene IndexWriter to write its debugging
         info to the specified file. -->
    <!-- <infoStream file="INFOSTREAM.txt">false</infoStream> -->
  </indexConfig>

  <query>
    <!-- If true, stored fields that are not requested will be loaded
         lazily. This can result in a significant speed improvement if the
         usual case is to not load all stored fields, especially if the
         skipped fields are large compressed text fields. -->
    <enableLazyFieldLoading>true</enableLazyFieldLoading>
    <queryResultWindowSize>1000</queryResultWindowSize>
    <queryResultMaxDocsCached>3000</queryResultMaxDocsCached>
    <maxWarmingSearchers>2</maxWarmingSearchers>
    <useFilterForSortedQuery>true</useFilterForSortedQuery>

    <filterCache class="solr.FastLRUCache" size="2000" initialSize="1500"
                 autowarmCount="750" cleanupThread="true"/>
    <queryResultCache class="solr.FastLRUCache" size="2000" initialSize="1500"
                      autowarmCount="750" cleanupThread="true"/>
    <documentCache class="solr.FastLRUCache" size="20000" initialSize="10000"
                   autowarmCount="0" cleanupThread="true"/>
  </query>

  <updateHandler class="solr.DirectUpdateHandler2">
    <updateLog>
      <str name="dir">${solr.data.dir:}</str>
    </updateLog>

    <!-- Commit documents definitions -->
    <autoCommit>
      <maxDocs>5000</maxDocs>
      <maxTime>10000</maxTime>
    </autoCommit>
    <autoSoftCommit>
      <maxTime>2500</maxTime>
    </autoSoftCommit>
    <maxPendingDeletes>20000</maxPendingDeletes>
  </updateHandler>

  <requestDispatcher handleSelect="false">
    <requestParsers enableRemoteStreaming="true" multipartUploadLimitInKB="10485760"/>
  </requestDispatcher>

  <requestHandler name="/select" class="solr.SearchHandler"/>

  <!-- Request handler that returns indented JSON by default -->
  <requestHandler name="/query" class="solr.SearchHandler">
    <lst name="defaults">
      <str name="echoParams">explicit</str>
      <str name="wt">json</str>
      <str name="indent">true</str>
      <str name="df">text</str>
    </lst>
  </requestHandler>

  <!-- Realtime get handler, guaranteed to return the latest stored fields
       of any document, without the need to commit or open a new searcher.
       The current implementation relies on the updateLog feature being
       enabled. -->
  <requestHandler name="/get" class="solr.RealTimeGetHandler">
    <lst name="defaults">
      <str name="omitHeader">true</str>
      <str name="wt">json</str>
      <str name="indent">false</str>
    </lst>
  </requestHandler>

  <requestHandler name="/admin/" class="solr.admin.AdminHandlers"/>
  <requestHandler name="standard" class="solr.StandardRequestHandler" default="true"/>
  <requestHandler name="/update" class="solr.UpdateRequestHandler"/>
  <requestHandler name="/analysis/field" startup="lazy"
                  class="solr.FieldAnalysisRequestHandler"/>
  <requestHandler name="/analysis/document" startup="lazy"
                  class="solr.DocumentAnalysisRequestHandler"/>
  <requestHandler name="/replication" class="solr.ReplicationHandler" startup="lazy"/>

  <!-- Echo the request contents back to the client -->
  <requestHandler name="/debug/dump" class="solr.DumpRequestHandler">
    <lst name="defaults">
      <str name="echoParams">explicit</str>
      <!-- for all params (including the default etc) use: 'all' -->
      <str name="echoHandler">true</str>
    </lst>
  </requestHandler>

  <requestHandler name="/admin/ping" class="solr.PingRequestHandler">
    <lst name="invariants">
      <str name="q">rows=0;start=0;omitHeader=true</str>
    </lst>
    <lst name="defaults">
      <str name="echoParams">NONE</str>
    </lst>
  </requestHandler>

  <!-- Config for the admin interface -->
  <admin>
    <defaultQuery>id:*</defaultQuery>
  </admin>
</config>

Regards

--
Yago Riveiro
Sent with Sparrow (http://www.sparrowmailapp.com/?sig)

On Tuesday, April 23, 2013 at 3:13 AM, Chris Hostetter wrote:

> : Can you tell what operations cause this to happen?
>
> ie: what does your configuration look like? are you using any custom
> plugins? what types of features of solr do you use (faceting, grouping,
> highlighting, clustering, dih, etc...) ?
>
> -Hoss
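P.S. If I understand the warning correctly, the settings in my config that interact to produce it are the commit intervals and maxWarmingSearchers. Pulling just those lines out (values copied from the config above; the interpretation is my own sketch, not a diagnosis):

```xml
<!-- With a soft commit at most every 2.5 s and a hard commit every 10 s
     (or 5000 docs), a commit can request a new searcher while the previous
     one is still warming its caches (autowarmCount="750" above). Once two
     searchers are warming at the same time, the maxWarmingSearchers=2
     limit is reached and Solr logs "Overlapping onDeckSearchers". -->
<maxWarmingSearchers>2</maxWarmingSearchers>

<autoCommit>
  <maxDocs>5000</maxDocs>
  <maxTime>10000</maxTime>  <!-- milliseconds -->
</autoCommit>
<autoSoftCommit>
  <maxTime>2500</maxTime>   <!-- milliseconds -->
</autoSoftCommit>
```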