Re: Solr 4.3: node is seen as active in Zk while in recovery mode + endless recovery
The cluster state problem reported above is not an issue after all - it was caused by our own code. Regarding the update log, I have noticed strange behavior in the replay. The replay is *supposed* to be limited to a predefined number of log entries, but in practice it is always done for the whole last 2 tlogs: RecentUpdates.update() reads the log within while (numUpdates < numRecordsToKeep), but numUpdates is never incremented, so the loop exits only when the reader reaches EOF.

--
View this message in context: http://lucene.472066.n3.nabble.com/Solr-4-3-node-is-seen-as-active-in-Zk-while-in-recovery-mode-endless-recovery-tp4065549p4066452.html
Sent from the Solr - User mailing list archive at Nabble.com.
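To illustrate the point, here is a minimal, self-contained sketch (hypothetical names; this is not the actual Solr code) of a replay loop that is genuinely bounded by the record counter rather than by EOF:

```java
import java.util.List;

// Hypothetical sketch of a bounded tlog replay loop. If numUpdates is
// never incremented (the behavior described above), the while condition
// never becomes false and reading continues until EOF; with the
// increment in place, at most numRecordsToKeep records are processed.
public class RecentUpdatesSketch {
    static int replay(List<String> records, int numRecordsToKeep) {
        int numUpdates = 0;
        int pos = 0; // stands in for the tlog reader position
        while (numUpdates < numRecordsToKeep && pos < records.size()) {
            // ... process records.get(pos) ...
            pos++;
            numUpdates++; // the increment the report says is missing
        }
        return numUpdates;
    }
}
```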
Solr 4.3: node is seen as active in Zk while in recovery mode + endless recovery
Consider the following: Solr 4.3, a 2-node test cluster, each node a leader. During indexing (or immediately after it, before the hard commit) I shut down one of the nodes and restart it later. The tlog is about 200 MB in size. I see recurring 'Reordered DBQs detected' messages in the log, and it looks like an endless loop, because THE VERY SAME update query appears thousands of times; it has been running for a long time now. Meanwhile the node is (obviously) inaccessible, but in the ZK state it appears as active, NOT in recovery mode or down. It seems this is caused by a recent change in ZkController which adds recovery logic to the 'register' routine.

Regards,
Alexey

--
View this message in context: http://lucene.472066.n3.nabble.com/Solr-4-3-node-is-seen-as-active-in-Zk-while-in-recovery-mode-endless-recovery-tp4065549.html
Re: Solr 4.3: node is seen as active in Zk while in recovery mode + endless recovery
A small correction: it's not an endless loop, but painfully slow processing, which includes running a delete query and then the insertion. Each document from the tlog takes tens of seconds to process (more than 100 times slower than the normal insertion process).

--
View this message in context: http://lucene.472066.n3.nabble.com/Solr-4-3-node-is-seen-as-active-in-Zk-while-in-recovery-mode-endless-recovery-tp4065549p4065551.html
Re: Solr 4.3: node is seen as active in Zk while in recovery mode + endless recovery
The hard commit is set to about 20 minutes, while the RAM buffer is 256 MB. We will add more frequent hard commits without reopening the searcher - thanks for the tip. From what I understood from the code, for each 'add' command there is a test for a 'delete by query': if there is an earlier DBQ, it is re-run after the 'add' operation if its version is greater than the 'add' version. In my case there are a lot of documents to be inserted and a single large DBQ. My question is: shouldn't this be done in bulk? Why is it necessary to run the DBQ after each insertion? Suppose there are 1000 insertions - it is run 1000 times.

--
View this message in context: http://lucene.472066.n3.nabble.com/Solr-4-3-node-is-seen-as-active-in-Zk-while-in-recovery-mode-endless-recovery-tp4065549p4065628.html
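A hedged sketch of the cost being described (hypothetical names, not the actual Solr replay code): with one reordered DBQ whose version exceeds every replayed add, the DBQ ends up re-executed once per add instead of once per batch:

```java
// Hypothetical cost model for the behavior described above: during
// replay, each 'add' whose version is below a pending delete-by-query's
// version triggers a re-run of that DBQ after the add is applied.
public class ReorderedDbqCost {
    static int dbqExecutions(int numAdds, long firstAddVersion, long dbqVersion) {
        int executions = 0;
        for (int i = 0; i < numAdds; i++) {
            long addVersion = firstAddVersion + i;
            if (addVersion < dbqVersion) {
                executions++; // DBQ re-run after this add
            }
        }
        return executions;
    }
}
```

With 1000 replayed adds all older than the DBQ, that is 1000 executions of the same delete query, which matches the slowdown reported earlier in the thread.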
custom routing in SolrCloud - shard assignment
I'm going to use the implicit doc router (ImplicitDocRouter) for sharding - our sharding is not based on a hashing mechanism. As far as I understand, if I don't provide the numShards parameter, the implicit router is used. My question is: using implicit routing, how can I assign a new core to a new shard instead of having it join an existing one? Should I provide it explicitly as part of solr.xml, or as a parameter to CoreAdmin? Also, can anyone point me to an explanation of the Overseer/state management machinery? While the general mechanics are clear to me, there are a lot of entry points and small details with little documentation.

Thanks,
Alexey

--
View this message in context: http://lucene.472066.n3.nabble.com/custom-routing-in-SolrCloud-shard-assignment-tp4057695.html
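For what it's worth, both options mentioned in the question appear to exist in 4.x; a hedged sketch (the names 'mycore' and 'shard3' are made up, and the exact parameter set should be verified against your Solr version):

```
# CoreAdmin: create a core and pin it to a named shard
http://localhost:8983/solr/admin/cores?action=CREATE&name=mycore&collection=collection1&shard=shard3

<!-- or statically in solr.xml -->
<core name="mycore" collection="collection1" shard="shard3" instanceDir="mycore/"/>
```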
SolrCloud vs. distributed suggester
In the pre-cloud version of Solr it was necessary to pass the shards and shards.qt parameters in order to make a standalone /suggest handler work. How should it work in SolrCloud? SpellCheckComponent skips the distributed stage of processing, so I get suggestions only when I force distrib=false. Setting the parameters as in previous releases doesn't work either. The only thing that has worked so far is forcing a 'query' component onto the /suggest handler. Is there any other (better) way?

Thanks,
Alexey

--
View this message in context: http://lucene.472066.n3.nabble.com/SolrCloud-vs-distributed-suggester-tp4041859.html
SolrCloud: admin security vs. replication
Hi,

There are a lot of posts about hardening the /admin handler with user credentials etc. On the other hand, the replication handler won't work if /admin/cores is also hardened. Given that, how can I allow secure external access to the admin interface AND still let the cluster work properly? Leaving /admin/cores with no security at all is not an option.

Thanks,
Alexey

--
View this message in context: http://lucene.472066.n3.nabble.com/SolrCloud-admin-security-vs-replication-tp4037337.html
Re: SolrCloud: admin security vs. replication
As long as Core Admin is accessible via HTTP and allows manipulation of Solr cores, it should be secured, regardless of the configured path. The difference between securing Core Admin and securing other handlers is that the other handlers are accessed by specific application server(s), and can therefore easily be firewalled. The admin interface can (in theory) be accessed from a machine other than the application server, but I cannot really apply security constraints to it as long as Core Admin is used both internally (replication) and externally (the admin web interface JS). Therefore it's necessary to put a reverse proxy with access control management in front of it, to allow secure external access to admin AND unimpeded internal access.

--
View this message in context: http://lucene.472066.n3.nabble.com/SolrCloud-admin-security-vs-replication-tp4037337p4037628.html
Re: language specific fields of text
You should use the language detection processor factory, like below:

    <processor class="org.apache.solr.update.processor.LangDetectLanguageIdentifierUpdateProcessorFactory">
      <str name="langid.fl">content</str>
      <str name="langid.langField">language</str>
      <str name="langid.fallback">en</str>
      <str name="langid.map">true</str>
      <str name="langid.map.fl">content,fullname</str>
      <str name="langid.map.keepOrig">true</str>
      <str name="langid.whitelist">en,fr,de,es,ru,it</str>
      <str name="langid.threshold">0.7</str>
    </processor>

Once you have defined fields like content_en, content_fr etc., they will be filled in automatically according to the recognized language. See http://wiki.apache.org/solr/LanguageDetection

--
View this message in context: http://lucene.472066.n3.nabble.com/language-specific-fields-of-text-tp3698985p4031180.html
Cross field highlighting
Hi,

I would like to store the document content in a single special field (not indexed, stored only) and create several indexed copy fields with different analysis applied. During highlighting, the analysis definitions of the stored field are used, so improper highlighting (or none at all) is done. Is there any workaround for this? Duplicating the stored field is not a good idea, considering the overwhelming storage overhead.

Thanks,
Alexey

--
View this message in context: http://lucene.472066.n3.nabble.com/Cross-field-highlighting-tp4031103.html
Dynamic modification of field value
Hi,

Suppose a document stored in the index has fields A and B. What would be the best way to alter the value of B after the result set is available? The modified value of B is influenced by the value of A and also by some custom logic based on a (custom) SolrCache. Could this be a custom function query? A custom SearchComponent? A custom response writer? As far as I understand, a SearchComponent will not help, because documents are written out during response serialization (in the non-distributed case).

Thanks,
Alexey

--
View this message in context: http://lucene.472066.n3.nabble.com/Dynamic-modification-of-field-value-tp4028234.html
SolrTestCaseJ4 and searcher initialization
Hi,

I've written a unit test for a custom search component, which naturally extends SolrTestCaseJ4. beforeClass() contains initCore(), assertU(adoc()) and assertU(commit()). The test creates a SolrQueryRequest via req() and runs h.query(request) - in other words, nothing special. I see rather strange SolrIndexSearcher initialization behavior:

1. Two searchers are created one after another (OK).
2. When the second searcher finishes initialization, the first is closed (NOT OK?).
3. Afterwards, the second searcher is also closed (REALLY NOT OK).
4. The test query seems to run against the already closed searcher.

I noticed this behavior because I have a custom cache in place which is populated during warmup. Two instances of the cache are created, and both are later closed. After closing, the cache is used in the actual query, and obviously it's already empty. This looks like a race condition. Both searchers are being closed here:

    if (!alreadyRegistered) {
      future = searcherExecutor.submit(
          new Callable() {
            public Object call() throws Exception {
              try {
                // registerSearcher will decrement onDeckSearchers and
                // do a notify, even if it fails.
                registerSearcher(newSearchHolder);
              } catch (Throwable e) {
                SolrException.log(log, e);
              } finally {
                // we are all done with the old searcher we used
                // for warming...
                if (currSearcherHolderF != null) currSearcherHolderF.decref();
              }
              return null;
            }
          }
      );
    }

--
View this message in context: http://lucene.472066.n3.nabble.com/SolrTestCaseJ4-and-searcher-initialization-tp4028237.html
SolrCloud - TermsComponent, Suggester etc.
Hi,

I need a small clarification on how forwarding to a non-/select handler works. When I define a distinct handler /terms with TermsComponent inside (or /suggest with SpellCheckComponent configured as a suggester), the distributed call never works. The reason is simple: the request always gets forwarded to the /select handler of the other shards by HttpShardHandler. The workaround is to set the QT parameter *and* SHARDS_QT. My question is: why not simply use the same handler path (/terms) for outgoing shard requests, without the additional parameters? Shouldn't that be the default in a cluster environment?

Thanks,
Alexey

--
View this message in context: http://lucene.472066.n3.nabble.com/SolrCloud-TermsComponent-Suggester-etc-tp4019520.html
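For concreteness, the workaround described above looks roughly like this as a request (the terms parameters here are illustrative, not from the original post):

```
http://localhost:8983/solr/collection1/terms?terms=true&terms.fl=name&qt=%2Fterms&shards.qt=%2Fterms
```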
SolrCloud - configuration management in ZooKeeper
ZooKeeper manages not only the cluster state but also the common configuration files. My question is: what are the exact rules of precedence? That is, when will a Solr node decide to download new configuration files? Will configuration files be updated from ZooKeeper every time the core is reloaded? What if bootstrapping is defined (bootstrap_confdir)? Will the node always try to upload? What are the best practices for a production environment? Is it better to use an external tool (ZkCLI) to push configuration changes?

Thanks

--
View this message in context: http://lucene.472066.n3.nabble.com/SolrCloud-configuration-management-in-ZooKeeper-tp4018432.html
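For reference, pushing a configuration change with the external ZkCLI tool looks roughly like this (the classpath and paths are placeholders; command names per the 4.x ZkCLI as I understand them, so verify against your version):

```
java -classpath "solr-webapp/WEB-INF/lib/*" org.apache.solr.cloud.ZkCLI \
     -cmd upconfig -zkhost localhost:2181 \
     -confdir /path/to/conf -confname myconf

java -classpath "solr-webapp/WEB-INF/lib/*" org.apache.solr.cloud.ZkCLI \
     -cmd linkconfig -zkhost localhost:2181 \
     -collection collection1 -confname myconf
```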
Re: ShardHandler - distribution to non-default request handler doesn't work
The only way I succeeded in forwarding to the right request handler was:

1. shards.qt=/suggest in the query (shards.qt=%2Fsuggest URL-encoded, actually)
2. handleSelect="true" in solrconfig
3. NO /select handler in solrconfig

Only this combination forces two things: the shard handler forwards the qt=/suggest parameter to the other shards AND qt is handled by the dispatch filter. (Otherwise qt is ignored and the query gets forwarded to the /select handler.) Is there a better way of accomplishing this? How else can I retrieve suggestions using a distinct handler?

Thanks,
Alexey

--
View this message in context: http://lucene.472066.n3.nabble.com/ShardHandler-distribution-to-non-default-request-handler-doesn-t-work-tp4015855p4016401.html
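A sketch of that combination in solrconfig.xml (illustrative only; element names per the 4.x configuration as I understand it):

```
<!-- 2. let the dispatch filter honor the qt parameter -->
<requestDispatcher handleSelect="true">
  ...
</requestDispatcher>

<!-- 3. no <requestHandler name="/select" ...> is defined anywhere -->

<!-- the handler itself, queried with shards.qt=/suggest -->
<requestHandler name="/suggest" class="solr.SearchHandler">
  ...
</requestHandler>
```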
Re: ShardHandler - distribution to non-default request handler doesn't work
Correction: shards.qt is sufficient, but you cannot define only the spellcheck component in the requestHandler, as it doesn't create shard requests; it seems the 'query' component is a must if you want distributed processing.

--
View this message in context: http://lucene.472066.n3.nabble.com/ShardHandler-distribution-to-non-default-request-handler-doesn-t-work-tp4015855p4016409.html
Query elevation component fails
Using the SolrCloud release with the following configuration:

    <searchComponent name="elevator" class="solr.QueryElevationComponent">
      <str name="queryFieldType">string</str>
      <str name="config-file">elevate.xml</str>
    </searchComponent>

    <requestHandler name="/elevate" class="solr.SearchHandler" startup="lazy">
      <lst name="defaults">
        <str name="echoParams">explicit</str>
        <str name="df">text</str>
      </lst>
      <arr name="last-components">
        <str>elevator</str>
      </arr>
    </requestHandler>

Running the query http://localhost:8080/solr/collection1/elevate?q=evelatedtext I constantly get the following exception:

    SEVERE: null:java.lang.IndexOutOfBoundsException: Index: 1, Size: 0
        at java.util.ArrayList.rangeCheck(Unknown Source)
        at java.util.ArrayList.get(Unknown Source)
        at org.apache.solr.common.util.NamedList.getVal(NamedList.java:136)
        at org.apache.solr.handler.component.ShardFieldSortedHitQueue$ShardComparator.sortVal(ShardDoc.java:217)
        at org.apache.solr.handler.component.ShardFieldSortedHitQueue$2.compare(ShardDoc.java:255)
        at org.apache.solr.handler.component.ShardFieldSortedHitQueue.lessThan(ShardDoc.java:159)
        at org.apache.solr.handler.component.ShardFieldSortedHitQueue.lessThan(ShardDoc.java:101)
        at org.apache.lucene.util.PriorityQueue.upHeap(PriorityQueue.java:231)
        at org.apache.lucene.util.PriorityQueue.add(PriorityQueue.java:140)
        at org.apache.lucene.util.PriorityQueue.insertWithOverflow(PriorityQueue.java:156)
        at org.apache.solr.handler.component.QueryComponent.mergeIds(QueryComponent.java:863)
        at org.apache.solr.handler.component.QueryComponent.handleRegularResponses(QueryComponent.java:626)
        at org.apache.solr.handler.component.QueryComponent.handleResponses(QueryComponent.java:605)
        at org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:309)
        at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:129)
        at org.apache.solr.core.RequestHandlers$LazyRequestHandlerWrapper.handleRequest(RequestHandlers.java:240)
        at org.apache.solr.core.SolrCore.execute(SolrCore.java:1699)
        at org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:455)
        at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:276)
        at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
        at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
        at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
        at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
        at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
        at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
        at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
        at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:293)
        at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:859)
        at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:602)
        at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489)
        at java.lang.Thread.run(Unknown Source)

The lookup is made for the _elevate_ sort field. Should I have such a field in the schema?

--
View this message in context: http://lucene.472066.n3.nabble.com/Query-elevation-component-fails-tp4015793.html
SolrCloud leader election on single node
Setup: 1 node, 4 cores, 2 shards; 15 documents indexed.

Problem: the init stage times out.

Probable cause: according to the init flow, cores are initialized one by one, synchronously. The main thread actually waits in ShardLeaderElectionContext.waitForReplicasToComeUp until the retry threshold, while the replica cores are *not* yet initialized - in other words, there is no chance the other replicas can come up in the meantime.

Stack trace:

    Thread [main] (Suspended)
        owns: HashMap<K,V> (id=3876)
        owns: StandardContext (id=3877)
        owns: HashMap<K,V> (id=3878)
        owns: StandardHost (id=3879)
        owns: StandardEngine (id=3880)
        owns: Service[] (id=3881)
        Thread.sleep(long) line: not available [native method]
        ShardLeaderElectionContext.waitForReplicasToComeUp(boolean, String) line: 298
        ShardLeaderElectionContext.runLeaderProcess(boolean) line: 143
        LeaderElector.runIamLeaderProcess(ElectionContext, boolean) line: 152
        LeaderElector.checkIfIamLeader(int, ElectionContext, boolean) line: 96
        LeaderElector.joinElection(ElectionContext) line: 262
        ZkController.joinElection(CoreDescriptor, boolean) line: 733
        ZkController.register(String, CoreDescriptor, boolean, boolean) line: 566
        ZkController.register(String, CoreDescriptor) line: 532
        CoreContainer.registerInZk(SolrCore) line: 709
        CoreContainer.register(String, SolrCore, boolean) line: 693
        CoreContainer.load(String, InputSource) line: 535
        CoreContainer.load(String, File) line: 356
        CoreContainer$Initializer.initialize() line: 308
        SolrDispatchFilter.init(FilterConfig) line: 107
        ApplicationFilterConfig.getFilter() line: 295
        ApplicationFilterConfig.setFilterDef(FilterDef) line: 422
        ApplicationFilterConfig.init(Context, FilterDef) line: 115
        StandardContext.filterStart() line: 4072
        StandardContext.start() line: 4726
        StandardHost(ContainerBase).addChildInternal(Container) line: 799
        StandardHost(ContainerBase).addChild(Container) line: 779
        StandardHost.addChild(Container) line: 601
        HostConfig.deployDescriptor(String, File, String) line: 675
        HostConfig.deployDescriptors(File, String[]) line: 601
        HostConfig.deployApps() line: 502
        HostConfig.start() line: 1317
        HostConfig.lifecycleEvent(LifecycleEvent) line: 324
        LifecycleSupport.fireLifecycleEvent(String, Object) line: 142
        StandardHost(ContainerBase).start() line: 1065
        StandardHost.start() line: 840
        StandardEngine(ContainerBase).start() line: 1057
        StandardEngine.start() line: 463
        StandardService.start() line: 525
        StandardServer.start() line: 754
        Catalina.start() line: 595
        NativeMethodAccessorImpl.invoke0(Method, Object, Object[]) line: not available [native method]
        NativeMethodAccessorImpl.invoke(Object, Object[]) line: not available
        DelegatingMethodAccessorImpl.invoke(Object, Object[]) line: not available
        Method.invoke(Object, Object...) line: not available
        Bootstrap.start() line: 289
        Bootstrap.main(String[]) line: 414

After a while the session times out and the following exception appears:

    Oct 25, 2012 1:16:56 PM org.apache.solr.cloud.ShardLeaderElectionContext waitForReplicasToComeUp
    INFO: Waiting until we see more replicas up: total=2 found=0 timeoutin=-95
    Oct 25, 2012 1:16:56 PM org.apache.solr.cloud.ShardLeaderElectionContext waitForReplicasToComeUp
    INFO: Was waiting for replicas to come up, but they are taking too long - assuming they won't come back till later
    Oct 25, 2012 1:16:56 PM org.apache.solr.common.SolrException log
    SEVERE: Errir checking for the number of election participants:org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired for /collections/collection1/leader_elect/shard2/election
        at org.apache.zookeeper.KeeperException.create(KeeperException.java:118)
        at org.apache.zookeeper.KeeperException.create(KeeperException.java:42)
        at org.apache.zookeeper.ZooKeeper.getChildren(ZooKeeper.java:1249)
        at org.apache.solr.common.cloud.SolrZkClient$6.execute(SolrZkClient.java:227)
        at org.apache.solr.common.cloud.SolrZkClient$6.execute(SolrZkClient.java:224)
        at org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:63)
        at org.apache.solr.common.cloud.SolrZkClient.getChildren(SolrZkClient.java:224)
        at org.apache.solr.cloud.ShardLeaderElectionContext.waitForReplicasToComeUp(ElectionContext.java:276)
        at org.apache.solr.cloud.ShardLeaderElectionContext.runLeaderProcess(ElectionContext.java:143)
        at
ShardHandler - distribution to non-default request handler doesn't work
I tried to define a suggest component as it appears in the wiki, and also defined a specific /suggest request handler. This doesn't work in a SolrCloud setup, as the query is distributed to the default /select handler instead; specifically, the shard handler gets the default URLs and the other cores forward to /select.

Setup: 1 node, 4 cores, 2 shards.

If I define the suggest component as the single component for the handler, the query doesn't get distributed either.

Configuration:

    <searchComponent class="solr.SpellCheckComponent" name="suggest">
      <lst name="spellchecker">
        <str name="name">suggest</str>
        <str name="classname">org.apache.solr.spelling.suggest.Suggester</str>
        <str name="lookupImpl">org.apache.solr.spelling.suggest.tst.TSTLookup</str>
        <str name="field">text</str>
        <float name="threshold">0.005</float>
        <str name="buildOnCommit">true</str>
      </lst>
    </searchComponent>

    <requestHandler class="org.apache.solr.handler.component.SearchHandler" name="/suggest">
      <lst name="defaults">
        <str name="spellcheck">true</str>
        <str name="spellcheck.dictionary">suggest</str>
        <str name="spellcheck.onlyMorePopular">true</str>
        <str name="spellcheck.count">5</str>
        <str name="spellcheck.collate">true</str>
      </lst>
      <arr name="last-components">
        <str>suggest</str>
      </arr>
    </requestHandler>

--
View this message in context: http://lucene.472066.n3.nabble.com/ShardHandler-distribution-to-non-default-request-handler-doesn-t-work-tp4015855.html
Re: SolrCloud - loop in recovery mode
I only started learning the new features, so chances are it's some misconfiguration. I removed collection2 from the setup and indexed some files. Now there is another pattern that stalls the init, and it involves the Overseer polling the queue:

    Oct 24, 2012 2:18:52 PM org.apache.solr.core.QuerySenderListener newSearcher
    INFO: QuerySenderListener sending requests to Searcher@5f00498c main{StandardDirectoryReader(segments_2:3 _0(4.0.0.2):C8)}
    Oct 24, 2012 2:19:04 PM org.apache.zookeeper.server.ZooKeeperServer expire
    INFO: Expiring session 0x13a92a39200, timeout of 15000ms exceeded
    Oct 24, 2012 2:19:04 PM org.apache.zookeeper.server.PrepRequestProcessor pRequest
    INFO: Processed session termination for sessionid: 0x13a92a39200
    Oct 24, 2012 2:19:04 PM org.apache.zookeeper.server.PrepRequestProcessor pRequest
    INFO: Got user-level KeeperException when processing sessionid:0x13a92b5f199 type:delete cxid:0x282 zxid:0xfffe txntype:unknown reqpath:n/a Error Path:/overseer_elect/leader Error:KeeperErrorCode = NoNode for /overseer_elect/leader
    Oct 24, 2012 2:19:04 PM org.apache.solr.common.cloud.SolrZkClient makePath
    INFO: makePath: /overseer_elect/leader
    Oct 24, 2012 2:19:04 PM org.apache.solr.cloud.Overseer start
    INFO: Overseer (id=88544452827217920-akudinov-pc:8080_solr-n_01) starting
    Oct 24, 2012 2:19:04 PM org.apache.zookeeper.server.PrepRequestProcessor pRequest
    INFO: Got user-level KeeperException when processing sessionid:0x13a92b5f199 type:create cxid:0x287 zxid:0xfffe txntype:unknown reqpath:n/a Error Path:/overseer Error:KeeperErrorCode = NodeExists for /overseer
    Oct 24, 2012 2:19:04 PM org.apache.zookeeper.server.PrepRequestProcessor pRequest
    INFO: Got user-level KeeperException when processing sessionid:0x13a92b5f199 type:create cxid:0x288 zxid:0xfffe txntype:unknown reqpath:n/a Error Path:/overseer Error:KeeperErrorCode = NodeExists for /overseer
    Oct 24, 2012 2:19:04 PM org.apache.zookeeper.server.PrepRequestProcessor pRequest
    INFO: Got user-level KeeperException when processing sessionid:0x13a92b5f199 type:create cxid:0x289 zxid:0xfffe txntype:unknown reqpath:n/a Error Path:/overseer Error:KeeperErrorCode = NodeExists for /overseer
    Oct 24, 2012 2:19:04 PM org.apache.zookeeper.server.PrepRequestProcessor pRequest
    INFO: Got user-level KeeperException when processing sessionid:0x13a92b5f199 type:create cxid:0x28a zxid:0xfffe txntype:unknown reqpath:n/a Error Path:/overseer Error:KeeperErrorCode = NodeExists for /overseer
    Oct 24, 2012 2:19:04 PM org.apache.solr.cloud.OverseerCollectionProcessor run
    INFO: Process current queue of collection creations
    Oct 24, 2012 2:19:04 PM org.apache.solr.cloud.Overseer$ClusterStateUpdater run
    INFO: Starting to work on the main queue

Can you give me a clue what's happening here? My setup is now: collection1, 2 shards, 4 cores. There are several documents in both shards, automatically distributed by SolrCloud.

--
View this message in context: http://lucene.472066.n3.nabble.com/SolrCloud-loop-in-recovery-mode-tp4015330p4015574.html
Re: SolrCloud - loop in recovery mode
After a bit of investigation: it's the searcher warmup that never completes. I see the main thread waiting for the searcher, while the warmup query handler is stuck in another thread on the very same lock in getSearcher(), and no notify() is ever called. If I set useColdSearcher=true, this obviously doesn't happen and the application starts normally.

--
View this message in context: http://lucene.472066.n3.nabble.com/SolrCloud-loop-in-recovery-mode-tp4015330p4015581.html
Re: SolrCloud - loop in recovery mode
It is actually connected to this: https://gist.github.com/2880527
Once you have collation=true plus a warmup query, the init is stuck on wait().

--
View this message in context: http://lucene.472066.n3.nabble.com/SolrCloud-loop-in-recovery-mode-tp4015330p4015593.html
Re: SolrCloud - loop in recovery mode
The situation can be reproduced on Solr 4 (SolrCloud):

1. Define a warmup query.
2. Add the spell checker configuration to the /select search handler.
3. Set spellcheck.collate=true.

The server will get stuck in the init phase due to a deadlock. Is there a bug open for this? In effect, you cannot get collated spell check results together with a query result. The workaround is one of the following:

1. Don't use warmup.
2. Don't use collation.
3. Don't define the spell checker for /select; define a distinct handler and call it specifically.

--
View this message in context: http://lucene.472066.n3.nabble.com/SolrCloud-loop-in-recovery-mode-tp4015330p4015622.html
SolrCloud - distributed architecture considerations
Hi,

As far as I understand, SolrCloud eliminates the master-slave distinction and automates both updates and searches seamlessly. What should I take into account when configuring SolrCloud for a large customer with multiple physical locations? With older Solr I would define a master 'close to the data' with batch replication to the search servers (slaves), and I would have several such slaves for the different geographical locations as well. How can I ensure (if at all) that search queries do not cross geographical boundaries? As far as I understand, SolrCloud routes to any arbitrary active replica. Similarly, how can I control the indexing process so that update requests are routed to the closest server? If SolrCloud happens to elect some remote replica as the current leader, the indexing process will deteriorate due to networking issues; moreover, the update requests will also be bounced back across the network as part of the online replication process. Am I missing something fundamental in my assumptions/understanding of SolrCloud?

Thanks a lot,
Alexey

--
View this message in context: http://lucene.472066.n3.nabble.com/SolrCloud-distributed-architecture-considerations-tp4013594.html