[jira] [Commented] (SOLR-1545) add support for sort to MoreLikeThis
[ https://issues.apache.org/jira/browse/SOLR-1545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15389610#comment-15389610 ] Erik Hatcher commented on SOLR-1545: Yes, it seems like this is obsolete. With the advent of the MLT query parser, one can sort the results and do anything else one could do with a standard query request. > add support for sort to MoreLikeThis > > > Key: SOLR-1545 > URL: https://issues.apache.org/jira/browse/SOLR-1545 > Project: Solr > Issue Type: Improvement > Components: search >Affects Versions: 1.4 >Reporter: Bill Au >Priority: Minor > Fix For: 4.9, 6.0 > > Attachments: solr-1545-1.4.1.patch, solr-1545.patch > > > Add support for sort to MoreLikeThis. I will attach a patch with more info > shortly. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
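To illustrate the point above, a hypothetical request using the MLT query parser (available since Solr 5); the field names (`title`, `body`, `price`) and the document id are placeholders, not from this issue. Because the MLT parser produces an ordinary query, the standard `sort` parameter applies to its results:

```
q={!mlt qf=title,body mintf=2 mindf=3}SOME_DOC_ID&sort=price asc&fl=id,score
```

This is a request-fragment sketch only; consult the Solr Reference Guide for the exact local-param names supported by your Solr version.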
[jira] [Updated] (LUCENE-7389) Validation issue in FieldType#setDimensions?
[ https://issues.apache.org/jira/browse/LUCENE-7389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Martijn van Groningen updated LUCENE-7389: -- Attachment: LUCENE-7383.patch Attached fix. Luckily this validation was also checked correctly (in FieldInfo.java, line 178), so there shouldn't be indices with too-large dimensions. > Validation issue in FieldType#setDimensions? > > > Key: LUCENE-7389 > URL: https://issues.apache.org/jira/browse/LUCENE-7389 > Project: Lucene - Core > Issue Type: Bug >Reporter: Martijn van Groningen > Attachments: LUCENE-7383.patch > > > It compares if the {{dimensionCount}} is larger than > {{PointValues.MAX_NUM_BYTES}} while this constant should be compared to > {{dimensionNumBytes}} instead? > So this if statement: > {noformat} > if (dimensionCount > PointValues.MAX_NUM_BYTES) { > throw new IllegalArgumentException("dimensionNumBytes must be <= " + > PointValues.MAX_NUM_BYTES + "; got " + dimensionNumBytes); > } > {noformat} > Should be: > {noformat} > if (dimensionNumBytes > PointValues.MAX_NUM_BYTES) { > throw new IllegalArgumentException("dimensionNumBytes must be <= " + > PointValues.MAX_NUM_BYTES + "; got " + dimensionNumBytes); > } > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
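A standalone sketch of the corrected check, showing why the original comparison was wrong: it tested {{dimensionCount}} against the byte limit while the exception message talked about {{dimensionNumBytes}}. The class name and the value of MAX_NUM_BYTES (16, as in Lucene 6.x's PointValues) are assumptions for illustration; this is not the actual FieldType code.

```java
// Sketch of the fixed validation in FieldType#setDimensions (LUCENE-7389).
// MAX_NUM_BYTES = 16 mirrors PointValues.MAX_NUM_BYTES in Lucene 6.x (assumed).
public class SetDimensionsSketch {
    static final int MAX_NUM_BYTES = 16;

    static void setDimensions(int dimensionCount, int dimensionNumBytes) {
        if (dimensionCount <= 0) {
            throw new IllegalArgumentException("dimensionCount must be > 0; got " + dimensionCount);
        }
        // The bug compared dimensionCount here; the fix compares dimensionNumBytes,
        // matching the exception message.
        if (dimensionNumBytes > MAX_NUM_BYTES) {
            throw new IllegalArgumentException("dimensionNumBytes must be <= " + MAX_NUM_BYTES
                + "; got " + dimensionNumBytes);
        }
    }

    // Helper: returns true if setDimensions rejected the arguments.
    static boolean rejects(int count, int numBytes) {
        try {
            setDimensions(count, numBytes);
            return false;
        } catch (IllegalArgumentException e) {
            return true;
        }
    }
}
```

With the buggy comparison, `rejects(20, 8)` would have thrown even though 8 bytes per dimension is legal; with the fix, only an oversized `dimensionNumBytes` is rejected.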
[jira] [Commented] (LUCENE-7391) MemoryIndexReader.fields() performance regression
[ https://issues.apache.org/jira/browse/LUCENE-7391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15389646#comment-15389646 ] Martijn van Groningen commented on LUCENE-7391: --- > is it part of the contract that fields() should only return indexed fields > then? Yes. I think David's fix is the easiest here. Computing this count each time fields() is invoked is less of an overhead compared to what happens now when building {{MemoryFields}}. Since that count is computed each time, I think you shouldn't worry about caching or cache invalidation. The concurrency aspect of the MemoryIndex is, in my opinion, a bit of a mess. It allows fields to be added after a reader has been created, except when the freeze method is invoked (after which it can be used from many threads). I think the MemoryIndex class itself should be kind of a builder that just returns an IndexReader and shouldn't be usable after an IndexReader instance has been made. > MemoryIndexReader.fields() performance regression > - > > Key: LUCENE-7391 > URL: https://issues.apache.org/jira/browse/LUCENE-7391 > Project: Lucene - Core > Issue Type: Bug >Reporter: Steve Mason > Attachments: LUCENE-7391.patch > > > While upgrading our codebase from Lucene 4 to Lucene 6 we found a significant > performance regression - a 5x slowdown > On profiling the code, the method MemoryIndexReader.fields() shows up as one > of the hottest methods > Looking at the method, it just creates a copy of the inner {{fields}} Map > before passing it to {{MemoryFields}}. It does this so that it can filter out > fields with {{numTokens <= 0}}. > The simplest "fix" would be to just remove the copying of the map completely, > and pass {{fields}} directly to {{MemoryFields}}. It's simple and removes > any slowdown caused by this method. 
It does potentially change behaviour > though, but none of the unit tests seem to test that behaviour so I wonder > whether it's necessary (I looked at the original ticket LUCENE-7091 that > introduced this code, I can't find much in way of an explanation). I'm going > to attach a patch to this effect anyway and we can take things from there -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (LUCENE-7391) MemoryIndexReader.fields() performance regression
[ https://issues.apache.org/jira/browse/LUCENE-7391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Mason updated LUCENE-7391: Attachment: LUCENE-7391.patch Patch attached. Note that all unit tests pass. I've also run it through our integration test suite (matching 1000 queries against 45000 documents) and verified that they pass as well. It would be good to know why the original code was like this, I wonder if [~martijn.v.groningen] remembers - it seems to be tied to this comment: https://issues.apache.org/jira/browse/LUCENE-7091?focusedCommentId=15189525=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15189525 If the existing behaviour needs to be preserved then that's fine - if someone can provide me with a test case (or explain one to me) then I'll add it to the patch and formulate an alternative solution > MemoryIndexReader.fields() performance regression > - > > Key: LUCENE-7391 > URL: https://issues.apache.org/jira/browse/LUCENE-7391 > Project: Lucene - Core > Issue Type: Bug >Reporter: Steve Mason > Attachments: LUCENE-7391.patch > > > While upgrading our codebase from Lucene 4 to Lucene 6 we found a significant > performance regression - a 5x slowdown > On profiling the code, the method MemoryIndexReader.fields() shows up as one > of the hottest methods > Looking at the method, it just creates a copy of the inner {{fields}} Map > before passing it to {{MemoryFields}}. It does this so that it can filter out > fields with {{numTokens <= 0}}. > The simplest "fix" would be to just remove the copying of the map completely, > and pass {{fields}} directly to {{MemoryFields}}. It's simple and removes > any slowdown caused by this method. It does potentially change behaviour > though, but none of the unit tests seem to test that behaviour so I wonder > whether it's necessary (I looked at the original ticket LUCENE-7091 that > introduced this code, I can't find much in way of an explanation). 
I'm going > to attach a patch to this effect anyway and we can take things from there -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-7391) MemoryIndexReader.fields() performance regression
[ https://issues.apache.org/jira/browse/LUCENE-7391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15389614#comment-15389614 ] David Smiley commented on LUCENE-7391: -- A simple solution is for fields() to loop and count the number of fields that have numTerms > 0 (don't need to record which), and pass this to MemoryFields so that MemoryFields.size() is easy. Then MemoryFields.terms() can simply check numTerms <= 0 and return null. > MemoryIndexReader.fields() performance regression > - > > Key: LUCENE-7391 > URL: https://issues.apache.org/jira/browse/LUCENE-7391 > Project: Lucene - Core > Issue Type: Bug >Reporter: Steve Mason > Attachments: LUCENE-7391.patch > > > While upgrading our codebase from Lucene 4 to Lucene 6 we found a significant > performance regression - a 5x slowdown > On profiling the code, the method MemoryIndexReader.fields() shows up as one > of the hottest methods > Looking at the method, it just creates a copy of the inner {{fields}} Map > before passing it to {{MemoryFields}}. It does this so that it can filter out > fields with {{numTokens <= 0}}. > The simplest "fix" would be to just remove the copying of the map completely, > and pass {{fields}} directly to {{MemoryFields}}. It's simple and removes > any slowdown caused by this method. It does potentially change behaviour > though, but none of the unit tests seem to test that behaviour so I wonder > whether it's necessary (I looked at the original ticket LUCENE-7091 that > introduced this code, I can't find much in way of an explanation). I'm going > to attach a patch to this effect anyway and we can take things from there -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
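A minimal sketch of the counting approach suggested above: fields() counts the fields with a positive term count up front (no map copy), and terms() treats empty fields as absent. The {{Info}} class here is a stand-in for MemoryIndex's per-field info, and all names are illustrative; this is not the actual Lucene code.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of David Smiley's suggestion for LUCENE-7391 (assumed shape, not Lucene's code).
public class MemoryFieldsSketch {
    static class Info {
        final int numTerms;
        Info(int numTerms) { this.numTerms = numTerms; }
    }

    // fields() analogue: count indexed fields on each call instead of copying the map.
    static int countIndexedFields(Map<String, Info> fields) {
        int size = 0;
        for (Info info : fields.values()) {
            if (info.numTerms > 0) size++;
        }
        return size;
    }

    // terms() analogue: a field with numTerms <= 0 behaves as if it were absent.
    static Info terms(Map<String, Info> fields, String field) {
        Info info = fields.get(field);
        return (info == null || info.numTerms <= 0) ? null : info;
    }

    // Self-check helpers used below.
    static int demoCount() {
        Map<String, Info> fields = new HashMap<>();
        fields.put("title", new Info(3));
        fields.put("empty", new Info(0));
        fields.put("body", new Info(7));
        return countIndexedFields(fields);
    }

    static boolean demoTermsNull() {
        Map<String, Info> fields = new HashMap<>();
        fields.put("title", new Info(3));
        fields.put("empty", new Info(0));
        return terms(fields, "empty") == null
            && terms(fields, "title") != null
            && terms(fields, "missing") == null;
    }
}
```

The count is a single pass over the map's values, so recomputing it per fields() call stays cheap relative to copying the map into a new one.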
[JENKINS-EA] Lucene-Solr-6.x-Linux (32bit/jdk-9-ea+127) - Build # 1237 - Still unstable!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/1237/ Java: 32bit/jdk-9-ea+127 -client -XX:+UseParallelGC 1 tests failed. FAILED: org.apache.solr.cloud.ForceLeaderTest.testReplicasInLIRNoLeader Error Message: No live SolrServers available to handle this request:[http://127.0.0.1:35108/c_azk/z/forceleader_test_collection_shard1_replica1] Stack Trace: org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: No live SolrServers available to handle this request:[http://127.0.0.1:35108/c_azk/z/forceleader_test_collection_shard1_replica1] at __randomizedtesting.SeedInfo.seed([E65083B4FB3539BF:C7B774C2B7C0DE]:0) at org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:739) at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1151) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1040) at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:976) at org.apache.solr.cloud.AbstractFullDistribZkTestBase.sendDocsWithRetry(AbstractFullDistribZkTestBase.java:753) at org.apache.solr.cloud.AbstractFullDistribZkTestBase.sendDocsWithRetry(AbstractFullDistribZkTestBase.java:741) at org.apache.solr.cloud.ForceLeaderTest.sendDoc(ForceLeaderTest.java:424) at org.apache.solr.cloud.ForceLeaderTest.testReplicasInLIRNoLeader(ForceLeaderTest.java:131) at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(java.base@9-ea/Native Method) at jdk.internal.reflect.NativeMethodAccessorImpl.invoke(java.base@9-ea/NativeMethodAccessorImpl.java:62) at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(java.base@9-ea/DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(java.base@9-ea/Method.java:533) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871) at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at
[JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_92) - Build # 17331 - Failure!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/17331/ Java: 32bit/jdk1.8.0_92 -server -XX:+UseParallelGC 1 tests failed. FAILED: org.apache.solr.common.cloud.TestCollectionStateWatchers.testCanWaitForNonexistantCollection Error Message: waitForState was not triggered by collection creation Stack Trace: java.lang.AssertionError: waitForState was not triggered by collection creation at __randomizedtesting.SeedInfo.seed([EEB57637C0F1DF11:4595ED81ADAF1B3A]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.apache.solr.common.cloud.TestCollectionStateWatchers.testCanWaitForNonexistantCollection(TestCollectionStateWatchers.java:182) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at java.lang.Thread.run(Thread.java:745) Build Log: [...truncated 13229 lines...] [junit4] Suite: org.apache.solr.common.cloud.TestCollectionStateWatchers [junit4] 2> Creating dataDir:
[jira] [Created] (LUCENE-7391) MemoryIndexReader.fields() performance regression
Steve Mason created LUCENE-7391: --- Summary: MemoryIndexReader.fields() performance regression Key: LUCENE-7391 URL: https://issues.apache.org/jira/browse/LUCENE-7391 Project: Lucene - Core Issue Type: Bug Reporter: Steve Mason While upgrading our codebase from Lucene 4 to Lucene 6 we found a significant performance regression - a 5x slowdown On profiling the code, the method MemoryIndexReader.fields() shows up as one of the hottest methods Looking at the method, it just creates a copy of the inner {{fields}} Map before passing it to {{MemoryFields}}. It does this so that it can filter out fields with {{numTokens <= 0}}. The simplest "fix" would be to just remove the copying of the map completely, and pass {{fields}} directly to {{MemoryFields}}. It's simple and removes any slowdown caused by this method. It does potentially change behaviour though, but none of the unit tests seem to test that behaviour so I wonder whether it's necessary (I looked at the original ticket LUCENE-7091 that introduced this code, I can't find much in way of an explanation). I'm going to attach a patch to this effect anyway and we can take things from there -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-1545) add support for sort to MoreLikeThis
[ https://issues.apache.org/jira/browse/SOLR-1545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15389529#comment-15389529 ] Chantal Ackermann commented on SOLR-1545: - Is this feature obsolete because there is now another way of sorting MLT results? Or is there just nobody interested enough to work on it? If the latter is the case - would this code: https://github.com/dfdeshom/custom-mlt (Apache 2.0 License) be of any help (not my code)? I'm actually more interested in boosting MLT results but right now I'm following any track I can find. > add support for sort to MoreLikeThis > > > Key: SOLR-1545 > URL: https://issues.apache.org/jira/browse/SOLR-1545 > Project: Solr > Issue Type: Improvement > Components: search >Affects Versions: 1.4 >Reporter: Bill Au >Priority: Minor > Fix For: 4.9, 6.0 > > Attachments: solr-1545-1.4.1.patch, solr-1545.patch > > > Add support for sort to MoreLikeThis. I will attach a patch with more info > shortly. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-7391) MemoryIndexReader.fields() performance regression
[ https://issues.apache.org/jira/browse/LUCENE-7391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15389624#comment-15389624 ] Steve Mason commented on LUCENE-7391: - OK, thanks Martijn - is it part of the contract that {{fields()}} should only return indexed fields then? There are a couple of options we can think of: 1. As you suggest - cache the filtered Map (or the whole {{MemoryIndexReader}} object?). In that case, we'd need to make sure that the cache was invalidated after every mutation 2. Make {{MemoryFields}} a "view" on the original that filters them out when each of its methods are called (there are only 3) 3. Maybe maintain another Map of just the indexed fields? I'll have a go at some of these and see how I get on > MemoryIndexReader.fields() performance regression > - > > Key: LUCENE-7391 > URL: https://issues.apache.org/jira/browse/LUCENE-7391 > Project: Lucene - Core > Issue Type: Bug >Reporter: Steve Mason > Attachments: LUCENE-7391.patch > > > While upgrading our codebase from Lucene 4 to Lucene 6 we found a significant > performance regression - a 5x slowdown > On profiling the code, the method MemoryIndexReader.fields() shows up as one > of the hottest methods > Looking at the method, it just creates a copy of the inner {{fields}} Map > before passing it to {{MemoryFields}}. It does this so that it can filter out > fields with {{numTokens <= 0}}. > The simplest "fix" would be to just remove the copying of the map completely, > and pass {{fields}} directly to {{MemoryFields}}. It's simple and removes > any slowdown caused by this method. It does potentially change behaviour > though, but none of the unit tests seem to test that behaviour so I wonder > whether it's necessary (I looked at the original ticket LUCENE-7091 that > introduced this code, I can't find much in way of an explanation). 
I'm going > to attach a patch to this effect anyway and we can take things from there -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
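Option 2 above (a filtering "view") can be sketched as follows; the class and method names are invented for illustration and the {{Info}} class stands in for MemoryIndex's per-field info. Filtering happens lazily at iteration time, so no copy of the underlying map is ever made.

```java
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of the "view" option for LUCENE-7391 (hypothetical names, not Lucene's code).
public class FilteredFieldsView implements Iterable<String> {
    static class Info {
        final int numTerms;
        Info(int numTerms) { this.numTerms = numTerms; }
    }

    private final Map<String, Info> fields;

    FilteredFieldsView(Map<String, Info> fields) { this.fields = fields; }

    // Filter non-indexed fields lazily during iteration instead of copying up front.
    @Override
    public Iterator<String> iterator() {
        return fields.entrySet().stream()
            .filter(e -> e.getValue().numTerms > 0)
            .map(Map.Entry::getKey)
            .iterator();
    }

    // size() re-counts on each call; cheap single pass over the values.
    public int size() {
        int n = 0;
        for (Info info : fields.values()) {
            if (info.numTerms > 0) n++;
        }
        return n;
    }

    // Self-check helper: returns "<size>:<concatenated field names>".
    static String demo() {
        Map<String, Info> m = new LinkedHashMap<>();
        m.put("a", new Info(2));
        m.put("skip", new Info(0));
        m.put("b", new Info(1));
        FilteredFieldsView view = new FilteredFieldsView(m);
        StringBuilder sb = new StringBuilder();
        for (String f : view) sb.append(f);
        return view.size() + ":" + sb;
    }
}
```

The trade-off versus the counting fix is that every method call pays the filtering cost, but mutations to the underlying map need no cache invalidation, which matters given MemoryIndex allows additions after a reader exists.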
[jira] [Commented] (LUCENE-7391) MemoryIndexReader.fields() performance regression
[ https://issues.apache.org/jira/browse/LUCENE-7391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15389623#comment-15389623 ] Martijn van Groningen commented on LUCENE-7391: --- +1 to count the number of fields with `numTerms > 0` and filter out fields with `numTerms <= 0` > MemoryIndexReader.fields() performance regression > - > > Key: LUCENE-7391 > URL: https://issues.apache.org/jira/browse/LUCENE-7391 > Project: Lucene - Core > Issue Type: Bug >Reporter: Steve Mason > Attachments: LUCENE-7391.patch > > > While upgrading our codebase from Lucene 4 to Lucene 6 we found a significant > performance regression - a 5x slowdown > On profiling the code, the method MemoryIndexReader.fields() shows up as one > of the hottest methods > Looking at the method, it just creates a copy of the inner {{fields}} Map > before passing it to {{MemoryFields}}. It does this so that it can filter out > fields with {{numTokens <= 0}}. > The simplest "fix" would be to just remove the copying of the map completely, > and pass {{fields}} directly to {{MemoryFields}}. It's simple and removes > any slowdown caused by this method. It does potentially change behaviour > though, but none of the unit tests seem to test that behaviour so I wonder > whether it's necessary (I looked at the original ticket LUCENE-7091 that > introduced this code, I can't find much in way of an explanation). I'm going > to attach a patch to this effect anyway and we can take things from there -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-7389) Validation issue in FieldType#setDimensions?
[ https://issues.apache.org/jira/browse/LUCENE-7389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15389661#comment-15389661 ] Adrien Grand commented on LUCENE-7389: -- +1! > Validation issue in FieldType#setDimensions? > > > Key: LUCENE-7389 > URL: https://issues.apache.org/jira/browse/LUCENE-7389 > Project: Lucene - Core > Issue Type: Bug >Reporter: Martijn van Groningen > Attachments: LUCENE-7383.patch > > > It compares if the {{dimensionCount}} is larger than > {{PointValues.MAX_NUM_BYTES}} while this constant should be compared to > {{dimensionNumBytes}} instead? > So this if statement: > {noformat} > if (dimensionCount > PointValues.MAX_NUM_BYTES) { > throw new IllegalArgumentException("dimensionNumBytes must be <= " + > PointValues.MAX_NUM_BYTES + "; got " + dimensionNumBytes); > } > {noformat} > Should be: > {noformat} > if (dimensionNumBytes > PointValues.MAX_NUM_BYTES) { > throw new IllegalArgumentException("dimensionNumBytes must be <= " + > PointValues.MAX_NUM_BYTES + "; got " + dimensionNumBytes); > } > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-7280) Load cores in sorted order and tweak coreLoadThread counts to improve cluster stability on restarts
[ https://issues.apache.org/jira/browse/SOLR-7280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15389669#comment-15389669 ] Erick Erickson commented on SOLR-7280: -- Tests ran fine last night, FWIW... > Load cores in sorted order and tweak coreLoadThread counts to improve cluster > stability on restarts > --- > > Key: SOLR-7280 > URL: https://issues.apache.org/jira/browse/SOLR-7280 > Project: Solr > Issue Type: Bug > Components: SolrCloud >Reporter: Shalin Shekhar Mangar >Assignee: Noble Paul > Fix For: 6.2, 5.5.3 > > Attachments: SOLR-7280-5x.patch, SOLR-7280-5x.patch, > SOLR-7280-5x.patch, SOLR-7280-test.patch, SOLR-7280.patch, SOLR-7280.patch > > > In SOLR-7191, Damien mentioned that by loading solr cores in a sorted order > and tweaking some of the coreLoadThread counts, he was able to improve the > stability of a cluster with thousands of collections. We should explore some > of these changes and fold them into Solr. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-master-Windows (64bit/jdk1.8.0_92) - Build # 5998 - Failure!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/5998/ Java: 64bit/jdk1.8.0_92 -XX:-UseCompressedOops -XX:+UseG1GC 2 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.cloud.CdcrVersionReplicationTest Error Message: ObjectTracker found 1 object(s) that were not released!!! [InternalHttpClient] Stack Trace: java.lang.AssertionError: ObjectTracker found 1 object(s) that were not released!!! [InternalHttpClient] at __randomizedtesting.SeedInfo.seed([104CA7AAB781DBFC]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.junit.Assert.assertNull(Assert.java:551) at org.apache.solr.SolrTestCaseJ4.teardownTestCases(SolrTestCaseJ4.java:257) at sun.reflect.GeneratedMethodAccessor40.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:834) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at java.lang.Thread.run(Thread.java:745) FAILED: org.apache.solr.cloud.TestLocalFSCloudBackupRestore.test Error Message: expected: but was: Stack Trace: java.lang.AssertionError: expected: but was: at __randomizedtesting.SeedInfo.seed([104CA7AAB781DBFC:98189870197DB604]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.failNotEquals(Assert.java:647) at org.junit.Assert.assertEquals(Assert.java:128) at org.junit.Assert.assertEquals(Assert.java:147) at org.apache.solr.cloud.AbstractCloudBackupRestoreTestCase.testBackupAndRestore(AbstractCloudBackupRestoreTestCase.java:208) at org.apache.solr.cloud.AbstractCloudBackupRestoreTestCase.test(AbstractCloudBackupRestoreTestCase.java:126) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871) at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at
[jira] [Commented] (LUCENE-7391) MemoryIndexReader.fields() performance regression
[ https://issues.apache.org/jira/browse/LUCENE-7391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15389705#comment-15389705 ] David Smiley commented on LUCENE-7391: -- The test is fine; I thought the perf fix would also be in the patch. Let me know when you have that and I'll review and commit later. Also... your .patch appears to _not_ be a normal patch file; see https://wiki.apache.org/lucene-java/HowToContribute#Creating_a_patch > MemoryIndexReader.fields() performance regression > - > > Key: LUCENE-7391 > URL: https://issues.apache.org/jira/browse/LUCENE-7391 > Project: Lucene - Core > Issue Type: Bug >Reporter: Steve Mason >Assignee: David Smiley > Attachments: LUCENE-7391-test.patch, LUCENE-7391.patch > > > While upgrading our codebase from Lucene 4 to Lucene 6 we found a significant > performance regression - a 5x slowdown > On profiling the code, the method MemoryIndexReader.fields() shows up as one > of the hottest methods > Looking at the method, it just creates a copy of the inner {{fields}} Map > before passing it to {{MemoryFields}}. It does this so that it can filter out > fields with {{numTokens <= 0}}. > The simplest "fix" would be to just remove the copying of the map completely, > and pass {{fields}} directly to {{MemoryFields}}. It's simple and removes > any slowdown caused by this method. It does potentially change behaviour > though, but none of the unit tests seem to test that behaviour so I wonder > whether it's necessary (I looked at the original ticket LUCENE-7091 that > introduced this code, I can't find much in way of an explanation). I'm going > to attach a patch to this effect anyway and we can take things from there -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-7391) MemoryIndexReader.fields() performance regression
[ https://issues.apache.org/jira/browse/LUCENE-7391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15389577#comment-15389577 ] Martijn van Groningen commented on LUCENE-7391: --- The reason it filters out fields with {{numTokens <= 0}} is that it would otherwise include non-indexed fields (fields with just doc values or point values). However, this slowdown is unintended. Maybe instead we could build `filteredFields` in the constructor of `MemoryIndexReader` and reuse it between `#fields()` invocations? > MemoryIndexReader.fields() performance regression > - > > Key: LUCENE-7391 > URL: https://issues.apache.org/jira/browse/LUCENE-7391 > Project: Lucene - Core > Issue Type: Bug >Reporter: Steve Mason > Attachments: LUCENE-7391.patch > > > While upgrading our codebase from Lucene 4 to Lucene 6 we found a significant > performance regression - a 5x slowdown > On profiling the code, the method MemoryIndexReader.fields() shows up as one > of the hottest methods > Looking at the method, it just creates a copy of the inner {{fields}} Map > before passing it to {{MemoryFields}}. It does this so that it can filter out > fields with {{numTokens <= 0}}. > The simplest "fix" would be to just remove the copying of the map completely, > and pass {{fields}} directly to {{MemoryFields}}. It's simple and removes > any slowdown caused by this method. It does potentially change behaviour > though, but none of the unit tests seem to test that behaviour so I wonder > whether it's necessary (I looked at the original ticket LUCENE-7091 that > introduced this code, I can't find much in way of an explanation). I'm going > to attach a patch to this effect anyway and we can take things from there -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
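The constructor-caching idea suggested above can be sketched outside Lucene like this (class and method names here are made up for illustration; the real code lives in MemoryIndex.java): filter the map once when the reader is created, so {{#fields()}} becomes a cheap getter instead of a per-call copy.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.stream.Collectors;

// Sketch of the proposed fix: filter once, up front, instead of copying
// the fields map on every fields() call. All names are hypothetical.
class CachedFieldsSketch {

    // Stand-in for MemoryIndex's per-field info; only numTokens matters here.
    static Map<String, Integer> filterFields(Map<String, Integer> numTokensByField) {
        // Keep only fields that were actually indexed (numTokens > 0),
        // excluding doc-values-only / points-only fields.
        return numTokensByField.entrySet().stream()
                .filter(e -> e.getValue() > 0)
                .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));
    }

    private final Map<String, Integer> filteredFields;

    CachedFieldsSketch(Map<String, Integer> numTokensByField) {
        // Done once in the constructor, reused across fields() invocations.
        this.filteredFields = filterFields(numTokensByField);
    }

    Map<String, Integer> fields() {
        return filteredFields; // no per-call copy on the hot path
    }
}
```

The trade-off is the same one discussed in the issue: the filtering cost is paid once at reader construction rather than on every {{fields()}} call.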
[jira] [Commented] (LUCENE-7390) Let BKDWriter use temp heap for sorting points in proportion to IndexWriter's indexing buffer
[ https://issues.apache.org/jira/browse/LUCENE-7390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15389727#comment-15389727 ] Robert Muir commented on LUCENE-7390: - +1 I have a little concern about this being a fairly sizeable amount of RAM, but I don't know if it's worth the effort to e.g. compute this somewhere else, reserve the space away, pass through to PointValuesWriter, increase the default rambuffer (else we reserve the whole thing by default), and so on. Seems messy no matter how I look at it. It is a little annoying that performance is so sensitive to this change; we should look into that more somehow. Maybe we can improve it so it does not need so much RAM. > Let BKDWriter use temp heap for sorting points in proportion to IndexWriter's > indexing buffer > - > > Key: LUCENE-7390 > URL: https://issues.apache.org/jira/browse/LUCENE-7390 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Michael McCandless > Fix For: master (7.0), 6.2 > > Attachments: LUCENE-7390.patch > > > With Lucene's default codec, when writing dimensional points, we only give > {{BKDWriter}} 16 MB heap to use for sorting, regardless of how large IW's > indexing buffer is. A custom codec can change this but that's a little steep. > I've been testing indexing performance on a points-heavy dataset, 1.2 billion > taxi rides from http://www.nyc.gov/html/tlc/html/about/trip_record_data.shtml > , indexing with a 1 GB IW buffer, and the small 16 MB heap limit causes clear > performance problems because flushing the large segments forces {{BKDWriter}} > to switch to offline sorting, which causes the DWPTs to take too long to flush. > They then fall behind, and Lucene does a hard stall on incoming indexing > threads until they catch up. > [~rcmuir] had a simple idea to let IW pass the allowed temp heap usage to > {{PointsWriter.writeField}}. 
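The sizing rule under discussion - scale BKDWriter's sort heap with IW's indexing buffer instead of hard-coding 16 MB - could look roughly like this. The 1/16 fraction, floor, and names below are guesses for illustration; the actual value and the plumbing through {{PointsWriter.writeField}} are what the attached patch decides.

```java
// Illustrative only: derive BKDWriter's temp sort heap from IndexWriter's
// RAM buffer rather than hard-coding 16 MB. Fraction and floor are made up.
class BkdHeapSizing {
    static final double DEFAULT_FLOOR_MB = 16.0;  // the old fixed default
    static final double FRACTION = 1.0 / 16.0;    // hypothetical share of the IW buffer

    static double maxMBSortInHeap(double iwRamBufferMB) {
        // Never go below the old 16 MB default; scale up with bigger buffers,
        // so a 1 GB IW buffer no longer forces early offline sorting.
        return Math.max(DEFAULT_FLOOR_MB, iwRamBufferMB * FRACTION);
    }
}
```

With a 1 GB (1024 MB) IW buffer this rule would grant 64 MB of sort heap, which is the kind of proportional growth the issue is after.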
-- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
RE: [JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_92) - Build # 17331 - Failure!
This is caused by this additional failure (build is not only unstable, it failed): [ecj-lint] 11. ERROR in /home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/java/org/apache/solr/handler/SQLHandler.java (at line 58) [ecj-lint] import org.apache.solr.common.cloud.DocCollection; [ecj-lint]^^ [ecj-lint] The import org.apache.solr.common.cloud.DocCollection is never used Uwe - Uwe Schindler H.-H.-Meier-Allee 63, D-28213 Bremen http://www.thetaphi.de eMail: u...@thetaphi.de > -Original Message- > From: Policeman Jenkins Server [mailto:jenk...@thetaphi.de] > Sent: Friday, July 22, 2016 3:40 PM > To: dev@lucene.apache.org > Subject: [JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_92) - Build # > 17331 - Failure! > > Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/17331/ > Java: 32bit/jdk1.8.0_92 -server -XX:+UseParallelGC > > 1 tests failed. > FAILED: > org.apache.solr.common.cloud.TestCollectionStateWatchers.testCanWaitFor > NonexistantCollection > > Error Message: > waitForState was not triggered by collection creation > > Stack Trace: > java.lang.AssertionError: waitForState was not triggered by collection > creation > at > __randomizedtesting.SeedInfo.seed([EEB57637C0F1DF11:4595ED81ADAF1B3 > A]:0) > at org.junit.Assert.fail(Assert.java:93) > at org.junit.Assert.assertTrue(Assert.java:43) > at > org.apache.solr.common.cloud.TestCollectionStateWatchers.testCanWaitFor > NonexistantCollection(TestCollectionStateWatchers.java:182) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.ja > va:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccess > orImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(Randomize > dRunner.java:1764) > at > com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(Rando > mizedRunner.java:871) > at > 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(Rando > mizedRunner.java:907) > at > com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(Rand > omizedRunner.java:921) > at > com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.e > valuate(SystemPropertiesRestoreRule.java:57) > at > org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRule > SetupTeardownChained.java:49) > at > org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAf > terRule.java:45) > at > org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThr > eadAndTestName.java:48) > at > org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleI > gnoreAfterMaxFailures.java:64) > at > org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure. > java:47) > at > com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(State > mentAdapter.java:36) > at > com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.r > un(ThreadLeakControl.java:367) > at > com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask > (ThreadLeakControl.java:809) > at > com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadL > eakControl.java:460) > at > com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(Ran > domizedRunner.java:880) > at > com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(Rando > mizedRunner.java:781) > at > com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(Rando > mizedRunner.java:816) > at > com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(Rando > mizedRunner.java:827) > at > com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(State > mentAdapter.java:36) > at > com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.e > valuate(SystemPropertiesRestoreRule.java:57) > at > org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAf > 
terRule.java:45) > at > com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(State > mentAdapter.java:36) > at > org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreCla > ssName.java:41) > at > com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMet > hodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) > at > com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMet > hodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) > at > com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(State > mentAdapter.java:36) > at > com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(State > mentAdapter.java:36) > at >
[jira] [Commented] (LUCENE-7391) MemoryIndexReader.fields() performance regression
[ https://issues.apache.org/jira/browse/LUCENE-7391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15389651#comment-15389651 ] Steve Mason commented on LUCENE-7391: - Sorry, our posts must have crossed. OK, I'll try that as well - it's very much like the "view" option, I think > MemoryIndexReader.fields() performance regression > - > > Key: LUCENE-7391 > URL: https://issues.apache.org/jira/browse/LUCENE-7391 > Project: Lucene - Core > Issue Type: Bug >Reporter: Steve Mason > Attachments: LUCENE-7391.patch > > > While upgrading our codebase from Lucene 4 to Lucene 6 we found a significant > performance regression - a 5x slowdown > On profiling the code, the method MemoryIndexReader.fields() shows up as one > of the hottest methods > Looking at the method, it just creates a copy of the inner {{fields}} Map > before passing it to {{MemoryFields}}. It does this so that it can filter out > fields with {{numTokens <= 0}}. > The simplest "fix" would be to just remove the copying of the map completely, > and pass {{fields}} directly to {{MemoryFields}}. It's simple and removes > any slowdown caused by this method. It does potentially change behaviour > though, but none of the unit tests seem to test that behaviour so I wonder > whether it's necessary (I looked at the original ticket LUCENE-7091 that > introduced this code, I can't find much in way of an explanation). I'm going > to attach a patch to this effect anyway and we can take things from there -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-9329) [SOLR][5.5.1] ReplicateAfter optimize is not working
[ https://issues.apache.org/jira/browse/SOLR-9329?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Armando Orlando updated SOLR-9329: -- Description: We just upgraded Solr version from 3.6 to 5.5.1 but the replication does not seem to work as expected. We would like to replicate index on slaves only after optimize but what I noticed is that if I restarted solr master it lost the info related to last replicable index and calling /replication?command=indexversion is getting the last committed index not the last optimized one. If I leave it running, after first optimize command happens it works as expected and command=indexversion gives me last optimized index. We're running it as docker container. This is the requestHandler section we're using in both master and slaves: {code} ${solr.master.enable:false} optimize optimize ${solr.numberOfVersionToKeep:3} ${solr.slave.enable:false} ${solr.master.url:}/replication ${solr.replication.pollInterval:00:00:30} {code} was: We just upgraded Solr version from 3.6 to 5.5.1 but the replication does not seem to work as expected. We would like to replicate index on slaves only after optimize but what I noticed is that if I restarted solr master it lost the info related to last replicable index and calling /replication?command=indexversion is getting the last committed index not the last optimized one. If I leave it running, after first optimize command happen it works as expected and command=indexversion give me last optimized index. We're running it as docker container. 
This is the requestHandler section we're using in both master and slaves: {code} ${solr.master.enable:false} optimize optimize ${solr.numberOfVersionToKeep:3} ${solr.slave.enable:false} ${solr.master.url:}/replication ${solr.replication.pollInterval:00:00:30} {code} > [SOLR][5.5.1] ReplicateAfter optimize is not working > > > Key: SOLR-9329 > URL: https://issues.apache.org/jira/browse/SOLR-9329 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Affects Versions: 5.5.1 >Reporter: Armando Orlando > > We just upgraded Solr version from 3.6 to 5.5.1 but the replication does not > seem to work as expected. We would like to replicate index on slaves only > after optimize but what I noticed is that if I restarted solr master it lost > the info related to last replicable index and calling > /replication?command=indexversion is getting the last committed index not the > last optimized one. > If I leave it running, after first optimize command happens it works as > expected and command=indexversion gives me last optimized index. > We're running it as docker container. > This is the requestHandler section we're using in both master and slaves: > {code} > > > ${solr.master.enable:false} > optimize > optimize > > ${solr.numberOfVersionToKeep:3} > > ${solr.slave.enable:false} > ${solr.master.url:}/replication > name="pollInterval">${solr.replication.pollInterval:00:00:30} > > > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
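The {code} block above lost its XML tags in the mail archive (only a stray `name="pollInterval">` fragment survives in the quoted copy). Based on the surviving `${...}` property names and the stock solr.ReplicationHandler layout, the reporter's section was presumably something like the sketch below - the tag names are a reconstruction, not the verbatim config, and the duplicated "optimize" value is read here as replicateAfter plus backupAfter, which is a guess:

```xml
<!-- Reconstructed sketch of the reporter's replication config.
     Tag layout follows stock solr.ReplicationHandler conventions;
     only the ${...} property names come from the message itself. -->
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="master">
    <str name="enable">${solr.master.enable:false}</str>
    <str name="replicateAfter">optimize</str>
    <str name="backupAfter">optimize</str>
    <str name="numberToKeep">${solr.numberOfVersionToKeep:3}</str>
  </lst>
  <lst name="slave">
    <str name="enable">${solr.slave.enable:false}</str>
    <str name="masterUrl">${solr.master.url:}/replication</str>
    <str name="pollInterval">${solr.replication.pollInterval:00:00:30}</str>
  </lst>
</requestHandler>
```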
[jira] [Commented] (LUCENE-7381) Add new RangeField
[ https://issues.apache.org/jira/browse/LUCENE-7381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15389548#comment-15389548 ] Steve Rowe commented on LUCENE-7381: Another reproducing nightly branch_6x failing test seed found by my Jenkins: {noformat} [junit4] Suite: org.apache.lucene.search.TestDoubleRangeFieldQueries [junit4] 2> NOTE: download the large Jenkins line-docs file by running 'ant get-jenkins-line-docs' in the lucene direct ory. [junit4] 2> NOTE: reproduce with: ant test -Dtestcase=TestDoubleRangeFieldQueries -Dtests.method=testMultiValued -Dtests.seed=49E2D2BA84C4DB27 -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true -Dtests.linedocsfile=/home/jenkins/lucene-data/enwiki.random.lines.txt -Dtests.locale=de-CH -Dtests.timezone=America/Costa_Rica -Dtests.asserts=true -Dtests.file.encoding=UTF-8 [junit4] FAILURE 5.22s J5 | TestDoubleRangeFieldQueries.testMultiValued <<< [junit4]> Throwable #1: java.lang.AssertionError: wrong hit (first of possibly more): [junit4]> FAIL: id=18102 should not match but did [junit4]> queryBox=Box(Infinity TO Infinity) [junit4]> box=Box(Infinity TO Infinity) [junit4]> queryType=CONTAINS [junit4]> deleted?=false [junit4]>at __randomizedtesting.SeedInfo.seed([49E2D2BA84C4DB27:9DC2B6884A069B6F]:0) [junit4]>at org.apache.lucene.search.BaseRangeFieldQueryTestCase.verify(BaseRangeFieldQueryTestCase.java:278) [junit4]>at org.apache.lucene.search.BaseRangeFieldQueryTestCase.doTestRandom(BaseRangeFieldQueryTestCase.java:154) [junit4]>at org.apache.lucene.search.BaseRangeFieldQueryTestCase.testMultiValued(BaseRangeFieldQueryTestCase.java:73) [junit4]>at java.lang.Thread.run(Thread.java:745) [junit4] 2> NOTE: leaving temporary files on disk at: /var/lib/jenkins/jobs/Lucene-Solr-Nightly-6.x/workspace/lucene/build/sandbox/test/J5/temp/lucene.search.TestDoubleRangeFieldQueries_49E2D2BA84C4DB27-001 [junit4] 2> NOTE: test params are: codec=Asserting(Lucene62): {id=PostingsFormat(name=LuceneFixedGap)}, 
docValues:{id=DocValuesFormat(name=Asserting)}, maxPointsInLeafNode=735, maxMBSortInHeap=7.540860061709994, sim=ClassicSimilarity, locale=de-CH, timezone=America/Costa_Rica [junit4] 2> NOTE: Linux 4.1.0-custom2-amd64 amd64/Oracle Corporation 1.8.0_77 (64-bit)/cpus=16,threads=1,free=210135672,total=520093696 [junit4] 2> NOTE: All tests run in this JVM: [TestLatLonDocValuesField, FuzzyLikeThisQueryTest, TestSlowFuzzyQuery, TestDocValuesRangeQuery, TestDoubleRangeFieldQueries] [junit4] Completed [19/20 (1!)] on J5 in 61.33s, 5 tests, 1 failure <<< FAILURES! {noformat} > Add new RangeField > -- > > Key: LUCENE-7381 > URL: https://issues.apache.org/jira/browse/LUCENE-7381 > Project: Lucene - Core > Issue Type: New Feature >Reporter: Nicholas Knize > Attachments: LUCENE-7381.patch, LUCENE-7381.patch, LUCENE-7381.patch, > LUCENE-7381.patch, LUCENE-7381.patch, LUCENE-7381.patch > > > I've been tinkering with a new Point-based {{RangeField}} for indexing > numeric ranges that could be useful for a number of applications. > For example, a single dimension represents a span along a single axis such as > indexing calendar entries start and end time, 2d range could represent > bounding boxes for geometric applications (e.g., supporting Point based geo > shapes), 3d ranges bounding cubes for 3d geometric applications (collision > detection, 3d geospatial), and 4d ranges for space time applications. I'm > sure there's applicability for 5d+ ranges but a first incarnation should > likely limit for performance. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
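For context on the failing CONTAINS case above: in one dimension, the range relations a RangeField query evaluates reduce to simple endpoint comparisons. A toy sketch (illustrative semantics only, not Lucene's actual RangeField API; `a` is the stored range, `b` the query range):

```java
// Toy 1-D versions of the range query relations discussed in LUCENE-7381.
// a = stored/indexed range, b = query range. Not Lucene's API.
class RangeRelationsSketch {
    // INTERSECTS: the stored range overlaps the query range at all.
    static boolean intersects(double aMin, double aMax, double bMin, double bMax) {
        return aMin <= bMax && bMin <= aMax;
    }

    // CONTAINS: the stored range fully contains the query range.
    static boolean contains(double aMin, double aMax, double bMin, double bMax) {
        return aMin <= bMin && aMax >= bMax;
    }
}
```

Note that degenerate endpoints like the `Box(Infinity TO Infinity)` in the failure make these comparisons behave surprisingly, which is the kind of edge the random test seed is probing.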
[jira] [Created] (SOLR-9329) [SOLR][5.5.1] ReplicateAfter optimize is not working
Armando Orlando created SOLR-9329: - Summary: [SOLR][5.5.1] ReplicateAfter optimize is not working Key: SOLR-9329 URL: https://issues.apache.org/jira/browse/SOLR-9329 Project: Solr Issue Type: Bug Security Level: Public (Default Security Level. Issues are Public) Affects Versions: 5.5.1 Reporter: Armando Orlando We just upgraded Solr from 3.6 to 5.5.1 but replication does not seem to work as expected. We would like to replicate the index on slaves only after optimize, but what I noticed is that if I restart the Solr master it loses the info related to the last replicable index, and calling /replication?command=indexversion returns the last committed index, not the last optimized one. If I leave it running, after the first optimize command happens it works as expected and command=indexversion gives me the last optimized index. We're running it as a Docker container. This is the requestHandler section we're using in both master and slaves: {code} ${solr.master.enable:false} optimize optimize ${solr.numberOfVersionToKeep:3} ${solr.slave.enable:false} ${solr.master.url:}/replication ${solr.replication.pollInterval:00:00:30} {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-9200) Add Delegation Token Support to Solr
[ https://issues.apache.org/jira/browse/SOLR-9200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gregory Chanan updated SOLR-9200: - Attachment: SOLR-9200.patch > Add Delegation Token Support to Solr > > > Key: SOLR-9200 > URL: https://issues.apache.org/jira/browse/SOLR-9200 > Project: Solr > Issue Type: New Feature > Components: security >Reporter: Gregory Chanan >Assignee: Gregory Chanan > Attachments: SOLR-9200.patch, SOLR-9200.patch, SOLR-9200.patch, > SOLR-9200.patch, SOLR-9200.patch > > > SOLR-7468 added support for kerberos authentication via the hadoop > authentication filter. Hadoop also has support for an authentication filter > that supports delegation tokens, which allow authenticated users the ability > to grab/renew/delete a token that can be used to bypass the normal > authentication path for a time. This is useful in a variety of use cases: > 1) distributed clients (e.g. MapReduce) where each client may not have access > to the user's kerberos credentials. Instead, the job runner can grab a > delegation token and use that during task execution. > 2) If the load on the kerberos server is too high, delegation tokens can > avoid hitting the kerberos server after the first request > 3) If requests/permissions need to be delegated to another user: the more > privileged user can request a delegation token that can be passed to the less > privileged user. > Note to self: > In > https://issues.apache.org/jira/browse/SOLR-7468?focusedCommentId=14579636=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14579636 > I made the following comment which I need to investigate further, since I > don't know if anything changed in this area: > {quote}3) I'm a little concerned with the "NoContext" code in KerberosPlugin > moving forward (I understand this is more a generic auth question than > kerberos specific). 
For example, in the latest version of the filter we are > using at Cloudera, we play around with the ServletContext in order to pass > information around > (https://github.com/cloudera/lucene-solr/blob/cdh5-4.10.3_5.4.2/solr/core/src/java/org/apache/solr/servlet/SolrHadoopAuthenticationFilter.java#L106). > Is there any way we can get the actual ServletContext in a plugin?{quote} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-9200) Add Delegation Token Support to Solr
[ https://issues.apache.org/jira/browse/SOLR-9200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15389591#comment-15389591 ] Gregory Chanan commented on SOLR-9200: -- removed some unused imports. > Add Delegation Token Support to Solr > > > Key: SOLR-9200 > URL: https://issues.apache.org/jira/browse/SOLR-9200 > Project: Solr > Issue Type: New Feature > Components: security >Reporter: Gregory Chanan >Assignee: Gregory Chanan > Attachments: SOLR-9200.patch, SOLR-9200.patch, SOLR-9200.patch, > SOLR-9200.patch, SOLR-9200.patch > > > SOLR-7468 added support for kerberos authentication via the hadoop > authentication filter. Hadoop also has support for an authentication filter > that supports delegation tokens, which allow authenticated users the ability > to grab/renew/delete a token that can be used to bypass the normal > authentication path for a time. This is useful in a variety of use cases: > 1) distributed clients (e.g. MapReduce) where each client may not have access > to the user's kerberos credentials. Instead, the job runner can grab a > delegation token and use that during task execution. > 2) If the load on the kerberos server is too high, delegation tokens can > avoid hitting the kerberos server after the first request > 3) If requests/permissions need to be delegated to another user: the more > privileged user can request a delegation token that can be passed to the less > privileged user. > Note to self: > In > https://issues.apache.org/jira/browse/SOLR-7468?focusedCommentId=14579636=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14579636 > I made the following comment which I need to investigate further, since I > don't know if anything changed in this area: > {quote}3) I'm a little concerned with the "NoContext" code in KerberosPlugin > moving forward (I understand this is more a generic auth question than > kerberos specific). 
For example, in the latest version of the filter we are > using at Cloudera, we play around with the ServletContext in order to pass > information around > (https://github.com/cloudera/lucene-solr/blob/cdh5-4.10.3_5.4.2/solr/core/src/java/org/apache/solr/servlet/SolrHadoopAuthenticationFilter.java#L106). > Is there any way we can get the actual ServletContext in a plugin?{quote} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Assigned] (LUCENE-7391) MemoryIndexReader.fields() performance regression
[ https://issues.apache.org/jira/browse/LUCENE-7391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] David Smiley reassigned LUCENE-7391: Assignee: David Smiley > MemoryIndexReader.fields() performance regression > - > > Key: LUCENE-7391 > URL: https://issues.apache.org/jira/browse/LUCENE-7391 > Project: Lucene - Core > Issue Type: Bug >Reporter: Steve Mason >Assignee: David Smiley > Attachments: LUCENE-7391-test.patch, LUCENE-7391.patch > > > While upgrading our codebase from Lucene 4 to Lucene 6 we found a significant > performance regression - a 5x slowdown > On profiling the code, the method MemoryIndexReader.fields() shows up as one > of the hottest methods > Looking at the method, it just creates a copy of the inner {{fields}} Map > before passing it to {{MemoryFields}}. It does this so that it can filter out > fields with {{numTokens <= 0}}. > The simplest "fix" would be to just remove the copying of the map completely, > and pass {{fields}} directly to {{MemoryFields}}. It's simple and removes > any slowdown caused by this method. It does potentially change behaviour > though, but none of the unit tests seem to test that behaviour so I wonder > whether it's necessary (I looked at the original ticket LUCENE-7091 that > introduced this code, I can't find much in way of an explanation). I'm going > to attach a patch to this effect anyway and we can take things from there -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Comment Edited] (SOLR-1545) add support for sort to MoreLikeThis
[ https://issues.apache.org/jira/browse/SOLR-1545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15389529#comment-15389529 ] Chantal Ackermann edited comment on SOLR-1545 at 7/22/16 1:56 PM: -- Is this feature obsolete because there is now another way of sorting MLT results? Or is there just nobody interested enough to update the patch (if necessary)? Here is another code reference (also quite old) which includes boosting: https://github.com/dfdeshom/custom-mlt (Apache 2.0 License) (I'm actually more interested in boosting MLT results but right now I'm following any track I can find.) was (Author: chantal): Is this feature obsolete because there is now another way of sorting MLT results? Or is there just nobody interested enough to work on it? If the latter is the case - would this code: https://github.com/dfdeshom/custom-mlt (Apache 2.0 License) be of any help (not my code)? I'm actually more interested in boosting MLT results but right now I'm following any track I can find. > add support for sort to MoreLikeThis > > > Key: SOLR-1545 > URL: https://issues.apache.org/jira/browse/SOLR-1545 > Project: Solr > Issue Type: Improvement > Components: search >Affects Versions: 1.4 >Reporter: Bill Au >Priority: Minor > Fix For: 4.9, 6.0 > > Attachments: solr-1545-1.4.1.patch, solr-1545.patch > > > Add support for sort to MoreLikeThis. I will attach a patch with more info > shortly. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-7389) Validation issue in FieldType#setDimensions?
[ https://issues.apache.org/jira/browse/LUCENE-7389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15389648#comment-15389648 ] Michael McCandless commented on LUCENE-7389: +1 Maybe remove that {{addDocument}} call in the test case, since we now throw the exc (correctly!) on trying to create the point? Thanks [~martijn.v.groningen]! > Validation issue in FieldType#setDimensions? > > > Key: LUCENE-7389 > URL: https://issues.apache.org/jira/browse/LUCENE-7389 > Project: Lucene - Core > Issue Type: Bug >Reporter: Martijn van Groningen > Attachments: LUCENE-7383.patch > > > It compares if the {{dimensionCount}} is larger than > {{PointValues.MAX_NUM_BYTES}} while this constant should be compared to > {{dimensionNumBytes}} instead? > So this if statement: > {noformat} > if (dimensionCount > PointValues.MAX_NUM_BYTES) { > throw new IllegalArgumentException("dimensionNumBytes must be <= " + > PointValues.MAX_NUM_BYTES + "; got " + dimensionNumBytes); > } > {noformat} > Should be: > {noformat} > if (dimensionNumBytes > PointValues.MAX_NUM_BYTES) { > throw new IllegalArgumentException("dimensionNumBytes must be <= " + > PointValues.MAX_NUM_BYTES + "; got " + dimensionNumBytes); > } > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
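The corrected guard from the issue, pulled out as a standalone sketch so it compiles without Lucene on the classpath ({{PointValues.MAX_NUM_BYTES}} is hard-coded to 16 here, its value in Lucene 6.x):

```java
// Standalone sketch of the corrected validation from LUCENE-7389: compare
// dimensionNumBytes (not dimensionCount) against the per-dimension byte limit.
class DimensionValidationSketch {
    static final int MAX_NUM_BYTES = 16; // PointValues.MAX_NUM_BYTES in Lucene 6.x

    static void setDimensions(int dimensionCount, int dimensionNumBytes) {
        if (dimensionNumBytes > MAX_NUM_BYTES) {
            throw new IllegalArgumentException("dimensionNumBytes must be <= "
                + MAX_NUM_BYTES + "; got " + dimensionNumBytes);
        }
        // ... the real FieldType would store dimensionCount and
        // dimensionNumBytes here ...
    }
}
```

With the original (buggy) comparison on {{dimensionCount}}, a field with, say, 1 dimension of 32 bytes would slip through this check and only fail later in FieldInfo.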
[jira] [Updated] (LUCENE-7391) MemoryIndexReader.fields() performance regression
[ https://issues.apache.org/jira/browse/LUCENE-7391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Mason updated LUCENE-7391: Attachment: LUCENE-7391-test.patch Attaching a patch with a test for the current behaviour - my original patch causes this to fail when applied. Suggestions for improvements appreciated (I'm _very_ new to this codebase) > MemoryIndexReader.fields() performance regression > - > > Key: LUCENE-7391 > URL: https://issues.apache.org/jira/browse/LUCENE-7391 > Project: Lucene - Core > Issue Type: Bug >Reporter: Steve Mason > Attachments: LUCENE-7391-test.patch, LUCENE-7391.patch > > > While upgrading our codebase from Lucene 4 to Lucene 6 we found a significant > performance regression - a 5x slowdown > On profiling the code, the method MemoryIndexReader.fields() shows up as one > of the hottest methods > Looking at the method, it just creates a copy of the inner {{fields}} Map > before passing it to {{MemoryFields}}. It does this so that it can filter out > fields with {{numTokens <= 0}}. > The simplest "fix" would be to just remove the copying of the map completely, > and pass {{fields}} directly to {{MemoryFields}}. It's simple and removes > any slowdown caused by this method. It does potentially change behaviour > though, but none of the unit tests seem to test that behaviour so I wonder > whether it's necessary (I looked at the original ticket LUCENE-7091 that > introduced this code, I can't find much in way of an explanation). I'm going > to attach a patch to this effect anyway and we can take things from there -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-9323) Expose ClusterState.getCollectionStates which returns unverified list of collection names
[ https://issues.apache.org/jira/browse/SOLR-9323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15389760#comment-15389760 ] ASF subversion and git services commented on SOLR-9323: --- Commit f70adac1abb04b654f052a047ebe3b85b3c59e67 in lucene-solr's branch refs/heads/master from [~cpoerschke] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=f70adac ] SOLR-9323: remove unused import (SQLHandler) > Expose ClusterState.getCollectionStates which returns unverified list of > collection names > - > > Key: SOLR-9323 > URL: https://issues.apache.org/jira/browse/SOLR-9323 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Noble Paul >Assignee: Noble Paul >Priority: Trivial > Fix For: 6.2 > > Attachments: SOLR-9323.patch > > > Currently the {{getCollections()}} method is deprecated and everyone is > forced to use {{getCollectionsMap()}} which could be extremely expensive > depending on the number of collections. The {{getCollectionsMap()}} method is > very expensive and should never be invoked on each request -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
RE: [JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_92) - Build # 17331 - Failure!
SQLHandler.java fix committed. - Original Message - From: dev@lucene.apache.org To: dev@lucene.apache.org At: 07/22/16 15:06:57 This is caused by this additional failure (build is not only unstable, it failed): [ecj-lint] 11. ERROR in /home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/java/org/apache/solr/handler/SQLHandler.java (at line 58) [ecj-lint] import org.apache.solr.common.cloud.DocCollection; [ecj-lint]^^ [ecj-lint] The import org.apache.solr.common.cloud.DocCollection is never used Uwe - Uwe Schindler H.-H.-Meier-Allee 63, D-28213 Bremen http://www.thetaphi.de eMail: u...@thetaphi.de > -Original Message- > From: Policeman Jenkins Server [mailto:jenk...@thetaphi.de] > Sent: Friday, July 22, 2016 3:40 PM > To: dev@lucene.apache.org > Subject: [JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_92) - Build # > 17331 - Failure! > > Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/17331/ > Java: 32bit/jdk1.8.0_92 -server -XX:+UseParallelGC > > 1 tests failed. 
> FAILED: > org.apache.solr.common.cloud.TestCollectionStateWatchers.testCanWaitFor > NonexistantCollection > > Error Message: > waitForState was not triggered by collection creation > > Stack Trace: > java.lang.AssertionError: waitForState was not triggered by collection > creation > at > __randomizedtesting.SeedInfo.seed([EEB57637C0F1DF11:4595ED81ADAF1B3 > A]:0) > at org.junit.Assert.fail(Assert.java:93) > at org.junit.Assert.assertTrue(Assert.java:43) > at > org.apache.solr.common.cloud.TestCollectionStateWatchers.testCanWaitFor > NonexistantCollection(TestCollectionStateWatchers.java:182) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.ja > va:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccess > orImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(Randomize > dRunner.java:1764) > at > com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(Rando > mizedRunner.java:871) > at > com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(Rando > mizedRunner.java:907) > at > com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(Rand > omizedRunner.java:921) > at > com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.e > valuate(SystemPropertiesRestoreRule.java:57) > at > org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRule > SetupTeardownChained.java:49) > at > org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAf > terRule.java:45) > at > org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThr > eadAndTestName.java:48) > at > org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleI > gnoreAfterMaxFailures.java:64) > at > org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure. 
> java:47) > at > com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(State > mentAdapter.java:36) > at > com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.r > un(ThreadLeakControl.java:367) > at > com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask > (ThreadLeakControl.java:809) > at > com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadL > eakControl.java:460) > at > com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(Ran > domizedRunner.java:880) > at > com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(Rando > mizedRunner.java:781) > at > com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(Rando > mizedRunner.java:816) > at > com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(Rando > mizedRunner.java:827) > at > com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(State > mentAdapter.java:36) > at > com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.e > valuate(SystemPropertiesRestoreRule.java:57) > at > org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAf > terRule.java:45) > at > com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(State > mentAdapter.java:36) > at > org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreCla > ssName.java:41) > at > com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMet > hodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) > at > com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMet > hodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) > at > com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(State > mentAdapter.java:36) > at >
[jira] [Commented] (SOLR-4268) Admin UI - button to unload transient core without removing from solr.xml
[ https://issues.apache.org/jira/browse/SOLR-4268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15389810#comment-15389810 ] Erick Erickson commented on SOLR-4268: -- Still an issue. The unload operation now removes the core.properties file associated with the core, and there's no way to get it back. We might want to retitle this to something like "revamp the core admin UI". Personally I'd be in favor of only displaying the core admin API when _NOT_ connected to Zookeeper, but that'd take some discussion as it has implications. To support anything like an "unload" command we'd need some work, things like 1> standardize renaming core.properties rather than removing it. Say to core.unloaded 2> get a list of all the potential cores with core.unloaded rather than core.properties so we could get it back 3> whatever. I suppose wrapped around all of this is whether we want a core admin page at all when we move to Zookeeper as "the one source of truth". > Admin UI - button to unload transient core without removing from solr.xml > - > > Key: SOLR-4268 > URL: https://issues.apache.org/jira/browse/SOLR-4268 > Project: Solr > Issue Type: Improvement > Components: web gui >Affects Versions: 4.0 >Reporter: Shawn Heisey > Fix For: 4.9, 6.0 > > > The core "unload" button in the UI currently will completely remove a core > from solr.xml. With the implementation of transient cores, there should be a > way to ask Solr to unload a core without removing it entirely. > This leads into a discussion about terminology. UNLOAD isn't a good > single-word description for what it does. A case could be made for having > REMOVE and DELETE actions for CoreAdmin, with confirmation prompts if you > click on those buttons in the UI. DELETE could simply be an option on REMOVE > - which I think you can actually currently do with UNLOAD. 
> Another idea, not sure if it needs its own issue or is part of this one: If a > core is mentioned in solr.xml but not actually loaded, it would be very cool > if it were listed, but with a different background color to indicate the > non-loaded state. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-5033) Started Time is Incorrect
[ https://issues.apache.org/jira/browse/SOLR-5033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15389823#comment-15389823 ] jmlucjav commented on SOLR-5033: By 'this' I mean a mismatch between 'Indexing since' and 'Started' values. > Started Time is Incorrect > - > > Key: SOLR-5033 > URL: https://issues.apache.org/jira/browse/SOLR-5033 > Project: Solr > Issue Type: Bug > Components: web gui >Affects Versions: 4.2 >Reporter: Mohammed ATMANE >Priority: Minor > > In dataimport page, I have : > Indexing since 8m 30s > Requests: 0 (0/s), Fetched: 0 (0/s), Skipped: 0, Processed: 0 (0/s) > Started: 6 minutes ago > Started Time must be greater than Indexing Time. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-9331) Can we remove ReRankQuery's length constructor argument?
[ https://issues.apache.org/jira/browse/SOLR-9331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15389948#comment-15389948 ] Joel Bernstein commented on SOLR-9331: -- So the length that is passed into the constructor would be the exact length requested by the query. The len being passed into getTopDocsCollector would be adjusted for the query result cache, I believe. I'm not sure there is any issue, though, with using the len passed into getTopDocsCollector. > Can we remove ReRankQuery's length constructor argument? > > > Key: SOLR-9331 > URL: https://issues.apache.org/jira/browse/SOLR-9331 > Project: Solr > Issue Type: Wish > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Christine Poerschke >Priority: Minor > Attachments: SOLR-9331.patch > > > Can we remove ReRankQuery's length constructor argument? It is a > ReRankQParserPlugin private class. > proposed patch summary: > * change ReRankQuery.getTopDocsCollector to use its len argument (instead of > the length member) > * remove ReRankQuery's length member and constructor argument > * remove ReRankQParser.parse's use of the rows and start parameters > motivation: towards ReRankQParserPlugin and LTRQParserPlugin (SOLR-8542) > sharing (more) code -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-7381) Add new RangeField
[ https://issues.apache.org/jira/browse/LUCENE-7381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15389769#comment-15389769 ] ASF subversion and git services commented on LUCENE-7381: - Commit 1a94c25a04b1de80f8ae6e9c35f60ff97e9ec190 in lucene-solr's branch refs/heads/branch_6x from [~nknize] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=1a94c25 ] LUCENE-7381: Fix equals relation in RangeFieldQuery. Fix relation logic in BaseRangeFieldQueryTestCase. > Add new RangeField > -- > > Key: LUCENE-7381 > URL: https://issues.apache.org/jira/browse/LUCENE-7381 > Project: Lucene - Core > Issue Type: New Feature >Reporter: Nicholas Knize > Attachments: LUCENE-7381.patch, LUCENE-7381.patch, LUCENE-7381.patch, > LUCENE-7381.patch, LUCENE-7381.patch, LUCENE-7381.patch > > > I've been tinkering with a new Point-based {{RangeField}} for indexing > numeric ranges that could be useful for a number of applications. > For example, a single dimension represents a span along a single axis such as > indexing calendar entries start and end time, 2d range could represent > bounding boxes for geometric applications (e.g., supporting Point based geo > shapes), 3d ranges bounding cubes for 3d geometric applications (collision > detection, 3d geospatial), and 4d ranges for space time applications. I'm > sure there's applicability for 5d+ ranges but a first incarnation should > likely limit for performance. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-5033) Started Time is Incorrect
[ https://issues.apache.org/jira/browse/SOLR-5033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15389819#comment-15389819 ] jmlucjav commented on SOLR-5033: I don't have a DIH at hand now, but I have seen this many, many times (like yesterday, with up-to-date versions). I suspect this happens when SOLR_TIMEZONE and the client timezone are not the same? > Started Time is Incorrect > - > > Key: SOLR-5033 > URL: https://issues.apache.org/jira/browse/SOLR-5033 > Project: Solr > Issue Type: Bug > Components: web gui >Affects Versions: 4.2 >Reporter: Mohammed ATMANE >Priority: Minor > > In dataimport page, I have : > Indexing since 8m 30s > Requests: 0 (0/s), Fetched: 0 (0/s), Skipped: 0, Processed: 0 (0/s) > Started: 6 minutes ago > Started Time must be greater than Indexing Time. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-9275) make XML Query Parser support extensible-via-configuration
[ https://issues.apache.org/jira/browse/SOLR-9275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Christine Poerschke updated SOLR-9275: -- Attachment: (was: SOLR-9331.patch) > make XML Query Parser support extensible-via-configuration > -- > > Key: SOLR-9275 > URL: https://issues.apache.org/jira/browse/SOLR-9275 > Project: Solr > Issue Type: New Feature > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Christine Poerschke >Assignee: Christine Poerschke >Priority: Minor > Fix For: 5.6, 6.2, master (7.0) > > Attachments: SOLR-9275.patch > > > SOLR-839 added XML QueryParser support (deftype=xmlparser) and this ticket > here proposes to make that support extensible-via-configuration. > Objective: > * To support use of custom query builders. > * To support use of custom query builders _without_ a corresponding custom > XmlQParser plugin class. > Illustration: > * solrconfig.xml snippet to configure use of the custom builders > {code} > > org.apache.solr.search.HelloQueryBuilder > org.apache.solr.search.GoodbyeQueryBuilder > > {code} > * HelloQueryBuilder and GoodbyeQueryBuilder both extend the new abstract > SolrQueryBuilder class. > {code} > + public abstract class SolrQueryBuilder implements QueryBuilder { > + protected final SolrQueryRequest req; > + protected final QueryBuilder queryFactory; > + public SolrQueryBuilder(String defaultField, Analyzer analyzer, > + SolrQueryRequest req, QueryBuilder queryFactory) { > + this.req = req; > + this.queryFactory = queryFactory; > + } > + } > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
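The solrconfig.xml snippet quoted in the description lost its XML tags in the mailing-list rendering; only the builder class names survived. A purely illustrative reconstruction — the element names, parser name, and plugin class below are guesses, not the ones from the actual patch — might look like:

```xml
<!-- Hypothetical reconstruction: element and attribute names are illustrative only. -->
<queryParser name="xmlparser" class="solr.ExtendedXmlQParserPlugin">
  <str name="hello">org.apache.solr.search.HelloQueryBuilder</str>
  <str name="goodbye">org.apache.solr.search.GoodbyeQueryBuilder</str>
</queryParser>
```

The intent of the ticket — mapping builder class names into the XML query parser's configuration so custom builders can be plugged in without a custom QParser class — is unaffected by the exact element names.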
[jira] [Commented] (SOLR-4268) Admin UI - button to unload transient core without removing from solr.xml
[ https://issues.apache.org/jira/browse/SOLR-4268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15390027#comment-15390027 ] Upayavira commented on SOLR-4268: - [~erickerickson] I think you'll find that's already the case - the core admin tab doesn't appear if in cloud mode (on the new UI at least). Strictly, this ticket is about the core admin *API* not the UI specifically, as the UI doesn't have the ability to do anything that you are talking about. I'd suggest changing this to an API ticket rather than a UI one. > Admin UI - button to unload transient core without removing from solr.xml > - > > Key: SOLR-4268 > URL: https://issues.apache.org/jira/browse/SOLR-4268 > Project: Solr > Issue Type: Improvement > Components: web gui >Affects Versions: 4.0 >Reporter: Shawn Heisey > Fix For: 4.9, 6.0 > > > The core "unload" button in the UI currently will completely remove a core > from solr.xml. With the implentation of transient cores, there should be a > way to ask Solr to unload a core without removing it entirely. > This leads into a discussion about terminology. UNLOAD isn't a good > single-word description for what it does. A case could be made for having > REMOVE and DELETE actions for CoreAdmin, with confirmation prompts if you > click on those buttons in the UI. DELETE could simply be an option on REMOVE > - which I think you can actually currently do with UNLOAD. > Another idea, not sure if it needs its own issue or is part of this one: If a > core is mentioned in solr.xml but not actually loaded, it would be very cool > if it were listed, but with a different background color to indicate the > non-loaded state. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-7381) Add new RangeField
[ https://issues.apache.org/jira/browse/LUCENE-7381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15389765#comment-15389765 ] ASF subversion and git services commented on LUCENE-7381: - Commit 7f1db8a047818da337b27fe9dce0824cb5a02b96 in lucene-solr's branch refs/heads/master from [~nknize] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=7f1db8a ] LUCENE-7381: Fix equals relation in RangeFieldQuery. Fix relation logic in BaseRangeFieldQueryTestCase. > Add new RangeField > -- > > Key: LUCENE-7381 > URL: https://issues.apache.org/jira/browse/LUCENE-7381 > Project: Lucene - Core > Issue Type: New Feature >Reporter: Nicholas Knize > Attachments: LUCENE-7381.patch, LUCENE-7381.patch, LUCENE-7381.patch, > LUCENE-7381.patch, LUCENE-7381.patch, LUCENE-7381.patch > > > I've been tinkering with a new Point-based {{RangeField}} for indexing > numeric ranges that could be useful for a number of applications. > For example, a single dimension represents a span along a single axis such as > indexing calendar entries start and end time, 2d range could represent > bounding boxes for geometric applications (e.g., supporting Point based geo > shapes), 3d ranges bounding cubes for 3d geometric applications (collision > detection, 3d geospatial), and 4d ranges for space time applications. I'm > sure there's applicability for 5d+ ranges but a first incarnation should > likely limit for performance. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Resolved] (SOLR-9275) make XML Query Parser support extensible-via-configuration
[ https://issues.apache.org/jira/browse/SOLR-9275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Christine Poerschke resolved SOLR-9275. --- Resolution: Fixed Fix Version/s: master (7.0) 6.2 5.6 > make XML Query Parser support extensible-via-configuration > -- > > Key: SOLR-9275 > URL: https://issues.apache.org/jira/browse/SOLR-9275 > Project: Solr > Issue Type: New Feature > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Christine Poerschke >Assignee: Christine Poerschke >Priority: Minor > Fix For: 5.6, 6.2, master (7.0) > > Attachments: SOLR-9275.patch > > > SOLR-839 added XML QueryParser support (deftype=xmlparser) and this ticket > here proposes to make that support extensible-via-configuration. > Objective: > * To support use of custom query builders. > * To support use of custom query builders _without_ a corresponding custom > XmlQParser plugin class. > Illustration: > * solrconfig.xml snippet to configure use of the custom builders > {code} > > org.apache.solr.search.HelloQueryBuilder > org.apache.solr.search.GoodbyeQueryBuilder > > {code} > * HelloQueryBuilder and GoodbyeQueryBuilder both extend the new abstract > SolrQueryBuilder class. > {code} > + public abstract class SolrQueryBuilder implements QueryBuilder { > + protected final SolrQueryRequest req; > + protected final QueryBuilder queryFactory; > + public SolrQueryBuilder(String defaultField, Analyzer analyzer, > + SolrQueryRequest req, QueryBuilder queryFactory) { > + this.req = req; > + this.queryFactory = queryFactory; > + } > + } > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-NightlyTests-master - Build # 1078 - Still Failing
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1078/ 16 tests failed. FAILED: org.apache.lucene.index.TestIndexSorting.testRandom3 Error Message: Test abandoned because suite timeout was reached. Stack Trace: java.lang.Exception: Test abandoned because suite timeout was reached. at __randomizedtesting.SeedInfo.seed([A567C673CB971C20]:0) FAILED: junit.framework.TestSuite.org.apache.lucene.index.TestIndexSorting Error Message: Suite timeout exceeded (>= 720 msec). Stack Trace: java.lang.Exception: Suite timeout exceeded (>= 720 msec). at __randomizedtesting.SeedInfo.seed([A567C673CB971C20]:0) FAILED: org.apache.lucene.search.TestDoubleRangeFieldQueries.testMultiValued Error Message: wrong hit (first of possibly more): FAIL: id=210 should not match but did queryBox=Box(Infinity TO Infinity) boxes=Box(Infinity TO Infinity), Box(-44.18428790713395 TO 41.01413103632787) queryType=CONTAINS deleted?=false Stack Trace: java.lang.AssertionError: wrong hit (first of possibly more): FAIL: id=210 should not match but did queryBox=Box(Infinity TO Infinity) boxes=Box(Infinity TO Infinity), Box(-44.18428790713395 TO 41.01413103632787) queryType=CONTAINS deleted?=false at __randomizedtesting.SeedInfo.seed([97F4EF9641B7D12B:43D48BA48F759163]:0) at org.junit.Assert.fail(Assert.java:93) at org.apache.lucene.search.BaseRangeFieldQueryTestCase.verify(BaseRangeFieldQueryTestCase.java:278) at org.apache.lucene.search.BaseRangeFieldQueryTestCase.doTestRandom(BaseRangeFieldQueryTestCase.java:154) at org.apache.lucene.search.BaseRangeFieldQueryTestCase.testMultiValued(BaseRangeFieldQueryTestCase.java:73) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at
[jira] [Commented] (SOLR-9076) Update to Hadoop 2.7.2
[ https://issues.apache.org/jira/browse/SOLR-9076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15389994#comment-15389994 ] ASF subversion and git services commented on SOLR-9076: --- Commit a6655a9d39cbfd0f8c85eceee00eab1f64d24023 in lucene-solr's branch refs/heads/branch_6x from [~steve_rowe] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=a6655a9 ] SOLR-9076: disable broken nightly tests MorphlineBasicMiniMRTest and MorphlineGoLiveMiniMRTest via @AwaitsFix > Update to Hadoop 2.7.2 > -- > > Key: SOLR-9076 > URL: https://issues.apache.org/jira/browse/SOLR-9076 > Project: Solr > Issue Type: Improvement >Reporter: Mark Miller >Assignee: Mark Miller > Fix For: 6.2, master (7.0) > > Attachments: SOLR-9076-Fix-dependencies.patch, SOLR-9076-Hack.patch, > SOLR-9076-fixnetty.patch, SOLR-9076.patch, SOLR-9076.patch, SOLR-9076.patch, > SOLR-9076.patch, SOLR-9076.patch, SOLR-9076.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-9076) Update to Hadoop 2.7.2
[ https://issues.apache.org/jira/browse/SOLR-9076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15389995#comment-15389995 ] ASF subversion and git services commented on SOLR-9076: --- Commit 85a585c51698edd823769a159856524407cf6456 in lucene-solr's branch refs/heads/master from [~steve_rowe] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=85a585c ] SOLR-9076: disable broken nightly tests MorphlineBasicMiniMRTest and MorphlineGoLiveMiniMRTest via @AwaitsFix > Update to Hadoop 2.7.2 > -- > > Key: SOLR-9076 > URL: https://issues.apache.org/jira/browse/SOLR-9076 > Project: Solr > Issue Type: Improvement >Reporter: Mark Miller >Assignee: Mark Miller > Fix For: 6.2, master (7.0) > > Attachments: SOLR-9076-Fix-dependencies.patch, SOLR-9076-Hack.patch, > SOLR-9076-fixnetty.patch, SOLR-9076.patch, SOLR-9076.patch, SOLR-9076.patch, > SOLR-9076.patch, SOLR-9076.patch, SOLR-9076.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Closed] (SOLR-5033) Started Time is Incorrect
[ https://issues.apache.org/jira/browse/SOLR-5033?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Erick Erickson closed SOLR-5033. Resolution: Won't Fix > Started Time is Incorrect > - > > Key: SOLR-5033 > URL: https://issues.apache.org/jira/browse/SOLR-5033 > Project: Solr > Issue Type: Bug > Components: web gui >Affects Versions: 4.2 >Reporter: Mohammed ATMANE >Priority: Minor > > In dataimport page, I have : > Indexing since 8m 30s > Requests: 0 (0/s), Fetched: 0 (0/s), Skipped: 0, Processed: 0 (0/s) > Started: 6 minutes ago > Started Time must be greater than Indexing Time. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-7390) Let BKDWriter use temp heap for sorting points in proportion to IndexWriter's indexing buffer
[ https://issues.apache.org/jira/browse/LUCENE-7390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15389815#comment-15389815 ] Michael McCandless commented on LUCENE-7390: bq. I have a little concern about this being fairly sizeable amount of ram Yeah I agree... But, with this change, we allow each flushing segment to use up to 1/8th of IW's buffer, or 16 MB, whichever is larger, in temp space. Remember that this is transient usage: after that sort and the points are written, it's freed. It's not unlike how merging uses temp space to map around deleted doc IDs, or in-flight flushing segments tie up temp space until they finish writing. I think IW has a right to use temp space beyond the "long term" indexing buffer ... I'll try to improve IWC's javadocs here, explaining that this is not a hard limit. bq. It is a little annoying that performance is so sensitive to this change, we should look into that more somehow. Maybe we can improve it so it does not need so much RAM. I already made quite a few optimizations here, but I agree we could do more, e.g. don't always do a secret {{forceMerge}} in {{OfflineSorter}} (LUCENE-7141), but that got sort of complicated when I last tried... I think the discontinuity, moving from a single in-heap sort, to "serialize to disk", "read 2 partitions and sort those in heap", "write those partitions to disk", "do a final merge sort of those 2 partitions to another file", is the big hit, and I agree it would be great to find a way to reduce that cost. 
> Let BKDWriter use temp heap for sorting points in proportion to IndexWriter's > indexing buffer > - > > Key: LUCENE-7390 > URL: https://issues.apache.org/jira/browse/LUCENE-7390 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Michael McCandless > Fix For: master (7.0), 6.2 > > Attachments: LUCENE-7390.patch > > > With Lucene's default codec, when writing dimensional points, we only give > {{BKDWriter}} 16 MB heap to use for sorting, regardless of how large IW's > indexing buffer is. A custom codec can change this but that's a little steep. > I've been testing indexing performance on a points-heavy dataset, 1.2 billion > taxi rides from http://www.nyc.gov/html/tlc/html/about/trip_record_data.shtml > , indexing with a 1 GB IW buffer, and the small 16 MB heap limit causes clear > performance problems because flushing the large segments forces {{BKDwriter}} > to switch to offline sorting which causes the DWPTs take too long to flush. > They then fall behind, and Lucene does a hard stall on incoming indexing > threads until they catch up. > [~rcmuir] had a simple idea to let IW pass the allowed temp heap usage to > {{PointsWriter.writeField}}. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
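The sizing rule described in the comment above — each flushing segment may use 1/8th of IW's indexing buffer or 16 MB, whichever is larger, as transient sort heap — is simple arithmetic. A sketch (method and class names hypothetical, not the patch's actual API):

```java
public class BkdTempHeapBudget {
    static final long SIXTEEN_MB = 16L << 20;

    // Per-flushing-segment temp heap: 1/8th of IndexWriter's RAM buffer,
    // floored at the old fixed 16 MB limit.
    static long maxTempHeapBytes(long iwRamBufferBytes) {
        return Math.max(SIXTEEN_MB, iwRamBufferBytes / 8);
    }

    public static void main(String[] args) {
        System.out.println(maxTempHeapBytes(1024L << 20)); // 1 GB buffer -> 128 MB
        System.out.println(maxTempHeapBytes(64L << 20));   // 64 MB buffer -> 16 MB floor
    }
}
```

With the 1 GB buffer used in the taxi-rides benchmark, this raises the sort budget from 16 MB to 128 MB per flushing segment, which is what keeps BKDWriter on the in-heap sort path instead of spilling to offline sorting.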
[jira] [Created] (SOLR-9330) Race condition between core reload and statistics request
Andrey Kudryavtsev created SOLR-9330: Summary: Race condition between core reload and statistics request Key: SOLR-9330 URL: https://issues.apache.org/jira/browse/SOLR-9330 Project: Solr Issue Type: Bug Security Level: Public (Default Security Level. Issues are Public) Affects Versions: 5.5 Reporter: Andrey Kudryavtsev It happened that we executed these two requests consecutively in Solr 5.5: * Core reload: /admin/cores?action=RELOAD=_coreName_ * Check core statistics: /_coreName_/admin/mbeans?stats=true And sometimes the second request ends with this error: {code} ERROR org.apache.solr.servlet.HttpSolrCall - null:org.apache.lucene.store.AlreadyClosedException: this IndexReader is closed at org.apache.lucene.index.IndexReader.ensureOpen(IndexReader.java:274) at org.apache.lucene.index.StandardDirectoryReader.getVersion(StandardDirectoryReader.java:331) at org.apache.lucene.index.FilterDirectoryReader.getVersion(FilterDirectoryReader.java:119) at org.apache.lucene.index.FilterDirectoryReader.getVersion(FilterDirectoryReader.java:119) at org.apache.solr.search.SolrIndexSearcher.getStatistics(SolrIndexSearcher.java:2404) at org.apache.solr.handler.admin.SolrInfoMBeanHandler.addMBean(SolrInfoMBeanHandler.java:164) at org.apache.solr.handler.admin.SolrInfoMBeanHandler.getMBeanInfo(SolrInfoMBeanHandler.java:134) at org.apache.solr.handler.admin.SolrInfoMBeanHandler.handleRequestBody(SolrInfoMBeanHandler.java:65) at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:155) at org.apache.solr.core.SolrCore.execute(SolrCore.java:2082) at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:670) at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:458) at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:225) at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:183) {code} If my understanding of SolrCore internals is correct, it happens because of the asynchronous nature of the reload 
request: * New searcher is "registered" in separate thread * Old searcher is closed in same separate thread and only after new one is registered * When old searcher is closing, it removes itself from map with MBeans * If statistic requests happens before old searcher is completely removed from everywhere - exception can happen. What do you think if we will introduce new parameter for reload request which make it fully synchronized? Basically it will force it to call {code} SolrCore#getSearcher(boolean forceNew, boolean returnSearcher, final Future[] waitSearcher, boolean updateHandlerReopens) {code} with waitSearcher!= null -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
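The fully-synchronized reload idea — pass a non-null waitSearcher Future and block on it before returning — can be sketched with plain java.util.concurrent types. This is a hypothetical illustration of the proposal, not actual SolrCore code; CoreSim and its methods are made-up stand-ins:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Hypothetical sketch: a reload that only returns once the new searcher is
// registered, so a follow-up statistics request cannot observe the window in
// which the old searcher is being torn down.
class CoreSim {
    private volatile String activeSearcher = "searcher-1";
    private final ExecutorService pool = Executors.newSingleThreadExecutor();

    // Reload is asynchronous by default: registration happens on another thread.
    Future<String> reload(boolean waitForSearcher) {
        Future<String> waitSearcher = pool.submit(() -> {
            Thread.sleep(100);                // simulate searcher warm-up
            activeSearcher = "searcher-2";    // register the new searcher
            return activeSearcher;            // old searcher is closed after this
        });
        if (waitForSearcher) {
            // The proposed "waitSearcher != null" mode: block until registration.
            try {
                waitSearcher.get();
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
        }
        return waitSearcher;
    }

    String stats() {
        return activeSearcher;                // stands in for /admin/mbeans?stats=true
    }

    void shutdown() {
        pool.shutdown();
    }
}
```

With waitForSearcher=true, a statistics request issued immediately after reload() returns always sees the fully registered searcher, which is the behaviour the new reload parameter would guarantee.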
[jira] [Commented] (SOLR-9329) [SOLR][5.5.1] ReplicateAfter optimize is not working
[ https://issues.apache.org/jira/browse/SOLR-9329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15389864#comment-15389864 ] Armando Orlando commented on SOLR-9329: --- Steps to reproduce:
# Solr starts with an optimized index
# Indexing job starts
# Solr changes the index version after each commit
# Master (Replicable) does not show anything (checking on the Solr console)
# Calling command=indexversion gets the last committed index version, not the last optimized one

So it is the same as I was experiencing with 5.5.1. Basically, after a restart Solr looks like it is working as if I had specified replicateAfter=commit, not optimize as I really specified. Thanks, Armando.

> [SOLR][5.5.1] ReplicateAfter optimize is not working
> ----------------------------------------------------
>
> Key: SOLR-9329
> URL: https://issues.apache.org/jira/browse/SOLR-9329
> Project: Solr
> Issue Type: Bug
> Security Level: Public(Default Security Level. Issues are Public)
> Affects Versions: 5.5.1
> Reporter: Armando Orlando
>
> We just upgraded Solr from 3.6 to 5.5.1 but the replication does not seem to work as expected. We would like to replicate the index on slaves only after optimize, but what I noticed is that if I restarted the Solr master it lost the info related to the last replicable index, and calling /replication?command=indexversion got the last committed index, not the last optimized one.
> If I leave it running, after the first optimize command happens it works as expected and command=indexversion gives me the last optimized index.
> We're running it as a docker container.
> This is the requestHandler section we're using in both master and slaves:
> {code}
> <requestHandler name="/replication" class="solr.ReplicationHandler">
>   <lst name="master">
>     <str name="enable">${solr.master.enable:false}</str>
>     <str name="replicateAfter">optimize</str>
>     <str name="backupAfter">optimize</str>
>     <str name="numberToKeep">${solr.numberOfVersionToKeep:3}</str>
>   </lst>
>   <lst name="slave">
>     <str name="enable">${solr.slave.enable:false}</str>
>     <str name="masterUrl">${solr.master.url:}/replication</str>
>     <str name="pollInterval">${solr.replication.pollInterval:00:00:30}</str>
>   </lst>
> </requestHandler>
> {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Comment Edited] (SOLR-9331) Can we remove ReRankQuery's length constructor argument?
[ https://issues.apache.org/jira/browse/SOLR-9331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15389932#comment-15389932 ] Joel Bernstein edited comment on SOLR-9331 at 7/22/16 6:01 PM: --- It's been a while since I looked at this code. I'm wondering if I originally implemented it like this because of issues with the QueryResultCache. But I don't remember exactly the reason for having a separate length variable. was (Author: joel.bernstein): It's been a while since I looked at this code. I'm wondering if I originally implemented like this because of issues with the QueryResultCache. But I don't remember exactly the reason for having a separate length variable. > Can we remove ReRankQuery's length constructor argument? > > > Key: SOLR-9331 > URL: https://issues.apache.org/jira/browse/SOLR-9331 > Project: Solr > Issue Type: Wish > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Christine Poerschke >Priority: Minor > Attachments: SOLR-9331.patch > > > Can we remove ReRankQuery's length constructor argument? It is a > ReRankQParserPlugin private class. > proposed patch summary: > * change ReRankQuery.getTopDocsCollector to use its len argument (instead of > the length member) > * remove ReRankQuery's length member and constructor argument > * remove ReRankQParser.parse's use of the rows and start parameters > motivation: towards ReRankQParserPlugin and LTRQParserPlugin (SOLR-8542) > sharing (more) code -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-9331) Can we remove ReRankQuery's length constructor argument?
[ https://issues.apache.org/jira/browse/SOLR-9331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15389932#comment-15389932 ] Joel Bernstein commented on SOLR-9331: -- It's been a while since I looked at this code. I'm wondering if I originally implemented like this because of issues with the QueryResultCache. But I don't remember exactly the reason for having a separate length variable. > Can we remove ReRankQuery's length constructor argument? > > > Key: SOLR-9331 > URL: https://issues.apache.org/jira/browse/SOLR-9331 > Project: Solr > Issue Type: Wish > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Christine Poerschke >Priority: Minor > Attachments: SOLR-9331.patch > > > Can we remove ReRankQuery's length constructor argument? It is a > ReRankQParserPlugin private class. > proposed patch summary: > * change ReRankQuery.getTopDocsCollector to use its len argument (instead of > the length member) > * remove ReRankQuery's length member and constructor argument > * remove ReRankQParser.parse's use of the rows and start parameters > motivation: towards ReRankQParserPlugin and LTRQParserPlugin (SOLR-8542) > sharing (more) code -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Comment Edited] (SOLR-9331) Can we remove ReRankQuery's length constructor argument?
[ https://issues.apache.org/jira/browse/SOLR-9331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15389953#comment-15389953 ] Joel Bernstein edited comment on SOLR-9331 at 7/22/16 6:08 PM: --- Also let's take a close look at all the adjustments done to the length in SolrIndexSearcher.getDocListNC(QueryResult qr, QueryCommand cmd). was (Author: joel.bernstein): Also let's take close at all the adjustments done to the length in getDocListNC(QueryResult qr, QueryCommand cmd).

> Can we remove ReRankQuery's length constructor argument?
> --------------------------------------------------------
>
> Key: SOLR-9331
> URL: https://issues.apache.org/jira/browse/SOLR-9331
> Project: Solr
> Issue Type: Wish
> Security Level: Public(Default Security Level. Issues are Public)
> Reporter: Christine Poerschke
> Priority: Minor
> Attachments: SOLR-9331.patch
>
> Can we remove ReRankQuery's length constructor argument? It is a ReRankQParserPlugin private class.
> proposed patch summary:
> * change ReRankQuery.getTopDocsCollector to use its len argument (instead of the length member)
> * remove ReRankQuery's length member and constructor argument
> * remove ReRankQParser.parse's use of the rows and start parameters
> motivation: towards ReRankQParserPlugin and LTRQParserPlugin (SOLR-8542) sharing (more) code -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-9323) Expose ClusterSate.getCollectionStates which returns unverified list of collection names
[ https://issues.apache.org/jira/browse/SOLR-9323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15389762#comment-15389762 ] ASF subversion and git services commented on SOLR-9323: --- Commit 7a4f800388298f428d926c8bab36fae6a745c040 in lucene-solr's branch refs/heads/branch_6x from [~cpoerschke] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=7a4f800 ] SOLR-9323: remove unused import (SQLHandler) > Expose ClusterSate.getCollectionStates which returns unverified list of > collection names > - > > Key: SOLR-9323 > URL: https://issues.apache.org/jira/browse/SOLR-9323 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Noble Paul >Assignee: Noble Paul >Priority: Trivial > Fix For: 6.2 > > Attachments: SOLR-9323.patch > > > Currently the {{getCollections()}} method is deprecated and everyone is > forced to use {{getCollectionsMap()}} which could be extremely expensive > depending on the no:of collections. The {{getCollectionsMap()}} method is > very expensive and should never be invoked on each request -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Closed] (SOLR-2086) analysis.jsp should honor maxFieldLength setting
[ https://issues.apache.org/jira/browse/SOLR-2086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Erick Erickson closed SOLR-2086. Resolution: Won't Fix > analysis.jsp should honor maxFieldLength setting > > > Key: SOLR-2086 > URL: https://issues.apache.org/jira/browse/SOLR-2086 > Project: Solr > Issue Type: Improvement > Components: web gui >Affects Versions: 1.4.1 >Reporter: Eric Pugh > Attachments: SOLR-2086.patch > > > The analysis.jsp ignores the maxFieldLength setting when analyzing. I passed > in a block of text that was 102524 tokens, and it analyzed all of them, even > though maxFieldLength was 1. The difference in results is pretty drastic > between analysis.jsp and adding a document directly. > Also, the GUI pretty much melts down with lots and lots of tokens as well, so > maxFieldLength helps here as well. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Comment Edited] (SOLR-9330) Race condition between core reload and statistics request
[ https://issues.apache.org/jira/browse/SOLR-9330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15389833#comment-15389833 ] Joel Bernstein edited comment on SOLR-9330 at 7/22/16 5:04 PM: --- We are currently looking at exactly this issue with Alfresco's Solr integration. So we'll be happy to help find the right fix for this. was (Author: joel.bernstein): We are currently looking at exactly this issue at Alfresco. So we'll be happy to help find the right fix for this.

> Race condition between core reload and statistics request
> ---------------------------------------------------------
>
> Key: SOLR-9330
> URL: https://issues.apache.org/jira/browse/SOLR-9330
> Project: Solr
> Issue Type: Bug
> Security Level: Public(Default Security Level. Issues are Public)
> Affects Versions: 5.5
> Reporter: Andrey Kudryavtsev
>
> It happens that we execute these two requests consecutively in Solr 5.5:
> * Core reload: /admin/cores?action=RELOAD&core=_coreName_
> * Check core statistics: /_coreName_/admin/mbeans?stats=true
> And sometimes the second request ends with this error:
> {code}
> ERROR org.apache.solr.servlet.HttpSolrCall - null:org.apache.lucene.store.AlreadyClosedException: this IndexReader is closed
> 	at org.apache.lucene.index.IndexReader.ensureOpen(IndexReader.java:274)
> 	at org.apache.lucene.index.StandardDirectoryReader.getVersion(StandardDirectoryReader.java:331)
> 	at org.apache.lucene.index.FilterDirectoryReader.getVersion(FilterDirectoryReader.java:119)
> 	at org.apache.lucene.index.FilterDirectoryReader.getVersion(FilterDirectoryReader.java:119)
> 	at org.apache.solr.search.SolrIndexSearcher.getStatistics(SolrIndexSearcher.java:2404)
> 	at org.apache.solr.handler.admin.SolrInfoMBeanHandler.addMBean(SolrInfoMBeanHandler.java:164)
> 	at org.apache.solr.handler.admin.SolrInfoMBeanHandler.getMBeanInfo(SolrInfoMBeanHandler.java:134)
> 	at org.apache.solr.handler.admin.SolrInfoMBeanHandler.handleRequestBody(SolrInfoMBeanHandler.java:65)
> 	at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:155)
> 	at org.apache.solr.core.SolrCore.execute(SolrCore.java:2082)
> 	at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:670)
> 	at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:458)
> 	at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:225)
> 	at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:183)
> {code}
> If my understanding of SolrCore internals is correct, it happens because of the async nature of the reload request:
> * New searcher is "registered" in a separate thread
> * Old searcher is closed in the same separate thread, and only after the new one is registered
> * When the old searcher is closing, it removes itself from the map with MBeans
> * If a statistics request happens before the old searcher is completely removed from everywhere, an exception can happen.
> What do you think about introducing a new parameter for the reload request which makes it fully synchronized? Basically it would force it to call
> {code}
> SolrCore#getSearcher(boolean forceNew, boolean returnSearcher, final Future[] waitSearcher, boolean updateHandlerReopens)
> {code}
> with waitSearcher != null -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-9330) Race condition between core reload and statistics request
[ https://issues.apache.org/jira/browse/SOLR-9330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15389833#comment-15389833 ] Joel Bernstein commented on SOLR-9330: -- We are currently looking at exactly this issue at Alfresco. So we'll be happy to help find the right fix for this.

> Race condition between core reload and statistics request
> ---------------------------------------------------------
>
> Key: SOLR-9330
> URL: https://issues.apache.org/jira/browse/SOLR-9330
> Project: Solr
> Issue Type: Bug
> Security Level: Public(Default Security Level. Issues are Public)
> Affects Versions: 5.5
> Reporter: Andrey Kudryavtsev
>
> It happens that we execute these two requests consecutively in Solr 5.5:
> * Core reload: /admin/cores?action=RELOAD&core=_coreName_
> * Check core statistics: /_coreName_/admin/mbeans?stats=true
> And sometimes the second request ends with this error:
> {code}
> ERROR org.apache.solr.servlet.HttpSolrCall - null:org.apache.lucene.store.AlreadyClosedException: this IndexReader is closed
> 	at org.apache.lucene.index.IndexReader.ensureOpen(IndexReader.java:274)
> 	at org.apache.lucene.index.StandardDirectoryReader.getVersion(StandardDirectoryReader.java:331)
> 	at org.apache.lucene.index.FilterDirectoryReader.getVersion(FilterDirectoryReader.java:119)
> 	at org.apache.lucene.index.FilterDirectoryReader.getVersion(FilterDirectoryReader.java:119)
> 	at org.apache.solr.search.SolrIndexSearcher.getStatistics(SolrIndexSearcher.java:2404)
> 	at org.apache.solr.handler.admin.SolrInfoMBeanHandler.addMBean(SolrInfoMBeanHandler.java:164)
> 	at org.apache.solr.handler.admin.SolrInfoMBeanHandler.getMBeanInfo(SolrInfoMBeanHandler.java:134)
> 	at org.apache.solr.handler.admin.SolrInfoMBeanHandler.handleRequestBody(SolrInfoMBeanHandler.java:65)
> 	at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:155)
> 	at org.apache.solr.core.SolrCore.execute(SolrCore.java:2082)
> 	at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:670)
> 	at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:458)
> 	at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:225)
> 	at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:183)
> {code}
> If my understanding of SolrCore internals is correct, it happens because of the async nature of the reload request:
> * New searcher is "registered" in a separate thread
> * Old searcher is closed in the same separate thread, and only after the new one is registered
> * When the old searcher is closing, it removes itself from the map with MBeans
> * If a statistics request happens before the old searcher is completely removed from everywhere, an exception can happen.
> What do you think about introducing a new parameter for the reload request which makes it fully synchronized? Basically it would force it to call
> {code}
> SolrCore#getSearcher(boolean forceNew, boolean returnSearcher, final Future[] waitSearcher, boolean updateHandlerReopens)
> {code}
> with waitSearcher != null -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-6.x-Linux (64bit/jdk1.8.0_92) - Build # 1238 - Failure!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/1238/ Java: 64bit/jdk1.8.0_92 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC All tests passed Build Log: [...truncated 65505 lines...] -ecj-javadoc-lint-src: [mkdir] Created dir: /tmp/ecj753453510 [ecj-lint] Compiling 940 source files to /tmp/ecj753453510 [ecj-lint] invalid Class-Path header in manifest of jar file: /home/jenkins/.ivy2/cache/org.restlet.jee/org.restlet/jars/org.restlet-2.3.0.jar [ecj-lint] invalid Class-Path header in manifest of jar file: /home/jenkins/.ivy2/cache/org.restlet.jee/org.restlet.ext.servlet/jars/org.restlet.ext.servlet-2.3.0.jar [ecj-lint] -- [ecj-lint] 1. WARNING in /home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/core/src/java/org/apache/solr/cloud/Assign.java (at line 101) [ecj-lint] Collections.sort(shardIdNames, (o1, o2) -> { [ecj-lint]^^^ [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape analysis [ecj-lint] -- [ecj-lint] 2. WARNING in /home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/core/src/java/org/apache/solr/cloud/Assign.java (at line 101) [ecj-lint] Collections.sort(shardIdNames, (o1, o2) -> { [ecj-lint]^^^ [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape analysis [ecj-lint] -- [ecj-lint] 3. WARNING in /home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/core/src/java/org/apache/solr/cloud/Assign.java (at line 101) [ecj-lint] Collections.sort(shardIdNames, (o1, o2) -> { [ecj-lint]^^^ [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape analysis [ecj-lint] -- [ecj-lint] -- [ecj-lint] 4. WARNING in /home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/core/src/java/org/apache/solr/cloud/rule/ReplicaAssigner.java (at line 213) [ecj-lint] Collections.sort(sortedLiveNodes, (n1, n2) -> { [ecj-lint] ^^^ [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape analysis [ecj-lint] -- [ecj-lint] 5. 
WARNING in /home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/core/src/java/org/apache/solr/cloud/rule/ReplicaAssigner.java (at line 213) [ecj-lint] Collections.sort(sortedLiveNodes, (n1, n2) -> { [ecj-lint] ^^^ [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape analysis [ecj-lint] -- [ecj-lint] 6. WARNING in /home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/core/src/java/org/apache/solr/cloud/rule/ReplicaAssigner.java (at line 213) [ecj-lint] Collections.sort(sortedLiveNodes, (n1, n2) -> { [ecj-lint] ^^^ [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape analysis [ecj-lint] -- [ecj-lint] -- [ecj-lint] 7. WARNING in /home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/core/src/java/org/apache/solr/core/HdfsDirectoryFactory.java (at line 227) [ecj-lint] dir = new BlockDirectory(path, hdfsDir, cache, null, blockCacheReadEnabled, false, cacheMerges, cacheReadOnce); [ecj-lint] ^^ [ecj-lint] Resource leak: 'dir' is never closed [ecj-lint] -- [ecj-lint] -- [ecj-lint] 8. WARNING in /home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/core/src/java/org/apache/solr/handler/AnalysisRequestHandlerBase.java (at line 120) [ecj-lint] reader = cfiltfac.create(reader); [ecj-lint] [ecj-lint] Resource leak: 'reader' is not closed at this location [ecj-lint] -- [ecj-lint] 9. WARNING in /home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/core/src/java/org/apache/solr/handler/AnalysisRequestHandlerBase.java (at line 144) [ecj-lint] return namedList; [ecj-lint] ^ [ecj-lint] Resource leak: 'listBasedTokenStream' is not closed at this location [ecj-lint] -- [ecj-lint] -- [ecj-lint] 10. WARNING in /home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/core/src/java/org/apache/solr/handler/ReplicationHandler.java (at line 1220) [ecj-lint] DirectoryReader reader = s==null ? null : s.get().getIndexReader(); [ecj-lint] ^^ [ecj-lint] Resource leak: 'reader' is never closed [ecj-lint] -- [ecj-lint] -- [ecj-lint] 11. 
ERROR in /home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/core/src/java/org/apache/solr/handler/SQLHandler.java (at line 58) [ecj-lint] import org.apache.solr.common.cloud.DocCollection; [ecj-lint]^^ [ecj-lint] The import org.apache.solr.common.cloud.DocCollection is never used [ecj-lint] -- [ecj-lint] 12. WARNING in
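Most of the ecj-lint findings above are "Resource leak" warnings ('dir', 'reader', 'listBasedTokenStream' never closed). The standard remedy is try-with-resources; here is a generic, self-contained sketch of the pattern, not the flagged Solr classes:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

// Generic illustration of fixing an ecj-lint "Resource leak" warning:
// declaring the resource in a try-with-resources header guarantees it is
// closed on every exit path, including exceptions.
class LeakFix {
    static int readFirstByte(byte[] data) throws IOException {
        try (InputStream in = new ByteArrayInputStream(data)) {
            return in.read();   // 'in' is closed automatically when the try exits
        }
    }
}
```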
[jira] [Comment Edited] (SOLR-9329) [SOLR][5.5.1] ReplicateAfter optimize is not working
[ https://issues.apache.org/jira/browse/SOLR-9329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15389846#comment-15389846 ] Armando Orlando edited comment on SOLR-9329 at 7/22/16 5:14 PM: I'm currently testing solr 6.1 but it seems nothing changed there. Today or tomorrow I'll try version 5.5.2. Thanks, Armando. was (Author: arorlando): I'm currently testing solr 6.1 but it seems nothing changes there. Today or tomorrow I'll try version 5.5.2. Thanks, Armando.

> [SOLR][5.5.1] ReplicateAfter optimize is not working
> ----------------------------------------------------
>
> Key: SOLR-9329
> URL: https://issues.apache.org/jira/browse/SOLR-9329
> Project: Solr
> Issue Type: Bug
> Security Level: Public(Default Security Level. Issues are Public)
> Affects Versions: 5.5.1
> Reporter: Armando Orlando
>
> We just upgraded Solr from 3.6 to 5.5.1 but the replication does not seem to work as expected. We would like to replicate the index on slaves only after optimize, but what I noticed is that if I restarted the Solr master it lost the info related to the last replicable index, and calling /replication?command=indexversion got the last committed index, not the last optimized one.
> If I leave it running, after the first optimize command happens it works as expected and command=indexversion gives me the last optimized index.
> We're running it as a docker container.
> This is the requestHandler section we're using in both master and slaves:
> {code}
> <requestHandler name="/replication" class="solr.ReplicationHandler">
>   <lst name="master">
>     <str name="enable">${solr.master.enable:false}</str>
>     <str name="replicateAfter">optimize</str>
>     <str name="backupAfter">optimize</str>
>     <str name="numberToKeep">${solr.numberOfVersionToKeep:3}</str>
>   </lst>
>   <lst name="slave">
>     <str name="enable">${solr.slave.enable:false}</str>
>     <str name="masterUrl">${solr.master.url:}/replication</str>
>     <str name="pollInterval">${solr.replication.pollInterval:00:00:30}</str>
>   </lst>
> </requestHandler>
> {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-9329) [SOLR][5.5.1] ReplicateAfter optimize is not working
[ https://issues.apache.org/jira/browse/SOLR-9329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15389846#comment-15389846 ] Armando Orlando commented on SOLR-9329: --- I'm currently testing solr 6.1 but it seems nothing changes there. Today or tomorrow I'll try version 5.5.2. Thanks, Armando.

> [SOLR][5.5.1] ReplicateAfter optimize is not working
> ----------------------------------------------------
>
> Key: SOLR-9329
> URL: https://issues.apache.org/jira/browse/SOLR-9329
> Project: Solr
> Issue Type: Bug
> Security Level: Public(Default Security Level. Issues are Public)
> Affects Versions: 5.5.1
> Reporter: Armando Orlando
>
> We just upgraded Solr from 3.6 to 5.5.1 but the replication does not seem to work as expected. We would like to replicate the index on slaves only after optimize, but what I noticed is that if I restarted the Solr master it lost the info related to the last replicable index, and calling /replication?command=indexversion got the last committed index, not the last optimized one.
> If I leave it running, after the first optimize command happens it works as expected and command=indexversion gives me the last optimized index.
> We're running it as a docker container.
> This is the requestHandler section we're using in both master and slaves:
> {code}
> <requestHandler name="/replication" class="solr.ReplicationHandler">
>   <lst name="master">
>     <str name="enable">${solr.master.enable:false}</str>
>     <str name="replicateAfter">optimize</str>
>     <str name="backupAfter">optimize</str>
>     <str name="numberToKeep">${solr.numberOfVersionToKeep:3}</str>
>   </lst>
>   <lst name="slave">
>     <str name="enable">${solr.slave.enable:false}</str>
>     <str name="masterUrl">${solr.master.url:}/replication</str>
>     <str name="pollInterval">${solr.replication.pollInterval:00:00:30}</str>
>   </lst>
> </requestHandler>
> {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-9275) make XML Query Parser support extensible-via-configuration
[ https://issues.apache.org/jira/browse/SOLR-9275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Christine Poerschke updated SOLR-9275: -- Attachment: SOLR-9331.patch > make XML Query Parser support extensible-via-configuration > -- > > Key: SOLR-9275 > URL: https://issues.apache.org/jira/browse/SOLR-9275 > Project: Solr > Issue Type: New Feature > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Christine Poerschke >Assignee: Christine Poerschke >Priority: Minor > Fix For: 5.6, 6.2, master (7.0) > > Attachments: SOLR-9275.patch, SOLR-9331.patch > > > SOLR-839 added XML QueryParser support (deftype=xmlparser) and this ticket > here proposes to make that support extensible-via-configuration. > Objective: > * To support use of custom query builders. > * To support use of custom query builders _without_ a corresponding custom > XmlQParser plugin class. > Illustration: > * solrconfig.xml snippet to configure use of the custom builders > {code} > > org.apache.solr.search.HelloQueryBuilder > org.apache.solr.search.GoodbyeQueryBuilder > > {code} > * HelloQueryBuilder and GoodbyeQueryBuilder both extend the new abstract > SolrQueryBuilder class. > {code} > + public abstract class SolrQueryBuilder implements QueryBuilder { > + protected final SolrQueryRequest req; > + protected final QueryBuilder queryFactory; > + public SolrQueryBuilder(String defaultField, Analyzer analyzer, > + SolrQueryRequest req, QueryBuilder queryFactory) { > + this.req = req; > + this.queryFactory = queryFactory; > + } > + } > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Created] (SOLR-9331) Can we remove ReRankQuery's length constructor argument?
Christine Poerschke created SOLR-9331: - Summary: Can we remove ReRankQuery's length constructor argument? Key: SOLR-9331 URL: https://issues.apache.org/jira/browse/SOLR-9331 Project: Solr Issue Type: Wish Security Level: Public (Default Security Level. Issues are Public) Reporter: Christine Poerschke Priority: Minor Can we remove ReRankQuery's length constructor argument? It is a ReRankQParserPlugin private class. proposed patch summary: * change ReRankQuery.getTopDocsCollector to use its len argument (instead of the length member) * remove ReRankQuery's length member and constructor argument * remove ReRankQParser.parse's use of the rows and start parameters motivation: towards ReRankQParserPlugin and LTRQParserPlugin (SOLR-8542) sharing (more) code -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
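The first bullet of the proposed patch summary — have the method trust its len argument instead of a stored length member — can be sketched generically. Before, After, and topDocsWindow are made-up illustrative names, not the actual ReRankQuery code:

```java
// Illustrative sketch of the refactor direction (not actual Solr code).
// Before: the object duplicates the window size as state and ignores the
// argument, which can surprise callers (and complicates code sharing).
class Before {
    private final int length;
    Before(int length) { this.length = length; }
    int topDocsWindow(int len) { return length; }   // 'len' silently ignored
}

// After: the method uses its argument directly, so the field and the
// constructor argument can both be removed.
class After {
    int topDocsWindow(int len) { return len; }
}
```

The observable difference is exactly the one the ticket is about: in the "before" shape, passing a different len at call time has no effect.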
[JENKINS-EA] Lucene-Solr-master-Linux (32bit/jdk-9-ea+127) - Build # 17332 - Unstable!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/17332/ Java: 32bit/jdk-9-ea+127 -client -XX:+UseSerialGC 2 tests failed. FAILED: org.apache.solr.cloud.ForceLeaderTest.testReplicasInLIRNoLeader Error Message: No live SolrServers available to handle this request:[http://127.0.0.1:36444/forceleader_test_collection_shard1_replica1] Stack Trace: org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: No live SolrServers available to handle this request:[http://127.0.0.1:36444/forceleader_test_collection_shard1_replica1] at __randomizedtesting.SeedInfo.seed([213AB411F7F62423:C7AD80D1CE74DD42]:0) at org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:739) at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1151) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1040) at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:976) at org.apache.solr.cloud.AbstractFullDistribZkTestBase.sendDocsWithRetry(AbstractFullDistribZkTestBase.java:753) at org.apache.solr.cloud.AbstractFullDistribZkTestBase.sendDocsWithRetry(AbstractFullDistribZkTestBase.java:741) at org.apache.solr.cloud.ForceLeaderTest.sendDoc(ForceLeaderTest.java:424) at org.apache.solr.cloud.ForceLeaderTest.testReplicasInLIRNoLeader(ForceLeaderTest.java:131) at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(java.base@9-ea/Native Method) at jdk.internal.reflect.NativeMethodAccessorImpl.invoke(java.base@9-ea/NativeMethodAccessorImpl.java:62) at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(java.base@9-ea/DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(java.base@9-ea/Method.java:533) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871) at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at
[jira] [Commented] (LUCENE-7381) Add new RangeField
[ https://issues.apache.org/jira/browse/LUCENE-7381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15389773#comment-15389773 ] Nicholas Knize commented on LUCENE-7381: Thanks [~steve_rowe]! I saw a nightly failure for the same thing. I pushed a fix. > Add new RangeField > -- > > Key: LUCENE-7381 > URL: https://issues.apache.org/jira/browse/LUCENE-7381 > Project: Lucene - Core > Issue Type: New Feature >Reporter: Nicholas Knize > Attachments: LUCENE-7381.patch, LUCENE-7381.patch, LUCENE-7381.patch, > LUCENE-7381.patch, LUCENE-7381.patch, LUCENE-7381.patch > > > I've been tinkering with a new Point-based {{RangeField}} for indexing > numeric ranges that could be useful for a number of applications. > For example, a single dimension represents a span along a single axis such as > indexing calendar entries start and end time, 2d range could represent > bounding boxes for geometric applications (e.g., supporting Point based geo > shapes), 3d ranges bounding cubes for 3d geometric applications (collision > detection, 3d geospatial), and 4d ranges for space time applications. I'm > sure there's applicability for 5d+ ranges but a first incarnation should > likely limit for performance. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
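The single-dimension case described above (indexing calendar entries by start and end time) boils down to comparing a query interval against an indexed interval. A minimal sketch of the three relations such a range field would need to support — illustrative only, with hypothetical names, not the API from the attached patch:

```java
// Illustrative interval relations for a 1-D range field (e.g. calendar
// entries indexed by [start, end]). Class and method names are hypothetical.
final class Interval {
    final long min, max; // inclusive endpoints, min <= max

    Interval(long min, long max) {
        if (min > max) throw new IllegalArgumentException("min must be <= max");
        this.min = min;
        this.max = max;
    }

    /** True if this interval overlaps the query interval at all. */
    boolean intersects(Interval q) {
        return this.min <= q.max && q.min <= this.max;
    }

    /** True if this interval fully contains the query interval. */
    boolean contains(Interval q) {
        return this.min <= q.min && q.max <= this.max;
    }

    /** True if this interval lies fully within the query interval. */
    boolean within(Interval q) {
        return q.contains(this);
    }
}
```

For example, a meeting spanning [9, 11] intersects a query window [10, 12] but is not within it; the 2-D and 3-D cases mentioned (bounding boxes, bounding cubes) apply the same comparisons per dimension.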
[jira] [Commented] (LUCENE-7391) MemoryIndexReader.fields() performance regression
[ https://issues.apache.org/jira/browse/LUCENE-7391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15390012#comment-15390012 ] Alan Woodward commented on LUCENE-7391: --- bq. The concurrency aspect of the MemoryIndex is in my opinion a bit of a mess +1 - freeze() was a hack, and I've been meaning to open an issue to make things properly immutable for ages. > MemoryIndexReader.fields() performance regression > - > > Key: LUCENE-7391 > URL: https://issues.apache.org/jira/browse/LUCENE-7391 > Project: Lucene - Core > Issue Type: Bug >Reporter: Steve Mason >Assignee: David Smiley > Attachments: LUCENE-7391-test.patch, LUCENE-7391.patch > > > While upgrading our codebase from Lucene 4 to Lucene 6 we found a significant > performance regression - a 5x slowdown > On profiling the code, the method MemoryIndexReader.fields() shows up as one > of the hottest methods > Looking at the method, it just creates a copy of the inner {{fields}} Map > before passing it to {{MemoryFields}}. It does this so that it can filter out > fields with {{numTokens <= 0}}. > The simplest "fix" would be to just remove the copying of the map completely, > and pass {{fields}} directly to {{MemoryFields}}. It's simple and removes > any slowdown caused by this method. It does potentially change behaviour > though, but none of the unit tests seem to test that behaviour so I wonder > whether it's necessary (I looked at the original ticket LUCENE-7091 that > introduced this code, I can't find much in way of an explanation). I'm going > to attach a patch to this effect anyway and we can take things from there -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
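The regression pattern described — eagerly copying and filtering the fields map on every call, versus passing it through and filtering lazily at read time — can be illustrated in isolation. This is a simplified model of the trade-off, not the actual MemoryIndex code:

```java
import java.util.LinkedHashMap;
import java.util.Map;

final class FieldsView {
    // Models MemoryIndex's inner fields map: field name -> token count.
    private final Map<String, Integer> fields;

    FieldsView(Map<String, Integer> fields) {
        this.fields = fields;
    }

    /** Eager variant: builds a filtered copy on every call (the hot path). */
    Map<String, Integer> visibleFieldsEager() {
        Map<String, Integer> copy = new LinkedHashMap<>();
        for (Map.Entry<String, Integer> e : fields.entrySet()) {
            if (e.getValue() > 0) copy.put(e.getKey(), e.getValue());
        }
        return copy;
    }

    /** Lazy variant: no copy; callers check each field as they touch it. */
    boolean isVisible(String field) {
        Integer numTokens = fields.get(field);
        return numTokens != null && numTokens > 0;
    }
}
```

The behavioral question the patch raises is visible here too: with the lazy variant, zero-token fields remain present in the underlying map, so anyone iterating it directly would see them unless they apply the same check.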
[jira] [Commented] (LUCENE-7391) MemoryIndexReader.fields() performance regression
[ https://issues.apache.org/jira/browse/LUCENE-7391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15389767#comment-15389767 ] Steve Mason commented on LUCENE-7391: - Yeah it's still a work-in-progress, I'm still working on the full patch. I'm actually on annual leave next week so the final patch might be a little while (though I'll add it if I get time) I'll follow those instructions for generating the patch - it's just the output of {{git format-patch}} right now - thanks for the advice > MemoryIndexReader.fields() performance regression > - > > Key: LUCENE-7391 > URL: https://issues.apache.org/jira/browse/LUCENE-7391 > Project: Lucene - Core > Issue Type: Bug >Reporter: Steve Mason >Assignee: David Smiley > Attachments: LUCENE-7391-test.patch, LUCENE-7391.patch > > > While upgrading our codebase from Lucene 4 to Lucene 6 we found a significant > performance regression - a 5x slowdown > On profiling the code, the method MemoryIndexReader.fields() shows up as one > of the hottest methods > Looking at the method, it just creates a copy of the inner {{fields}} Map > before passing it to {{MemoryFields}}. It does this so that it can filter out > fields with {{numTokens <= 0}}. > The simplest "fix" would be to just remove the copying of the map completely, > and pass {{fields}} directly to {{MemoryFields}}. It's simple and removes > any slowdown caused by this method. It does potentially change behaviour > though, but none of the unit tests seem to test that behaviour so I wonder > whether it's necessary (I looked at the original ticket LUCENE-7091 that > introduced this code, I can't find much in way of an explanation). I'm going > to attach a patch to this effect anyway and we can take things from there -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-9329) [SOLR][5.5.1] ReplicateAfter optimize is not working
[ https://issues.apache.org/jira/browse/SOLR-9329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15389771#comment-15389771 ] Erick Erickson commented on SOLR-9329: -- Did you try 5.5.2? See: https://issues.apache.org/jira/browse/SOLR-9036
> [SOLR][5.5.1] ReplicateAfter optimize is not working
>
> Key: SOLR-9329
> URL: https://issues.apache.org/jira/browse/SOLR-9329
> Project: Solr
> Issue Type: Bug
> Security Level: Public (Default Security Level. Issues are Public)
> Affects Versions: 5.5.1
> Reporter: Armando Orlando
>
> We just upgraded Solr from 3.6 to 5.5.1, but replication does not seem to work as expected. We would like to replicate the index to slaves only after an optimize, but I noticed that if I restart the Solr master it loses the info about the last replicable index, and calling /replication?command=indexversion returns the last committed index rather than the last optimized one.
> If I leave it running, after the first optimize command it works as expected and command=indexversion gives me the last optimized index.
> We're running it as a Docker container.
> This is the requestHandler section we're using on both master and slaves:
> {code}
> <requestHandler name="/replication" class="solr.ReplicationHandler">
>   <lst name="master">
>     <str name="enable">${solr.master.enable:false}</str>
>     <str name="replicateAfter">optimize</str>
>     <str name="backupAfter">optimize</str>
>     <str name="numberOfVersionsToKeep">${solr.numberOfVersionToKeep:3}</str>
>   </lst>
>   <lst name="slave">
>     <str name="enable">${solr.slave.enable:false}</str>
>     <str name="masterUrl">${solr.master.url:}/replication</str>
>     <str name="pollInterval">${solr.replication.pollInterval:00:00:30}</str>
>   </lst>
> </requestHandler>
> {code}
[jira] [Commented] (SOLR-4460) When you have multiple collections, the cloud radial graph view seems to place them right on top of each other.
[ https://issues.apache.org/jira/browse/SOLR-4460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15389783#comment-15389783 ] Erick Erickson commented on SOLR-4460: -- As far as keeping the radial graph goes, I was testing a situation where there were 1,600 replicas scattered about, and it turns out that the radial view is more useful than I thought in that situation: it shows in a single picture whether any nodes are not green. In the "graph" view, you have to scroll a really long time to see all of this. That said, the new UI has a drop-down in the graph view to show up/down/whatever nodes. That view still shows whole collections, though, which can make it unwieldy. Perhaps a related idea would be to show only the _replicas_ in the graph view that meet certain criteria? And perhaps something similar in the radial view (i.e. similar options)? Kind of a tangent to the base issue of making the radial view more readable, I admit.
> When you have multiple collections, the cloud radial graph view seems to place them right on top of each other.
>
> Key: SOLR-4460
> URL: https://issues.apache.org/jira/browse/SOLR-4460
> Project: Solr
> Issue Type: Bug
> Components: web gui
> Reporter: Mark Miller
> Priority: Minor
> Attachments: cloud-radial-onecollection.png, cloud.jpg
[jira] [Updated] (SOLR-9331) Can we remove ReRankQuery's length constructor argument?
[ https://issues.apache.org/jira/browse/SOLR-9331?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Christine Poerschke updated SOLR-9331: -- Attachment: SOLR-9331.patch > Can we remove ReRankQuery's length constructor argument? > > > Key: SOLR-9331 > URL: https://issues.apache.org/jira/browse/SOLR-9331 > Project: Solr > Issue Type: Wish > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Christine Poerschke >Priority: Minor > Attachments: SOLR-9331.patch > > > Can we remove ReRankQuery's length constructor argument? It is a > ReRankQParserPlugin private class. > proposed patch summary: > * change ReRankQuery.getTopDocsCollector to use its len argument (instead of > the length member) > * remove ReRankQuery's length member and constructor argument > * remove ReRankQParser.parse's use of the rows and start parameters > motivation: towards ReRankQParserPlugin and LTRQParserPlugin (SOLR-8542) > sharing (more) code -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-6.x-Windows (64bit/jdk1.8.0_92) - Build # 335 - Failure!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Windows/335/ Java: 64bit/jdk1.8.0_92 -XX:-UseCompressedOops -XX:+UseG1GC 1 tests failed. FAILED: org.apache.solr.cloud.TestLocalFSCloudBackupRestore.test Error Message: Error from server at http://127.0.0.1:54095/solr: The backup directory already exists: file:///C:/Users/jenkins/workspace/Lucene-Solr-6.x-Windows/solr/build/solr-core/test/J1/temp/solr.cloud.TestLocalFSCloudBackupRestore_A81206E6D6E0D8C3-001/tempDir-002/mytestbackup/ Stack Trace: org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at http://127.0.0.1:54095/solr: The backup directory already exists: file:///C:/Users/jenkins/workspace/Lucene-Solr-6.x-Windows/solr/build/solr-core/test/J1/temp/solr.cloud.TestLocalFSCloudBackupRestore_A81206E6D6E0D8C3-001/tempDir-002/mytestbackup/ at __randomizedtesting.SeedInfo.seed([A81206E6D6E0D8C3:2046393C781CB53B]:0) at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:590) at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:259) at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:248) at org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:403) at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:356) at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1270) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1040) at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:976) at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149) at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:166) at org.apache.solr.cloud.AbstractCloudBackupRestoreTestCase.testBackupAndRestore(AbstractCloudBackupRestoreTestCase.java:206) at 
org.apache.solr.cloud.AbstractCloudBackupRestoreTestCase.test(AbstractCloudBackupRestoreTestCase.java:126) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781) at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at
[jira] [Commented] (SOLR-9331) Can we remove ReRankQuery's length constructor argument?
[ https://issues.apache.org/jira/browse/SOLR-9331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15389953#comment-15389953 ] Joel Bernstein commented on SOLR-9331: -- Also, let's take a close look at all the adjustments done to the length in getDocListNC(QueryResult qr, QueryCommand cmd).
> Can we remove ReRankQuery's length constructor argument?
>
> Key: SOLR-9331
> URL: https://issues.apache.org/jira/browse/SOLR-9331
> Project: Solr
> Issue Type: Wish
> Security Level: Public (Default Security Level. Issues are Public)
> Reporter: Christine Poerschke
> Priority: Minor
> Attachments: SOLR-9331.patch
>
> Can we remove ReRankQuery's length constructor argument? It is a ReRankQParserPlugin private class.
> proposed patch summary:
> * change ReRankQuery.getTopDocsCollector to use its len argument (instead of the length member)
> * remove ReRankQuery's length member and constructor argument
> * remove ReRankQParser.parse's use of the rows and start parameters
> motivation: towards ReRankQParserPlugin and LTRQParserPlugin (SOLR-8542) sharing (more) code
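The refactor the patch summary proposes — trusting a method's own argument rather than a duplicated value stored at construction time — looks roughly like this. The class and method names below are hypothetical, not the actual ReRankQuery code:

```java
// Before: the length is captured at construction and shadows the len
// argument actually passed to the method, inviting drift between the two.
final class BeforeRefactor {
    private final int length;

    BeforeRefactor(int length) { this.length = length; }

    int topDocsToCollect(int len) {
        return Math.max(this.length, 1); // ignores len entirely
    }
}

// After: the stored field and constructor argument are gone; the method
// uses the len argument it is given.
final class AfterRefactor {
    int topDocsToCollect(int len) {
        return Math.max(len, 1);
    }
}
```

Dropping the stored value is also what lets two plugins share the class: nothing specific to one caller's rows/start handling is baked in at construction.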
[jira] [Created] (SOLR-9332) ReRankCollector to (somehow) override the topDocsSize method
Christine Poerschke created SOLR-9332: - Summary: ReRankCollector to (somehow) override the topDocsSize method Key: SOLR-9332 URL: https://issues.apache.org/jira/browse/SOLR-9332 Project: Solr Issue Type: Task Security Level: Public (Default Security Level. Issues are Public) Reporter: Christine Poerschke Assignee: Christine Poerschke Priority: Minor The base class method uses {{pq}} which is initialised to {{null}} by the deriving class (ReRankCollector). Context/Motivation for figuring out how to override the method is potential factoring out of an AbstractReRankCollector base class so that ReRankQParserPlugin and LTRQParserPlugin (SOLR-8542) can share code. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
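The hazard described — a base-class method that dereferences a field the deriving class initialises to {{null}} — and the override escape hatch can be sketched in isolation. Names here are illustrative, not Solr's actual collector classes:

```java
import java.util.PriorityQueue;

class BaseCollector {
    protected PriorityQueue<Integer> pq; // subclass may leave this null

    int topDocsSize(int collected) {
        // Base implementation assumes pq was populated; NPEs if it wasn't.
        return Math.min(collected, pq.size());
    }
}

class ReRankLikeCollector extends BaseCollector {
    private final int reRankDocs;

    ReRankLikeCollector(int reRankDocs) {
        this.reRankDocs = reRankDocs;
        this.pq = null; // deliberately unused by this collector
    }

    @Override
    int topDocsSize(int collected) {
        // Override sidesteps the null pq entirely.
        return Math.min(collected, reRankDocs);
    }
}
```

An override keeps the subclass safe, but it is a workaround; factoring out an abstract base class (as the issue suggests) would remove the unused {{pq}} dependency instead of hiding it.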
[jira] [Updated] (LUCENE-7388) Add IntRangeField, FloatRangeField, LongRangeField
[ https://issues.apache.org/jira/browse/LUCENE-7388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nicholas Knize updated LUCENE-7388: --- Attachment: LUCENE-7388.patch Patch ready for review: * adds IntRangeField, FloatRangeField, and LongRangeField classes * fixes {{BaseRangeFieldQueryTestCase}} to use an abstract {{Range}} class that is implemented by the concrete test class * adds {{TestIntRangeFieldQueries}}, {{TestFloatRangeFieldQueries}}, and {{TestLongRangeFieldQueries}} concrete test classes * updates {{TestDoubleRangeFieldQueries}} to implement changes to {{BaseRangeFieldQueryTestCase}} > Add IntRangeField, FloatRangeField, LongRangeField > -- > > Key: LUCENE-7388 > URL: https://issues.apache.org/jira/browse/LUCENE-7388 > Project: Lucene - Core > Issue Type: Bug >Reporter: Nicholas Knize > Attachments: LUCENE-7388.patch > > > This is the follow on to LUCENE-7381 for adding support for indexing and > querying on {{int}}, {{float}}, and {{long}} ranges. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
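The test refactor in the patch summary follows a template-method shape: the shared base works against an abstract {{Range}}, and each concrete test supplies its own numeric type. A schematic sketch of that pattern under assumed names — not the patch itself:

```java
// Abstract shape mirroring what a shared BaseRangeFieldQueryTestCase-style
// harness needs from its per-type subclasses (names are illustrative).
abstract class Range {
    abstract double getMin();
    abstract double getMax();

    // Shared logic lives in the base; it never needs the concrete type.
    boolean overlaps(Range other) {
        return getMin() <= other.getMax() && other.getMin() <= getMax();
    }
}

final class LongRange extends Range {
    private final long min, max;
    LongRange(long min, long max) { this.min = min; this.max = max; }
    @Override double getMin() { return min; }
    @Override double getMax() { return max; }
}

final class IntRange extends Range {
    private final int min, max;
    IntRange(int min, int max) { this.min = min; this.max = max; }
    @Override double getMin() { return min; }
    @Override double getMax() { return max; }
}
```

The base class operates only on the abstraction, so the same test logic covers the int, float, long, and double variants the issue adds.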
[JENKINS] Lucene-Solr-master-Linux (64bit/jdk1.8.0_92) - Build # 17334 - Unstable!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/17334/ Java: 64bit/jdk1.8.0_92 -XX:-UseCompressedOops -XX:+UseSerialGC 4 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.schema.TestManagedSchemaAPI Error Message: ObjectTracker found 4 object(s) that were not released!!! [MockDirectoryWrapper, MDCAwareThreadPoolExecutor, TransactionLog, MockDirectoryWrapper] Stack Trace: java.lang.AssertionError: ObjectTracker found 4 object(s) that were not released!!! [MockDirectoryWrapper, MDCAwareThreadPoolExecutor, TransactionLog, MockDirectoryWrapper] at __randomizedtesting.SeedInfo.seed([8E0E9FEE312236]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.junit.Assert.assertNull(Assert.java:551) at org.apache.solr.SolrTestCaseJ4.teardownTestCases(SolrTestCaseJ4.java:257) at sun.reflect.GeneratedMethodAccessor17.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:834) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at java.lang.Thread.run(Thread.java:745) FAILED: org.apache.solr.handler.TestReqParamsAPI.test Error Message: Could not get expected value 'null' for path 'response/params/y/p' full output: { "responseHeader":{ "status":0, "QTime":0}, "response":{ "znodeVersion":4, "params":{ "x":{ "a":"A val", "b":"B val", "_appends_":{"add":"first"}, "_invariants_":{"fixed":"f"}, "":{"v":1}}, "y":{ "p":"P val", "q":"Q val", "":{"v":2}, from server: http://127.0.0.1:38848/nxk/f/collection1 Stack Trace: java.lang.AssertionError: Could not get expected value 'null' for path 'response/params/y/p' full output: { "responseHeader":{ "status":0, "QTime":0}, "response":{ "znodeVersion":4, "params":{ "x":{ "a":"A val", "b":"B val", "_appends_":{"add":"first"}, "_invariants_":{"fixed":"f"}, "":{"v":1}}, "y":{ "p":"P val", "q":"Q val", "":{"v":2}, from server: http://127.0.0.1:38848/nxk/f/collection1 at __randomizedtesting.SeedInfo.seed([8E0E9FEE312236:88DA314540CD4FCE]:0) at 
org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.apache.solr.core.TestSolrConfigHandler.testForResponseElement(TestSolrConfigHandler.java:481) at org.apache.solr.handler.TestReqParamsAPI.testReqParams(TestReqParamsAPI.java:258) at org.apache.solr.handler.TestReqParamsAPI.test(TestReqParamsAPI.java:61) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at
[JENKINS] Lucene-Solr-Tests-master - Build # 1283 - Still Unstable
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/1283/ 2 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.cloud.overseer.ZkStateWriterTest Error Message: 1 thread leaked from SUITE scope at org.apache.solr.cloud.overseer.ZkStateWriterTest: 1) Thread[id=4741, name=watches-672-thread-1, state=TIMED_WAITING, group=TGRP-ZkStateWriterTest] at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460) at java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) Stack Trace: com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE scope at org.apache.solr.cloud.overseer.ZkStateWriterTest: 1) Thread[id=4741, name=watches-672-thread-1, state=TIMED_WAITING, group=TGRP-ZkStateWriterTest] at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460) at java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) at __randomizedtesting.SeedInfo.seed([248C7AB9AB81D5F7]:0) FAILED: 
junit.framework.TestSuite.org.apache.solr.cloud.overseer.ZkStateWriterTest Error Message: There are still zombie threads that couldn't be terminated:1) Thread[id=4741, name=watches-672-thread-1, state=TIMED_WAITING, group=TGRP-ZkStateWriterTest] at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460) at java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) Stack Trace: com.carrotsearch.randomizedtesting.ThreadLeakError: There are still zombie threads that couldn't be terminated: 1) Thread[id=4741, name=watches-672-thread-1, state=TIMED_WAITING, group=TGRP-ZkStateWriterTest] at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460) at java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) at __randomizedtesting.SeedInfo.seed([248C7AB9AB81D5F7]:0) Build Log: [...truncated 11176 lines...] 
[junit4] Suite: org.apache.solr.cloud.overseer.ZkStateWriterTest [junit4] 2> Creating dataDir: /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/build/solr-core/test/J1/temp/solr.cloud.overseer.ZkStateWriterTest_248C7AB9AB81D5F7-001/init-core-data-001 [junit4] 2> 650475 INFO (SUITE-ZkStateWriterTest-seed#[248C7AB9AB81D5F7]-worker) [] o.a.s.SolrTestCaseJ4 Randomized ssl (false) and clientAuth (false) via: @org.apache.solr.util.RandomizeSSL(reason=, ssl=NaN, value=NaN, clientAuth=NaN) [junit4] 2> 650487 INFO (TEST-ZkStateWriterTest.testSingleExternalCollection-seed#[248C7AB9AB81D5F7]) [ ] o.a.s.SolrTestCaseJ4 ###Starting testSingleExternalCollection [junit4] 2>
[jira] [Updated] (SOLR-9332) ReRankCollector to (somehow) override the topDocsSize method
[ https://issues.apache.org/jira/browse/SOLR-9332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Christine Poerschke updated SOLR-9332: -- Attachment: SOLR-9332.patch Partial patch only, don't know yet what the return value of the method should be. > ReRankCollector to (somehow) override the topDocsSize method > > > Key: SOLR-9332 > URL: https://issues.apache.org/jira/browse/SOLR-9332 > Project: Solr > Issue Type: Task > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Christine Poerschke >Assignee: Christine Poerschke >Priority: Minor > Attachments: SOLR-9332.patch > > > The base class method uses {{pq}} which is initialised to {{null}} by the > deriving class (ReRankCollector). Context/Motivation for figuring out how to > override the method is potential factoring out of an AbstractReRankCollector > base class so that ReRankQParserPlugin and LTRQParserPlugin (SOLR-8542) can > share code. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-9330) Race condition between core reload and statistics request
[ https://issues.apache.org/jira/browse/SOLR-9330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrey Kudryavtsev updated SOLR-9330:
-------------------------------------
Description:
It so happens that we execute these two requests consecutively in Solr 5.5:
* Core reload: /admin/cores?action=RELOAD&core=_coreName_
* Check core statistics: /_coreName_/admin/mbeans?stats=true
Sometimes the second request ends with this error:
{code}
ERROR org.apache.solr.servlet.HttpSolrCall - null:org.apache.lucene.store.AlreadyClosedException: this IndexReader is closed
	at org.apache.lucene.index.IndexReader.ensureOpen(IndexReader.java:274)
	at org.apache.lucene.index.StandardDirectoryReader.getVersion(StandardDirectoryReader.java:331)
	at org.apache.lucene.index.FilterDirectoryReader.getVersion(FilterDirectoryReader.java:119)
	at org.apache.lucene.index.FilterDirectoryReader.getVersion(FilterDirectoryReader.java:119)
	at org.apache.solr.search.SolrIndexSearcher.getStatistics(SolrIndexSearcher.java:2404)
	at org.apache.solr.handler.admin.SolrInfoMBeanHandler.addMBean(SolrInfoMBeanHandler.java:164)
	at org.apache.solr.handler.admin.SolrInfoMBeanHandler.getMBeanInfo(SolrInfoMBeanHandler.java:134)
	at org.apache.solr.handler.admin.SolrInfoMBeanHandler.handleRequestBody(SolrInfoMBeanHandler.java:65)
	at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:155)
	at org.apache.solr.core.SolrCore.execute(SolrCore.java:2082)
	at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:670)
	at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:458)
	at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:225)
	at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:183)
{code}
If my understanding of SolrCore internals is correct, this happens because of the asynchronous nature of the reload request:
* The new searcher is "registered" in a separate thread
* The old searcher is closed in that same thread, and only after the new one is registered
* When the old searcher closes, it removes itself from the map of MBeans
* If a statistics request arrives before the old searcher has been completely removed from everywhere, the exception can occur
What do you think about introducing a new parameter for the reload request that makes it fully synchronous? Basically it would force a call to
{code}
SolrCore#getSearcher(boolean forceNew, boolean returnSearcher, final Future[] waitSearcher, boolean updateHandlerReopens)
{code}
with waitSearcher != null
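The race and the proposed fix can be sketched as follows. This is an illustrative simulation, not Solr's actual classes: a background thread swaps in a new searcher (the old one would be closed only after the swap), and the caller blocks on the returned Future before reading anything, mirroring the proposed waitSearcher != null behaviour.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.atomic.AtomicReference;

// Hypothetical sketch of a fully synchronous reload (names are
// illustrative, not Solr's API).
public class SyncReloadSketch {
    static final ExecutorService pool = Executors.newSingleThreadExecutor();
    static final AtomicReference<String> searcher = new AtomicReference<>("searcher-v1");

    // Async reload: register the new searcher, then retire the old one,
    // all on a separate thread -- exactly the window the race lives in.
    static Future<?> reload() {
        return pool.submit(() -> {
            String old = searcher.getAndSet("searcher-v2"); // register new
            // `old` would be closed and deregistered from MBeans here
        });
    }

    public static void main(String[] args) throws Exception {
        Future<?> waitSearcher = reload();
        waitSearcher.get();                 // block until the swap completes
        System.out.println(searcher.get()); // stats now see the new searcher
        pool.shutdown();
    }
}
```

Waiting on the Future closes the window in which a statistics request could still observe the half-retired old searcher.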
[jira] [Updated] (SOLR-9330) Race condition between core reload and statistics request
[jira] [Created] (LUCENE-7392) Add point based GeoBoundingBoxField as a new RangeField type
Nicholas Knize created LUCENE-7392: -- Summary: Add point based GeoBoundingBoxField as a new RangeField type Key: LUCENE-7392 URL: https://issues.apache.org/jira/browse/LUCENE-7392 Project: Lucene - Core Issue Type: Improvement Reporter: Nicholas Knize This issue will add a new point-based {{GeoBoundingBoxField}} type for indexing and querying 2D or 3D geo bounding boxes. The intent is to construct this as a RangeField type and limit the first two dimensions to the lat/lon geospatial bounds (at 4 bytes each, like {{LatLonPoint}}), while allowing an optional 8-byte ({{double}}) third dimension to serve as an altitude component for indexing 3D geospatial bounding boxes. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
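The proposed dimension layout can be sketched as follows. The quantization formula below is a simplified illustration (the names and scale factors are assumptions, not Lucene's exact LatLonPoint encoding): latitude and longitude each compress to a 4-byte int, while the optional altitude stays a full 8-byte double.

```java
// Illustrative sketch only: quantize lat/lon to 4-byte ints, keep
// altitude as an 8-byte double. The scaling here is a simplified
// assumption, not Lucene's actual LatLonPoint encoding.
public class GeoBoxEncodingSketch {
    // Map [-90, 90] degrees latitude onto the full signed 32-bit range.
    static int encodeLat(double lat)    { return (int) Math.round(lat / 90.0 * Integer.MAX_VALUE); }
    static double decodeLat(int enc)    { return (double) enc / Integer.MAX_VALUE * 90.0; }
    // Map [-180, 180] degrees longitude onto the same range.
    static int encodeLon(double lon)    { return (int) Math.round(lon / 180.0 * Integer.MAX_VALUE); }
    static double decodeLon(int enc)    { return (double) enc / Integer.MAX_VALUE * 180.0; }

    public static void main(String[] args) {
        double lat = 52.52, lon = 13.405, altMeters = 34.0;
        int encLat = encodeLat(lat), encLon = encodeLon(lon);
        // 4 + 4 bytes per corner for lat/lon; 8 bytes for the optional altitude
        System.out.printf("lat err=%.1e lon err=%.1e alt=%.1f%n",
                Math.abs(decodeLat(encLat) - lat),
                Math.abs(decodeLon(encLon) - lon),
                altMeters);
    }
}
```

The round-trip error stays below the quantization step (roughly 1e-8 degrees, about 1 cm at the equator), which is why a 4-byte encoding is sufficient for the horizontal dimensions while altitude keeps double precision.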
[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 733 - Still Unstable!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/733/ Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseG1GC 1 tests failed. FAILED: org.apache.solr.common.cloud.TestCollectionStateWatchers.testSimpleCollectionWatch Error Message: Stack Trace: java.util.concurrent.TimeoutException at __randomizedtesting.SeedInfo.seed([69598012341D8BF5:34624F62731014CB]:0) at org.apache.solr.common.cloud.ZkStateReader.waitForState(ZkStateReader.java:1233) at org.apache.solr.client.solrj.impl.CloudSolrClient.waitForState(CloudSolrClient.java:630) at org.apache.solr.common.cloud.TestCollectionStateWatchers.testSimpleCollectionWatch(TestCollectionStateWatchers.java:92) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at java.lang.Thread.run(Thread.java:745) Build Log: [...truncated 13150 lines...] [junit4] Suite: org.apache.solr.common.cloud.TestCollectionStateWatchers [junit4] 2> Creating dataDir:
[JENKINS] Lucene-Solr-master-Windows (32bit/jdk1.8.0_92) - Build # 5999 - Still unstable!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/5999/ Java: 32bit/jdk1.8.0_92 -client -XX:+UseConcMarkSweepGC 2 tests failed. FAILED: org.apache.solr.security.BasicAuthIntegrationTest.testBasics Error Message: IOException occured when talking to server at: http://127.0.0.1:55345/solr/testSolrCloudCollection_shard1_replica2 Stack Trace: org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: IOException occured when talking to server at: http://127.0.0.1:55345/solr/testSolrCloudCollection_shard1_replica2 at __randomizedtesting.SeedInfo.seed([CCC9A23068B7577F:F1110C1C5059090F]:0) at org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:739) at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1151) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1040) at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:976) at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219) at org.apache.solr.security.BasicAuthIntegrationTest.doExtraTests(BasicAuthIntegrationTest.java:193) at org.apache.solr.cloud.TestMiniSolrCloudClusterBase.testCollectionCreateSearchDelete(TestMiniSolrCloudClusterBase.java:196) at org.apache.solr.cloud.TestMiniSolrCloudClusterBase.testBasics(TestMiniSolrCloudClusterBase.java:79) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907) at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at
[JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_92) - Build # 17335 - Still Unstable!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/17335/ Java: 32bit/jdk1.8.0_92 -server -XX:+UseParallelGC 1 tests failed. FAILED: org.apache.solr.common.cloud.TestCollectionStateWatchers.testSimpleCollectionWatch Error Message: CollectionStateWatcher wasn't cleared after completion Stack Trace: java.lang.AssertionError: CollectionStateWatcher wasn't cleared after completion at __randomizedtesting.SeedInfo.seed([9D9482B1DE59DC14:C0AF4DC19954432A]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.apache.solr.common.cloud.TestCollectionStateWatchers.testSimpleCollectionWatch(TestCollectionStateWatchers.java:117) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at java.lang.Thread.run(Thread.java:745) Build Log: [...truncated 13077 lines...] [junit4] Suite: org.apache.solr.common.cloud.TestCollectionStateWatchers [junit4] 2> Creating dataDir:
[jira] [Comment Edited] (SOLR-9241) Rebalance API for SolrCloud
[ https://issues.apache.org/jira/browse/SOLR-9241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15390486#comment-15390486 ] Nitin Sharma edited comment on SOLR-9241 at 7/23/16 3:03 AM: - I looked back some stats and found that I have run this upto close to 1T. 10 shards of 100 G each. We split that into 20 shards using merge based auto shard - Took around 1 hour but works reliably. Another index of size 2T with 50 shards (of 40 G each). We merged that into 10 shards using the smart merge strategy. That took around 10-15 mins. (Depending on machine type, ssd and network bandwidth) was (Author: nitin.sharma): I looked back some stats and found that I have run this upto close to 1T. 10 shards of 10 G each. We split that into 20 shards using merge based auto shard - Took around 1 hour but works reliably. Another index of size 2T with 50 shards (of 40 G each). We merged that into 10 shards using the smart merge strategy. That took around 10-15 mins. (Depending on machine type, ssd and network bandwidth) > Rebalance API for SolrCloud > --- > > Key: SOLR-9241 > URL: https://issues.apache.org/jira/browse/SOLR-9241 > Project: Solr > Issue Type: New Feature > Components: SolrCloud >Affects Versions: 6.1 > Environment: Ubuntu, Mac OsX >Reporter: Nitin Sharma > Labels: Cluster, SolrCloud > Fix For: 6.1 > > Attachments: Redistribute_After.jpeg, Redistribute_Before.jpeg, > Redistribute_call.jpeg, Replace_After.jpeg, Replace_Before.jpeg, > Replace_Call.jpeg, SOLR-9241-4.6.patch, SOLR-9241-6.1.patch > > Original Estimate: 2,016h > Remaining Estimate: 2,016h > > This is the v1 of the patch for Solrcloud Rebalance api (as described in > http://engineering.bloomreach.com/solrcloud-rebalance-api/) , built at > Bloomreach by Nitin Sharma and Suruchi Shah. The goal of the API is to > provide a zero downtime mechanism to perform data manipulation and efficient > core allocation in solrcloud. 
This API was envisioned to be the base layer > that enables Solrcloud to be an auto scaling platform. (and work in unison > with other complementing monitoring and scaling features). > Patch Status: > === > The patch is work in progress and incremental. We have done a few rounds of > code clean up. We wanted to get the patch going first to get initial feed > back. We will continue to work on making it more open source friendly and > easily testable. > Deployment Status: > > The platform is deployed in production at bloomreach and has been battle > tested for large scale load. (millions of documents and hundreds of > collections). > Internals: > = > The internals of the API and performance : > http://engineering.bloomreach.com/solrcloud-rebalance-api/ > It is built on top of the admin collections API as an action (with various > flavors). At a high level, the rebalance api provides 2 constructs: > Scaling Strategy: Decides how to move the data. Every flavor has multiple > options which can be reviewed in the api spec. > Re-distribute - Move around data in the cluster based on capacity/allocation. > Auto Shard - Dynamically shard a collection to any size. > Smart Merge - Distributed Mode - Helps merging data from a larger shard setup > into smaller one. (the source should be divisible by destination) > Scale up - Add replicas on the fly > Scale Down - Remove replicas on the fly > Allocation Strategy: Decides where to put the data. (Nodes with least > cores, Nodes that do not have this collection etc). Custom implementations > can be built on top as well. One other example is Availability Zone aware. > Distribute data such that every replica is placed on different availability > zone to support HA. 
> Detailed API Spec: > > https://github.com/bloomreach/solrcloud-rebalance-api > Contributors: > = > Nitin Sharma > Suruchi Shah > Questions/Comments: > = > You can reach me at nitin.sha...@bloomreach.com
[jira] [Commented] (SOLR-9241) Rebalance API for SolrCloud
[ https://issues.apache.org/jira/browse/SOLR-9241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15390486#comment-15390486 ] Nitin Sharma commented on SOLR-9241: I looked back some stats and found that I have run this upto close to 1T. 10 shards of 10 G each. We split that into 20 shards using merge based auto shard - Took around 1 hour but works reliably. Another index of size 2T with 50 shards (of 40 G each). We merged that into 10 shards using the smart merge strategy. That took around 10-15 mins. (Depending on machine type, ssd and network bandwidth) > Rebalance API for SolrCloud > --- > > Key: SOLR-9241 > URL: https://issues.apache.org/jira/browse/SOLR-9241 > Project: Solr > Issue Type: New Feature > Components: SolrCloud >Affects Versions: 6.1 > Environment: Ubuntu, Mac OsX >Reporter: Nitin Sharma > Labels: Cluster, SolrCloud > Fix For: 6.1 > > Attachments: Redistribute_After.jpeg, Redistribute_Before.jpeg, > Redistribute_call.jpeg, Replace_After.jpeg, Replace_Before.jpeg, > Replace_Call.jpeg, SOLR-9241-4.6.patch, SOLR-9241-6.1.patch > > Original Estimate: 2,016h > Remaining Estimate: 2,016h > > This is the v1 of the patch for Solrcloud Rebalance api (as described in > http://engineering.bloomreach.com/solrcloud-rebalance-api/) , built at > Bloomreach by Nitin Sharma and Suruchi Shah. The goal of the API is to > provide a zero downtime mechanism to perform data manipulation and efficient > core allocation in solrcloud. This API was envisioned to be the base layer > that enables Solrcloud to be an auto scaling platform. (and work in unison > with other complementing monitoring and scaling features). > Patch Status: > === > The patch is work in progress and incremental. We have done a few rounds of > code clean up. We wanted to get the patch going first to get initial feed > back. We will continue to work on making it more open source friendly and > easily testable. 
> Deployment Status: > > The platform is deployed in production at bloomreach and has been battle > tested for large scale load. (millions of documents and hundreds of > collections). > Internals: > = > The internals of the API and performance : > http://engineering.bloomreach.com/solrcloud-rebalance-api/ > It is built on top of the admin collections API as an action (with various > flavors). At a high level, the rebalance api provides 2 constructs: > Scaling Strategy: Decides how to move the data. Every flavor has multiple > options which can be reviewed in the api spec. > Re-distribute - Move around data in the cluster based on capacity/allocation. > Auto Shard - Dynamically shard a collection to any size. > Smart Merge - Distributed Mode - Helps merging data from a larger shard setup > into smaller one. (the source should be divisible by destination) > Scale up - Add replicas on the fly > Scale Down - Remove replicas on the fly > Allocation Strategy: Decides where to put the data. (Nodes with least > cores, Nodes that do not have this collection etc). Custom implementations > can be built on top as well. One other example is Availability Zone aware. > Distribute data such that every replica is placed on different availability > zone to support HA. > Detailed API Spec: > > https://github.com/bloomreach/solrcloud-rebalance-api > Contributors: > = > Nitin Sharma > Suruchi Shah > Questions/Comments: > = > You can reach me at nitin.sha...@bloomreach.com -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-6.x-Windows (64bit/jdk1.8.0_92) - Build # 336 - Still unstable!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Windows/336/ Java: 64bit/jdk1.8.0_92 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC 1 tests failed. FAILED: org.apache.solr.cloud.TestLocalFSCloudBackupRestore.test Error Message: expected: but was: Stack Trace: java.lang.AssertionError: expected: but was: at __randomizedtesting.SeedInfo.seed([1338FE3921B1C325:9B6CC1E38F4DAEDD]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.failNotEquals(Assert.java:647) at org.junit.Assert.assertEquals(Assert.java:128) at org.junit.Assert.assertEquals(Assert.java:147) at org.apache.solr.cloud.AbstractCloudBackupRestoreTestCase.testBackupAndRestore(AbstractCloudBackupRestoreTestCase.java:208) at org.apache.solr.cloud.AbstractCloudBackupRestoreTestCase.test(AbstractCloudBackupRestoreTestCase.java:126) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at java.lang.Thread.run(Thread.java:745) Build Log: [...truncated 12218 lines...]
[jira] [Commented] (SOLR-9318) A DELETENODE command that should delete all replicas in that node
[ https://issues.apache.org/jira/browse/SOLR-9318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15390536#comment-15390536 ] Nitin Sharma commented on SOLR-9318: Refer to [SOLR-9320] for the patch for DELETENODE. > A DELETENODE command that should delete all replicas in that node > - > > Key: SOLR-9318 > URL: https://issues.apache.org/jira/browse/SOLR-9318 > Project: Solr > Issue Type: Sub-task > Components: SolrCloud >Reporter: Noble Paul > Fix For: 6.1 > > > The command should look in all collections, find out replicas hosted in that > node and remove them -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-9320) A REPLACENODE command to decommission an existing node with another new node
[ https://issues.apache.org/jira/browse/SOLR-9320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nitin Sharma updated SOLR-9320: --- Attachment: REPLACENODE_Before.jpeg REPLACENODE_call_response.jpeg REPLACENODE_After.jpeg DELETENODE.jpeg SOLR-9320.patch Patch for REPLACENODE & DELETENODE api as per spec. DELETENODE - Deletes all cores on the given node. REPLACENODE - Replaces all cores from a source node to a dest node and then calls DELETENODE on the source node. Attached screenshots against test cluster with api calls & before/after status of the cluster > A REPLACENODE command to decommission an existing node with another new node > > > Key: SOLR-9320 > URL: https://issues.apache.org/jira/browse/SOLR-9320 > Project: Solr > Issue Type: Sub-task > Components: SolrCloud >Reporter: Noble Paul > Fix For: 6.1 > > Attachments: DELETENODE.jpeg, REPLACENODE_After.jpeg, > REPLACENODE_Before.jpeg, REPLACENODE_call_response.jpeg, SOLR-9320.patch > > > The command should accept a source node and target node. Recreate the > replicas in source node in the target and do a DELETENODE of source node -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
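The semantics described in the patch note, namely DELETENODE dropping every core on a node and REPLACENODE moving a source node's cores to a target before deleting the source, can be sketched with plain maps. This is a hypothetical illustration only; the real patch drives Solr's Collections API and replica creation, and none of these names come from the actual code.

```java
import java.util.*;

// Illustrative sketch of the DELETENODE / REPLACENODE semantics described in
// the patch note above. The data structures and method names are hypothetical;
// the real patch issues Collections API calls, not in-memory map updates.
class NodeOps {
    // node name -> names of cores hosted on that node
    final Map<String, List<String>> coresByNode = new HashMap<>();

    // DELETENODE: delete every core hosted on the given node.
    void deleteNode(String node) {
        coresByNode.remove(node);
    }

    // REPLACENODE: recreate the source node's cores on the target node,
    // then DELETENODE the source.
    void replaceNode(String source, String target) {
        List<String> moved = coresByNode.getOrDefault(source, Collections.emptyList());
        coresByNode.computeIfAbsent(target, k -> new ArrayList<>()).addAll(moved);
        deleteNode(source);
    }
}
```

The two operations compose exactly as the comment says: REPLACENODE is "copy replicas to the target, then DELETENODE the source."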
[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-9-ea+127) - Build # 17337 - Unstable!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/17337/ Java: 64bit/jdk-9-ea+127 -XX:-UseCompressedOops -XX:+UseG1GC 1 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.handler.TestReplicationHandler Error Message: ObjectTracker found 1 object(s) that were not released!!! [NRTCachingDirectory] Stack Trace: java.lang.AssertionError: ObjectTracker found 1 object(s) that were not released!!! [NRTCachingDirectory] at __randomizedtesting.SeedInfo.seed([AE245FBB8C776F79]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.junit.Assert.assertNull(Assert.java:551) at org.apache.solr.SolrTestCaseJ4.teardownTestCases(SolrTestCaseJ4.java:257) at jdk.internal.reflect.GeneratedMethodAccessor19.invoke(Unknown Source) at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(java.base@9-ea/DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(java.base@9-ea/Method.java:533) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:834) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at java.lang.Thread.run(java.base@9-ea/Thread.java:843) Build Log: [...truncated 11609 lines...] [junit4] Suite: org.apache.solr.handler.TestReplicationHandler [junit4] 2> Creating dataDir: /home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J2/temp/solr.handler.TestReplicationHandler_AE245FBB8C776F79-001/init-core-data-001 [junit4] 2> 1031064 INFO (SUITE-TestReplicationHandler-seed#[AE245FBB8C776F79]-worker) [] o.a.s.SolrTestCaseJ4 Randomized ssl (false) and clientAuth (false) via: @org.apache.solr.SolrTestCaseJ4$SuppressSSL(bugUrl=None) [junit4] 2> 1031065 INFO (TEST-TestReplicationHandler.doTestDetails-seed#[AE245FBB8C776F79]) [] o.a.s.SolrTestCaseJ4 ###Starting doTestDetails [junit4] 2> 1031066 INFO (TEST-TestReplicationHandler.doTestDetails-seed#[AE245FBB8C776F79]) [] o.a.s.SolrTestCaseJ4 Writing core.properties file to /home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J2/temp/solr.handler.TestReplicationHandler_AE245FBB8C776F79-001/solr-instance-001/collection1 [junit4] 2> 1031072 INFO (TEST-TestReplicationHandler.doTestDetails-seed#[AE245FBB8C776F79]) 
[] o.e.j.s.Server jetty-9.3.8.v20160314 [junit4] 2> 1031072 INFO (TEST-TestReplicationHandler.doTestDetails-seed#[AE245FBB8C776F79]) [] o.e.j.s.h.ContextHandler Started o.e.j.s.ServletContextHandler@70fd2436{/solr,null,AVAILABLE} [junit4] 2> 1031073 INFO (TEST-TestReplicationHandler.doTestDetails-seed#[AE245FBB8C776F79]) [] o.e.j.s.ServerConnector Started ServerConnector@4ee3253b{HTTP/1.1,[http/1.1]}{127.0.0.1:38319} [junit4] 2> 1031073 INFO (TEST-TestReplicationHandler.doTestDetails-seed#[AE245FBB8C776F79]) [] o.e.j.s.Server Started @1033951ms [junit4] 2> 1031073 INFO (TEST-TestReplicationHandler.doTestDetails-seed#[AE245FBB8C776F79]) [] o.a.s.c.s.e.JettySolrRunner Jetty properties: {hostContext=/solr,
[JENKINS] Lucene-Solr-6.x-Linux (64bit/jdk1.8.0_92) - Build # 1241 - Unstable!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/1241/ Java: 64bit/jdk1.8.0_92 -XX:+UseCompressedOops -XX:+UseParallelGC 2 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.cloud.CleanupOldIndexTest Error Message: Suite timeout exceeded (>= 720 msec). Stack Trace: java.lang.Exception: Suite timeout exceeded (>= 720 msec). at __randomizedtesting.SeedInfo.seed([7CF7B158CD7FF658]:0) FAILED: org.apache.solr.cloud.CleanupOldIndexTest.test Error Message: Test abandoned because suite timeout was reached. Stack Trace: java.lang.Exception: Test abandoned because suite timeout was reached. at __randomizedtesting.SeedInfo.seed([7CF7B158CD7FF658]:0) Build Log: [...truncated 12456 lines...] [junit4] Suite: org.apache.solr.cloud.CleanupOldIndexTest [junit4] 2> Creating dataDir: /home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.CleanupOldIndexTest_7CF7B158CD7FF658-001/init-core-data-001 [junit4] 2> 17066 INFO (SUITE-CleanupOldIndexTest-seed#[7CF7B158CD7FF658]-worker) [] o.a.s.SolrTestCaseJ4 Randomized ssl (true) and clientAuth (true) via: @org.apache.solr.util.RandomizeSSL(reason=, ssl=NaN, value=NaN, clientAuth=NaN) [junit4] 2> 17069 INFO (SUITE-CleanupOldIndexTest-seed#[7CF7B158CD7FF658]-worker) [] o.a.s.c.ZkTestServer STARTING ZK TEST SERVER [junit4] 2> 17069 INFO (Thread-28) [] o.a.s.c.ZkTestServer client port:0.0.0.0/0.0.0.0:0 [junit4] 2> 17069 INFO (Thread-28) [] o.a.s.c.ZkTestServer Starting server [junit4] 2> 17169 INFO (SUITE-CleanupOldIndexTest-seed#[7CF7B158CD7FF658]-worker) [] o.a.s.c.ZkTestServer start zk server on port:40799 [junit4] 2> 17169 INFO (SUITE-CleanupOldIndexTest-seed#[7CF7B158CD7FF658]-worker) [] o.a.s.c.c.SolrZkClient Using default ZkCredentialsProvider [junit4] 2> 17170 INFO (SUITE-CleanupOldIndexTest-seed#[7CF7B158CD7FF658]-worker) [] o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper [junit4] 2> 17172 INFO (zkCallback-22-thread-1) [] o.a.s.c.c.ConnectionManager 
Watcher org.apache.solr.common.cloud.ConnectionManager@6793bdd0 name:ZooKeeperConnection Watcher:127.0.0.1:40799 got event WatchedEvent state:SyncConnected type:None path:null path:null type:None [junit4] 2> 17172 INFO (SUITE-CleanupOldIndexTest-seed#[7CF7B158CD7FF658]-worker) [] o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper [junit4] 2> 17172 INFO (SUITE-CleanupOldIndexTest-seed#[7CF7B158CD7FF658]-worker) [] o.a.s.c.c.SolrZkClient Using default ZkACLProvider [junit4] 2> 17172 INFO (SUITE-CleanupOldIndexTest-seed#[7CF7B158CD7FF658]-worker) [] o.a.s.c.c.SolrZkClient makePath: /solr/solr.xml [junit4] 2> 17173 INFO (SUITE-CleanupOldIndexTest-seed#[7CF7B158CD7FF658]-worker) [] o.a.s.c.c.SolrZkClient makePath: /solr/clusterprops.json [junit4] 2> 17177 INFO (jetty-launcher-21-thread-2) [] o.e.j.s.Server jetty-9.3.8.v20160314 [junit4] 2> 17178 INFO (jetty-launcher-21-thread-2) [] o.e.j.s.h.ContextHandler Started o.e.j.s.ServletContextHandler@3c1b479a{/solr,null,AVAILABLE} [junit4] 2> 17178 INFO (jetty-launcher-21-thread-1) [] o.e.j.s.Server jetty-9.3.8.v20160314 [junit4] 2> 17179 INFO (jetty-launcher-21-thread-1) [] o.e.j.s.h.ContextHandler Started o.e.j.s.ServletContextHandler@574a1622{/solr,null,AVAILABLE} [junit4] 2> 17182 INFO (jetty-launcher-21-thread-2) [] o.e.j.s.ServerConnector Started ServerConnector@554996cc{SSL,[ssl, http/1.1]}{127.0.0.1:40622} [junit4] 2> 17182 INFO (jetty-launcher-21-thread-2) [] o.e.j.s.Server Started @18972ms [junit4] 2> 17182 INFO (jetty-launcher-21-thread-2) [] o.a.s.c.s.e.JettySolrRunner Jetty properties: {hostContext=/solr, hostPort=40622} [junit4] 2> 17182 INFO (jetty-launcher-21-thread-1) [] o.e.j.s.ServerConnector Started ServerConnector@3697919e{SSL,[ssl, http/1.1]}{127.0.0.1:43696} [junit4] 2> 17182 INFO (jetty-launcher-21-thread-1) [] o.e.j.s.Server Started @18972ms [junit4] 2> 17183 INFO (jetty-launcher-21-thread-2) [] o.a.s.s.SolrDispatchFilter SolrDispatchFilter.init(): sun.misc.Launcher$AppClassLoader@73d16e93 
[junit4] 2> 17183 INFO (jetty-launcher-21-thread-1) [] o.a.s.c.s.e.JettySolrRunner Jetty properties: {hostContext=/solr, hostPort=43696} [junit4] 2> 17183 INFO (jetty-launcher-21-thread-2) [] o.a.s.c.SolrResourceLoader new SolrResourceLoader for directory: '/home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.CleanupOldIndexTest_7CF7B158CD7FF658-001/tempDir-001/node2' [junit4] 2> 17183 INFO (jetty-launcher-21-thread-2) [] o.a.s.c.SolrResourceLoader JNDI not configured for solr (NoInitialContextEx) [junit4] 2> 17183 INFO (jetty-launcher-21-thread-1) [] o.a.s.s.SolrDispatchFilter SolrDispatchFilter.init():
[jira] [Created] (LUCENE-7389) Validation issue in FieldType#setDimensions?
Martijn van Groningen created LUCENE-7389: - Summary: Validation issue in FieldType#setDimensions? Key: LUCENE-7389 URL: https://issues.apache.org/jira/browse/LUCENE-7389 Project: Lucene - Core Issue Type: Bug Reporter: Martijn van Groningen The check tests whether {{dimensionCount}} is larger than {{PointValues.MAX_NUM_BYTES}}, but this constant should be compared to {{dimensionNumBytes}} instead. So this if statement:
{noformat}
if (dimensionCount > PointValues.MAX_NUM_BYTES) {
  throw new IllegalArgumentException("dimensionNumBytes must be <= " + PointValues.MAX_NUM_BYTES + "; got " + dimensionNumBytes);
}
{noformat}
Should be:
{noformat}
if (dimensionNumBytes > PointValues.MAX_NUM_BYTES) {
  throw new IllegalArgumentException("dimensionNumBytes must be <= " + PointValues.MAX_NUM_BYTES + "; got " + dimensionNumBytes);
}
{noformat}
-- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
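The corrected validation can be exercised in isolation. The sketch below is standalone: the limit values are assumed stand-ins for Lucene's constants, not the real ones. What it demonstrates is the substance of the bug report: the per-dimension byte width ({{dimensionNumBytes}}), not the dimension count, must be checked against {{MAX_NUM_BYTES}}.

```java
// Standalone sketch of the corrected FieldType#setDimensions validation.
// The limits below are assumptions for illustration, not Lucene's actual
// constants; the point is which variable gets compared to which limit.
class DimensionValidation {
    static final int MAX_DIMENSIONS = 8;  // assumed stand-in for PointValues.MAX_DIMENSIONS
    static final int MAX_NUM_BYTES = 16;  // assumed stand-in for PointValues.MAX_NUM_BYTES

    static void setDimensions(int dimensionCount, int dimensionNumBytes) {
        if (dimensionCount > MAX_DIMENSIONS) {
            throw new IllegalArgumentException("pointDimensionCount must be <= " + MAX_DIMENSIONS + "; got " + dimensionCount);
        }
        // The fix: compare the per-dimension byte width, not the count.
        if (dimensionNumBytes > MAX_NUM_BYTES) {
            throw new IllegalArgumentException("dimensionNumBytes must be <= " + MAX_NUM_BYTES + "; got " + dimensionNumBytes);
        }
    }
}
```

With the buggy comparison, a small dimension count would let an oversized byte width through; with the fix, the oversized width is rejected regardless of the count.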
[jira] [Resolved] (SOLR-3415) web gui: Show urls for every command, not just the query
[ https://issues.apache.org/jira/browse/SOLR-3415?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shalin Shekhar Mangar resolved SOLR-3415. - Resolution: Won't Fix > web gui: Show urls for every command, not just the query > > > Key: SOLR-3415 > URL: https://issues.apache.org/jira/browse/SOLR-3415 > Project: Solr > Issue Type: Improvement > Components: web gui >Reporter: Lance Norskog > > It is very very helpful to see all of the different calls made by the old UI. > The query box has a handy 'show the http' box. Please make this common to all > of the pages, and also show the Ajax calls. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-9323) Expose ClusterSate.getCollectionStates which returns unverified list of collection names
[ https://issues.apache.org/jira/browse/SOLR-9323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15389155#comment-15389155 ] ASF subversion and git services commented on SOLR-9323: --- Commit 0ad365cbd069230bc638684b30bc4dc338e3a66d in lucene-solr's branch refs/heads/master from [~noble.paul] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=0ad365c ] SOLR-9323: Expose ClusterSate.getCollectionStates which returns unverified list of collection names > Expose ClusterSate.getCollectionStates which returns unverified list of > collection names > - > > Key: SOLR-9323 > URL: https://issues.apache.org/jira/browse/SOLR-9323 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Noble Paul >Assignee: Noble Paul > Attachments: SOLR-9323.patch > > > Currently the {{getCollections()}} method is deprecated and everyone is > forced to use {{getCollectionsMap()}} which could be extremely expensive > depending on the no:of collections. The {{getCollectionsMap()}} method is > very expensive and should never be invoked on each request -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
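The trade-off the issue describes, listing collection names cheaply versus materializing a fully verified name-to-state map, can be modeled with lazily resolved states. The classes below are hypothetical, not Solr's real {{ClusterState}}; they only illustrate why a name listing should not pay the cost of resolving every collection.

```java
import java.util.*;
import java.util.function.Supplier;

// Illustrative model of the trade-off in SOLR-9323 (hypothetical classes,
// not Solr's real ClusterState): collection states are held lazily, so
// listing the names is cheap, while building the full name->state map
// forces every lazy state to be resolved.
class LazyClusterState {
    final Map<String, Supplier<String>> lazyStates = new LinkedHashMap<>();
    int resolutions = 0;  // counts expensive state fetches

    // Cheap: an unverified list of collection names straight from the keys.
    Set<String> getCollectionStates() {
        return Collections.unmodifiableSet(lazyStates.keySet());
    }

    // Expensive: resolves the state of every collection.
    Map<String, String> getCollectionsMap() {
        Map<String, String> out = new LinkedHashMap<>();
        for (Map.Entry<String, Supplier<String>> e : lazyStates.entrySet()) {
            resolutions++;
            out.put(e.getKey(), e.getValue().get());
        }
        return out;
    }
}
```

A caller that only needs the names never triggers a resolution, which is the point of exposing the cheaper method per request.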
[jira] [Updated] (SOLR-9323) Expose ClusterSate.getCollectionStates which returns unverified list of collection names
[ https://issues.apache.org/jira/browse/SOLR-9323?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Noble Paul updated SOLR-9323: - Summary: Expose ClusterSate.getCollectionStates which returns unverified list of collection names (was: Add a new method getCollectionNamesFast() to ClusterSate which returns unverified list of collection names) > Expose ClusterSate.getCollectionStates which returns unverified list of > collection names > - > > Key: SOLR-9323 > URL: https://issues.apache.org/jira/browse/SOLR-9323 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Noble Paul >Assignee: Noble Paul > > Currently the {{getCollections()}} method is deprecated and everyone is > forced to use {{getCollectionsMap()}} which could be extremely expensive > depending on the no:of collections. The {{getCollectionsMap()}} method is > very expensive and should never be invoked on each request -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-9323) Expose ClusterSate.getCollectionStates which returns unverified list of collection names
[ https://issues.apache.org/jira/browse/SOLR-9323?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Noble Paul updated SOLR-9323: - Attachment: SOLR-9323.patch > Expose ClusterSate.getCollectionStates which returns unverified list of > collection names > - > > Key: SOLR-9323 > URL: https://issues.apache.org/jira/browse/SOLR-9323 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Noble Paul >Assignee: Noble Paul > Attachments: SOLR-9323.patch > > > Currently the {{getCollections()}} method is deprecated and everyone is > forced to use {{getCollectionsMap()}} which could be extremely expensive > depending on the no:of collections. The {{getCollectionsMap()}} method is > very expensive and should never be invoked on each request -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Resolved] (SOLR-9323) Expose ClusterSate.getCollectionStates which returns unverified list of collection names
[ https://issues.apache.org/jira/browse/SOLR-9323?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Noble Paul resolved SOLR-9323. -- Resolution: Fixed Fix Version/s: 6.2 > Expose ClusterSate.getCollectionStates which returns unverified list of > collection names > - > > Key: SOLR-9323 > URL: https://issues.apache.org/jira/browse/SOLR-9323 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Noble Paul >Assignee: Noble Paul > Fix For: 6.2 > > Attachments: SOLR-9323.patch > > > Currently the {{getCollections()}} method is deprecated and everyone is > forced to use {{getCollectionsMap()}} which could be extremely expensive > depending on the no:of collections. The {{getCollectionsMap()}} method is > very expensive and should never be invoked on each request -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-9323) Expose ClusterSate.getCollectionStates which returns unverified list of collection names
[ https://issues.apache.org/jira/browse/SOLR-9323?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Noble Paul updated SOLR-9323: - Priority: Trivial (was: Major) > Expose ClusterSate.getCollectionStates which returns unverified list of > collection names > - > > Key: SOLR-9323 > URL: https://issues.apache.org/jira/browse/SOLR-9323 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Noble Paul >Assignee: Noble Paul >Priority: Trivial > Fix For: 6.2 > > Attachments: SOLR-9323.patch > > > Currently the {{getCollections()}} method is deprecated and everyone is > forced to use {{getCollectionsMap()}} which could be extremely expensive > depending on the no:of collections. The {{getCollectionsMap()}} method is > very expensive and should never be invoked on each request -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Resolved] (SOLR-3061) NPE exception when accessing Solr with faulty field config
[ https://issues.apache.org/jira/browse/SOLR-3061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shalin Shekhar Mangar resolved SOLR-3061. - Resolution: Invalid > NPE exception when accessing Solr with faulty field config > -- > > Key: SOLR-3061 > URL: https://issues.apache.org/jira/browse/SOLR-3061 > Project: Solr > Issue Type: Bug > Components: web gui >Affects Versions: 4.0-ALPHA >Reporter: Sami Siren >Priority: Minor > > If there's an mistake in fields type, for example, I see this ugly page when > I enter solr url in my browser: > {code} > java.lang.NullPointerException > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:141) > at > org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212) > at > org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399) > at > org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216) > {code} > Perhaps a message telling the user that there is something wrong with the > configuration and suggestion to see logs for more info would be more helpful. > This is most likely related to changes made in SOLR-3032. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Resolved] (SOLR-3986) index version and generation not changed in admin UI after delete by query on master
[ https://issues.apache.org/jira/browse/SOLR-3986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shalin Shekhar Mangar resolved SOLR-3986. - Resolution: Cannot Reproduce > index version and generation not changed in admin UI after delete by query on > master > > > Key: SOLR-3986 > URL: https://issues.apache.org/jira/browse/SOLR-3986 > Project: Solr > Issue Type: Bug > Components: web gui >Affects Versions: 4.0 >Reporter: Bill Au >Priority: Minor > > Here are the steps to reproduce this: > - follow steps in Solr 4.0 tutorial to set up a master and a slave to use > Java/HTTP replication > - index example documents on master: > java -jar post.jar *.xml > - make a note of the index version and generation the on both the replication > section of the summary screen of core collection1 and the replication screen > on both the master and slave > - run a delete by query on the master > java -Ddata=args -jar post.jar "name:DDR" > - on master reload the summary screen for core collection1. The Num Docs > field decreased but the index version and generation are unchanged in the > replication section. The index version and generation are also unchanged in > the replication screen. > - on the slave, wait for replication to kick in or trigger it manually. On > the summary screen for core collection1, the Num DOcs field decreased to > match what's on the master. The index version and generation of the master > remain unchanged but the index version and generation of the slave both > changed. The same goes for the index version and generation of the master > and slave on the replication screen. > The replication handler on the master does report changed index version and > generation: > localhost:8983/solr/collection1/replication?command=indexversion > It is only the admin UI that reporting the older index version and generation > on both the core summary screen and replication screen. > This only happens with delete by query. 
There is no problem with delete with > id or add. > Both the index version and generation do get updated on subsequent delete by > query but both remain one cycle behind on the master. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Resolved] (SOLR-4157) Add more conventional search functionality to the Admin UI
[ https://issues.apache.org/jira/browse/SOLR-4157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shalin Shekhar Mangar resolved SOLR-4157. - Resolution: Implemented > Add more conventional search functionality to the Admin UI > -- > > Key: SOLR-4157 > URL: https://issues.apache.org/jira/browse/SOLR-4157 > Project: Solr > Issue Type: Improvement > Components: web gui >Reporter: Upayavira >Priority: Minor > Attachments: SOLR-4157.patch > > > The admin UI has a 'query' pane which allows searching the index. However, > this is currently an 'expert' level feature, as you must specify exact > request parameters and interpret output XML or JSON. > I suggest we add simple versions of each. A simple query pane would give a > more conventional search interface for running queries. A simple results pane > would give HTML formatted results with features to nicely display > hightlighting, explains, etc. > To give an idea of what this might look like, I've attached a rudimentary > patch that gives an HTML option for wt which formats the query results as > (somewhat minimal) HTML. > The challenge will be in producing a search interface that is schema > agnostic, as to be really useful, it should work with any index, and not just > with the fields in the default schema (perhaps Erik Hatcher is right, this > should be backed by the velocityResponseWriter). > Thoughts welcome. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-9323) Expose ClusterSate.getCollectionStates which returns unverified list of collection names
[ https://issues.apache.org/jira/browse/SOLR-9323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15389158#comment-15389158 ] ASF subversion and git services commented on SOLR-9323: --- Commit d9bbd70ff12bff4c3b131dd6d5352f1027427d03 in lucene-solr's branch refs/heads/branch_6x from [~noble.paul] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=d9bbd70 ] SOLR-9323: Expose ClusterSate.getCollectionStates which returns unverified list of collection names > Expose ClusterSate.getCollectionStates which returns unverified list of > collection names > - > > Key: SOLR-9323 > URL: https://issues.apache.org/jira/browse/SOLR-9323 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Noble Paul >Assignee: Noble Paul > Attachments: SOLR-9323.patch > > > Currently the {{getCollections()}} method is deprecated and everyone is > forced to use {{getCollectionsMap()}} which could be extremely expensive > depending on the no:of collections. The {{getCollectionsMap()}} method is > very expensive and should never be invoked on each request -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-master-Linux (64bit/jdk1.8.0_92) - Build # 17328 - Still Unstable!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/17328/ Java: 64bit/jdk1.8.0_92 -XX:-UseCompressedOops -XX:+UseSerialGC 2 tests failed. FAILED: org.apache.solr.handler.TestReqParamsAPI.test Error Message: Could not get expected value 'CY val' for path 'response/params/y/c' full output: { "responseHeader":{ "status":0, "QTime":0}, "response":{ "znodeVersion":0, "params":{"x":{ "a":"A val", "b":"B val", "":{"v":0}, from server: https://127.0.0.1:45099/collection1 Stack Trace: java.lang.AssertionError: Could not get expected value 'CY val' for path 'response/params/y/c' full output: { "responseHeader":{ "status":0, "QTime":0}, "response":{ "znodeVersion":0, "params":{"x":{ "a":"A val", "b":"B val", "":{"v":0}, from server: https://127.0.0.1:45099/collection1 at __randomizedtesting.SeedInfo.seed([692A8FF5AC8F8C34:E17EB02F0273E1CC]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.apache.solr.core.TestSolrConfigHandler.testForResponseElement(TestSolrConfigHandler.java:481) at org.apache.solr.handler.TestReqParamsAPI.testReqParams(TestReqParamsAPI.java:159) at org.apache.solr.handler.TestReqParamsAPI.test(TestReqParamsAPI.java:61) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921) at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at
[jira] [Commented] (SOLR-7280) Load cores in sorted order and tweak coreLoadThread counts to improve cluster stability on restarts
[ https://issues.apache.org/jira/browse/SOLR-7280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15388992#comment-15388992 ] Noble Paul commented on SOLR-7280: -- It should work even if there is only one thread and many replicas. The idea is to sort the cores first in such a way that you prioritize replicas others are waiting for and deprioritize cores that depend on other 'down' nodes. So this node should not time out. > Load cores in sorted order and tweak coreLoadThread counts to improve cluster > stability on restarts > --- > > Key: SOLR-7280 > URL: https://issues.apache.org/jira/browse/SOLR-7280 > Project: Solr > Issue Type: Bug > Components: SolrCloud >Reporter: Shalin Shekhar Mangar >Assignee: Noble Paul > Fix For: 6.2, 5.5.3 > > Attachments: SOLR-7280-5x.patch, SOLR-7280-5x.patch, > SOLR-7280-5x.patch, SOLR-7280-test.patch, SOLR-7280.patch, SOLR-7280.patch > > > In SOLR-7191, Damien mentioned that by loading Solr cores in a sorted order > and tweaking some of the coreLoadThread counts, he was able to improve the > stability of a cluster with thousands of collections. We should explore some > of these changes and fold them into Solr.
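The ordering idea in the comment above could be sketched as a plain Java comparator. This is only an illustration of the sort priority, not the actual SOLR-7280 patch; the {{CoreInfo}} class and its fields are hypothetical stand-ins for whatever cluster-state information the real code would consult:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Hypothetical stand-in for a core's cluster context; not Solr's real API.
class CoreInfo {
    final String name;
    final boolean othersWaitingForIt;  // peers are blocked until this replica comes up
    final boolean dependsOnDownNodes;  // this core cannot progress until a down node returns

    CoreInfo(String name, boolean othersWaitingForIt, boolean dependsOnDownNodes) {
        this.name = name;
        this.othersWaitingForIt = othersWaitingForIt;
        this.dependsOnDownNodes = dependsOnDownNodes;
    }
}

public class CoreLoadOrder {
    // Load first the cores others are waiting for; load last the cores
    // that depend on nodes which are currently down.
    static final Comparator<CoreInfo> LOAD_ORDER =
        Comparator.<CoreInfo>comparingInt(c -> c.othersWaitingForIt ? 0 : 1)
                  .thenComparingInt(c -> c.dependsOnDownNodes ? 1 : 0)
                  .thenComparing(c -> c.name);  // stable tie-break by name

    public static void main(String[] args) {
        List<CoreInfo> cores = new ArrayList<>(List.of(
            new CoreInfo("c_blocked", false, true),
            new CoreInfo("c_normal", false, false),
            new CoreInfo("c_needed", true, false)));
        cores.sort(LOAD_ORDER);
        // c_needed loads first and c_blocked last, so even a single
        // coreLoadThread is never stuck behind a core that cannot start.
        System.out.println(cores.get(0).name + " " + cores.get(2).name);
    }
}
```

With this ordering, a single load thread works through the cores others are waiting on before it ever touches a core that would block on a down node, which is why the timeout should not occur.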
[jira] [Commented] (SOLR-2032) Map-viewer demo of SolrSpatial test data
[ https://issues.apache.org/jira/browse/SOLR-2032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15389011#comment-15389011 ] Alexandre Rafalovitch commented on SOLR-2032: - Can we close this? We have Google Maps in the /browse example. If we want something bigger, it would use the new field types and APIs and would have no real connection to this issue. > Map-viewer demo of SolrSpatial test data > > > Key: SOLR-2032 > URL: https://issues.apache.org/jira/browse/SOLR-2032 > Project: Solr > Issue Type: Improvement > Components: web gui >Reporter: Lance Norskog > Attachments: SOLR-2032.patch > > > Simple demo that shows off the location data in the Solr example electronics > store. Uses the OpenLayers graphic/mapping JavaScript library. Search for > anything in the store and click on the red diamond. > This code is flaky on IE; sometimes it works, sometimes it does not. > There is no spot in the code base for UI demos. > The OpenLayers license is not known to be compatible with Apache at this > point, and contributing one project's code into another is rather dubious.
[jira] [Commented] (SOLR-9323) Add a new method getCollectionNamesFast() to ClusterState which returns unverified list of collection names
[ https://issues.apache.org/jira/browse/SOLR-9323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15388996#comment-15388996 ] Noble Paul commented on SOLR-9323: -- > Add a new method getCollectionNamesFast() to ClusterState which returns > unverified list of collection names > -- > > Key: SOLR-9323 > URL: https://issues.apache.org/jira/browse/SOLR-9323 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Noble Paul >Assignee: Noble Paul > > Currently the {{getCollections()}} method is deprecated and everyone is > forced to use {{getCollectionsMap()}}, which could be extremely expensive > depending on the number of collections. The {{getCollectionsMap()}} method is > very expensive and should never be invoked on each request.
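The cost difference the issue describes could be sketched as follows. This is a simplified illustration of the proposed split, not Solr's actual {{ClusterState}} code: the class, the fetch counter, and the placeholder values all stand in for the real per-collection state reads:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch of the proposed API split; not the real ClusterState.
public class ClusterStateSketch {
    private final Set<String> names;                        // names as listed, unverified
    final AtomicInteger stateFetches = new AtomicInteger(); // stands in for per-collection reads

    public ClusterStateSketch(Set<String> names) {
        this.names = names;
    }

    // Cheap: return the unverified name list as-is; no per-collection round trips.
    public Set<String> getCollectionNamesFast() {
        return names;
    }

    // Expensive: resolving every collection's full state costs one fetch per entry,
    // which is what makes calling this on each request prohibitive.
    public Map<String, Object> getCollectionsMap() {
        Map<String, Object> out = new LinkedHashMap<>();
        for (String n : names) {
            stateFetches.incrementAndGet(); // e.g. reading that collection's state
            out.put(n, new Object());       // placeholder for the resolved state
        }
        return out;
    }

    public static void main(String[] args) {
        ClusterStateSketch cs = new ClusterStateSketch(Set.of("a", "b", "c"));
        cs.getCollectionNamesFast();                                  // triggers no fetches
        System.out.println("after fast: " + cs.stateFetches.get());  // prints 0
        cs.getCollectionsMap();                                       // one fetch per collection
        System.out.println("after map:  " + cs.stateFetches.get());  // prints 3
    }
}
```

A caller that only needs to know whether a collection name exists pays nothing per collection with the fast variant, while the map variant scales linearly with the number of collections, which is the motivation for the new method.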