Re: Status of Solr Ref Guide for 6.2
There's been no update from infra on this yet, so we're still waiting for them to help us out here before we can cut an RC.

I will be on PTO most of next week and won't have any internet connectivity, so Cassandra said she would take over the role of release manager.

On Wed, Aug 31, 2016 at 8:38 PM, Cassandra Targett wrote:
> I filed https://issues.apache.org/jira/browse/INFRA-12538 for this.
> Hopefully they'll be able to take a look soon.
>
> On Wed, Aug 31, 2016 at 1:17 AM, Varun Thacker wrote:
> > Hi Cassandra,
> >
> > I checked right now and the logo still seems to be missing.
> >
> > On Wed, Aug 31, 2016 at 1:09 AM, Cassandra Targett <casstarg...@gmail.com> wrote:
> >> OK, great then.
> >>
> >> Another issue, however, is the PDF seems to be missing the logo on the
> >> title page. This wasn't a problem for the last release (I checked it
> >> again to be sure).
> >>
> >> I checked the location of the logo image file, and it is still
> >> correct. I also checked the intermediate HTML and it contains the
> >> correct reference to the file location. However, the HTML refers to
> >> the online image, so when I view that I see the logo because it's
> >> accessible to me. Somehow it's not getting downloaded to be included
> >> with the PDF when that is generated.
> >>
> >> The PDF seemed to take a very long time to generate (I did it 2x), so
> >> it's possible there is some network issue causing a problem. I
> >> recommend waiting a few hours, or until tomorrow morning, and checking
> >> again. If it's still a problem, we should raise an issue with INFRA
> >> for investigation.
> >>
> >> On Tue, Aug 30, 2016 at 8:17 AM, Varun Thacker wrote:
> >> > Hi Cassandra,
> >> >
> >> > Uwe's already made the changes.
> >> >
> >> > Regarding the RC:
> >> > I had a doubt when I was documenting SOLR-9038, so I'll wait to hear
> >> > back on the Jira. Hopefully I can wrap that up soon and then cut an RC today.
> >> >
> >> > On Tue, Aug 30, 2016 at 6:44 PM, Cassandra Targett wrote:
> >> >> Hey Varun,
> >> >>
> >> >> You may want to email Uwe directly without cc'ing the list. He's said
> >> >> in the past that he missed the requests because of his email
> >> >> filtering, and it's better to email him directly so it doesn't get
> >> >> lost.
> >> >>
> >> >> On Tue, Aug 30, 2016 at 3:45 AM, Varun Thacker wrote:
> >> >> > Hi Uwe,
> >> >> >
> >> >> > Could you please update the CWIKI Javadoc macro links to point to the
> >> >> > 6_2_0 paths?
> >> >> >
> >> >> > I'll start the release process for the ref guide soon afterwards.
> >> >> >
> >> >> > On Tue, Aug 30, 2016 at 2:06 AM, Joel Bernstein <joels...@gmail.com> wrote:
> >> >> >> Hi Varun,
> >> >> >>
> >> >> >> I didn't get a chance yet to document the new streaming expressions.
> >> >> >> I'm on vacation this week. I'm planning on updating the docs early
> >> >> >> next week. If the docs release before then, people can read about
> >> >> >> the new streaming expressions online.
> >> >> >>
> >> >> >> On Aug 29, 2016 10:13 AM, "Varun Thacker" wrote:
> >> >> >>> Hi Everyone,
> >> >> >>>
> >> >> >>> I think the majority of the changes are in place. I will try to make
> >> >> >>> the necessary changes required for SOLR-9187/SOLR-9038/SOLR-9243
> >> >> >>> later today.
> >> >> >>>
> >> >> >>> I'm not sure where in
> >> >> >>> https://cwiki.apache.org/confluence/display/solr/Using+SolrJ we can
> >> >> >>> document SOLR-9090. It seems like we need to beef up the SolrJ page
> >> >> >>> in general?
> >> >> >>>
> >> >> >>> Joel, have you been able to add the new features in streaming
> >> >> >>> expressions to the ref guide? I can help review it.
> >> >> >>>
> >> >> >>> I will aim to create an RC tomorrow morning in IST hours.
> >> >> >>>
> >> >> >>> On Thu, Aug 25, 2016 at 12:30 PM, Varun Thacker <va...@vthacker.in> wrote:
> >> >>
> >> >> Hi Cassandra,
> >> >>
> >> >> I can volunteer to be the RM for the ref guide.
> >> >>
> >> >> We probably won't get to all the TODOs, but I think let's start working
> >> >> on it for the next few days.
> >> >>
> >> >> If it's fine, we can cut an RC on Monday 29th August and then have the
> >> >> ref guide released later in the week.
> >> >>
> >> >> On Thu, Aug 25, 2016 at 11:23 AM, Noble Paul wrote:
> >> >> >
> >> >> > I shall document my changes today itself.
> >> >> >
> >> >> > On Thu, Aug 25, 2016 at 3:39 AM, Joel Bernstein wrote:
> >> >> > > Hi Cassandra,
> >> >> > >
> >> >> > > I'm also behind on documentation for this release and on vacation
> >> >> > > next week. But I will attempt to make progress on
[jira] [Updated] (SOLR-5725) Efficient facets without counts for enum method
[ https://issues.apache.org/jira/browse/SOLR-5725?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Mikhail Khludnev updated SOLR-5725:
-----------------------------------
    Attachment: SOLR-5725.patch

> Efficient facets without counts for enum method
> -----------------------------------------------
>
>                 Key: SOLR-5725
>                 URL: https://issues.apache.org/jira/browse/SOLR-5725
>             Project: Solr
>          Issue Type: Improvement
>          Components: search
>            Reporter: Alexey Kozhemiakin
>            Assignee: Mikhail Khludnev
>             Fix For: master (7.0), 6.3
>
>         Attachments: SOLR-5725-5x.patch, SOLR-5725-master.patch, SOLR-5725.patch, SOLR-5725.patch, SOLR-5725.patch, SOLR-5725.patch, SOLR-5725.patch, SOLR-5725.patch, SOLR-5725.patch
>
>
> Short version:
> This improves performance for facet.method=enum when it's enough to know that a facet count is > 0, for example when you dynamically populate filters on a search form. The new method checks whether two bitsets intersect instead of counting the intersection size.
>
> Long version:
> We have a dataset containing hundreds of millions of records. We facet by dozens of fields with many facet excludes, and the fields have a relatively small number of unique values, around thousands.
> Before executing a search, users work with an "advanced search" form; our goal is to populate dozens of filters with values which are applicable alongside the other selected values. Basically this is a use case for facets with mincount=1, but without the need for actual counts.
> Our performance tests showed that facet.method=enum works much better than fc/fcs, probably due to a specific ratio of docset size to unique term count. For example, average query execution time with method fc=1500ms, fcs=2600ms, and with enum=280ms. Profiling indicated the majority of time for enum was spent on intersecting docsets.
> Here's a patch that introduces an extension to facet calculation for method=enum. Basically it uses docSetA.intersects(docSetB) instead of docSetA.intersectionSize(docSetB).
> As a result we were able to reduce our average query time from 280ms to 60ms.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org
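The core trick described above — returning as soon as any overlap is found, instead of counting the full intersection — can be sketched against plain `long[]` bitset words. This is an illustration only; Solr's actual `DocSet` implementations (and the names below) are not the patch's code:

```java
public class FacetIntersectSketch {

    // Counting the full intersection must visit every word,
    // even after we already know the count is > 0.
    static int intersectionSize(long[] a, long[] b) {
        int count = 0;
        for (int i = 0; i < a.length; i++) {
            count += Long.bitCount(a[i] & b[i]);
        }
        return count;
    }

    // Early-exit check: stop as soon as any word overlaps.
    // This is all a mincount=1 facet needs to decide whether to show a value.
    static boolean intersects(long[] a, long[] b) {
        for (int i = 0; i < a.length; i++) {
            if ((a[i] & b[i]) != 0) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        long[] docs = new long[1024];
        long[] term = new long[1024];
        docs[0] = 1L;
        term[0] = 1L; // overlap in the very first word
        // intersects() returns after inspecting one word;
        // intersectionSize() still scans all 1024 words.
        System.out.println(intersects(docs, term));
        System.out.println(intersectionSize(docs, term));
    }
}
```

With many candidate facet values per request, skipping the full count on each one is where the reported 280ms-to-60ms improvement plausibly comes from.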
[JENKINS] Lucene-Solr-5.5-Windows (32bit/jdk1.8.0_102) - Build # 108 - Unstable!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.5-Windows/108/
Java: 32bit/jdk1.8.0_102 -client -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.handler.TestSolrConfigHandlerCloud

Error Message:
2 threads leaked from SUITE scope at org.apache.solr.handler.TestSolrConfigHandlerCloud:
   1) Thread[id=47485, name=SolrConfigHandler-refreshconf, state=TIMED_WAITING, group=TGRP-TestSolrConfigHandlerCloud]
        at java.lang.Thread.sleep(Native Method)
        at org.apache.solr.cloud.ZkSolrResourceLoader.openResource(ZkSolrResourceLoader.java:101)
        at org.apache.solr.core.SolrResourceLoader.openSchema(SolrResourceLoader.java:355)
        at org.apache.solr.schema.IndexSchemaFactory.create(IndexSchemaFactory.java:48)
        at org.apache.solr.schema.IndexSchemaFactory.buildIndexSchema(IndexSchemaFactory.java:70)
        at org.apache.solr.core.ConfigSetService.createIndexSchema(ConfigSetService.java:108)
        at org.apache.solr.core.ConfigSetService.getConfig(ConfigSetService.java:79)
        at org.apache.solr.core.CoreContainer.reload(CoreContainer.java:920)
        at org.apache.solr.core.SolrCore$11.run(SolrCore.java:2623)
        at org.apache.solr.handler.SolrConfigHandler$Command$1.run(SolrConfigHandler.java:211)
   2) Thread[id=48518, name=Thread-7022, state=TIMED_WAITING, group=TGRP-TestSolrConfigHandlerCloud]
        at java.lang.Thread.sleep(Native Method)
        at org.apache.solr.cloud.ZkSolrResourceLoader.openResource(ZkSolrResourceLoader.java:101)
        at org.apache.solr.core.SolrResourceLoader.openSchema(SolrResourceLoader.java:355)
        at org.apache.solr.schema.IndexSchemaFactory.create(IndexSchemaFactory.java:48)
        at org.apache.solr.schema.IndexSchemaFactory.buildIndexSchema(IndexSchemaFactory.java:70)
        at org.apache.solr.core.ConfigSetService.createIndexSchema(ConfigSetService.java:108)
        at org.apache.solr.core.ConfigSetService.getConfig(ConfigSetService.java:79)
        at org.apache.solr.core.CoreContainer.reload(CoreContainer.java:920)
        at org.apache.solr.core.SolrCore$11.run(SolrCore.java:2623)
        at org.apache.solr.cloud.ZkController$5.run(ZkController.java:2480)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 2 threads leaked from SUITE scope at org.apache.solr.handler.TestSolrConfigHandlerCloud:
   1) Thread[id=47485, name=SolrConfigHandler-refreshconf, state=TIMED_WAITING, group=TGRP-TestSolrConfigHandlerCloud]
        at java.lang.Thread.sleep(Native Method)
        at org.apache.solr.cloud.ZkSolrResourceLoader.openResource(ZkSolrResourceLoader.java:101)
        at org.apache.solr.core.SolrResourceLoader.openSchema(SolrResourceLoader.java:355)
        at org.apache.solr.schema.IndexSchemaFactory.create(IndexSchemaFactory.java:48)
        at org.apache.solr.schema.IndexSchemaFactory.buildIndexSchema(IndexSchemaFactory.java:70)
        at org.apache.solr.core.ConfigSetService.createIndexSchema(ConfigSetService.java:108)
        at org.apache.solr.core.ConfigSetService.getConfig(ConfigSetService.java:79)
        at org.apache.solr.core.CoreContainer.reload(CoreContainer.java:920)
        at org.apache.solr.core.SolrCore$11.run(SolrCore.java:2623)
        at org.apache.solr.handler.SolrConfigHandler$Command$1.run(SolrConfigHandler.java:211)
   2) Thread[id=48518, name=Thread-7022, state=TIMED_WAITING, group=TGRP-TestSolrConfigHandlerCloud]
        at java.lang.Thread.sleep(Native Method)
        at org.apache.solr.cloud.ZkSolrResourceLoader.openResource(ZkSolrResourceLoader.java:101)
        at org.apache.solr.core.SolrResourceLoader.openSchema(SolrResourceLoader.java:355)
        at org.apache.solr.schema.IndexSchemaFactory.create(IndexSchemaFactory.java:48)
        at org.apache.solr.schema.IndexSchemaFactory.buildIndexSchema(IndexSchemaFactory.java:70)
        at org.apache.solr.core.ConfigSetService.createIndexSchema(ConfigSetService.java:108)
        at org.apache.solr.core.ConfigSetService.getConfig(ConfigSetService.java:79)
        at org.apache.solr.core.CoreContainer.reload(CoreContainer.java:920)
        at org.apache.solr.core.SolrCore$11.run(SolrCore.java:2623)
        at org.apache.solr.cloud.ZkController$5.run(ZkController.java:2480)
	at __randomizedtesting.SeedInfo.seed([B0490D26DBB83059]:0)

Build Log:
[...truncated 12232 lines...]
   [junit4] Suite: org.apache.solr.handler.TestSolrConfigHandlerCloud
   [junit4]   2> Creating dataDir: C:\Users\jenkins\workspace\Lucene-Solr-5.5-Windows\solr\build\solr-core\test\J1\temp\solr.handler.TestSolrConfigHandlerCloud_B0490D26DBB83059-001\init-core-data-001
   [junit4]   2> 2816601 INFO  (SUITE-TestSolrConfigHandlerCloud-seed#[B0490D26DBB83059]-worker) [] o.a.s.SolrTestCaseJ4 Randomized ssl (false) and clientAuth (true)
   [junit4]   2> 2816601 INFO  (SUITE-TestSolrConfigHandlerCloud-seed#[B0490D26DBB83059]-worker) [] o.a.s.BaseDistributedSe
[JENKINS] Lucene-Solr-5.5-Linux (32bit/jdk1.7.0_80) - Build # 366 - Still Unstable!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.5-Linux/366/
Java: 32bit/jdk1.7.0_80 -client -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.core.TestDynamicLoading.testDynamicLoading

Error Message:
Could not get expected value 'X val changed' for path 'x' full output: {
  "responseHeader":{
    "status":0,
    "QTime":0},
  "params":{"wt":"json"},
  "context":{
    "path":"/test1",
    "webapp":"/wqj",
    "httpMethod":"GET"},
  "class":"org.apache.solr.core.BlobStoreTestRequestHandler",
  "x":"X val"}

Stack Trace:
java.lang.AssertionError: Could not get expected value 'X val changed' for path 'x' full output: {
  "responseHeader":{
    "status":0,
    "QTime":0},
  "params":{"wt":"json"},
  "context":{
    "path":"/test1",
    "webapp":"/wqj",
    "httpMethod":"GET"},
  "class":"org.apache.solr.core.BlobStoreTestRequestHandler",
  "x":"X val"}
	at __randomizedtesting.SeedInfo.seed([50EB9EBCEB400688:88A6B3EB1C9DA328]:0)
	at org.junit.Assert.fail(Assert.java:93)
	at org.junit.Assert.assertTrue(Assert.java:43)
	at org.apache.solr.core.TestSolrConfigHandler.testForResponseElement(TestSolrConfigHandler.java:480)
	at org.apache.solr.core.TestDynamicLoading.testDynamicLoading(TestDynamicLoading.java:255)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
	at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:996)
	at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:971)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
	at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
	at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at org.apache.lucene.util.TestRuleAssertio
[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 3520 - Still Failing!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/3520/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

All tests passed

Build Log:
[...truncated 65348 lines...]
-ecj-javadoc-lint-src:
    [mkdir] Created dir: /var/folders/qg/h2dfw5s161s51l2bn79mrb7rgn/T/ecj1957019769
 [ecj-lint] Compiling 974 source files to /var/folders/qg/h2dfw5s161s51l2bn79mrb7rgn/T/ecj1957019769
 [ecj-lint] invalid Class-Path header in manifest of jar file: /Users/jenkins/.ivy2/cache/org.restlet.jee/org.restlet/jars/org.restlet-2.3.0.jar
 [ecj-lint] invalid Class-Path header in manifest of jar file: /Users/jenkins/.ivy2/cache/org.restlet.jee/org.restlet.ext.servlet/jars/org.restlet.ext.servlet-2.3.0.jar
 [ecj-lint] ----------
 [ecj-lint] 1. WARNING in /Users/jenkins/workspace/Lucene-Solr-master-MacOSX/solr/core/src/java/org/apache/solr/cloud/Assign.java (at line 101)
 [ecj-lint] 	Collections.sort(shardIdNames, (o1, o2) -> {
 [ecj-lint] 	^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape analysis
 [ecj-lint] ----------
 [ecj-lint] 2. WARNING in /Users/jenkins/workspace/Lucene-Solr-master-MacOSX/solr/core/src/java/org/apache/solr/cloud/Assign.java (at line 101)
 [ecj-lint] 	Collections.sort(shardIdNames, (o1, o2) -> {
 [ecj-lint] 	^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape analysis
 [ecj-lint] ----------
 [ecj-lint] 3. WARNING in /Users/jenkins/workspace/Lucene-Solr-master-MacOSX/solr/core/src/java/org/apache/solr/cloud/Assign.java (at line 101)
 [ecj-lint] 	Collections.sort(shardIdNames, (o1, o2) -> {
 [ecj-lint] 	^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape analysis
 [ecj-lint] ----------
 [ecj-lint] ----------
 [ecj-lint] 4. ERROR in /Users/jenkins/workspace/Lucene-Solr-master-MacOSX/solr/core/src/java/org/apache/solr/cloud/DeleteNodeCmd.java (at line 30)
 [ecj-lint] 	import org.apache.solr.common.params.CommonAdminParams;
 [ecj-lint] 	^^^
 [ecj-lint] The import org.apache.solr.common.params.CommonAdminParams is never used
 [ecj-lint] ----------
 [ecj-lint] ----------
 [ecj-lint] 5. ERROR in /Users/jenkins/workspace/Lucene-Solr-master-MacOSX/solr/core/src/java/org/apache/solr/cloud/ReplaceNodeCmd.java (at line 37)
 [ecj-lint] 	import org.apache.solr.common.params.CommonAdminParams;
 [ecj-lint] 	^^^
 [ecj-lint] The import org.apache.solr.common.params.CommonAdminParams is never used
 [ecj-lint] ----------
 [ecj-lint] 6. ERROR in /Users/jenkins/workspace/Lucene-Solr-master-MacOSX/solr/core/src/java/org/apache/solr/cloud/ReplaceNodeCmd.java (at line 40)
 [ecj-lint] 	import org.apache.solr.common.util.StrUtils;
 [ecj-lint] The import org.apache.solr.common.util.StrUtils is never used
 [ecj-lint] ----------
 [ecj-lint] 7. ERROR in /Users/jenkins/workspace/Lucene-Solr-master-MacOSX/solr/core/src/java/org/apache/solr/cloud/ReplaceNodeCmd.java (at line 48)
 [ecj-lint] 	import static org.apache.solr.common.util.StrUtils.formatString;
 [ecj-lint] 	^
 [ecj-lint] The import org.apache.solr.common.util.StrUtils.formatString is never used
 [ecj-lint] ----------
 [ecj-lint] ----------
 [ecj-lint] 8. WARNING in /Users/jenkins/workspace/Lucene-Solr-master-MacOSX/solr/core/src/java/org/apache/solr/cloud/rule/ReplicaAssigner.java (at line 213)
 [ecj-lint] 	Collections.sort(sortedLiveNodes, (n1, n2) -> {
 [ecj-lint] 	^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape analysis
 [ecj-lint] ----------
 [ecj-lint] 9. WARNING in /Users/jenkins/workspace/Lucene-Solr-master-MacOSX/solr/core/src/java/org/apache/solr/cloud/rule/ReplicaAssigner.java (at line 213)
 [ecj-lint] 	Collections.sort(sortedLiveNodes, (n1, n2) -> {
 [ecj-lint] 	^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape analysis
 [ecj-lint] ----------
 [ecj-lint] 10. WARNING in /Users/jenkins/workspace/Lucene-Solr-master-MacOSX/solr/core/src/java/org/apache/solr/cloud/rule/ReplicaAssigner.java (at line 213)
 [ecj-lint] 	Collections.sort(sortedLiveNodes, (n1, n2) -> {
 [ecj-lint] 	^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape analysis
 [ecj-lint] ----------
 [ecj-lint] ----------
 [ecj-lint] 11. WARNING in /Users/jenkins/workspace/Lucene-Solr-master-MacOSX/solr/core/src/java/org/apache/solr/core/HdfsDirectoryFactory.java (at line 227)
 [ecj-lint] 	dir = new BlockDirectory(path, hdfsDir, cache, null, blockCacheRead
[JENKINS] Lucene-Solr-Tests-6.x - Build # 442 - Still Failing
Build: https://builds.apache.org/job/Lucene-Solr-Tests-6.x/442/

2 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.TestRequestForwarding

Error Message:
1 thread leaked from SUITE scope at org.apache.solr.cloud.TestRequestForwarding:
   1) Thread[id=211, name=OverseerHdfsCoreFailoverThread-96520513629257737-127.0.0.1:53296_solr-n_02, state=TIMED_WAITING, group=Overseer Hdfs SolrCore Failover Thread.]
        at java.lang.Thread.sleep(Native Method)
        at org.apache.solr.cloud.OverseerAutoReplicaFailoverThread.run(OverseerAutoReplicaFailoverThread.java:137)
        at java.lang.Thread.run(Thread.java:745)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE scope at org.apache.solr.cloud.TestRequestForwarding:
   1) Thread[id=211, name=OverseerHdfsCoreFailoverThread-96520513629257737-127.0.0.1:53296_solr-n_02, state=TIMED_WAITING, group=Overseer Hdfs SolrCore Failover Thread.]
        at java.lang.Thread.sleep(Native Method)
        at org.apache.solr.cloud.OverseerAutoReplicaFailoverThread.run(OverseerAutoReplicaFailoverThread.java:137)
        at java.lang.Thread.run(Thread.java:745)
	at __randomizedtesting.SeedInfo.seed([2FC5452D3A987335]:0)

FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.TestRequestForwarding

Error Message:
There are still zombie threads that couldn't be terminated:
   1) Thread[id=211, name=OverseerHdfsCoreFailoverThread-96520513629257737-127.0.0.1:53296_solr-n_02, state=RUNNABLE, group=Overseer Hdfs SolrCore Failover Thread.]
        at java.lang.Thread.sleep(Native Method)
        at org.apache.solr.cloud.OverseerAutoReplicaFailoverThread.run(OverseerAutoReplicaFailoverThread.java:137)
        at java.lang.Thread.run(Thread.java:745)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: There are still zombie threads that couldn't be terminated:
   1) Thread[id=211, name=OverseerHdfsCoreFailoverThread-96520513629257737-127.0.0.1:53296_solr-n_02, state=RUNNABLE, group=Overseer Hdfs SolrCore Failover Thread.]
        at java.lang.Thread.sleep(Native Method)
        at org.apache.solr.cloud.OverseerAutoReplicaFailoverThread.run(OverseerAutoReplicaFailoverThread.java:137)
        at java.lang.Thread.run(Thread.java:745)
	at __randomizedtesting.SeedInfo.seed([2FC5452D3A987335]:0)

Build Log:
[...truncated 10671 lines...]
   [junit4] Suite: org.apache.solr.cloud.TestRequestForwarding
   [junit4]   2> Creating dataDir: /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-6.x/solr/build/solr-core/test/J2/temp/solr.cloud.TestRequestForwarding_2FC5452D3A987335-001/init-core-data-001
   [junit4]   2> 49235 INFO  (SUITE-TestRequestForwarding-seed#[2FC5452D3A987335]-worker) [] o.a.s.SolrTestCaseJ4 Randomized ssl (false) and clientAuth (false) via: @org.apache.solr.SolrTestCaseJ4$SuppressSSL(bugUrl=None)
   [junit4]   2> 49237 INFO  (TEST-TestRequestForwarding.testMultiCollectionQuery-seed#[2FC5452D3A987335]) [ ] o.a.s.SolrTestCaseJ4 ###Starting testMultiCollectionQuery
   [junit4]   2> 49246 INFO  (TEST-TestRequestForwarding.testMultiCollectionQuery-seed#[2FC5452D3A987335]) [ ] o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 49249 INFO  (Thread-19) [] o.a.s.c.ZkTestServer client port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 49249 INFO  (Thread-19) [] o.a.s.c.ZkTestServer Starting server
   [junit4]   2> 49348 INFO  (TEST-TestRequestForwarding.testMultiCollectionQuery-seed#[2FC5452D3A987335]) [ ] o.a.s.c.ZkTestServer start zk server on port:43299
   [junit4]   2> 49350 INFO  (TEST-TestRequestForwarding.testMultiCollectionQuery-seed#[2FC5452D3A987335]) [ ] o.a.s.c.c.SolrZkClient Using default ZkCredentialsProvider
   [junit4]   2> 49399 INFO  (TEST-TestRequestForwarding.testMultiCollectionQuery-seed#[2FC5452D3A987335]) [ ] o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
   [junit4]   2> 49444 INFO  (zkCallback-10-thread-1) [] o.a.s.c.c.ConnectionManager Watcher org.apache.solr.common.cloud.ConnectionManager@6630dfc5 name:ZooKeeperConnection Watcher:127.0.0.1:43299 got event WatchedEvent state:SyncConnected type:None path:null path:null type:None
   [junit4]   2> 49444 INFO  (TEST-TestRequestForwarding.testMultiCollectionQuery-seed#[2FC5452D3A987335]) [ ] o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper
   [junit4]   2> 49445 INFO  (TEST-TestRequestForwarding.testMultiCollectionQuery-seed#[2FC5452D3A987335]) [ ] o.a.s.c.c.SolrZkClient Using default ZkACLProvider
   [junit4]   2> 49445 INFO  (TEST-TestRequestForwarding.testMultiCollectionQuery-seed#[2FC5452D3A987335]) [ ] o.a.s.c.c.SolrZkClient makePath: /solr/solr.xml
   [junit4]   2> 49481 WARN  (NIOServerCxn.Factory:0.0.0.0/0.0.0.0:0) [] o.a.z.s.NIOServerCnxn caught end of stream exception
   [junit4]   2> EndOfStreamException: Unable to read additional data from client ses
[jira] [Commented] (SOLR-9319) DELETEREPLICA should be able to accept just count and remove replicas intelligenty
[ https://issues.apache.org/jira/browse/SOLR-9319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15457432#comment-15457432 ]

ASF subversion and git services commented on SOLR-9319:
--------------------------------------------------------

Commit 4b8f574418770f6872b7d3cbfca6bc028a910426 in lucene-solr's branch refs/heads/branch_6x from [~noble.paul]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=4b8f574 ]

SOLR-9319: DELETEREPLICA can accept a 'count' and remove appropriate replicas

> DELETEREPLICA should be able to accept just count and remove replicas
> intelligenty
> ----------------------------------------------------------------------
>
>                 Key: SOLR-9319
>                 URL: https://issues.apache.org/jira/browse/SOLR-9319
>             Project: Solr
>          Issue Type: Sub-task
>          Components: SolrCloud
>            Reporter: Noble Paul
>             Fix For: 6.1
>
>         Attachments: DeleteReplicaPatch.jpg, Delete_Replica_count_1.jpg, Delete_Replica_invalid.jpg, Delete_Replica_with_only_1replica.jpg, Delete_replica_count2.jpg, Delte_replica_after.jpg, Delte_replica_before.jpg, Old_deletereplica_api_works.jpg, SOLR-9310.patch, SOLR-9319.patch, SOLR-9319.patch, SOLR-9319.patch, SOLR-9319.patch, Screen Shot 2016-08-26 at 12.59.16 PM.png
>
[jira] [Commented] (SOLR-9319) DELETEREPLICA should be able to accept just count and remove replicas intelligenty
[ https://issues.apache.org/jira/browse/SOLR-9319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15457429#comment-15457429 ]

ASF subversion and git services commented on SOLR-9319:
--------------------------------------------------------

Commit e203c9af95461216d9ff39a108c86c5ce4308f5f in lucene-solr's branch refs/heads/master from [~noble.paul]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=e203c9a ]

SOLR-9319: DELETEREPLICA can accept a 'count' and remove appropriate replicas

> DELETEREPLICA should be able to accept just count and remove replicas
> intelligenty
> ----------------------------------------------------------------------
>
>                 Key: SOLR-9319
>                 URL: https://issues.apache.org/jira/browse/SOLR-9319
>             Project: Solr
>          Issue Type: Sub-task
>          Components: SolrCloud
>            Reporter: Noble Paul
>             Fix For: 6.1
>
>         Attachments: DeleteReplicaPatch.jpg, Delete_Replica_count_1.jpg, Delete_Replica_invalid.jpg, Delete_Replica_with_only_1replica.jpg, Delete_replica_count2.jpg, Delte_replica_after.jpg, Delte_replica_before.jpg, Old_deletereplica_api_works.jpg, SOLR-9310.patch, SOLR-9319.patch, SOLR-9319.patch, SOLR-9319.patch, SOLR-9319.patch, Screen Shot 2016-08-26 at 12.59.16 PM.png
>
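Based on the commit message above, the new option would be exercised through the Collections API roughly as follows. The host, port, collection, and shard names here are made up for illustration, and the exact parameter set should be confirmed against the ref guide once it documents SOLR-9319:

```shell
# Ask Solr to remove two replicas from shard1 of the "test" collection,
# letting Solr choose which replicas to drop, instead of naming a
# specific replica as the older form of DELETEREPLICA required.
curl 'http://localhost:8983/solr/admin/collections?action=DELETEREPLICA&collection=test&shard=shard1&count=2'
```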
[jira] [Commented] (SOLR-9389) HDFS Transaction logs stay open for writes which leaks Xceivers
[ https://issues.apache.org/jira/browse/SOLR-9389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15457410#comment-15457410 ]

David Smiley commented on SOLR-9389:
------------------------------------

100 shards for any one collection isn't massive (although I do think it's high)... I mean the total number of shards you run per box. I can see now you keep your shard size low, which makes it more feasible; and no doubt you have tons of RAM. Most people go for bigger shards and a smaller number of them, rather than small, numerous shards. A factor enabling you to do this is that your application allows for the very effective use of composite key doc routing. Nonetheless I'm sure there's a high Java heap overhead per shard at these numbers, and it'd be nice to bring it down from the stratosphere :-)

> HDFS Transaction logs stay open for writes which leaks Xceivers
> ---------------------------------------------------------------
>
>                 Key: SOLR-9389
>                 URL: https://issues.apache.org/jira/browse/SOLR-9389
>             Project: Solr
>          Issue Type: Bug
>      Security Level: Public (Default Security Level. Issues are Public)
>          Components: Hadoop Integration, hdfs
>    Affects Versions: 6.1, master (7.0)
>            Reporter: Tim Owen
>            Assignee: Mark Miller
>             Fix For: master (7.0), 6.3
>
>         Attachments: SOLR-9389.patch
>
>
> The HdfsTransactionLog implementation keeps a Hadoop FSDataOutputStream open for its whole lifetime, which consumes two threads on the HDFS data node server (dataXceiver and packetresponder) even once the Solr tlog has finished being written to.
> This means that for a cluster with many indexes on HDFS, the number of Xceivers can keep growing and eventually hit the limit of 4096 on the data nodes. It's especially likely for indexes that have low write rates, because Solr keeps enough tlogs around to contain 100 documents (up to a limit of 10 tlogs). There's also the issue that attempting to write to a finished tlog would be a major bug, so closing it for writes helps catch that.
> Our cluster during testing had 100+ collections with 100 shards each, spread across 8 boxes (each running 4 Solr nodes and 1 HDFS data node), and with 3x replication for the tlog files this meant we hit the Xceiver limit fairly easily and had to use the attached patch to ensure tlogs were closed for writes once finished.
> The patch introduces an extra lifecycle state for the tlog, so it can be closed for writes and free up the HDFS resources while still being available for reading. I've tried to make it as unobtrusive as I could, but there's probably a better way. I have not changed the behaviour of the local-disk tlog implementation, because it only consumes a file descriptor regardless of read or write.
> NB: We have decided not to use Solr-on-HDFS now; we're using local disk (for various reasons). So I don't have an HDFS cluster to do further testing on this, I'm just contributing the patch which worked for us.
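The "extra lifecycle state" idea from the description can be sketched as follows. This is a simplified illustration, not the actual HdfsTransactionLog code; the StringBuilder stands in for the HDFS output stream, and all names here are invented:

```java
public class SketchTransactionLog {

    // Three-state lifecycle: writes finish before the tlog as a whole does.
    enum State { OPEN_FOR_WRITES, CLOSED_FOR_WRITES, CLOSED }

    private State state = State.OPEN_FOR_WRITES;
    private final StringBuilder data = new StringBuilder(); // stand-in for the HDFS stream

    public void write(String record) {
        if (state != State.OPEN_FOR_WRITES) {
            // Writing to a finished tlog would be a major bug; fail fast,
            // exactly the kind of check the description says this enables.
            throw new IllegalStateException("tlog is closed for writes");
        }
        data.append(record).append('\n');
    }

    // Called once the tlog is finished: in the real patch this is where the
    // write stream would be closed, releasing the data node's dataXceiver
    // and packetresponder threads, while reads keep working.
    public void closeForWrites() {
        if (state == State.OPEN_FOR_WRITES) {
            state = State.CLOSED_FOR_WRITES;
        }
    }

    public String readAll() {
        if (state == State.CLOSED) {
            throw new IllegalStateException("tlog is fully closed");
        }
        return data.toString();
    }

    public void close() {
        state = State.CLOSED;
    }
}
```

The point of the middle state is that each finished-but-retained tlog (Solr keeps up to 10 of them per core) costs no server-side write resources, which is what kept the cluster described above under the 4096 Xceiver ceiling.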
[JENKINS-EA] Lucene-Solr-master-Linux (32bit/jdk-9-ea+132) - Build # 17743 - Still unstable!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/17743/
Java: 32bit/jdk-9-ea+132 -server -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.security.BasicAuthIntegrationTest.testBasics

Error Message:
IOException occured when talking to server at: http://127.0.0.1:39382/solr

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: IOException occured when talking to server at: http://127.0.0.1:39382/solr
	at __randomizedtesting.SeedInfo.seed([6F2EF337BC7E7C8A:52F65D1B849022FA]:0)
	at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:622)
	at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:261)
	at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:250)
	at org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:415)
	at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:367)
	at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1280)
	at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1050)
	at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:992)
	at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219)
	at org.apache.solr.security.BasicAuthIntegrationTest.doExtraTests(BasicAuthIntegrationTest.java:116)
	at org.apache.solr.cloud.TestMiniSolrCloudClusterBase.testCollectionCreateSearchDelete(TestMiniSolrCloudClusterBase.java:196)
	at org.apache.solr.cloud.TestMiniSolrCloudClusterBase.testBasics(TestMiniSolrCloudClusterBase.java:79)
	at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(java.base@9-ea/Native Method)
	at jdk.internal.reflect.NativeMethodAccessorImpl.invoke(java.base@9-ea/NativeMethodAccessorImpl.java:62)
	at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(java.base@9-ea/DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(java.base@9-ea/Method.java:535)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
	at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
	at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(
[JENKINS] Lucene-Solr-6.x-Windows (64bit/jdk1.8.0_102) - Build # 429 - Still Failing!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Windows/429/ Java: 64bit/jdk1.8.0_102 -XX:+UseCompressedOops -XX:+UseG1GC All tests passed Build Log: [...truncated 65580 lines...] -ecj-javadoc-lint-src: [mkdir] Created dir: C:\Users\jenkins\AppData\Local\Temp\ecj681434376 [ecj-lint] Compiling 973 source files to C:\Users\jenkins\AppData\Local\Temp\ecj681434376 [ecj-lint] invalid Class-Path header in manifest of jar file: C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\core\lib\org.restlet-2.3.0.jar [ecj-lint] invalid Class-Path header in manifest of jar file: C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\core\lib\org.restlet.ext.servlet-2.3.0.jar [ecj-lint] -- [ecj-lint] 1. WARNING in C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\core\src\java\org\apache\solr\cloud\Assign.java (at line 101) [ecj-lint] Collections.sort(shardIdNames, (o1, o2) -> { [ecj-lint]^^^ [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape analysis [ecj-lint] -- [ecj-lint] 2. WARNING in C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\core\src\java\org\apache\solr\cloud\Assign.java (at line 101) [ecj-lint] Collections.sort(shardIdNames, (o1, o2) -> { [ecj-lint]^^^ [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape analysis [ecj-lint] -- [ecj-lint] 3. WARNING in C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\core\src\java\org\apache\solr\cloud\Assign.java (at line 101) [ecj-lint] Collections.sort(shardIdNames, (o1, o2) -> { [ecj-lint]^^^ [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape analysis [ecj-lint] -- [ecj-lint] -- [ecj-lint] 4. ERROR in C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\core\src\java\org\apache\solr\cloud\DeleteNodeCmd.java (at line 30) [ecj-lint] import org.apache.solr.common.params.CommonAdminParams; [ecj-lint]^^^ [ecj-lint] The import org.apache.solr.common.params.CommonAdminParams is never used [ecj-lint] -- [ecj-lint] -- [ecj-lint] 5. 
ERROR in C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\core\src\java\org\apache\solr\cloud\ReplaceNodeCmd.java (at line 37) [ecj-lint] import org.apache.solr.common.params.CommonAdminParams; [ecj-lint]^^^ [ecj-lint] The import org.apache.solr.common.params.CommonAdminParams is never used [ecj-lint] -- [ecj-lint] 6. ERROR in C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\core\src\java\org\apache\solr\cloud\ReplaceNodeCmd.java (at line 40) [ecj-lint] import org.apache.solr.common.util.StrUtils; [ecj-lint] [ecj-lint] The import org.apache.solr.common.util.StrUtils is never used [ecj-lint] -- [ecj-lint] 7. ERROR in C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\core\src\java\org\apache\solr\cloud\ReplaceNodeCmd.java (at line 48) [ecj-lint] import static org.apache.solr.common.util.StrUtils.formatString; [ecj-lint] ^ [ecj-lint] The import org.apache.solr.common.util.StrUtils.formatString is never used [ecj-lint] -- [ecj-lint] -- [ecj-lint] 8. WARNING in C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\core\src\java\org\apache\solr\cloud\rule\ReplicaAssigner.java (at line 213) [ecj-lint] Collections.sort(sortedLiveNodes, (n1, n2) -> { [ecj-lint] ^^^ [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape analysis [ecj-lint] -- [ecj-lint] 9. WARNING in C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\core\src\java\org\apache\solr\cloud\rule\ReplicaAssigner.java (at line 213) [ecj-lint] Collections.sort(sortedLiveNodes, (n1, n2) -> { [ecj-lint] ^^^ [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape analysis [ecj-lint] -- [ecj-lint] 10. WARNING in C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\core\src\java\org\apache\solr\cloud\rule\ReplicaAssigner.java (at line 213) [ecj-lint] Collections.sort(sortedLiveNodes, (n1, n2) -> { [ecj-lint] ^^^ [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape analysis [ecj-lint] -- [ecj-lint] -- [ecj-lint] 11. 
WARNING in C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\core\src\java\org\apache\solr\core\HdfsDirectoryFactory.java (at line 227) [ecj-lint] dir = new BlockDirectory(path, hdfsDir, cache, null, blockCacheReadEnabled, false, cacheMerges, cacheReadOnc
[JENKINS] Lucene-Solr-5.5-Linux (64bit/jdk1.8.0_102) - Build # 365 - Unstable!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.5-Linux/365/ Java: 64bit/jdk1.8.0_102 -XX:+UseCompressedOops -XX:+UseSerialGC 1 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.schema.TestManagedSchemaAPI Error Message: ObjectTracker found 0 object(s) that were not released!!! [MockDirectoryWrapper] Stack Trace: java.lang.AssertionError: ObjectTracker found 0 object(s) that were not released!!! [MockDirectoryWrapper] at __randomizedtesting.SeedInfo.seed([92F9D8EC820FF319]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.junit.Assert.assertNull(Assert.java:551) at org.apache.solr.SolrTestCaseJ4.afterClass(SolrTestCaseJ4.java:238) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:834) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at java.lang.Thread.run(Thread.java:745) Build Log: [...truncated 10639 lines...] [junit4] Suite: org.apache.solr.schema.TestManagedSchemaAPI [junit4] 2> Creating dataDir: /home/jenkins/workspace/Lucene-Solr-5.5-Linux/solr/build/solr-core/test/J2/temp/solr.schema.TestManagedSchemaAPI_92F9D8EC820FF319-001/init-core-data-001 [junit4] 2> 5521 INFO (SUITE-TestManagedSchemaAPI-seed#[92F9D8EC820FF319]-worker) [] o.a.s.SolrTestCaseJ4 Randomized ssl (false) and clientAuth (false) [junit4] 2> 5523 INFO (TEST-TestManagedSchemaAPI.test-seed#[92F9D8EC820FF319]) [] o.a.s.SolrTestCaseJ4 ###Starting test [junit4] 2> 5533 INFO (TEST-TestManagedSchemaAPI.test-seed#[92F9D8EC820FF319]) [] o.a.s.c.ZkTestServer STARTING ZK TEST SERVER [junit4] 2> 5535 INFO (Thread-23) [] o.a.s.c.ZkTestServer client port:0.0.0.0/0.0.0.0:0 [junit4] 2> 5535 INFO (Thread-23) [] o.a.s.c.ZkTestServer Starting server [junit4] 2> 5635 INFO (TEST-TestManagedSchemaAPI.test-seed#[92F9D8EC820FF319]) [] o.a.s.c.ZkTestServer start zk server on port:45746 [junit4] 2> 5651 INFO (TEST-TestManagedSchemaAPI.test-seed#[92F9D8EC820FF319]) [] o.a.s.c.c.SolrZkClient 
Using default ZkCredentialsProvider [junit4] 2> 5743 INFO (TEST-TestManagedSchemaAPI.test-seed#[92F9D8EC820FF319]) [] o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper [junit4] 2> 5769 INFO (zkCallback-8-thread-1) [] o.a.s.c.c.ConnectionManager Watcher org.apache.solr.common.cloud.ConnectionManager@6e80e9b3 name:ZooKeeperConnection Watcher:127.0.0.1:45746 got event WatchedEvent state:SyncConnected type:None path:null path:null type:None [junit4] 2> 5769 INFO (TEST-TestManagedSchemaAPI.test-seed#[92F9D8EC820FF319]) [] o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper [junit4] 2> 5769 INFO (TEST-TestManagedSchemaAPI.test-seed#[
[JENKINS] Lucene-Solr-master-Windows (32bit/jdk1.8.0_102) - Build # 6093 - Failure!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/6093/ Java: 32bit/jdk1.8.0_102 -server -XX:+UseConcMarkSweepGC 5 tests failed. FAILED: org.apache.solr.security.BasicAuthIntegrationTest.testBasics Error Message: IOException occured when talking to server at: http://127.0.0.1:51429/solr/testSolrCloudCollection_shard1_replica2 Stack Trace: org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: IOException occured when talking to server at: http://127.0.0.1:51429/solr/testSolrCloudCollection_shard1_replica2 at __randomizedtesting.SeedInfo.seed([A74CAB7CA6B000B8:9A9405509E5E5EC8]:0) at org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:755) at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1161) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1050) at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:992) at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219) at org.apache.solr.security.BasicAuthIntegrationTest.doExtraTests(BasicAuthIntegrationTest.java:193) at org.apache.solr.cloud.TestMiniSolrCloudClusterBase.testCollectionCreateSearchDelete(TestMiniSolrCloudClusterBase.java:196) at org.apache.solr.cloud.TestMiniSolrCloudClusterBase.testBasics(TestMiniSolrCloudClusterBase.java:79) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907) at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lu
[JENKINS] Lucene-Solr-Tests-5.5 - Build # 6 - Still Failing
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.5/6/ 4 tests failed. FAILED: org.apache.solr.handler.TestReplicationHandler.doTestStressReplication Error Message: timed out waiting for collection1 startAt time to exceed: Fri Sep 02 06:59:48 BDT 2016 Stack Trace: java.lang.AssertionError: timed out waiting for collection1 startAt time to exceed: Fri Sep 02 06:59:48 BDT 2016 at __randomizedtesting.SeedInfo.seed([B3CCAA65380EEADE:6867AAA33D26836D]:0) at org.junit.Assert.fail(Assert.java:93) at org.apache.solr.handler.TestReplicationHandler.watchCoreStartAt(TestReplicationHandler.java:1501) at org.apache.solr.handler.TestReplicationHandler.doTestStressReplication(TestReplicationHandler.java:853) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at java.lang.Thread.run(Thread.java:745) FAILED: org.apache.solr.schema.TestManagedSchemaAPI.test Error Message: Error from server at http://127.0.0.1:46087/solr/testschemaapi_shard1_replica2: ERROR: [doc=2] u
[JENKINS] Lucene-Solr-6.x-Linux (64bit/jdk1.8.0_102) - Build # 1646 - Failure!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/1646/ Java: 64bit/jdk1.8.0_102 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC All tests passed Build Log: [...truncated 65548 lines...] -ecj-javadoc-lint-src: [mkdir] Created dir: /tmp/ecj388657940 [ecj-lint] Compiling 973 source files to /tmp/ecj388657940 [ecj-lint] invalid Class-Path header in manifest of jar file: /home/jenkins/.ivy2/cache/org.restlet.jee/org.restlet/jars/org.restlet-2.3.0.jar [ecj-lint] invalid Class-Path header in manifest of jar file: /home/jenkins/.ivy2/cache/org.restlet.jee/org.restlet.ext.servlet/jars/org.restlet.ext.servlet-2.3.0.jar [ecj-lint] -- [ecj-lint] 1. WARNING in /home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/core/src/java/org/apache/solr/cloud/Assign.java (at line 101) [ecj-lint] Collections.sort(shardIdNames, (o1, o2) -> { [ecj-lint]^^^ [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape analysis [ecj-lint] -- [ecj-lint] 2. WARNING in /home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/core/src/java/org/apache/solr/cloud/Assign.java (at line 101) [ecj-lint] Collections.sort(shardIdNames, (o1, o2) -> { [ecj-lint]^^^ [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape analysis [ecj-lint] -- [ecj-lint] 3. WARNING in /home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/core/src/java/org/apache/solr/cloud/Assign.java (at line 101) [ecj-lint] Collections.sort(shardIdNames, (o1, o2) -> { [ecj-lint]^^^ [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape analysis [ecj-lint] -- [ecj-lint] -- [ecj-lint] 4. ERROR in /home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/core/src/java/org/apache/solr/cloud/DeleteNodeCmd.java (at line 30) [ecj-lint] import org.apache.solr.common.params.CommonAdminParams; [ecj-lint]^^^ [ecj-lint] The import org.apache.solr.common.params.CommonAdminParams is never used [ecj-lint] -- [ecj-lint] -- [ecj-lint] 5. 
ERROR in /home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/core/src/java/org/apache/solr/cloud/ReplaceNodeCmd.java (at line 37) [ecj-lint] import org.apache.solr.common.params.CommonAdminParams; [ecj-lint]^^^ [ecj-lint] The import org.apache.solr.common.params.CommonAdminParams is never used [ecj-lint] -- [ecj-lint] 6. ERROR in /home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/core/src/java/org/apache/solr/cloud/ReplaceNodeCmd.java (at line 40) [ecj-lint] import org.apache.solr.common.util.StrUtils; [ecj-lint] [ecj-lint] The import org.apache.solr.common.util.StrUtils is never used [ecj-lint] -- [ecj-lint] 7. ERROR in /home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/core/src/java/org/apache/solr/cloud/ReplaceNodeCmd.java (at line 48) [ecj-lint] import static org.apache.solr.common.util.StrUtils.formatString; [ecj-lint] ^ [ecj-lint] The import org.apache.solr.common.util.StrUtils.formatString is never used [ecj-lint] -- [ecj-lint] -- [ecj-lint] 8. WARNING in /home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/core/src/java/org/apache/solr/cloud/rule/ReplicaAssigner.java (at line 213) [ecj-lint] Collections.sort(sortedLiveNodes, (n1, n2) -> { [ecj-lint] ^^^ [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape analysis [ecj-lint] -- [ecj-lint] 9. WARNING in /home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/core/src/java/org/apache/solr/cloud/rule/ReplicaAssigner.java (at line 213) [ecj-lint] Collections.sort(sortedLiveNodes, (n1, n2) -> { [ecj-lint] ^^^ [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape analysis [ecj-lint] -- [ecj-lint] 10. WARNING in /home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/core/src/java/org/apache/solr/cloud/rule/ReplicaAssigner.java (at line 213) [ecj-lint] Collections.sort(sortedLiveNodes, (n1, n2) -> { [ecj-lint] ^^^ [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape analysis [ecj-lint] -- [ecj-lint] -- [ecj-lint] 11. 
WARNING in /home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/core/src/java/org/apache/solr/core/HdfsDirectoryFactory.java (at line 227) [ecj-lint] dir = new BlockDirectory(path, hdfsDir, cache, null, blockCacheReadEnabled, false, cacheMerges, cacheReadOnce); [ecj-lint] ^^^
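[Editor's note] The repeated "(Recovered) Internal inconsistency detected during lambda shape analysis" warnings above are ECJ-internal recovery messages, not problems with the sorted-by-lambda code itself; javac compiles and runs that pattern without complaint. A minimal stand-in for the flagged pattern (hypothetical class and data, not the actual Assign.java source):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical illustration of the Collections.sort-with-lambda pattern
// that ecj-lint warns about in Assign.java and ReplicaAssigner.java.
public class LambdaSortDemo {
    // Sort shard names so the one with the fewest replicas comes first.
    public static List<String> sortByCount(List<String> shardIdNames,
                                           Map<String, Integer> counts) {
        Collections.sort(shardIdNames, (o1, o2) -> {
            int c1 = counts.getOrDefault(o1, 0);
            int c2 = counts.getOrDefault(o2, 0);
            return Integer.compare(c1, c2); // ascending replica count
        });
        return shardIdNames;
    }

    public static void main(String[] args) {
        Map<String, Integer> counts = new HashMap<>();
        counts.put("shard1", 3);
        counts.put("shard2", 1);
        counts.put("shard3", 2);
        List<String> names =
                new ArrayList<>(Arrays.asList("shard1", "shard2", "shard3"));
        System.out.println(sortByCount(names, counts));
    }
}
```

The unused-import ERRORs in DeleteNodeCmd.java and ReplaceNodeCmd.java, by contrast, are real lint failures: the fix is simply deleting the listed `import` lines.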
[jira] [Commented] (SOLR-9444) Fix path usage for cloud backup/restore
[ https://issues.apache.org/jira/browse/SOLR-9444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15457052#comment-15457052 ]

Hrishikesh Gadre commented on SOLR-9444:
----------------------------------------

[~varunthacker] [~thetaphi] I have updated the PR with the above-mentioned changes. Please take a look.

> Fix path usage for cloud backup/restore
> ---
>
>           Key: SOLR-9444
>           URL: https://issues.apache.org/jira/browse/SOLR-9444
>       Project: Solr
>    Issue Type: Bug
> Security Level: Public (Default Security Level. Issues are Public)
>      Reporter: Varun Thacker
>   Attachments: SOLR-9444.patch
>
> As noted by Uwe on
> https://issues.apache.org/jira/browse/SOLR-9242?focusedCommentId=15438925&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15438925
> the usage of URI#getPath is wrong.
> Creating a Jira to track this better. More details to follow

--
This message was sent by Atlassian JIRA (v6.3.4#6332)

To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org
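[Editor's note] For readers following SOLR-9444: the pitfall with `URI#getPath` is that it returns only the path component, discarding scheme and authority, and returns null for opaque URIs. A small illustration (hypothetical URIs, not the Solr backup/restore code itself):

```java
import java.net.URI;

// Why URI#getPath can surprise when a full storage location is needed:
// the scheme and host (e.g. the HDFS namenode) are silently dropped,
// and an opaque URI has no path at all.
public class UriGetPathDemo {
    public static void main(String[] args) {
        URI hdfs = URI.create("hdfs://namenode:8020/backups/solr");
        System.out.println(hdfs.getPath());   // "/backups/solr" -- host lost

        URI opaque = URI.create("file:relative/path"); // no "//" => opaque
        System.out.println(opaque.getPath()); // null
    }
}
```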
[JENKINS] Lucene-Solr-6.x-Solaris (64bit/jdk1.8.0) - Build # 370 - Failure!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Solaris/370/ Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC All tests passed Build Log: [...truncated 65503 lines...] -ecj-javadoc-lint-src: [mkdir] Created dir: /var/tmp/ecj708672016 [ecj-lint] Compiling 973 source files to /var/tmp/ecj708672016 [ecj-lint] invalid Class-Path header in manifest of jar file: /export/home/jenkins/.ivy2/cache/org.restlet.jee/org.restlet/jars/org.restlet-2.3.0.jar [ecj-lint] invalid Class-Path header in manifest of jar file: /export/home/jenkins/.ivy2/cache/org.restlet.jee/org.restlet.ext.servlet/jars/org.restlet.ext.servlet-2.3.0.jar [ecj-lint] -- [ecj-lint] 1. WARNING in /export/home/jenkins/workspace/Lucene-Solr-6.x-Solaris/solr/core/src/java/org/apache/solr/cloud/Assign.java (at line 101) [ecj-lint] Collections.sort(shardIdNames, (o1, o2) -> { [ecj-lint]^^^ [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape analysis [ecj-lint] -- [ecj-lint] 2. WARNING in /export/home/jenkins/workspace/Lucene-Solr-6.x-Solaris/solr/core/src/java/org/apache/solr/cloud/Assign.java (at line 101) [ecj-lint] Collections.sort(shardIdNames, (o1, o2) -> { [ecj-lint]^^^ [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape analysis [ecj-lint] -- [ecj-lint] 3. WARNING in /export/home/jenkins/workspace/Lucene-Solr-6.x-Solaris/solr/core/src/java/org/apache/solr/cloud/Assign.java (at line 101) [ecj-lint] Collections.sort(shardIdNames, (o1, o2) -> { [ecj-lint]^^^ [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape analysis [ecj-lint] -- [ecj-lint] -- [ecj-lint] 4. ERROR in /export/home/jenkins/workspace/Lucene-Solr-6.x-Solaris/solr/core/src/java/org/apache/solr/cloud/DeleteNodeCmd.java (at line 30) [ecj-lint] import org.apache.solr.common.params.CommonAdminParams; [ecj-lint]^^^ [ecj-lint] The import org.apache.solr.common.params.CommonAdminParams is never used [ecj-lint] -- [ecj-lint] -- [ecj-lint] 5. 
ERROR in /export/home/jenkins/workspace/Lucene-Solr-6.x-Solaris/solr/core/src/java/org/apache/solr/cloud/ReplaceNodeCmd.java (at line 37) [ecj-lint] import org.apache.solr.common.params.CommonAdminParams; [ecj-lint]^^^ [ecj-lint] The import org.apache.solr.common.params.CommonAdminParams is never used [ecj-lint] -- [ecj-lint] 6. ERROR in /export/home/jenkins/workspace/Lucene-Solr-6.x-Solaris/solr/core/src/java/org/apache/solr/cloud/ReplaceNodeCmd.java (at line 40) [ecj-lint] import org.apache.solr.common.util.StrUtils; [ecj-lint] [ecj-lint] The import org.apache.solr.common.util.StrUtils is never used [ecj-lint] -- [ecj-lint] 7. ERROR in /export/home/jenkins/workspace/Lucene-Solr-6.x-Solaris/solr/core/src/java/org/apache/solr/cloud/ReplaceNodeCmd.java (at line 48) [ecj-lint] import static org.apache.solr.common.util.StrUtils.formatString; [ecj-lint] ^ [ecj-lint] The import org.apache.solr.common.util.StrUtils.formatString is never used [ecj-lint] -- [ecj-lint] -- [ecj-lint] 8. WARNING in /export/home/jenkins/workspace/Lucene-Solr-6.x-Solaris/solr/core/src/java/org/apache/solr/cloud/rule/ReplicaAssigner.java (at line 213) [ecj-lint] Collections.sort(sortedLiveNodes, (n1, n2) -> { [ecj-lint] ^^^ [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape analysis [ecj-lint] -- [ecj-lint] 9. WARNING in /export/home/jenkins/workspace/Lucene-Solr-6.x-Solaris/solr/core/src/java/org/apache/solr/cloud/rule/ReplicaAssigner.java (at line 213) [ecj-lint] Collections.sort(sortedLiveNodes, (n1, n2) -> { [ecj-lint] ^^^ [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape analysis [ecj-lint] -- [ecj-lint] 10. 
WARNING in /export/home/jenkins/workspace/Lucene-Solr-6.x-Solaris/solr/core/src/java/org/apache/solr/cloud/rule/ReplicaAssigner.java (at line 213) [ecj-lint] Collections.sort(sortedLiveNodes, (n1, n2) -> { [ecj-lint] ^^^ [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape analysis [ecj-lint] -- [ecj-lint] -- [ecj-lint] 11. WARNING in /export/home/jenkins/workspace/Lucene-Solr-6.x-Solaris/solr/core/src/java/org/apache/solr/core/HdfsDirectoryFactory.java (at line 227) [ecj-lint] dir = new BlockDirectory(path, hdfsDir, cache, null, blockCacheReadEnabled, false, cacheMerges, ca
[JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_102) - Build # 17742 - Still Failing!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/17742/ Java: 32bit/jdk1.8.0_102 -client -XX:+UseParallelGC 3 tests failed. FAILED: org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.test Error Message: No live SolrServers available to handle this request:[http://127.0.0.1:35821/_/cl/c8n_1x3_lf_shard1_replica1] Stack Trace: org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: No live SolrServers available to handle this request:[http://127.0.0.1:35821/_/cl/c8n_1x3_lf_shard1_replica1] at __randomizedtesting.SeedInfo.seed([98C06E1B6BDD7B4B:109451C1C52116B3]:0) at org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:755) at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1161) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1050) at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:992) at org.apache.solr.cloud.AbstractFullDistribZkTestBase.sendDocsWithRetry(AbstractFullDistribZkTestBase.java:753) at org.apache.solr.cloud.AbstractFullDistribZkTestBase.sendDocsWithRetry(AbstractFullDistribZkTestBase.java:741) at org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.testRf3WithLeaderFailover(LeaderFailoverAfterPartitionTest.java:178) at org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.test(LeaderFailoverAfterPartitionTest.java:57) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907) at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com
[jira] [Updated] (SOLR-9467) Document Transformer to Remove Fields
[ https://issues.apache.org/jira/browse/SOLR-9467?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gus Heck updated SOLR-9467: --- Attachment: SOLR-9467.patch patch vs 6_x > Document Transformer to Remove Fields > - > > Key: SOLR-9467 > URL: https://issues.apache.org/jira/browse/SOLR-9467 > Project: Solr > Issue Type: New Feature > Security Level: Public(Default Security Level. Issues are Public) > Components: SearchComponents - other >Affects Versions: 6.2 >Reporter: Gus Heck > Attachments: SOLR-9467.patch > > > Given that SOLR-3191 has become bogged down and inactive, evidently stuck in > low level details, and since I have wished several times for some way to just > get that one big field out of my results to improve transfer times without > making a big brittle list of all my other fields, I'd like to propose a > DocumentTransformer that accomplishes this. > It would look something like this: > {code}&fl=*,[fl.rm v="title"]{code} > Since removing one field with a known name is probably the most common case > I'd like to start by keeping this simple, and if further features like globs > or lists of fields are desired, subsequent Jira tickets can be opened to add > them. Not attached to specifics here, only looking to keep things simple and > solve the key use case. If you don't like fl.rm as a name for a transformer, > suggest a better one (for example). -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Created] (SOLR-9467) Document Transformer to Remove Fields
Gus Heck created SOLR-9467: -- Summary: Document Transformer to Remove Fields Key: SOLR-9467 URL: https://issues.apache.org/jira/browse/SOLR-9467 Project: Solr Issue Type: New Feature Security Level: Public (Default Security Level. Issues are Public) Components: SearchComponents - other Affects Versions: 6.2 Reporter: Gus Heck Given that SOLR-3191 has become bogged down and inactive, evidently stuck in low level details, and since I have wished several times for some way to just get that one big field out of my results to improve transfer times without making a big brittle list of all my other fields, I'd like to propose a DocumentTransformer that accomplishes this. It would look something like this: {code}&fl=*,[fl.rm v="title"]{code} Since removing one field with a known name is probably the most common case I'd like to start by keeping this simple, and if further features like globs or lists of fields are desired, subsequent Jira tickets can be opened to add them. Not attached to specifics here, only looking to keep things simple and solve the key use case. If you don't like fl.rm as a name for a transformer, suggest a better one (for example). -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
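The effect of the transformer proposed in SOLR-9467 above can be sketched outside Solr. This is a hypothetical Python illustration of the intended semantics only (the actual patch is a Java DocTransformer); the function name `remove_field` is invented for the sketch, and the documents are stand-ins for Solr result docs:

```python
def remove_field(docs, field):
    """Return copies of each result document with one named field dropped,
    mirroring the intent of &fl=*,[fl.rm v="title"]: keep everything except
    the single (typically large) stored field named explicitly."""
    return [{k: v for k, v in doc.items() if k != field} for doc in docs]

# Example: strip a big stored "title" field from a small result set.
results = [
    {"id": "1", "title": "big stored field", "author": "a"},
    {"id": "2", "title": "another big field", "author": "b"},
]
trimmed = remove_field(results, "title")
```

This is why the feature avoids a "big brittle list" of fields: the caller names only the one field to remove instead of enumerating every field to keep.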
[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 823 - Failure!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/823/ Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseSerialGC All tests passed Build Log: [...truncated 65345 lines...] -ecj-javadoc-lint-src: [mkdir] Created dir: /var/tmp/ecj1466943411 [ecj-lint] Compiling 974 source files to /var/tmp/ecj1466943411 [ecj-lint] invalid Class-Path header in manifest of jar file: /export/home/jenkins/.ivy2/cache/org.restlet.jee/org.restlet/jars/org.restlet-2.3.0.jar [ecj-lint] invalid Class-Path header in manifest of jar file: /export/home/jenkins/.ivy2/cache/org.restlet.jee/org.restlet.ext.servlet/jars/org.restlet.ext.servlet-2.3.0.jar [ecj-lint] -- [ecj-lint] 1. WARNING in /export/home/jenkins/workspace/Lucene-Solr-master-Solaris/solr/core/src/java/org/apache/solr/cloud/Assign.java (at line 101) [ecj-lint] Collections.sort(shardIdNames, (o1, o2) -> { [ecj-lint]^^^ [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape analysis [ecj-lint] -- [ecj-lint] 2. WARNING in /export/home/jenkins/workspace/Lucene-Solr-master-Solaris/solr/core/src/java/org/apache/solr/cloud/Assign.java (at line 101) [ecj-lint] Collections.sort(shardIdNames, (o1, o2) -> { [ecj-lint]^^^ [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape analysis [ecj-lint] -- [ecj-lint] 3. WARNING in /export/home/jenkins/workspace/Lucene-Solr-master-Solaris/solr/core/src/java/org/apache/solr/cloud/Assign.java (at line 101) [ecj-lint] Collections.sort(shardIdNames, (o1, o2) -> { [ecj-lint]^^^ [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape analysis [ecj-lint] -- [ecj-lint] -- [ecj-lint] 4. ERROR in /export/home/jenkins/workspace/Lucene-Solr-master-Solaris/solr/core/src/java/org/apache/solr/cloud/DeleteNodeCmd.java (at line 30) [ecj-lint] import org.apache.solr.common.params.CommonAdminParams; [ecj-lint]^^^ [ecj-lint] The import org.apache.solr.common.params.CommonAdminParams is never used [ecj-lint] -- [ecj-lint] -- [ecj-lint] 5. 
ERROR in /export/home/jenkins/workspace/Lucene-Solr-master-Solaris/solr/core/src/java/org/apache/solr/cloud/ReplaceNodeCmd.java (at line 37) [ecj-lint] import org.apache.solr.common.params.CommonAdminParams; [ecj-lint]^^^ [ecj-lint] The import org.apache.solr.common.params.CommonAdminParams is never used [ecj-lint] -- [ecj-lint] 6. ERROR in /export/home/jenkins/workspace/Lucene-Solr-master-Solaris/solr/core/src/java/org/apache/solr/cloud/ReplaceNodeCmd.java (at line 40) [ecj-lint] import org.apache.solr.common.util.StrUtils; [ecj-lint] [ecj-lint] The import org.apache.solr.common.util.StrUtils is never used [ecj-lint] -- [ecj-lint] 7. ERROR in /export/home/jenkins/workspace/Lucene-Solr-master-Solaris/solr/core/src/java/org/apache/solr/cloud/ReplaceNodeCmd.java (at line 48) [ecj-lint] import static org.apache.solr.common.util.StrUtils.formatString; [ecj-lint] ^ [ecj-lint] The import org.apache.solr.common.util.StrUtils.formatString is never used [ecj-lint] -- [ecj-lint] -- [ecj-lint] 8. WARNING in /export/home/jenkins/workspace/Lucene-Solr-master-Solaris/solr/core/src/java/org/apache/solr/cloud/rule/ReplicaAssigner.java (at line 213) [ecj-lint] Collections.sort(sortedLiveNodes, (n1, n2) -> { [ecj-lint] ^^^ [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape analysis [ecj-lint] -- [ecj-lint] 9. WARNING in /export/home/jenkins/workspace/Lucene-Solr-master-Solaris/solr/core/src/java/org/apache/solr/cloud/rule/ReplicaAssigner.java (at line 213) [ecj-lint] Collections.sort(sortedLiveNodes, (n1, n2) -> { [ecj-lint] ^^^ [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape analysis [ecj-lint] -- [ecj-lint] 10. 
WARNING in /export/home/jenkins/workspace/Lucene-Solr-master-Solaris/solr/core/src/java/org/apache/solr/cloud/rule/ReplicaAssigner.java (at line 213) [ecj-lint] Collections.sort(sortedLiveNodes, (n1, n2) -> { [ecj-lint] ^^^ [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape analysis [ecj-lint] -- [ecj-lint] -- [ecj-lint] 11. WARNING in /export/home/jenkins/workspace/Lucene-Solr-master-Solaris/solr/core/src/java/org/apache/solr/core/HdfsDirectoryFactory.java (at line 227) [ecj-lint] dir = new BlockDirectory(path, hdfsDir, cache, null, blockCacheRead
[JENKINS] Lucene-Solr-NightlyTests-5.5 - Build # 3 - Still Failing
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.5/3/ 5 tests failed. FAILED: org.apache.solr.cloud.hdfs.HdfsCollectionsAPIDistributedZkTest.test Error Message: Timeout occured while waiting response from server at: http://127.0.0.1:60618 Stack Trace: org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting response from server at: http://127.0.0.1:60618 at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:586) at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:241) at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:230) at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219) at org.apache.solr.cloud.CollectionsAPIDistributedZkTest.makeRequest(CollectionsAPIDistributedZkTest.java:400) at org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testErrorHandling(CollectionsAPIDistributedZkTest.java:516) at org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test(CollectionsAPIDistributedZkTest.java:179) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:996) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:971) at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java
[JENKINS] Lucene-Solr-6.x-MacOSX (64bit/jdk1.8.0) - Build # 387 - Failure!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-MacOSX/387/ Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC All tests passed Build Log: [...truncated 65510 lines...] -ecj-javadoc-lint-src: [mkdir] Created dir: /var/folders/qg/h2dfw5s161s51l2bn79mrb7rgn/T/ecj1670381350 [ecj-lint] Compiling 973 source files to /var/folders/qg/h2dfw5s161s51l2bn79mrb7rgn/T/ecj1670381350 [ecj-lint] invalid Class-Path header in manifest of jar file: /Users/jenkins/.ivy2/cache/org.restlet.jee/org.restlet/jars/org.restlet-2.3.0.jar [ecj-lint] invalid Class-Path header in manifest of jar file: /Users/jenkins/.ivy2/cache/org.restlet.jee/org.restlet.ext.servlet/jars/org.restlet.ext.servlet-2.3.0.jar [ecj-lint] -- [ecj-lint] 1. WARNING in /Users/jenkins/workspace/Lucene-Solr-6.x-MacOSX/solr/core/src/java/org/apache/solr/cloud/Assign.java (at line 101) [ecj-lint] Collections.sort(shardIdNames, (o1, o2) -> { [ecj-lint]^^^ [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape analysis [ecj-lint] -- [ecj-lint] 2. WARNING in /Users/jenkins/workspace/Lucene-Solr-6.x-MacOSX/solr/core/src/java/org/apache/solr/cloud/Assign.java (at line 101) [ecj-lint] Collections.sort(shardIdNames, (o1, o2) -> { [ecj-lint]^^^ [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape analysis [ecj-lint] -- [ecj-lint] 3. WARNING in /Users/jenkins/workspace/Lucene-Solr-6.x-MacOSX/solr/core/src/java/org/apache/solr/cloud/Assign.java (at line 101) [ecj-lint] Collections.sort(shardIdNames, (o1, o2) -> { [ecj-lint]^^^ [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape analysis [ecj-lint] -- [ecj-lint] -- [ecj-lint] 4. 
ERROR in /Users/jenkins/workspace/Lucene-Solr-6.x-MacOSX/solr/core/src/java/org/apache/solr/cloud/DeleteNodeCmd.java (at line 30) [ecj-lint] import org.apache.solr.common.params.CommonAdminParams; [ecj-lint]^^^ [ecj-lint] The import org.apache.solr.common.params.CommonAdminParams is never used [ecj-lint] -- [ecj-lint] -- [ecj-lint] 5. ERROR in /Users/jenkins/workspace/Lucene-Solr-6.x-MacOSX/solr/core/src/java/org/apache/solr/cloud/ReplaceNodeCmd.java (at line 37) [ecj-lint] import org.apache.solr.common.params.CommonAdminParams; [ecj-lint]^^^ [ecj-lint] The import org.apache.solr.common.params.CommonAdminParams is never used [ecj-lint] -- [ecj-lint] 6. ERROR in /Users/jenkins/workspace/Lucene-Solr-6.x-MacOSX/solr/core/src/java/org/apache/solr/cloud/ReplaceNodeCmd.java (at line 40) [ecj-lint] import org.apache.solr.common.util.StrUtils; [ecj-lint] [ecj-lint] The import org.apache.solr.common.util.StrUtils is never used [ecj-lint] -- [ecj-lint] 7. ERROR in /Users/jenkins/workspace/Lucene-Solr-6.x-MacOSX/solr/core/src/java/org/apache/solr/cloud/ReplaceNodeCmd.java (at line 48) [ecj-lint] import static org.apache.solr.common.util.StrUtils.formatString; [ecj-lint] ^ [ecj-lint] The import org.apache.solr.common.util.StrUtils.formatString is never used [ecj-lint] -- [ecj-lint] -- [ecj-lint] 8. WARNING in /Users/jenkins/workspace/Lucene-Solr-6.x-MacOSX/solr/core/src/java/org/apache/solr/cloud/rule/ReplicaAssigner.java (at line 213) [ecj-lint] Collections.sort(sortedLiveNodes, (n1, n2) -> { [ecj-lint] ^^^ [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape analysis [ecj-lint] -- [ecj-lint] 9. WARNING in /Users/jenkins/workspace/Lucene-Solr-6.x-MacOSX/solr/core/src/java/org/apache/solr/cloud/rule/ReplicaAssigner.java (at line 213) [ecj-lint] Collections.sort(sortedLiveNodes, (n1, n2) -> { [ecj-lint] ^^^ [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape analysis [ecj-lint] -- [ecj-lint] 10. 
WARNING in /Users/jenkins/workspace/Lucene-Solr-6.x-MacOSX/solr/core/src/java/org/apache/solr/cloud/rule/ReplicaAssigner.java (at line 213) [ecj-lint] Collections.sort(sortedLiveNodes, (n1, n2) -> { [ecj-lint] ^^^ [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape analysis [ecj-lint] -- [ecj-lint] -- [ecj-lint] 11. WARNING in /Users/jenkins/workspace/Lucene-Solr-6.x-MacOSX/solr/core/src/java/org/apache/solr/core/HdfsDirectoryFactory.java (at line 227) [ecj-lint] dir = new BlockDirectory(path, hdfsDir, cache, null, blockCacheReadEnabled, false, cacheMerges, cacheRea
Re: Contacted by IBM about problems with Lucene/Solr on their JVM
On 9/1/2016 8:31 AM, Yonik Seeley wrote: > Without knowing more, it sounds like it was a personal email about > their opinion vs your personal opinion (I assume they didn't say they > were speaking for IBM?) This is what the message said: - A customer has pointed me at your page (https://wiki.apache.org/solr/ShawnHeisey) which suggests that the IBM JVM does not work well with Solr/Lucene. I would love to have a chance to address any problems you might have encountered with the IBM JVM when running Solr/Lucene. If it is possible to describe how to recreate any problems you have found when running on the IBM JVM I would appreciate it. I am hoping to address some issues before any actual users of Solr/Lucene encounter those problems. - Minutes after I started this thread, he popped up on this issue: https://issues.apache.org/jira/browse/LUCENE-7432 If IBM will work with us to fix problems, then there would be no reason for my page to recommend not using their JVM. Thanks, Shawn - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-7434) Add minNumberShouldMatch parameter to SpanNearQuery
[ https://issues.apache.org/jira/browse/LUCENE-7434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15456429#comment-15456429 ] Saar Carmi commented on LUCENE-7434: Thanks [~talli...@mitre.org] for creating this issue. In order to merge threads, I want to clarify that my original question was about limiting the search window as well as the number of matches. The slop parameter sets the maximum distance allowed between each of the subspans, and I was looking to add another parameter for the maximum window in which the multiple sub spans should appear together - from the beginning of the first to the beginning/end of the last one. > Add minNumberShouldMatch parameter to SpanNearQuery > --- > > Key: LUCENE-7434 > URL: https://issues.apache.org/jira/browse/LUCENE-7434 > Project: Lucene - Core > Issue Type: Improvement > Components: core/search >Reporter: Tim Allison >Priority: Minor > > On the user list, [~saar32] asked about a new type of SpanQuery that would > allow for something like BooleanQuery's minimumNumberShouldMatch > bq. Given a set of search terms (t1, t2, t3, ti), return all documents where > in a sequence of x=10 tokens at least c=3 of the search terms appear within > the sequence. > I _think_ we can modify SpanNearQuery fairly easily to accommodate this. > I'll submit a PR in the next few days. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
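The requested semantics in LUCENE-7434 ("in a sequence of x=10 tokens at least c=3 of the search terms appear") can be sketched as a sliding-window check. This is a hypothetical Python illustration of the matching logic only, not the Lucene SpanNearQuery implementation; `window_matches` is an invented name for the sketch:

```python
def window_matches(tokens, terms, x=10, c=3):
    """Return True if any window of x consecutive tokens contains at least
    c distinct terms from the search set - the minNumberShouldMatch-style
    behavior requested for SpanNearQuery."""
    terms = set(terms)
    # Slide a window of width x across the token stream; if the document is
    # shorter than x, the single window is the whole document.
    for start in range(max(1, len(tokens) - x + 1)):
        window = tokens[start:start + x]
        if len(terms & set(window)) >= c:
            return True
    return False
```

A real Span-based version would additionally track match positions for scoring and highlighting; this sketch only answers the yes/no question of whether a document qualifies.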
[jira] [Commented] (LUCENE-7434) Add minNumberShouldMatch parameter to SpanNearQuery
[ https://issues.apache.org/jira/browse/LUCENE-7434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15456419#comment-15456419 ] Tim Allison commented on LUCENE-7434: - Sorry, I've been away from Lucene for too long. Can you explain a bit more? > Add minNumberShouldMatch parameter to SpanNearQuery > --- > > Key: LUCENE-7434 > URL: https://issues.apache.org/jira/browse/LUCENE-7434 > Project: Lucene - Core > Issue Type: Improvement > Components: core/search >Reporter: Tim Allison >Priority: Minor > > On the user list, [~saar32] asked about a new type of SpanQuery that would > allow for something like BooleanQuery's minimumNumberShouldMatch > bq. Given a set of search terms (t1, t2, t3, ti), return all documents where > in a sequence of x=10 tokens at least c=3 of the search terms appear within > the sequence. > I _think_ we can modify SpanNearQuery fairly easily to accommodate this. > I'll submit a PR in the next few days. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-7434) Add minNumberShouldMatch parameter to SpanNearQuery
[ https://issues.apache.org/jira/browse/LUCENE-7434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15456405#comment-15456405 ] Mikhail Khludnev commented on LUCENE-7434: -- But this would allow creating a Span Disjunction Query, which is considered a black sheep in the Lucene herd. I don't know why exactly, but I have an idea. > Add minNumberShouldMatch parameter to SpanNearQuery > --- > > Key: LUCENE-7434 > URL: https://issues.apache.org/jira/browse/LUCENE-7434 > Project: Lucene - Core > Issue Type: Improvement > Components: core/search >Reporter: Tim Allison >Priority: Minor > > On the user list, [~saar32] asked about a new type of SpanQuery that would > allow for something like BooleanQuery's minimumNumberShouldMatch > bq. Given a set of search terms (t1, t2, t3, ti), return all documents where > in a sequence of x=10 tokens at least c=3 of the search terms appear within > the sequence. > I _think_ we can modify SpanNearQuery fairly easily to accommodate this. > I'll submit a PR in the next few days. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-master-Linux (64bit/jdk1.8.0_102) - Build # 17741 - Failure!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/17741/ Java: 64bit/jdk1.8.0_102 -XX:+UseCompressedOops -XX:+UseG1GC All tests passed Build Log: [...truncated 65410 lines...] -ecj-javadoc-lint-src: [mkdir] Created dir: /tmp/ecj1859813747 [ecj-lint] Compiling 974 source files to /tmp/ecj1859813747 [ecj-lint] invalid Class-Path header in manifest of jar file: /home/jenkins/.ivy2/cache/org.restlet.jee/org.restlet/jars/org.restlet-2.3.0.jar [ecj-lint] invalid Class-Path header in manifest of jar file: /home/jenkins/.ivy2/cache/org.restlet.jee/org.restlet.ext.servlet/jars/org.restlet.ext.servlet-2.3.0.jar [ecj-lint] -- [ecj-lint] 1. WARNING in /home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/java/org/apache/solr/cloud/Assign.java (at line 101) [ecj-lint] Collections.sort(shardIdNames, (o1, o2) -> { [ecj-lint]^^^ [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape analysis [ecj-lint] -- [ecj-lint] 2. WARNING in /home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/java/org/apache/solr/cloud/Assign.java (at line 101) [ecj-lint] Collections.sort(shardIdNames, (o1, o2) -> { [ecj-lint]^^^ [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape analysis [ecj-lint] -- [ecj-lint] 3. WARNING in /home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/java/org/apache/solr/cloud/Assign.java (at line 101) [ecj-lint] Collections.sort(shardIdNames, (o1, o2) -> { [ecj-lint]^^^ [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape analysis [ecj-lint] -- [ecj-lint] -- [ecj-lint] 4. ERROR in /home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/java/org/apache/solr/cloud/DeleteNodeCmd.java (at line 30) [ecj-lint] import org.apache.solr.common.params.CommonAdminParams; [ecj-lint]^^^ [ecj-lint] The import org.apache.solr.common.params.CommonAdminParams is never used [ecj-lint] -- [ecj-lint] -- [ecj-lint] 5. 
ERROR in /home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/java/org/apache/solr/cloud/ReplaceNodeCmd.java (at line 37) [ecj-lint] import org.apache.solr.common.params.CommonAdminParams; [ecj-lint]^^^ [ecj-lint] The import org.apache.solr.common.params.CommonAdminParams is never used [ecj-lint] -- [ecj-lint] 6. ERROR in /home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/java/org/apache/solr/cloud/ReplaceNodeCmd.java (at line 40) [ecj-lint] import org.apache.solr.common.util.StrUtils; [ecj-lint] [ecj-lint] The import org.apache.solr.common.util.StrUtils is never used [ecj-lint] -- [ecj-lint] 7. ERROR in /home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/java/org/apache/solr/cloud/ReplaceNodeCmd.java (at line 48) [ecj-lint] import static org.apache.solr.common.util.StrUtils.formatString; [ecj-lint] ^ [ecj-lint] The import org.apache.solr.common.util.StrUtils.formatString is never used [ecj-lint] -- [ecj-lint] -- [ecj-lint] 8. WARNING in /home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/java/org/apache/solr/cloud/rule/ReplicaAssigner.java (at line 213) [ecj-lint] Collections.sort(sortedLiveNodes, (n1, n2) -> { [ecj-lint] ^^^ [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape analysis [ecj-lint] -- [ecj-lint] 9. WARNING in /home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/java/org/apache/solr/cloud/rule/ReplicaAssigner.java (at line 213) [ecj-lint] Collections.sort(sortedLiveNodes, (n1, n2) -> { [ecj-lint] ^^^ [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape analysis [ecj-lint] -- [ecj-lint] 10. WARNING in /home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/java/org/apache/solr/cloud/rule/ReplicaAssigner.java (at line 213) [ecj-lint] Collections.sort(sortedLiveNodes, (n1, n2) -> { [ecj-lint] ^^^ [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape analysis [ecj-lint] -- [ecj-lint] -- [ecj-lint] 11. 
WARNING in /home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/java/org/apache/solr/core/HdfsDirectoryFactory.java (at line 227) [ecj-lint] dir = new BlockDirectory(path, hdfsDir, cache, null, blockCacheReadEnabled, false, cacheMerges, cacheReadOnce); [ecj-lint] ^^^
Re: Contacted by IBM about problems with Lucene/Solr on their JVM
There's also https://issues.apache.org/jira/browse/LUCENE-7432 where code in a finally clause may sometimes not be executed in J9, but it looks like there is already a fix in the pipeline... Mike McCandless http://blog.mikemccandless.com On Thu, Sep 1, 2016 at 11:11 AM, David Smiley wrote: > BTW I noticed IBM has a Docker container for their Java: > https://www.ibm.com/developerworks/community/blogs/738b7897-cd38-4f24-9f05-48dd69116837/entry/Announcement_IBM_SDK_Java_Technology_Edition_s390x_and_ppc64le_Docker_Images_are_now_available_on_DockerHub?lang=en > That would be way more convenient than either obtaining a bulky VM or > installing it. > > On Thu, Sep 1, 2016 at 10:33 AM Alexandre Rafalovitch > wrote: >> >> I am holding onto SOLR-9383 where IBM MBean info is not the same as >> Sun's one (and messing up Admin UI). Once I get the replication VM, I >> was planning to do it by some sort of name mapping. But if that's >> something that IBM is supposed to fix instead, that could be nice too. >> >> Regards, >> Alex. >> >> Newsletter and resources for Solr beginners and intermediates: >> http://www.solr-start.com/ >> >> >> On 1 September 2016 at 20:58, Shawn Heisey wrote: >> > I was contacted on my Apache email address a few days ago by somebody at >> > IBM who wasn't exactly happy that my Solr wiki page recommends not using >> > their Java. >> > >> > https://wiki.apache.org/solr/ShawnHeisey >> > >> > He said that he'd like to address any problems. I replied that he >> > should join this list and work with those who really know about the >> > problems that we've encountered. That was three days ago and I haven't >> > seen anything here to indicate that my suggestion was followed. >> > >> > Before I drop his email address here without permission, I'd like to >> > know how to proceed. I'm not the right person on our end to discuss the >> > issue. >> > >> > I happened to notice SOLR-9179 a few minutes ago, where Solr tickles a >> > bug in IBM Java. 
Noble was able to implement a fix in our code. >> > >> > Thanks, >> > Shawn >> > >> > >> > - >> > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org >> > For additional commands, e-mail: dev-h...@lucene.apache.org >> > >> >> - >> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org >> For additional commands, e-mail: dev-h...@lucene.apache.org >> > -- > Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker > LinkedIn: http://linkedin.com/in/davidwsmiley | Book: > http://www.solrenterprisesearchserver.com - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (LUCENE-7434) Add minNumberShouldMatch parameter to SpanNearQuery
[ https://issues.apache.org/jira/browse/LUCENE-7434?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tim Allison updated LUCENE-7434: Description: On the user list, [~saar32] asked about a new type of SpanQuery that would allow for something like BooleanQuery's minimumNumberShouldMatch bq. Given a set of search terms (t1, t2, t3, ti), return all documents where in a sequence of x=10 tokens at least c=3 of the search terms appear within the sequence. I _think_ we can modify SpanNearQuery fairly easily to accommodate this. I'll submit a PR in the next few days. was: On the user list, Saar Carmi asked about a new type of SpanQuery that would allow for something like BooleanQuery's minimumNumberShouldMatch bq. Given a set of search terms (t1, t2, t3, ti), return all documents where in a sequence of x=10 tokens at least c=3 of the search terms appear within the sequence. I _think_ we can modify SpanNearQuery fairly easily to accommodate this. I'll submit a PR in the next few days. > Add minNumberShouldMatch parameter to SpanNearQuery > --- > > Key: LUCENE-7434 > URL: https://issues.apache.org/jira/browse/LUCENE-7434 > Project: Lucene - Core > Issue Type: Improvement > Components: core/search >Reporter: Tim Allison >Priority: Minor > > On the user list, [~saar32] asked about a new type of SpanQuery that would > allow for something like BooleanQuery's minimumNumberShouldMatch > bq. Given a set of search terms (t1, t2, t3, ti), return all documents where > in a sequence of x=10 tokens at least c=3 of the search terms appear within > the sequence. > I _think_ we can modify SpanNearQuery fairly easily to accommodate this. > I'll submit a PR in the next few days. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[GitHub] lucene-solr pull request #75: LUCENE-7434, first draft
GitHub user tballison opened a pull request: https://github.com/apache/lucene-solr/pull/75 LUCENE-7434, first draft LUCENE-7434, first draft You can merge this pull request into a Git repository by running: $ git pull https://github.com/tballison/lucene-solr master Alternatively you can review and apply these changes as the patch at: https://github.com/apache/lucene-solr/pull/75.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #75 commit c37f1e0d66f1f28a5c83033d9496cc33c55f265e Author: tballison Date: 2016-09-01T19:33:55Z LUCENE-7434, first draft
[jira] [Commented] (LUCENE-7434) Add minNumberShouldMatch parameter to SpanNearQuery
[ https://issues.apache.org/jira/browse/LUCENE-7434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15456390#comment-15456390 ] ASF GitHub Bot commented on LUCENE-7434: GitHub user tballison opened a pull request: https://github.com/apache/lucene-solr/pull/75 LUCENE-7434, first draft LUCENE-7434, first draft You can merge this pull request into a Git repository by running: $ git pull https://github.com/tballison/lucene-solr master Alternatively you can review and apply these changes as the patch at: https://github.com/apache/lucene-solr/pull/75.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #75 commit c37f1e0d66f1f28a5c83033d9496cc33c55f265e Author: tballison Date: 2016-09-01T19:33:55Z LUCENE-7434, first draft > Add minNumberShouldMatch parameter to SpanNearQuery > --- > > Key: LUCENE-7434 > URL: https://issues.apache.org/jira/browse/LUCENE-7434 > Project: Lucene - Core > Issue Type: Improvement > Components: core/search >Reporter: Tim Allison >Priority: Minor > > On the user list, Saar Carmi asked about a new type of SpanQuery that would > allow for something like BooleanQuery's minimumNumberShouldMatch > bq. Given a set of search terms (t1, t2, t3, ti), return all documents where > in a sequence of x=10 tokens at least c=3 of the search terms appear within > the sequence. > I _think_ we can modify SpanNearQuery fairly easily to accommodate this. > I'll submit a PR in the next few days. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-9200) Add Delegation Token Support to Solr
[ https://issues.apache.org/jira/browse/SOLR-9200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15456383#comment-15456383 ] Cassandra Targett commented on SOLR-9200: - [~gchanan] or [~ichattopadhyaya] - is the functionality described in this earlier comment https://issues.apache.org/jira/browse/SOLR-9200?focusedCommentId=15366913&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15366913 still accurate? This has not yet been added to the Solr Ref Guide, and since I think there is some interest for it, we should try to get it in while we're waiting for the issue with publishing 6.2 to be resolved. It belongs with the Kerberos documentation at https://cwiki.apache.org/confluence/display/solr/Kerberos+Authentication+Plugin, correct? > Add Delegation Token Support to Solr > > > Key: SOLR-9200 > URL: https://issues.apache.org/jira/browse/SOLR-9200 > Project: Solr > Issue Type: New Feature > Components: security >Reporter: Gregory Chanan >Assignee: Gregory Chanan > Fix For: 6.2, master (7.0) > > Attachments: SOLR-9200.patch, SOLR-9200.patch, SOLR-9200.patch, > SOLR-9200.patch, SOLR-9200.patch, SOLR-9200_branch_6x.patch, > SOLR-9200_branch_6x.patch, SOLR-9200_branch_6x.patch > > > SOLR-7468 added support for kerberos authentication via the hadoop > authentication filter. Hadoop also has support for an authentication filter > that supports delegation tokens, which allow authenticated users the ability > to grab/renew/delete a token that can be used to bypass the normal > authentication path for a time. This is useful in a variety of use cases: > 1) distributed clients (e.g. MapReduce) where each client may not have access > to the user's kerberos credentials. Instead, the job runner can grab a > delegation token and use that during task execution. 
> 2) If the load on the kerberos server is too high, delegation tokens can > avoid hitting the kerberos server after the first request > 3) If requests/permissions need to be delegated to another user: the more > privileged user can request a delegation token that can be passed to the less > privileged user. > Note to self: > In > https://issues.apache.org/jira/browse/SOLR-7468?focusedCommentId=14579636&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14579636 > I made the following comment which I need to investigate further, since I > don't know if anything changed in this area: > {quote}3) I'm a little concerned with the "NoContext" code in KerberosPlugin > moving forward (I understand this is more a generic auth question than > kerberos specific). For example, in the latest version of the filter we are > using at Cloudera, we play around with the ServletContext in order to pass > information around > (https://github.com/cloudera/lucene-solr/blob/cdh5-4.10.3_5.4.2/solr/core/src/java/org/apache/solr/servlet/SolrHadoopAuthenticationFilter.java#L106). > Is there any way we can get the actual ServletContext in a plugin?{quote} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
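The grab/renew/delete lifecycle behind use cases 1–3 can be sketched with a toy in-memory manager. This is purely illustrative (all names are invented): real delegation tokens, as in Hadoop's authentication filter, are signed, persisted, and carry renewer and real-user identities.

```python
import secrets
import time

class DelegationTokenManager:
    """Toy sketch of the delegation-token lifecycle described above."""

    def __init__(self, lifetime=3600):
        self.lifetime = lifetime
        self.tokens = {}  # token -> (owner, expiry timestamp)

    def grab(self, owner, now=None):
        # An already-authenticated user obtains a token for later use.
        now = time.time() if now is None else now
        token = secrets.token_hex(16)
        self.tokens[token] = (owner, now + self.lifetime)
        return token

    def authenticate(self, token, now=None):
        # A valid token bypasses the normal (e.g. Kerberos) auth path.
        now = time.time() if now is None else now
        entry = self.tokens.get(token)
        if entry is None or entry[1] < now:
            return None  # unknown or expired: fall back to full auth
        return entry[0]

    def renew(self, token, now=None):
        # Extends the token's lifetime without re-authenticating.
        now = time.time() if now is None else now
        owner, _ = self.tokens[token]
        self.tokens[token] = (owner, now + self.lifetime)

    def cancel(self, token):
        self.tokens.pop(token, None)
```

The point of the design is visible in `authenticate`: once a token is issued, requests carrying it never touch the Kerberos server, which is exactly what use cases 1 and 2 need.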
Re: Release Solr 5.5.3
I've been trying to the RC out since yesterday but our tests are holding me back. Once they pass, I'll have the RC out (should be in a few hours). On Mon, Aug 29, 2016 at 1:37 PM Anshum Gupta wrote: > Thanks Uwe. > > On Mon, Aug 29, 2016 at 10:47 AM Uwe Schindler wrote: > >> Hi Anshum, >> >> >> >> I will now enable the tests for 5.5 branch on Jenkins! >> >> >> >> Uwe >> >> >> >> - >> >> Uwe Schindler >> >> H.-H.-Meier-Allee 63, D-28213 Bremen >> >> http://www.thetaphi.de >> >> eMail: u...@thetaphi.de >> >> >> >> *From:* Anshum Gupta [mailto:ans...@anshumgupta.net] >> *Sent:* Monday, August 29, 2016 7:01 PM >> *To:* dev@lucene.apache.org >> *Subject:* Re: Release Solr 5.5.3 >> >> >> >> With SOLR-9310 out of the door and 6.2.0 out, this is a good time to >> resume the 5.5.3 release. Unless any one has any objections, I'll have an >> RC out on Wednesday. >> >> >> >> -Anshum >> >> >> >> On Mon, Aug 1, 2016 at 12:08 PM Anshum Gupta >> wrote: >> >> UPDATE: I'm just holding back to see if we can have a solution for >> SOLR-9310 and have # of upgrades for our users. If we don't have any >> clarity by Thursday, I'll start working on the release. >> >> >> >> On Thu, Jul 28, 2016 at 10:22 AM, Anshum Gupta >> wrote: >> >> I plan on cutting the RC later tonight or tomorrow, unless there are >> objections. >> >> >> >> @Noble: Can you comment on the status of SOLR-9310 (on the JIRA) and if >> it makes sense to hold 5.5.3 ? >> >> >> >> On Thu, Jul 21, 2016 at 5:00 AM, Pushkar Raste >> wrote: >> >> Can we also get SOLR-9310 in as well >> >> On Jul 20, 2016 6:28 PM, "Erick Erickson" >> wrote: >> >> I also would like to get SOLR-7280 in, Noble and I just checked it in >> to that branch as well. >> >> Erick >> >> On Wed, Jul 20, 2016 at 3:02 PM, Anshum Gupta >> wrote: >> > As Shai mentioned, it is actually about SSL + indexing requests leading >> to >> > unstable state in Solr. >> > >> > How quickly that state is reached is a function of # indexing requests. 
>> > Thanks for correcting me on that one :-). >> > >> > On Wed, Jul 20, 2016 at 1:35 PM, David Smiley > > >> > wrote: >> >> >> >> Okay. BTW SOLR-9290 isn't "Just" high indexing rates, but it's for >> those >> >> using SSL too -- correct me if I'm wrong. We don't want to raise alarm >> >> bells too loudly :-) >> >> >> >> On Wed, Jul 20, 2016 at 4:18 PM Anshum Gupta >> >> wrote: >> >>> >> >>> Hi, >> >>> >> >>> With SOLR-9290 fixed, I think it calls for a bug fix release as it >> >>> impacts all users with high indexing rates. >> >>> >> >>> If someone else wants to work on the release, I am fine with it else, >> >>> I'll be happy to be the RM and cut an RC a week from now. >> >>> >> >>> -- >> >>> Anshum Gupta >> >> >> >> -- >> >> Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker >> >> LinkedIn: http://linkedin.com/in/davidwsmiley | Book: >> >> http://www.solrenterprisesearchserver.com >> > >> > >> > >> > >> > -- >> > Anshum Gupta >> >> - >> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org >> For additional commands, e-mail: dev-h...@lucene.apache.org >> >> >> >> >> >> -- >> >> Anshum Gupta >> >> >> >> >> >> -- >> >> Anshum Gupta >> >>
[JENKINS] Lucene-Solr-5.5-Linux (32bit/jdk1.7.0_80) - Build # 363 - Still Unstable!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.5-Linux/363/ Java: 32bit/jdk1.7.0_80 -client -XX:+UseSerialGC 2 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.security.BasicAuthIntegrationTest Error Message: Suite timeout exceeded (>= 720 msec). Stack Trace: java.lang.Exception: Suite timeout exceeded (>= 720 msec). at __randomizedtesting.SeedInfo.seed([A3F55B0C9543EB14]:0) FAILED: org.apache.solr.security.BasicAuthIntegrationTest.testBasics Error Message: Test abandoned because suite timeout was reached. Stack Trace: java.lang.Exception: Test abandoned because suite timeout was reached. at __randomizedtesting.SeedInfo.seed([A3F55B0C9543EB14]:0) Build Log: [...truncated 12514 lines...] [junit4] Suite: org.apache.solr.security.BasicAuthIntegrationTest [junit4] 2> 2117770 INFO (TEST-BasicAuthIntegrationTest.testBasics-seed#[A3F55B0C9543EB14]) [] o.a.s.c.ZkTestServer STARTING ZK TEST SERVER [junit4] 2> 2117770 INFO (Thread-5536) [] o.a.s.c.ZkTestServer client port:0.0.0.0/0.0.0.0:0 [junit4] 2> 2117770 INFO (Thread-5536) [] o.a.s.c.ZkTestServer Starting server [junit4] 2> 2117870 INFO (TEST-BasicAuthIntegrationTest.testBasics-seed#[A3F55B0C9543EB14]) [] o.a.s.c.ZkTestServer start zk server on port:36810 [junit4] 2> 2117870 INFO (TEST-BasicAuthIntegrationTest.testBasics-seed#[A3F55B0C9543EB14]) [] o.a.s.c.c.SolrZkClient Using default ZkCredentialsProvider [junit4] 2> 2117871 INFO (TEST-BasicAuthIntegrationTest.testBasics-seed#[A3F55B0C9543EB14]) [] o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper [junit4] 2> 2117899 INFO (zkCallback-2333-thread-1) [] o.a.s.c.c.ConnectionManager Watcher org.apache.solr.common.cloud.ConnectionManager@1041560 name:ZooKeeperConnection Watcher:127.0.0.1:36810 got event WatchedEvent state:SyncConnected type:None path:null path:null type:None [junit4] 2> 2117899 INFO (TEST-BasicAuthIntegrationTest.testBasics-seed#[A3F55B0C9543EB14]) [] o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper 
[junit4] 2> 2117899 INFO (TEST-BasicAuthIntegrationTest.testBasics-seed#[A3F55B0C9543EB14]) [] o.a.s.c.c.SolrZkClient Using default ZkACLProvider [junit4] 2> 2117899 INFO (TEST-BasicAuthIntegrationTest.testBasics-seed#[A3F55B0C9543EB14]) [] o.a.s.c.c.SolrZkClient makePath: /solr/solr.xml [junit4] 2> 2118016 INFO (jetty-launcher-2332-thread-1) [] o.e.j.s.Server jetty-9.2.13.v20150730 [junit4] 2> 2118016 INFO (jetty-launcher-2332-thread-3) [] o.e.j.s.Server jetty-9.2.13.v20150730 [junit4] 2> 2118017 INFO (jetty-launcher-2332-thread-5) [] o.e.j.s.Server jetty-9.2.13.v20150730 [junit4] 2> 2118017 INFO (jetty-launcher-2332-thread-4) [] o.e.j.s.Server jetty-9.2.13.v20150730 [junit4] 2> 2118017 INFO (jetty-launcher-2332-thread-2) [] o.e.j.s.Server jetty-9.2.13.v20150730 [junit4] 2> 2118019 INFO (jetty-launcher-2332-thread-1) [] o.e.j.s.h.ContextHandler Started o.e.j.s.ServletContextHandler@1777eb6{/solr,null,AVAILABLE} [junit4] 2> 2118019 INFO (jetty-launcher-2332-thread-5) [] o.e.j.s.h.ContextHandler Started o.e.j.s.ServletContextHandler@2010b7{/solr,null,AVAILABLE} [junit4] 2> 2118019 INFO (jetty-launcher-2332-thread-3) [] o.e.j.s.h.ContextHandler Started o.e.j.s.ServletContextHandler@158a93e{/solr,null,AVAILABLE} [junit4] 2> 2118019 INFO (jetty-launcher-2332-thread-5) [] o.e.j.s.ServerConnector Started ServerConnector@5dd9b{HTTP/1.1}{127.0.0.1:32897} [junit4] 2> 2118019 INFO (jetty-launcher-2332-thread-4) [] o.e.j.s.h.ContextHandler Started o.e.j.s.ServletContextHandler@763b90{/solr,null,AVAILABLE} [junit4] 2> 2118019 INFO (jetty-launcher-2332-thread-5) [] o.e.j.s.Server Started @2119265ms [junit4] 2> 2118019 INFO (jetty-launcher-2332-thread-1) [] o.e.j.s.ServerConnector Started ServerConnector@117fd96{HTTP/1.1}{127.0.0.1:34957} [junit4] 2> 2118019 INFO (jetty-launcher-2332-thread-3) [] o.e.j.s.ServerConnector Started ServerConnector@13236d1{HTTP/1.1}{127.0.0.1:37235} [junit4] 2> 2118019 INFO (jetty-launcher-2332-thread-1) [] o.e.j.s.Server Started @2119265ms [junit4] 
2> 2118019 INFO (jetty-launcher-2332-thread-3) [] o.e.j.s.Server Started @2119265ms [junit4] 2> 2118019 INFO (jetty-launcher-2332-thread-1) [] o.a.s.c.s.e.JettySolrRunner Jetty properties: {hostContext=/solr, hostPort=34957} [junit4] 2> 2118019 INFO (jetty-launcher-2332-thread-4) [] o.e.j.s.ServerConnector Started ServerConnector@17095ef{HTTP/1.1}{127.0.0.1:38125} [junit4] 2> 2118020 INFO (jetty-launcher-2332-thread-3) [] o.a.s.c.s.e.JettySolrRunner Jetty properties: {hostPort=37235, hostContext=/solr} [junit4] 2> 2118019 INFO (jetty-launcher-2332-thread-2) [] o.e.j.s.h.ContextHandler Started o.e.j.s.ServletContextHand
[jira] [Commented] (LUCENE-7432) TestIndexWriterOnError.testCheckpoint fails on IBM J9
[ https://issues.apache.org/jira/browse/LUCENE-7432?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15456211#comment-15456211 ] Kevin Langman commented on LUCENE-7432: --- I have confirmed that this exception is an example of the problem described by APAR IV88620. > TestIndexWriterOnError.testCheckpoint fails on IBM J9 > - > > Key: LUCENE-7432 > URL: https://issues.apache.org/jira/browse/LUCENE-7432 > Project: Lucene - Core > Issue Type: Bug >Reporter: Michael McCandless > Labels: IBM-J9 > > Not sure if this is a J9 issue or a Lucene issue, but using this version of > J9: > {noformat} > 09:26 $ java -version > java version "1.8.0" > Java(TM) SE Runtime Environment (build pxa6480sr3fp10-20160720_02(SR3fp10)) > IBM J9 VM (build 2.8, JRE 1.8.0 Linux amd64-64 Compressed References > 20160719_312156 (JIT enabled, AOT enabled) > J9VM - R28_Java8_SR3_20160719_1144_B312156 > JIT - tr.r14.java_20160629_120284.01 > GC - R28_Java8_SR3_20160719_1144_B312156_CMPRSS > J9CL - 20160719_312156) > JCL - 20160719_01 based on Oracle jdk8u101-b13 > {noformat} > This test failure seems to reproduce: > {noformat} >[junit4] 2> NOTE: reproduce with: ant test > -Dtestcase=TestIndexWriterOnVMError -Dtests.method=testCheckpoint > -Dtests.seed=FAB0DC147AFDBF4E -Dtests.nightly=true -Dtests.slow=true > -Dtests.locale=kn -Dtests.timezone=Australia/South -Dtests.asserts=true > -Dtests.file.encoding=UTF-8 >[junit4] ERROR196s | TestIndexWriterOnVMError.testCheckpoint <<< >[junit4]> Throwable #1: java.lang.RuntimeException: > MockDirectoryWrapper: cannot close: there are still 9 open files: > {_2_Asserting_0.pos=1, _2_Asserting_0.dvd=1, _2.fdt=1, _2_Asserting_0.doc=1, > _2_Asserting_0.tim=1, _2.nvd=1, _2.tvd=1, _3.cfs=1, _2.dim=1} >[junit4]> at > __randomizedtesting.SeedInfo.seed([FAB0DC147AFDBF4E:FBA18A7C5B16548D]:0) >[junit4]> at > org.apache.lucene.store.MockDirectoryWrapper.close(MockDirectoryWrapper.java:841) >[junit4]> at > 
org.apache.lucene.index.TestIndexWriterOnVMError.doTest(TestIndexWriterOnVMError.java:89) >[junit4]> at > org.apache.lucene.index.TestIndexWriterOnVMError.testCheckpoint(TestIndexWriterOnVMError.java:280) >[junit4]> at java.lang.Thread.run(Thread.java:785) >[junit4]> Caused by: java.lang.RuntimeException: unclosed IndexInput: > _2.dim >[junit4]> at > org.apache.lucene.store.MockDirectoryWrapper.addFileHandle(MockDirectoryWrapper.java:732) >[junit4]> at > org.apache.lucene.store.MockDirectoryWrapper.openInput(MockDirectoryWrapper.java:776) >[junit4]> at > org.apache.lucene.codecs.lucene60.Lucene60PointsReader.(Lucene60PointsReader.java:85) >[junit4]> at > org.apache.lucene.codecs.lucene60.Lucene60PointsFormat.fieldsReader(Lucene60PointsFormat.java:104) >[junit4]> at > org.apache.lucene.codecs.asserting.AssertingPointsFormat.fieldsReader(AssertingPointsFormat.java:66) >[junit4]> at > org.apache.lucene.index.SegmentCoreReaders.(SegmentCoreReaders.java:128) >[junit4]> at > org.apache.lucene.index.SegmentReader.(SegmentReader.java:74) >[junit4]> at > org.apache.lucene.index.ReadersAndUpdates.getReader(ReadersAndUpdates.java:145) >[junit4]> at > org.apache.lucene.index.ReadersAndUpdates.getReadOnlyClone(ReadersAndUpdates.java:197) >[junit4]> at > org.apache.lucene.index.StandardDirectoryReader.open(StandardDirectoryReader.java:103) >[junit4]> at > org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:460) >[junit4]> at > org.apache.lucene.index.DirectoryReader.open(DirectoryReader.java:103) >[junit4]> at > org.apache.lucene.index.TestIndexWriterOnVMError.doTest(TestIndexWriterOnVMError.java:175) >[junit4]> ... 
37 more >[junit4] 2> NOTE: leaving temporary files on disk at: > /l/trunk/lucene/build/core/test/J0/temp/lucene.index.TestIndexWriterOnVMError_FAB0DC147AFDBF4E-001 >[junit4] 2> NOTE: test params are: codec=Asserting(Lucene62), > sim=ClassicSimilarity, locale=kn, timezone=Australia/South >[junit4] 2> NOTE: Linux 4.4.0-34-generic amd64/IBM Corporation 1.8.0 > (64-bit)/cpus=8,threads=1,free=55483576,total=76742656 >[junit4] 2> NOTE: All tests run in this JVM: [TestIndexWriterOnVMError] > {noformat} > The test is quite stressful, provoking "unexpected" exceptions at tricky > times for {{IndexWriter}}. > When I run with Oracle's 1.8.0_101 with that same "reproduce with", the test > passes. > I see a similar failure for {{testUnknownError}}. -- This message was sent by Atlassian JIRA (v6.3.4#6332) -
[jira] [Created] (LUCENE-7434) Add minNumberShouldMatch parameter to SpanNearQuery
Tim Allison created LUCENE-7434: --- Summary: Add minNumberShouldMatch parameter to SpanNearQuery Key: LUCENE-7434 URL: https://issues.apache.org/jira/browse/LUCENE-7434 Project: Lucene - Core Issue Type: Improvement Components: core/search Reporter: Tim Allison Priority: Minor On the user list, Saar Carmi asked about a new type of SpanQuery that would allow for something like BooleanQuery's minimumNumberShouldMatch bq. Given a set of search terms (t1, t2, t3, ti), return all documents where in a sequence of x=10 tokens at least c=3 of the search terms appear within the sequence. I _think_ we can modify SpanNearQuery fairly easily to accommodate this. I'll submit a PR in the next few days. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-9389) HDFS Transaction logs stay open for writes which leaks Xceivers
[ https://issues.apache.org/jira/browse/SOLR-9389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15456179#comment-15456179 ] Tim Owen commented on SOLR-9389: Thanks for the advice David, I'll take a look at the concurrency setting, we'll need to test out using fewer shards and see how that compares for our use-case. Since we create new collections weekly, we always have the option to increase the shard count later if we do hit situations of large merges happening. Although I'm a bit surprised that this model is considered 'truly massive' .. I'd have expected many large Solr installations will have thousands of shards across all their collections. > HDFS Transaction logs stay open for writes which leaks Xceivers > --- > > Key: SOLR-9389 > URL: https://issues.apache.org/jira/browse/SOLR-9389 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: Hadoop Integration, hdfs >Affects Versions: 6.1, master (7.0) >Reporter: Tim Owen >Assignee: Mark Miller > Fix For: master (7.0), 6.3 > > Attachments: SOLR-9389.patch > > > The HdfsTransactionLog implementation keeps a Hadoop FSDataOutputStream open > for its whole lifetime, which consumes two threads on the HDFS data node > server (dataXceiver and packetresponder) even once the Solr tlog has finished > being written to. > This means for a cluster with many indexes on HDFS, the number of Xceivers > can keep growing and eventually hit the limit of 4096 on the data nodes. It's > especially likely for indexes that have low write rates, because Solr keeps > enough tlogs around to contain 100 documents (up to a limit of 10 tlogs). > There's also the issue that attempting to write to a finished tlog would be a > major bug, so closing it for writes helps catch that. 
> Our cluster during testing had 100+ collections with 100 shards each, spread > across 8 boxes (each running 4 solr nodes and 1 hdfs data node) and with 3x > replication for the tlog files, this meant we hit the xceiver limit fairly > easily and had to use the attached patch to ensure tlogs were closed for > writes once finished. > The patch introduces an extra lifecycle state for the tlog, so it can be > closed for writes and free up the HDFS resources, while still being available > for reading. I've tried to make it as unobtrusive as I could, but there's > probably a better way. I have not changed the behaviour of the local disk > tlog implementation, because it only consumes a file descriptor regardless of > read or write. > nb We have decided not to use Solr-on-HDFS now, we're using local disk (for > various reasons). So I don't have a HDFS cluster to do further testing on > this, I'm just contributing the patch which worked for us. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
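The arithmetic makes the leak concrete: on the order of 10,000 shards, each keeping up to 10 tlogs open with blocks replicated 3x over only 8 data nodes, pins thousands of xceiver threads per node if every finished tlog still holds its write stream, so the 4096 limit is reached quickly. The extra lifecycle state the patch describes — closed for writes, still readable — can be sketched as follows (hypothetical names; the real change is to HdfsTransactionLog and its FSDataOutputStream):

```python
from enum import Enum, auto

class TlogState(Enum):
    OPEN = auto()        # accepting appends; write stream held open
    READ_ONLY = auto()   # writer released (frees data-node threads); reads OK
    CLOSED = auto()      # fully closed

class TransactionLog:
    """Hypothetical sketch of a tlog with an extra read-only state.

    In the real patch, close_for_writes would close the HDFS output
    stream, releasing the dataXceiver/packetresponder threads, while
    the file stays readable for replay.
    """

    def __init__(self):
        self.state = TlogState.OPEN
        self.records = []

    def append(self, record):
        if self.state is not TlogState.OPEN:
            # Writing to a finished tlog would be a major bug; fail loudly.
            raise IOError("tlog is closed for writes")
        self.records.append(record)

    def close_for_writes(self):
        # Real code would flush and close the output stream here.
        self.state = TlogState.READ_ONLY

    def read(self, index):
        if self.state is TlogState.CLOSED:
            raise IOError("tlog is closed")
        return self.records[index]
```

Besides freeing resources, the READ_ONLY state gives the "attempting to write to a finished tlog" bug mentioned above a place to surface as an immediate error instead of silent corruption.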
[jira] [Commented] (SOLR-9175) classes referenced in schema.xml should also support loading from the blob store
[ https://issues.apache.org/jira/browse/SOLR-9175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15456123#comment-15456123 ] Noble Paul commented on SOLR-9175: -- Yes, only components specified in {{solrconfig.xml}} can be loaded from blob store. Schema components are not yet loaded from blob store. You should probably start with {{IndexSchema.java}} > classes referenced in schema.xml should also support loading from the blob > store > > > Key: SOLR-9175 > URL: https://issues.apache.org/jira/browse/SOLR-9175 > Project: Solr > Issue Type: Improvement > Components: blobstore >Affects Versions: 5.4.1 >Reporter: King Rhoton > > It appears that only the Config API and solrconfig.xml support loading custom > classes from the Blob Store. It seems to me like any directive for a > collection which references a class attribute should also support loading > this class from the Blob Store via a runtimeLib="true" attribute. > The obvious use case here is custom analyzers, but similarity is also a > candidate. > The documentation in this area (eg. "add-runtimelib") is pretty vague.
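The runtimeLib idea discussed here — resolving a class at runtime from bytes held in a blob store rather than from the local classpath — can be illustrated with a toy loader. This is a sketch only (names and the source-code blob are invented): Solr actually resolves jars uploaded to the .system collection through a custom classloader, not source text.

```python
import sys
import types

# Stand-in for Solr's .system-collection blob store (key: blobname/version).
BLOB_STORE = {
    "myanalyzer/1": "def analyze(text):\n    return text.lower().split()\n",
}

def load_runtime_lib(blob_key, module_name):
    """Fetch source from the blob store and materialize it as a module.

    Illustrative only: mimics how a runtimeLib="true" component would be
    resolved from the blob store instead of the local classpath.
    """
    mod = types.ModuleType(module_name)
    exec(BLOB_STORE[blob_key], mod.__dict__)
    sys.modules[module_name] = mod
    return mod
```

The gap Noble describes is that only the solrconfig.xml loading path consults the blob store this way; the schema loading path would need the equivalent hook.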
[JENKINS] Lucene-Solr-Tests-master - Build # 1383 - Still Failing
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/1383/ 2 tests failed. FAILED: org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.test Error Message: org.apache.solr.client.solrj.SolrServerException: No live SolrServers available to handle this request:[http://127.0.0.1:42949/c8n_1x3_lf_shard1_replica2] Stack Trace: org.apache.solr.client.solrj.SolrServerException: org.apache.solr.client.solrj.SolrServerException: No live SolrServers available to handle this request:[http://127.0.0.1:42949/c8n_1x3_lf_shard1_replica2] at __randomizedtesting.SeedInfo.seed([C50AEE012C6EB3D3:4D5ED1DB8292DE2B]:0) at org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:769) at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1161) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1050) at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:992) at org.apache.solr.cloud.AbstractFullDistribZkTestBase.sendDocsWithRetry(AbstractFullDistribZkTestBase.java:753) at org.apache.solr.cloud.AbstractFullDistribZkTestBase.sendDocsWithRetry(AbstractFullDistribZkTestBase.java:741) at org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.testRf3WithLeaderFailover(LeaderFailoverAfterPartitionTest.java:178) at org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.test(LeaderFailoverAfterPartitionTest.java:57) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871) at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java
[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 3519 - Failure!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/3519/ Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC 3 tests failed. FAILED: org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.test Error Message: No live SolrServers available to handle this request:[http://127.0.0.1:58195/v_tqv/c8n_1x3_lf_shard1_replica1] Stack Trace: org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: No live SolrServers available to handle this request:[http://127.0.0.1:58195/v_tqv/c8n_1x3_lf_shard1_replica1] at __randomizedtesting.SeedInfo.seed([1427EDBBBFC4B14E:9C73D2611138DCB6]:0) at org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:755) at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1161) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1050) at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:992) at org.apache.solr.cloud.AbstractFullDistribZkTestBase.sendDocsWithRetry(AbstractFullDistribZkTestBase.java:753) at org.apache.solr.cloud.AbstractFullDistribZkTestBase.sendDocsWithRetry(AbstractFullDistribZkTestBase.java:741) at org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.testRf3WithLeaderFailover(LeaderFailoverAfterPartitionTest.java:178) at org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.test(LeaderFailoverAfterPartitionTest.java:57) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871) at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:3
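The failure above goes through AbstractFullDistribZkTestBase.sendDocsWithRetry before CloudSolrClient finally reports "No live SolrServers available". As a rough, network-free sketch of that retry shape (hypothetical names; this is not the actual test-harness code), the idea is to try each replica URL, retry the whole list a bounded number of times, and only then surface the error:

```python
def send_with_retry(urls, send, max_retries=3):
    """Try send(url) against each replica URL; retry the full list
    up to max_retries times before giving up with an error that
    names the URLs, similar to the message in the stack trace."""
    last_error = None
    for _attempt in range(max_retries):
        for url in urls:
            try:
                return send(url)
            except ConnectionError as e:
                # Remember the failure and fall through to the next replica.
                last_error = e
    raise RuntimeError(
        "No live SolrServers available to handle this request: %s" % urls
    ) from last_error
```

With a partition between replicas, every attempt can fail within the retry budget, which is exactly the state the test run above ended in.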
[jira] [Closed] (SOLR-9466) During concurrency some Solr document are not seen even after soft and hard commit
[ https://issues.apache.org/jira/browse/SOLR-9466?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shawn Heisey closed SOLR-9466. -- Resolution: Invalid This is the wrong place for this discussion. Please move it to the mailing list or the IRC channel. If a discussion there determines that there's a bug in a current version, we can re-open the issue. Please don't think that I'm unwilling to help ... it just needs to happen in the correct place. I have some questions for you to answer about your installation. If you can go there in the next few hours, you'll find that I am reachable on IRC: https://wiki.apache.org/solr/IRCChannels > During concurrency some Solr document are not seen even after soft and hard > commit > -- > > Key: SOLR-9466 > URL: https://issues.apache.org/jira/browse/SOLR-9466 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: SolrCloud >Affects Versions: 4.10.2 > Environment: Cent OS >Reporter: Ganesh >Priority: Critical > > Solr cloud with 2 nodes, master master, with 5 collection and 2 shards in > each collection. > During concurrent usage of SOLR where both updates and search is sent to SOLR > server, some of our updates / adding of new documents are getting lost. > We could see that update hitting solr and we could see it in localhost_access > file of tomcat, also in catalina.out. But still we couldn't see that record > while searching. > Following is the catalina.out logs for the document which is getting indexed > properly. 
> Sep 01, 2016 7:39:31 AM org.apache.solr.update.processor.LogUpdateProcessor > processAdd > FINE: PRE_UPDATE > add{,id=CUA00439019223370564139207241C3LEA020769223370567404392838EXCC301} > > {{params(crid=CUA00439019223370564139207241C3LEA020769223370567404392838EXCC301),defaults(wt=xml)}} > Sep 01, 2016 7:39:31 AM org.apache.solr.update.TransactionLog > FINE: New TransactionLog > file=/ebdata2/solrdata/IOB_shard1_replica1/data/tlog/tlog.0220856, > exists=false, size=0, openExisting=false > Sep 01, 2016 7:39:31 AM org.apache.solr.update.SolrCmdDistributor submit > FINE: sending update to http://xx.xx.xx.xx:7070/solr/IOB_shard1_replica2/ > retry:0 > add{_version_=1544254202941800448,id=CUA00439019223370564139207241C3LEA020769223370567404392838EXCC301} > > params:update.distrib=FROMLEADER&distrib.from=http%3A%2F%2Fxx.xx.xx.xx%3A7070%2Fsolr%2FIOB_shard1_replica1%2F > Sep 01, 2016 7:39:31 AM > org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrServer$Runner run > FINE: starting runner: > org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrServer$Runner@3fb794b2 > Sep 01, 2016 7:39:31 AM org.apache.solr.update.processor.LogUpdateProcessor > finish > FINE: PRE_UPDATE FINISH > {{params(crid=CUA00439019223370564139207241C3LEA020769223370567404392838EXCC301),defaults(wt=xml)}} > Sep 01, 2016 7:39:31 AM > org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrServer$Runner run > FINE: finished: > org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrServer$Runner@3fb794b2 > Sep 01, 2016 7:39:31 AM org.apache.solr.update.processor.LogUpdateProcessor > finish > INFO: [IOB_shard1_replica1] webapp=/solr path=/update > params={crid=CUA00439019223370564139207241C3LEA020769223370567404392838EXCC301} > > {add=[CUA00439019223370564139207241C3LEA020769223370567404392838EXCC301 > (1544254202941800448)]} 0 9 > Sep 01, 2016 7:39:31 AM org.apache.solr.servlet.SolrDispatchFilter doFilter > FINE: Closing out SolrRequest: > 
{{params(crid=CUA00439019223370564139207241C3LEA020769223370567404392838EXCC301),defaults(wt=xml)}} > For the one which document is not getting indexed, we could see only > following log in catalina.out. Not sure whether it's getting added to SOLR. > Sep 01, 2016 7:39:56 AM org.apache.solr.update.processor.LogUpdateProcessor > finish > FINE: PRE_UPDATE FINISH > {{params(crid=CUA00439019223370564139182810C3LEA020179223370567061972057EXCC102),defaults(wt=xml)}} > Sep 01, 2016 7:39:56 AM org.apache.solr.update.processor.LogUpdateProcessor > finish > INFO: [IOB_shard1_replica1] webapp=/solr path=/update > params={crid=CUA00439019223370564139182810C3LEA020179223370567061972057EXCC102} > {} 0 1 > Sep 01, 2016 7:39:56 AM org.apache.solr.servlet.SolrDispatchFilter doFilter > FINE: Closing out SolrRequest: > {{params(crid=CUA00439019223370564139182810C3LEA020179223370567061972057EXCC102),defaults(wt=xml)}} > We have set autosoftcommit to 1 seconds and autohardcommit to 30 seconds. > We are not getting any errors or exceptions in the log. -- This mes
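With autoSoftCommit set to 1 second, a newly added document is only guaranteed to be searchable after the next soft commit fires, so a client that searches immediately after adding can legitimately miss it. A minimal client-side sketch (illustrative only; `is_visible` stands in for a real Solr query by id) is to poll until the document appears or a timeout elapses, with the poll interval comfortably below the timeout and the timeout well above the soft-commit window:

```python
import time

def wait_until_visible(doc_id, is_visible, timeout=5.0, interval=0.25):
    """Poll is_visible(doc_id) until it returns True or the timeout
    elapses. Returns True if the document became searchable in time."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if is_visible(doc_id):
            return True
        time.sleep(interval)
    return False
```

If a document never becomes visible even after the hard-commit interval (30 seconds here) has passed, the add was likely lost rather than merely not yet committed, which is the distinction this issue hinges on.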
[jira] [Commented] (SOLR-9175) classes referenced in schema.xml should also support loading from the blob store
[ https://issues.apache.org/jira/browse/SOLR-9175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15455872#comment-15455872 ] Rupendra Peddacama commented on SOLR-9175: -- I ran into the same issue using a custom similarity class deployed into the blob store. I would like to contribute to resolving this issue. Could you provide some pointers on the next steps and the areas of the codebase affected? Thanks. > classes referenced in schema.xml should also support loading from the blob > store > > > Key: SOLR-9175 > URL: https://issues.apache.org/jira/browse/SOLR-9175 > Project: Solr > Issue Type: Improvement > Components: blobstore >Affects Versions: 5.4.1 >Reporter: King Rhoton > > It appears that only the Config API and solrconfig.xml support loading custom > classes from the Blob Store. It seems to me like any directive for a > collection which references a class attribute should also support loading > this class from the Blob Store via a runtimeLib="true" attribute. > The obvious use case here is custom analyzers, but similarity is also a > candidate. > The documentation in this area (eg. "add-runtimelib") is pretty vague. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
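For context on the "add-runtimelib" command the issue mentions: it is the Config API command that registers an uploaded blob-store jar with a collection, via a JSON body POSTed to /solr/<collection>/config. A small sketch of building that payload (the jar name and version below are hypothetical examples):

```python
import json

def add_runtimelib_payload(blob_name, version):
    """Build the JSON body for the Config API's add-runtimelib command,
    which registers a jar previously uploaded to the blob store."""
    return json.dumps({"add-runtimelib": {"name": blob_name, "version": version}})
```

The point of the issue is that classes referenced from schema.xml (analyzers, similarity) do not currently look at jars registered this way, even though classes referenced from solrconfig.xml with runtimeLib="true" do.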
[JENKINS] Lucene-Solr-Tests-6.x - Build # 441 - Failure
Build: https://builds.apache.org/job/Lucene-Solr-Tests-6.x/441/ All tests passed Build Log: [...truncated 65537 lines...] -ecj-javadoc-lint-src: [mkdir] Created dir: /tmp/ecj211888298 [ecj-lint] Compiling 973 source files to /tmp/ecj211888298 [ecj-lint] invalid Class-Path header in manifest of jar file: /x1/jenkins/.ivy2/cache/org.restlet.jee/org.restlet/jars/org.restlet-2.3.0.jar [ecj-lint] invalid Class-Path header in manifest of jar file: /x1/jenkins/.ivy2/cache/org.restlet.jee/org.restlet.ext.servlet/jars/org.restlet.ext.servlet-2.3.0.jar [ecj-lint] -- [ecj-lint] 1. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-6.x/solr/core/src/java/org/apache/solr/cloud/Assign.java (at line 101) [ecj-lint] Collections.sort(shardIdNames, (o1, o2) -> { [ecj-lint]^^^ [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape analysis [ecj-lint] -- [ecj-lint] 2. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-6.x/solr/core/src/java/org/apache/solr/cloud/Assign.java (at line 101) [ecj-lint] Collections.sort(shardIdNames, (o1, o2) -> { [ecj-lint]^^^ [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape analysis [ecj-lint] -- [ecj-lint] 3. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-6.x/solr/core/src/java/org/apache/solr/cloud/Assign.java (at line 101) [ecj-lint] Collections.sort(shardIdNames, (o1, o2) -> { [ecj-lint]^^^ [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape analysis [ecj-lint] -- [ecj-lint] -- [ecj-lint] 4. ERROR in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-6.x/solr/core/src/java/org/apache/solr/cloud/DeleteNodeCmd.java (at line 30) [ecj-lint] import org.apache.solr.common.params.CommonAdminParams; [ecj-lint]^^^ [ecj-lint] The import org.apache.solr.common.params.CommonAdminParams is never used [ecj-lint] -- [ecj-lint] -- [ecj-lint] 5. 
ERROR in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-6.x/solr/core/src/java/org/apache/solr/cloud/ReplaceNodeCmd.java (at line 37) [ecj-lint] import org.apache.solr.common.params.CommonAdminParams; [ecj-lint]^^^ [ecj-lint] The import org.apache.solr.common.params.CommonAdminParams is never used [ecj-lint] -- [ecj-lint] 6. ERROR in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-6.x/solr/core/src/java/org/apache/solr/cloud/ReplaceNodeCmd.java (at line 40) [ecj-lint] import org.apache.solr.common.util.StrUtils; [ecj-lint] [ecj-lint] The import org.apache.solr.common.util.StrUtils is never used [ecj-lint] -- [ecj-lint] 7. ERROR in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-6.x/solr/core/src/java/org/apache/solr/cloud/ReplaceNodeCmd.java (at line 48) [ecj-lint] import static org.apache.solr.common.util.StrUtils.formatString; [ecj-lint] ^ [ecj-lint] The import org.apache.solr.common.util.StrUtils.formatString is never used [ecj-lint] -- [ecj-lint] -- [ecj-lint] 8. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-6.x/solr/core/src/java/org/apache/solr/cloud/rule/ReplicaAssigner.java (at line 213) [ecj-lint] Collections.sort(sortedLiveNodes, (n1, n2) -> { [ecj-lint] ^^^ [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape analysis [ecj-lint] -- [ecj-lint] 9. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-6.x/solr/core/src/java/org/apache/solr/cloud/rule/ReplicaAssigner.java (at line 213) [ecj-lint] Collections.sort(sortedLiveNodes, (n1, n2) -> { [ecj-lint] ^^^ [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape analysis [ecj-lint] -- [ecj-lint] 10. 
WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-6.x/solr/core/src/java/org/apache/solr/cloud/rule/ReplicaAssigner.java (at line 213) [ecj-lint] Collections.sort(sortedLiveNodes, (n1, n2) -> { [ecj-lint] ^^^ [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape analysis [ecj-lint] -- [ecj-lint] -- [ecj-lint] 11. WARNING in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-6.x/solr/core/src/java/org/apache/solr/core/HdfsDirectoryFactory.java (at line 227) [ecj-lint] dir = new BlockDirectory(path, hdfsDir, cache, null, blockCacheReadEnabled, false, cacheMerges, cacheReadOnce); [ecj-lint] ^
[JENKINS] Lucene-Solr-6.x-Windows (32bit/jdk1.8.0_102) - Build # 428 - Failure!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Windows/428/ Java: 32bit/jdk1.8.0_102 -server -XX:+UseG1GC All tests passed Build Log: [...truncated 63846 lines...] -ecj-javadoc-lint-src: [mkdir] Created dir: C:\Users\jenkins\AppData\Local\Temp\ecj253176582 [ecj-lint] Compiling 973 source files to C:\Users\jenkins\AppData\Local\Temp\ecj253176582 [ecj-lint] invalid Class-Path header in manifest of jar file: C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\core\lib\org.restlet-2.3.0.jar [ecj-lint] invalid Class-Path header in manifest of jar file: C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\core\lib\org.restlet.ext.servlet-2.3.0.jar [ecj-lint] -- [ecj-lint] 1. WARNING in C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\core\src\java\org\apache\solr\cloud\Assign.java (at line 101) [ecj-lint] Collections.sort(shardIdNames, (o1, o2) -> { [ecj-lint]^^^ [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape analysis [ecj-lint] -- [ecj-lint] 2. WARNING in C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\core\src\java\org\apache\solr\cloud\Assign.java (at line 101) [ecj-lint] Collections.sort(shardIdNames, (o1, o2) -> { [ecj-lint]^^^ [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape analysis [ecj-lint] -- [ecj-lint] 3. WARNING in C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\core\src\java\org\apache\solr\cloud\Assign.java (at line 101) [ecj-lint] Collections.sort(shardIdNames, (o1, o2) -> { [ecj-lint]^^^ [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape analysis [ecj-lint] -- [ecj-lint] -- [ecj-lint] 4. ERROR in C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\core\src\java\org\apache\solr\cloud\DeleteNodeCmd.java (at line 30) [ecj-lint] import org.apache.solr.common.params.CommonAdminParams; [ecj-lint]^^^ [ecj-lint] The import org.apache.solr.common.params.CommonAdminParams is never used [ecj-lint] -- [ecj-lint] -- [ecj-lint] 5. 
ERROR in C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\core\src\java\org\apache\solr\cloud\ReplaceNodeCmd.java (at line 37) [ecj-lint] import org.apache.solr.common.params.CommonAdminParams; [ecj-lint]^^^ [ecj-lint] The import org.apache.solr.common.params.CommonAdminParams is never used [ecj-lint] -- [ecj-lint] 6. ERROR in C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\core\src\java\org\apache\solr\cloud\ReplaceNodeCmd.java (at line 40) [ecj-lint] import org.apache.solr.common.util.StrUtils; [ecj-lint] [ecj-lint] The import org.apache.solr.common.util.StrUtils is never used [ecj-lint] -- [ecj-lint] 7. ERROR in C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\core\src\java\org\apache\solr\cloud\ReplaceNodeCmd.java (at line 48) [ecj-lint] import static org.apache.solr.common.util.StrUtils.formatString; [ecj-lint] ^ [ecj-lint] The import org.apache.solr.common.util.StrUtils.formatString is never used [ecj-lint] -- [ecj-lint] -- [ecj-lint] 8. WARNING in C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\core\src\java\org\apache\solr\cloud\rule\ReplicaAssigner.java (at line 213) [ecj-lint] Collections.sort(sortedLiveNodes, (n1, n2) -> { [ecj-lint] ^^^ [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape analysis [ecj-lint] -- [ecj-lint] 9. WARNING in C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\core\src\java\org\apache\solr\cloud\rule\ReplicaAssigner.java (at line 213) [ecj-lint] Collections.sort(sortedLiveNodes, (n1, n2) -> { [ecj-lint] ^^^ [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape analysis [ecj-lint] -- [ecj-lint] 10. WARNING in C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\core\src\java\org\apache\solr\cloud\rule\ReplicaAssigner.java (at line 213) [ecj-lint] Collections.sort(sortedLiveNodes, (n1, n2) -> { [ecj-lint] ^^^ [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape analysis [ecj-lint] -- [ecj-lint] -- [ecj-lint] 11. 
WARNING in C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\core\src\java\org\apache\solr\core\HdfsDirectoryFactory.java (at line 227) [ecj-lint] dir = new BlockDirectory(path, hdfsDir, cache, null, blockCacheReadEnabled, false, cacheMerges, cacheReadOnce); [ecj-lint]
[JENKINS] Lucene-Solr-6.x-Linux (32bit/jdk1.8.0_102) - Build # 1644 - Failure!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/1644/ Java: 32bit/jdk1.8.0_102 -server -XX:+UseConcMarkSweepGC 1 tests failed. FAILED: org.apache.solr.cloud.ForceLeaderTest.testReplicasInLIRNoLeader Error Message: No live SolrServers available to handle this request:[http://127.0.0.1:45861/forceleader_test_collection_shard1_replica1] Stack Trace: org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: No live SolrServers available to handle this request:[http://127.0.0.1:45861/forceleader_test_collection_shard1_replica1] at __randomizedtesting.SeedInfo.seed([DFA9356469A3273E:393E01A45021DE5F]:0) at org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:755) at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1161) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1050) at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:992) at org.apache.solr.cloud.AbstractFullDistribZkTestBase.sendDocsWithRetry(AbstractFullDistribZkTestBase.java:753) at org.apache.solr.cloud.AbstractFullDistribZkTestBase.sendDocsWithRetry(AbstractFullDistribZkTestBase.java:741) at org.apache.solr.cloud.ForceLeaderTest.sendDoc(ForceLeaderTest.java:424) at org.apache.solr.cloud.ForceLeaderTest.testReplicasInLIRNoLeader(ForceLeaderTest.java:131) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907) at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rul
Re: Contacted by IBM about problems with Lucene/Solr on their JVM
BTW I noticed IBM has a Docker container for their Java: https://www.ibm.com/developerworks/community/blogs/738b7897-cd38-4f24-9f05-48dd69116837/entry/Announcement_IBM_SDK_Java_Technology_Edition_s390x_and_ppc64le_Docker_Images_are_now_available_on_DockerHub?lang=en That would be way more convenient than either obtaining a bulky VM or installing it. On Thu, Sep 1, 2016 at 10:33 AM Alexandre Rafalovitch wrote: > I am holding onto SOLR-9383 where IBM MBean info is not the same as > Sun's one (and messing up Admin UI). Once I get the replication VM, I > was planning to do it by some sort of name mapping. But if that's > something that IBM is supposed to fix instead, that could be nice too. > > Regards, > Alex. > > Newsletter and resources for Solr beginners and intermediates: > http://www.solr-start.com/ > > > On 1 September 2016 at 20:58, Shawn Heisey wrote: > > I was contacted on my Apache email address a few days ago by somebody at > > IBM who wasn't exactly happy that my Solr wiki page recommends not using > > their Java. > > > > https://wiki.apache.org/solr/ShawnHeisey > > > > He said that he'd like to address any problems. I replied that he > > should join this list and work with those who really know about the > > problems that we've encountered. That was three days ago and I haven't > > seen anything here to indicate that my suggestion was followed. > > > > Before I drop his email address here without permission, I'd like to > > know how to proceed. I'm not the right person on our end to discuss the > > issue. > > > > I happened to notice SOLR-9179 a few minutes ago, where Solr tickles a > > bug in IBM Java. Noble was able to implement a fix in our code. 
> > > > Thanks, > > Shawn > > > > > > - > > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org > > For additional commands, e-mail: dev-h...@lucene.apache.org > > > > - > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org > For additional commands, e-mail: dev-h...@lucene.apache.org > > -- Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker LinkedIn: http://linkedin.com/in/davidwsmiley | Book: http://www.solrenterprisesearchserver.com
Re: Contacted by IBM about problems with Lucene/Solr on their JVM
I am holding onto SOLR-9383 where IBM MBean info is not the same as Sun's one (and messing up Admin UI). Once I get the replication VM, I was planning to do it by some sort of name mapping. But if that's something that IBM is supposed to fix instead, that could be nice too. Regards, Alex. Newsletter and resources for Solr beginners and intermediates: http://www.solr-start.com/ On 1 September 2016 at 20:58, Shawn Heisey wrote: > I was contacted on my Apache email address a few days ago by somebody at > IBM who wasn't exactly happy that my Solr wiki page recommends not using > their Java. > > https://wiki.apache.org/solr/ShawnHeisey > > He said that he'd like to address any problems. I replied that he > should join this list and work with those who really know about the > problems that we've encountered. That was three days ago and I haven't > seen anything here to indicate that my suggestion was followed. > > Before I drop his email address here without permission, I'd like to > know how to proceed. I'm not the right person on our end to discuss the > issue. > > I happened to notice SOLR-9179 a few minutes ago, where Solr tickles a > bug in IBM Java. Noble was able to implement a fix in our code. > > Thanks, > Shawn > > > - > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org > For additional commands, e-mail: dev-h...@lucene.apache.org > - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
Re: Contacted by IBM about problems with Lucene/Solr on their JVM
On Thu, Sep 1, 2016 at 9:58 AM, Shawn Heisey wrote: > I was contacted on my Apache email address a few days ago by somebody at > IBM who wasn't exactly happy that my Solr wiki page recommends not using > their Java. > > https://wiki.apache.org/solr/ShawnHeisey > > He said that he'd like to address any problems. I replied that he > should join this list and work with those who really know about the > problems that we've encountered. That was three days ago and I haven't > seen anything here to indicate that my suggestion was followed. > > Before I drop his email address here without permission, I'd like to > know how to proceed. I'm not the right person on our end to discuss the > issue. Without knowing more, it sounds like it was a personal email about their opinion vs your personal opinion (I assume they didn't say they were speaking for IBM?) I don't think there is any reason to share their email address, and any revising of your personal opinion (or the phrasing of it) is up to you. If someone were to ask my opinion on deployment platforms, I'd recommend the most commonly used/tested in order to minimize risk (i.e. x86_64 linux w/ 64 bit oracle/openJDK). -Yonik - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-9466) During concurrency some Solr document are not seen even after soft and hard commit
[ https://issues.apache.org/jira/browse/SOLR-9466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15455564#comment-15455564 ] Ganesh commented on SOLR-9466: -- Hi Shawn, Thanks for your reply. Regarding the cache autowarm count, we have disabled most of the caches, and for the filter cache we have set autowarmCount to 0. We have also set our Tomcat maxThreads to 5000. We are in the process of upgrading to a newer version in our development environment; validating our product on the new version will take 3 to 4 weeks. Until then we need to support our production environment on 4.10.2, so we badly need help with this. Do you think increasing Tomcat's maxThreads from 5000 to 1 would help here? We have already set autowarmCount to zero. For background on our use case: our application can hit the Solr server with roughly 50 to 100 threads in parallel for adding/updating documents. I have pasted my solrconfig here for reference. Let us know if any configuration change would help us get rid of these missing documents.
LUCENE_42 ${solr.data.dir:} ${solr.lock.type:native} ${solr.ulog.dir:} 3 false 1000 1024 false 20 50 static firstSearcher warming in solrconfig.xml false 2 explicit 10 text explicit json true text true json true explicit velocity browse layout Solritas edismax text^0.5 features^1.0 name^1.2 sku^1.5 id^10.0 manu^1.1 cat^1.4 title^10.0 description^5.0 keywords^5.0 author^2.0 resourcename^1.0 text 100% *:* 10 *,score text^0.5 features^1.0 name^1.2 sku^1.5 id^10.0 manu^1.1 cat^1.4 title^10.0 description^5.0 keywords^5.0 author^2.0 resourcename^1.0 text,features,name,sku,id,manu,cat,title,description,keywords,author,resourcename 3 on cat manu_exact content_type author_s ipod GB 1 cat,inStock after price 0 600 50 popularity 0 10 3 manufacturedate_dt NOW/YEAR-10YEARS NOW +1YEAR before after on content features title name html 0 title 0 name 3 200 content 750 on false 5 2 5 true true 5 3 spellcheck application/json application/csv true ignored_ true links ignored_ solrpingquery all explicit true textSpell default name solr.DirectSolrSpellChecker internal 0.5 2 1 5 4 0.01 wordbreak solr.WordBreakSolrSpellChecker name true true 10 text default wordbreak on true 10
[jira] [Commented] (LUCENE-7432) TestIndexWriterOnError.testCheckpoint fails on IBM J9
[ https://issues.apache.org/jira/browse/LUCENE-7432?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15455532#comment-15455532 ] Shawn Heisey commented on LUCENE-7432: -- [~klangman] is the person at IBM that I just mentioned on the dev list. Kevin, glad to see that you're getting involved. > TestIndexWriterOnError.testCheckpoint fails on IBM J9 > - > > Key: LUCENE-7432 > URL: https://issues.apache.org/jira/browse/LUCENE-7432 > Project: Lucene - Core > Issue Type: Bug >Reporter: Michael McCandless > Labels: IBM-J9 > > Not sure if this is a J9 issue or a Lucene issue, but using this version of > J9: > {noformat} > 09:26 $ java -version > java version "1.8.0" > Java(TM) SE Runtime Environment (build pxa6480sr3fp10-20160720_02(SR3fp10)) > IBM J9 VM (build 2.8, JRE 1.8.0 Linux amd64-64 Compressed References > 20160719_312156 (JIT enabled, AOT enabled) > J9VM - R28_Java8_SR3_20160719_1144_B312156 > JIT - tr.r14.java_20160629_120284.01 > GC - R28_Java8_SR3_20160719_1144_B312156_CMPRSS > J9CL - 20160719_312156) > JCL - 20160719_01 based on Oracle jdk8u101-b13 > {noformat} > This test failure seems to reproduce: > {noformat} >[junit4] 2> NOTE: reproduce with: ant test > -Dtestcase=TestIndexWriterOnVMError -Dtests.method=testCheckpoint > -Dtests.seed=FAB0DC147AFDBF4E -Dtests.nightly=true -Dtests.slow=true > -Dtests.locale=kn -Dtests.timezone=Australia/South -Dtests.asserts=true > -Dtests.file.encoding=UTF-8 >[junit4] ERROR196s | TestIndexWriterOnVMError.testCheckpoint <<< >[junit4]> Throwable #1: java.lang.RuntimeException: > MockDirectoryWrapper: cannot close: there are still 9 open files: > {_2_Asserting_0.pos=1, _2_Asserting_0.dvd=1, _2.fdt=1, _2_Asserting_0.doc=1, > _2_Asserting_0.tim=1, _2.nvd=1, _2.tvd=1, _3.cfs=1, _2.dim=1} >[junit4]> at > __randomizedtesting.SeedInfo.seed([FAB0DC147AFDBF4E:FBA18A7C5B16548D]:0) >[junit4]> at > org.apache.lucene.store.MockDirectoryWrapper.close(MockDirectoryWrapper.java:841) >[junit4]> at > 
org.apache.lucene.index.TestIndexWriterOnVMError.doTest(TestIndexWriterOnVMError.java:89) >[junit4]> at > org.apache.lucene.index.TestIndexWriterOnVMError.testCheckpoint(TestIndexWriterOnVMError.java:280) >[junit4]> at java.lang.Thread.run(Thread.java:785) >[junit4]> Caused by: java.lang.RuntimeException: unclosed IndexInput: > _2.dim >[junit4]> at > org.apache.lucene.store.MockDirectoryWrapper.addFileHandle(MockDirectoryWrapper.java:732) >[junit4]> at > org.apache.lucene.store.MockDirectoryWrapper.openInput(MockDirectoryWrapper.java:776) >[junit4]> at > org.apache.lucene.codecs.lucene60.Lucene60PointsReader.(Lucene60PointsReader.java:85) >[junit4]> at > org.apache.lucene.codecs.lucene60.Lucene60PointsFormat.fieldsReader(Lucene60PointsFormat.java:104) >[junit4]> at > org.apache.lucene.codecs.asserting.AssertingPointsFormat.fieldsReader(AssertingPointsFormat.java:66) >[junit4]> at > org.apache.lucene.index.SegmentCoreReaders.(SegmentCoreReaders.java:128) >[junit4]> at > org.apache.lucene.index.SegmentReader.(SegmentReader.java:74) >[junit4]> at > org.apache.lucene.index.ReadersAndUpdates.getReader(ReadersAndUpdates.java:145) >[junit4]> at > org.apache.lucene.index.ReadersAndUpdates.getReadOnlyClone(ReadersAndUpdates.java:197) >[junit4]> at > org.apache.lucene.index.StandardDirectoryReader.open(StandardDirectoryReader.java:103) >[junit4]> at > org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:460) >[junit4]> at > org.apache.lucene.index.DirectoryReader.open(DirectoryReader.java:103) >[junit4]> at > org.apache.lucene.index.TestIndexWriterOnVMError.doTest(TestIndexWriterOnVMError.java:175) >[junit4]> ... 
37 more >[junit4] 2> NOTE: leaving temporary files on disk at: > /l/trunk/lucene/build/core/test/J0/temp/lucene.index.TestIndexWriterOnVMError_FAB0DC147AFDBF4E-001 >[junit4] 2> NOTE: test params are: codec=Asserting(Lucene62), > sim=ClassicSimilarity, locale=kn, timezone=Australia/South >[junit4] 2> NOTE: Linux 4.4.0-34-generic amd64/IBM Corporation 1.8.0 > (64-bit)/cpus=8,threads=1,free=55483576,total=76742656 >[junit4] 2> NOTE: All tests run in this JVM: [TestIndexWriterOnVMError] > {noformat} > The test is quite stressful, provoking "unexpected" exceptions at tricky > times for {{IndexWriter}}. > When I run with Oracle's 1.8.0_101 with that same "reproduce with", the test > passes. > I see a similar failure for {{testUnknownError}}. -- This message was sent by Atlassian JIRA (v6.3.4#6332) -
[jira] [Commented] (LUCENE-7432) TestIndexWriterOnError.testCheckpoint fails on IBM J9
[ https://issues.apache.org/jira/browse/LUCENE-7432?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15455495#comment-15455495 ] Michael McCandless commented on LUCENE-7432: Thanks [~klangman], that sounds compelling. Maybe you can test internally if this APAR did in fact fix it. I also tested with {{-Xint}} and the test passes. > TestIndexWriterOnError.testCheckpoint fails on IBM J9 > - > > Key: LUCENE-7432 > URL: https://issues.apache.org/jira/browse/LUCENE-7432 > Project: Lucene - Core > Issue Type: Bug >Reporter: Michael McCandless > Labels: IBM-J9 > > Not sure if this is a J9 issue or a Lucene issue, but using this version of > J9: > {noformat} > 09:26 $ java -version > java version "1.8.0" > Java(TM) SE Runtime Environment (build pxa6480sr3fp10-20160720_02(SR3fp10)) > IBM J9 VM (build 2.8, JRE 1.8.0 Linux amd64-64 Compressed References > 20160719_312156 (JIT enabled, AOT enabled) > J9VM - R28_Java8_SR3_20160719_1144_B312156 > JIT - tr.r14.java_20160629_120284.01 > GC - R28_Java8_SR3_20160719_1144_B312156_CMPRSS > J9CL - 20160719_312156) > JCL - 20160719_01 based on Oracle jdk8u101-b13 > {noformat} > This test failure seems to reproduce: > {noformat} >[junit4] 2> NOTE: reproduce with: ant test > -Dtestcase=TestIndexWriterOnVMError -Dtests.method=testCheckpoint > -Dtests.seed=FAB0DC147AFDBF4E -Dtests.nightly=true -Dtests.slow=true > -Dtests.locale=kn -Dtests.timezone=Australia/South -Dtests.asserts=true > -Dtests.file.encoding=UTF-8 >[junit4] ERROR196s | TestIndexWriterOnVMError.testCheckpoint <<< >[junit4]> Throwable #1: java.lang.RuntimeException: > MockDirectoryWrapper: cannot close: there are still 9 open files: > {_2_Asserting_0.pos=1, _2_Asserting_0.dvd=1, _2.fdt=1, _2_Asserting_0.doc=1, > _2_Asserting_0.tim=1, _2.nvd=1, _2.tvd=1, _3.cfs=1, _2.dim=1} >[junit4]> at > __randomizedtesting.SeedInfo.seed([FAB0DC147AFDBF4E:FBA18A7C5B16548D]:0) >[junit4]> at > 
org.apache.lucene.store.MockDirectoryWrapper.close(MockDirectoryWrapper.java:841) >[junit4]> at > org.apache.lucene.index.TestIndexWriterOnVMError.doTest(TestIndexWriterOnVMError.java:89) >[junit4]> at > org.apache.lucene.index.TestIndexWriterOnVMError.testCheckpoint(TestIndexWriterOnVMError.java:280) >[junit4]> at java.lang.Thread.run(Thread.java:785) >[junit4]> Caused by: java.lang.RuntimeException: unclosed IndexInput: > _2.dim >[junit4]> at > org.apache.lucene.store.MockDirectoryWrapper.addFileHandle(MockDirectoryWrapper.java:732) >[junit4]> at > org.apache.lucene.store.MockDirectoryWrapper.openInput(MockDirectoryWrapper.java:776) >[junit4]> at > org.apache.lucene.codecs.lucene60.Lucene60PointsReader.(Lucene60PointsReader.java:85) >[junit4]> at > org.apache.lucene.codecs.lucene60.Lucene60PointsFormat.fieldsReader(Lucene60PointsFormat.java:104) >[junit4]> at > org.apache.lucene.codecs.asserting.AssertingPointsFormat.fieldsReader(AssertingPointsFormat.java:66) >[junit4]> at > org.apache.lucene.index.SegmentCoreReaders.(SegmentCoreReaders.java:128) >[junit4]> at > org.apache.lucene.index.SegmentReader.(SegmentReader.java:74) >[junit4]> at > org.apache.lucene.index.ReadersAndUpdates.getReader(ReadersAndUpdates.java:145) >[junit4]> at > org.apache.lucene.index.ReadersAndUpdates.getReadOnlyClone(ReadersAndUpdates.java:197) >[junit4]> at > org.apache.lucene.index.StandardDirectoryReader.open(StandardDirectoryReader.java:103) >[junit4]> at > org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:460) >[junit4]> at > org.apache.lucene.index.DirectoryReader.open(DirectoryReader.java:103) >[junit4]> at > org.apache.lucene.index.TestIndexWriterOnVMError.doTest(TestIndexWriterOnVMError.java:175) >[junit4]> ... 
37 more >[junit4] 2> NOTE: leaving temporary files on disk at: > /l/trunk/lucene/build/core/test/J0/temp/lucene.index.TestIndexWriterOnVMError_FAB0DC147AFDBF4E-001 >[junit4] 2> NOTE: test params are: codec=Asserting(Lucene62), > sim=ClassicSimilarity, locale=kn, timezone=Australia/South >[junit4] 2> NOTE: Linux 4.4.0-34-generic amd64/IBM Corporation 1.8.0 > (64-bit)/cpus=8,threads=1,free=55483576,total=76742656 >[junit4] 2> NOTE: All tests run in this JVM: [TestIndexWriterOnVMError] > {noformat} > The test is quite stressful, provoking "unexpected" exceptions at tricky > times for {{IndexWriter}}. > When I run with Oracle's 1.8.0_101 with that same "reproduce with", the test > passes. > I see a similar failure for {{testUnknownError}}. -- This message was sent by Atlassian JIRA (v6.3.4#
[jira] [Commented] (LUCENE-7432) TestIndexWriterOnError.testCheckpoint fails on IBM J9
[ https://issues.apache.org/jira/browse/LUCENE-7432?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15455492#comment-15455492 ] Kevin Langman commented on LUCENE-7432: --- I will try to recreate the problem and see if this JIT fix applies.
[jira] [Commented] (LUCENE-7432) TestIndexWriterOnError.testCheckpoint fails on IBM J9
[ https://issues.apache.org/jira/browse/LUCENE-7432?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15455488#comment-15455488 ] Kevin Langman commented on LUCENE-7432: --- I did just fix a JIT problem that was preventing finally blocks from executing when using the Java 7 multi-type catch block syntax, i.e. catch (BindException | NoRouteToHostException | PortUnreachableException e). The issue occurs when the following conditions are met:
1. A multi-type catch block is used to catch more than one type of exception.
2. The exception thrown from the try block matches anything but the first type in the multi-type catch.
3. An exception is thrown from the catch block. It can be a new exception, or the caught exception (re-thrown).
4. Some sort of control flow (e.g. if/else blocks) exists in the catch block.
The fix will not be available until December in an official service pack or fix pack. We have an IBM APAR for this (IV88620), but it is not yet published as far as I can tell. A Google search should find this APAR at some point soon, I suspect.
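The four conditions above can be sketched in a small, hypothetical Java example (the class and method names here are invented for illustration, not taken from Lucene). On a correct JVM the finally block always runs; on a J9 build with the described JIT bug it could be skipped:

```java
import java.net.BindException;
import java.net.NoRouteToHostException;
import java.net.PortUnreachableException;

public class MultiCatchFinally {
    static boolean finallyRan = false;

    static void trigger(boolean rethrow) throws Exception {
        try {
            // Condition 2: matches the SECOND type in the multi-catch below.
            throw new NoRouteToHostException("no route");
        } catch (BindException | NoRouteToHostException | PortUnreachableException e) {
            // Condition 1: multi-type catch with more than one exception type.
            if (rethrow) {          // Condition 4: control flow inside the catch block.
                throw e;            // Condition 3: exception thrown from the catch block.
            }
        } finally {
            // On the affected J9 JIT this block could be skipped entirely.
            finallyRan = true;
        }
    }

    public static void main(String[] args) {
        try {
            trigger(true);
        } catch (Exception expected) {
            // the re-thrown NoRouteToHostException propagates here
        }
        System.out.println("finally ran: " + finallyRan);
    }
}
```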
Contacted by IBM about problems with Lucene/Solr on their JVM
I was contacted on my Apache email address a few days ago by somebody at IBM who wasn't exactly happy that my Solr wiki page recommends not using their Java. https://wiki.apache.org/solr/ShawnHeisey He said that he'd like to address any problems. I replied that he should join this list and work with those who really know about the problems that we've encountered. That was three days ago and I haven't seen anything here to indicate that my suggestion was followed. Before I drop his email address here without permission, I'd like to know how to proceed. I'm not the right person on our end to discuss the issue. I happened to notice SOLR-9179 a few minutes ago, where Solr tickles a bug in IBM Java. Noble was able to implement a fix in our code. Thanks, Shawn - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-9-ea+132) - Build # 17740 - Still Unstable!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/17740/ Java: 64bit/jdk-9-ea+132 -XX:+UseCompressedOops -XX:+UseSerialGC 1 tests failed. FAILED: org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.test Error Message: No live SolrServers available to handle this request:[http://127.0.0.1:37165/c8n_1x3_lf_shard1_replica2] Stack Trace: org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: No live SolrServers available to handle this request:[http://127.0.0.1:37165/c8n_1x3_lf_shard1_replica2] at __randomizedtesting.SeedInfo.seed([4223A543B0AB3F62:CA779A991E57529A]:0) at org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:755) at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1161) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1050) at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:992) at org.apache.solr.cloud.AbstractFullDistribZkTestBase.sendDocsWithRetry(AbstractFullDistribZkTestBase.java:753) at org.apache.solr.cloud.AbstractFullDistribZkTestBase.sendDocsWithRetry(AbstractFullDistribZkTestBase.java:741) at org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.testRf3WithLeaderFailover(LeaderFailoverAfterPartitionTest.java:178) at org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.test(LeaderFailoverAfterPartitionTest.java:57) at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(java.base@9-ea/Native Method) at jdk.internal.reflect.NativeMethodAccessorImpl.invoke(java.base@9-ea/NativeMethodAccessorImpl.java:62) at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(java.base@9-ea/DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(java.base@9-ea/Method.java:535) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871) at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch
[JENKINS] Lucene-Solr-master-Windows (64bit/jdk1.8.0_102) - Build # 6092 - Still Unstable!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/6092/ Java: 64bit/jdk1.8.0_102 -XX:-UseCompressedOops -XX:+UseG1GC 5 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.schema.TestManagedSchemaAPI Error Message: ObjectTracker found 10 object(s) that were not released!!! [MockDirectoryWrapper, MockDirectoryWrapper, MockDirectoryWrapper, MDCAwareThreadPoolExecutor, MockDirectoryWrapper, TransactionLog, TransactionLog, MockDirectoryWrapper, MDCAwareThreadPoolExecutor, MockDirectoryWrapper] Stack Trace: java.lang.AssertionError: ObjectTracker found 10 object(s) that were not released!!! [MockDirectoryWrapper, MockDirectoryWrapper, MockDirectoryWrapper, MDCAwareThreadPoolExecutor, MockDirectoryWrapper, TransactionLog, TransactionLog, MockDirectoryWrapper, MDCAwareThreadPoolExecutor, MockDirectoryWrapper] at __randomizedtesting.SeedInfo.seed([8D1D4826B214854E]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.junit.Assert.assertNull(Assert.java:551) at org.apache.solr.SolrTestCaseJ4.teardownTestCases(SolrTestCaseJ4.java:257) at sun.reflect.GeneratedMethodAccessor29.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:834) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) 
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at java.lang.Thread.run(Thread.java:745) FAILED: junit.framework.TestSuite.org.apache.solr.schema.TestManagedSchemaAPI Error Message: Could not remove the following files (in the order of attempts): C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J1\temp\solr.schema.TestManagedSchemaAPI_8D1D4826B214854E-001\tempDir-001\node2\testschemaapi_shard1_replica1\data\tlog\tlog.001: java.nio.file.FileSystemException: C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J1\temp\solr.schema.TestManagedSchemaAPI_8D1D4826B214854E-001\tempDir-001\node2\testschemaapi_shard1_replica1\data\tlog\tlog.001: The process cannot access the file because it is being used by another process. 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J1\temp\solr.schema.TestManagedSchemaAPI_8D1D4826B214854E-001\tempDir-001\node2\testschemaapi_shard1_replica1\data\tlog: java.nio.file.DirectoryNotEmptyException: C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J1\temp\solr.schema.TestManagedSchemaAPI_8D1D4826B214854E-001\tempDir-001\node2\testschemaapi_shard1_replica1\data\tlog C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J1\temp\solr.schema.TestManagedSchemaAPI_8D1D4826B214854E-001\tempDir-001\node2\testschemaapi_shard1_replica1\data: java.nio.file.DirectoryNotEmptyException: C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J1\temp\solr.schema.TestManagedSchemaAPI_8D1D4826B
[JENKINS] Lucene-Solr-Tests-5.5 - Build # 5 - Failure
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.5/5/ 1 tests failed. FAILED: org.apache.lucene.index.TestAllFilesCheckIndexHeader.test Error Message: file "_h.tvx" was already written to Stack Trace: java.io.IOException: file "_h.tvx" was already written to at __randomizedtesting.SeedInfo.seed([B232491ED9634EC5:3A6676C4779F233D]:0) at org.apache.lucene.store.MockDirectoryWrapper.createOutput(MockDirectoryWrapper.java:558) at org.apache.lucene.index.TestAllFilesCheckIndexHeader.checkOneFile(TestAllFilesCheckIndexHeader.java:111) at org.apache.lucene.index.TestAllFilesCheckIndexHeader.checkIndexHeader(TestAllFilesCheckIndexHeader.java:87) at org.apache.lucene.index.TestAllFilesCheckIndexHeader.test(TestAllFilesCheckIndexHeader.java:80) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at java.lang.Thread.run(Thread.java:745) Build Log: [...truncated 276 lines...] [junit4] Suite: org.apache.lucene.index.TestAllFilesCheckIndexHeader [junit4] 2> NOTE: reproduce with: ant test -Dtestcase=TestAllFilesCheckIndexHeader -Dtests.method=test -Dtests.seed=B232491ED9634EC5 -Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=it -Dtests.timezone=America/Jujuy -Dtests.asserts=true -Dtests.file.encoding=UTF-8 [junit4] ERROR 3.29s J2 | TestAllFilesCheckIndexHeader.test <<< [junit4]> Thro
[jira] [Commented] (LUCENE-7432) TestIndexWriterOnError.testCheckpoint fails on IBM J9
[ https://issues.apache.org/jira/browse/LUCENE-7432?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15455419#comment-15455419 ] Michael McCandless commented on LUCENE-7432: Thanks [~klangman]. The issue is quite simple to reproduce. This should do it: {noformat} git clone https://git-wip-us.apache.org/repos/asf/lucene-solr cd lucene-solr/lucene/core ant test -Dtestcase=TestIndexWriterOnVMError -Dtests.method=testCheckpoint -Dtests.seed=FAB0DC147AFDBF4E -Dtests.nightly=true -Dtests.slow=true -Dtests.locale=kn -Dtests.timezone=Australia/South -Dtests.asserts=true -Dtests.file.encoding=UTF-8 {noformat} I will stress test this with Oracle's JVM to see if it's a problem with this test case or with Lucene. If it is a J9 issue, it seems like maybe some {{finally}} code is failing to run in some cases since {{IndexWriter}} does important things in these {{finally}} clauses (closing open file handles). > TestIndexWriterOnError.testCheckpoint fails on IBM J9 > - > > Key: LUCENE-7432 > URL: https://issues.apache.org/jira/browse/LUCENE-7432 > Project: Lucene - Core > Issue Type: Bug >Reporter: Michael McCandless > Labels: IBM-J9 > > Not sure if this is a J9 issue or a Lucene issue, but using this version of > J9: > {noformat} > 09:26 $ java -version > java version "1.8.0" > Java(TM) SE Runtime Environment (build pxa6480sr3fp10-20160720_02(SR3fp10)) > IBM J9 VM (build 2.8, JRE 1.8.0 Linux amd64-64 Compressed References > 20160719_312156 (JIT enabled, AOT enabled) > J9VM - R28_Java8_SR3_20160719_1144_B312156 > JIT - tr.r14.java_20160629_120284.01 > GC - R28_Java8_SR3_20160719_1144_B312156_CMPRSS > J9CL - 20160719_312156) > JCL - 20160719_01 based on Oracle jdk8u101-b13 > {noformat} > This test failure seems to reproduce: > {noformat} >[junit4] 2> NOTE: reproduce with: ant test > -Dtestcase=TestIndexWriterOnVMError -Dtests.method=testCheckpoint > -Dtests.seed=FAB0DC147AFDBF4E -Dtests.nightly=true -Dtests.slow=true > -Dtests.locale=kn 
-Dtests.timezone=Australia/South -Dtests.asserts=true > -Dtests.file.encoding=UTF-8 >[junit4] ERROR196s | TestIndexWriterOnVMError.testCheckpoint <<< >[junit4]> Throwable #1: java.lang.RuntimeException: > MockDirectoryWrapper: cannot close: there are still 9 open files: > {_2_Asserting_0.pos=1, _2_Asserting_0.dvd=1, _2.fdt=1, _2_Asserting_0.doc=1, > _2_Asserting_0.tim=1, _2.nvd=1, _2.tvd=1, _3.cfs=1, _2.dim=1} >[junit4]> at > __randomizedtesting.SeedInfo.seed([FAB0DC147AFDBF4E:FBA18A7C5B16548D]:0) >[junit4]> at > org.apache.lucene.store.MockDirectoryWrapper.close(MockDirectoryWrapper.java:841) >[junit4]> at > org.apache.lucene.index.TestIndexWriterOnVMError.doTest(TestIndexWriterOnVMError.java:89) >[junit4]> at > org.apache.lucene.index.TestIndexWriterOnVMError.testCheckpoint(TestIndexWriterOnVMError.java:280) >[junit4]> at java.lang.Thread.run(Thread.java:785) >[junit4]> Caused by: java.lang.RuntimeException: unclosed IndexInput: > _2.dim >[junit4]> at > org.apache.lucene.store.MockDirectoryWrapper.addFileHandle(MockDirectoryWrapper.java:732) >[junit4]> at > org.apache.lucene.store.MockDirectoryWrapper.openInput(MockDirectoryWrapper.java:776) >[junit4]> at > org.apache.lucene.codecs.lucene60.Lucene60PointsReader.(Lucene60PointsReader.java:85) >[junit4]> at > org.apache.lucene.codecs.lucene60.Lucene60PointsFormat.fieldsReader(Lucene60PointsFormat.java:104) >[junit4]> at > org.apache.lucene.codecs.asserting.AssertingPointsFormat.fieldsReader(AssertingPointsFormat.java:66) >[junit4]> at > org.apache.lucene.index.SegmentCoreReaders.(SegmentCoreReaders.java:128) >[junit4]> at > org.apache.lucene.index.SegmentReader.(SegmentReader.java:74) >[junit4]> at > org.apache.lucene.index.ReadersAndUpdates.getReader(ReadersAndUpdates.java:145) >[junit4]> at > org.apache.lucene.index.ReadersAndUpdates.getReadOnlyClone(ReadersAndUpdates.java:197) >[junit4]> at > org.apache.lucene.index.StandardDirectoryReader.open(StandardDirectoryReader.java:103) >[junit4]> at > 
org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:460) >[junit4]> at > org.apache.lucene.index.DirectoryReader.open(DirectoryReader.java:103) >[junit4]> at > org.apache.lucene.index.TestIndexWriterOnVMError.doTest(TestIndexWriterOnVMError.java:175) >[junit4]> ... 37 more >[junit4] 2> NOTE: leaving temporary files on disk at: > /l/trunk/lucene/build/core/test/J0/temp/lucene.index.TestIndexWriterOnVMError_FAB0DC147AFDBF4E-001 >[junit4] 2> NOTE: test params are: codec=Asserting(Lucene62), >
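The failure mode Michael suspects above (a {{finally}} block not running under J9, leaving file handles open) can be illustrated generically. A hypothetical Python sketch of the discipline involved, not Lucene's actual {{IndexWriter}} code:

```python
def write_segment(open_file, do_write):
    """Open a handle, attempt a write, and release the handle in `finally`
    so it is closed on both success and failure. If a JVM/JIT bug skipped
    the finally block, the handle would leak -- which is exactly what the
    'cannot close: there are still 9 open files' check in the test trips on."""
    handle = open_file()
    try:
        do_write(handle)
    finally:
        handle.close()  # must run even when do_write raises
```
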
[jira] [Commented] (SOLR-9466) During concurrency some Solr document are not seen even after soft and hard commit
[ https://issues.apache.org/jira/browse/SOLR-9466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15455370#comment-15455370 ] Shawn Heisey commented on SOLR-9466: Situations like this should be brought up on the mailing list or the IRC channel before being opened in the bugtracker, so we can determine whether they are actually bugs. For your problem, which I do not believe is a bug: You have told Solr to soft commit after one second, but this doesn't mean that the commit will actually *complete* within one second -- only that it will *start* within one second. I've seen commits take a minute or more. Usually this is a misconfiguration, where cache autowarmCount values are too high, or there's not enough memory available. https://wiki.apache.org/solr/SolrPerformanceProblems#Slow_commits Another possible problem when using a third-party container like tomcat is that the container's maxThreads setting is too low. This setting defaults to 200, but in the Jetty that comes with Solr, it is set to 1, so that Solr can create as many threads as it needs. With high concurrency, the number of threads required can easily exceed 200. In the unlikely situation that this is a bug, we would need to see the bug demonstrated in a 6.x version of Solr. Bugs in 4.x are not going to be fixed, unless it's a MAJOR showstopper bug. > During concurrency some Solr document are not seen even after soft and hard > commit > -- > > Key: SOLR-9466 > URL: https://issues.apache.org/jira/browse/SOLR-9466 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: SolrCloud >Affects Versions: 4.10.2 > Environment: Cent OS >Reporter: Ganesh >Priority: Critical > > Solr cloud with 2 nodes, master master, with 5 collection and 2 shards in > each collection. > During concurrent usage of SOLR where both updates and search is sent to SOLR > server, some of our updates / adding of new documents are getting lost. 
> We could see that update hitting solr and we could see it in localhost_access > file of tomcat, also in catalina.out. But still we couldn't see that record > while searching. > Following is the catalina.out logs for the document which is getting indexed > properly. > Sep 01, 2016 7:39:31 AM org.apache.solr.update.processor.LogUpdateProcessor > processAdd > FINE: PRE_UPDATE > add{,id=CUA00439019223370564139207241C3LEA020769223370567404392838EXCC301} > > {{params(crid=CUA00439019223370564139207241C3LEA020769223370567404392838EXCC301),defaults(wt=xml)}} > Sep 01, 2016 7:39:31 AM org.apache.solr.update.TransactionLog > FINE: New TransactionLog > file=/ebdata2/solrdata/IOB_shard1_replica1/data/tlog/tlog.0220856, > exists=false, size=0, openExisting=false > Sep 01, 2016 7:39:31 AM org.apache.solr.update.SolrCmdDistributor submit > FINE: sending update to http://xx.xx.xx.xx:7070/solr/IOB_shard1_replica2/ > retry:0 > add{_version_=1544254202941800448,id=CUA00439019223370564139207241C3LEA020769223370567404392838EXCC301} > > params:update.distrib=FROMLEADER&distrib.from=http%3A%2F%2Fxx.xx.xx.xx%3A7070%2Fsolr%2FIOB_shard1_replica1%2F > Sep 01, 2016 7:39:31 AM > org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrServer$Runner run > FINE: starting runner: > org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrServer$Runner@3fb794b2 > Sep 01, 2016 7:39:31 AM org.apache.solr.update.processor.LogUpdateProcessor > finish > FINE: PRE_UPDATE FINISH > {{params(crid=CUA00439019223370564139207241C3LEA020769223370567404392838EXCC301),defaults(wt=xml)}} > Sep 01, 2016 7:39:31 AM > org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrServer$Runner run > FINE: finished: > org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrServer$Runner@3fb794b2 > Sep 01, 2016 7:39:31 AM org.apache.solr.update.processor.LogUpdateProcessor > finish > INFO: [IOB_shard1_replica1] webapp=/solr path=/update > params={crid=CUA00439019223370564139207241C3LEA020769223370567404392838EXCC301} > > 
{add=[CUA00439019223370564139207241C3LEA020769223370567404392838EXCC301 > (1544254202941800448)]} 0 9 > Sep 01, 2016 7:39:31 AM org.apache.solr.servlet.SolrDispatchFilter doFilter > FINE: Closing out SolrRequest: > {{params(crid=CUA00439019223370564139207241C3LEA020769223370567404392838EXCC301),defaults(wt=xml)}} > For the one which document is not getting indexed, we could see only > following log in catalina.out. Not sure whether it's getting added to SOLR. > Sep 01, 2016 7:39:56 AM org.apache.solr.update.processor.LogUpdateProcessor > finish > FINE: PRE_UPDATE FINISH > {{params(crid=CUA
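For reference, the commit settings the reporter describes (soft commit at 1 second, hard commit at 30 seconds) live in the {{updateHandler}} section of solrconfig.xml. A minimal sketch using those values; {{maxTime}} is in milliseconds, and as Shawn notes it bounds when a commit *starts*, not when it completes:

```xml
<updateHandler class="solr.DirectUpdateHandler2">
  <autoCommit>
    <maxTime>30000</maxTime>        <!-- hard commit starts within 30 s -->
    <openSearcher>false</openSearcher>
  </autoCommit>
  <autoSoftCommit>
    <maxTime>1000</maxTime>         <!-- soft commit starts within 1 s -->
  </autoSoftCommit>
</updateHandler>
```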
[jira] [Updated] (SOLR-5725) Efficient facets without counts for enum method
[ https://issues.apache.org/jira/browse/SOLR-5725?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mikhail Khludnev updated SOLR-5725: --- Attachment: SOLR-5725.patch Attached reworked patch for {{facet.exists=true}}. Please review! > Efficient facets without counts for enum method > --- > > Key: SOLR-5725 > URL: https://issues.apache.org/jira/browse/SOLR-5725 > Project: Solr > Issue Type: Improvement > Components: search >Reporter: Alexey Kozhemiakin >Assignee: Mikhail Khludnev > Fix For: master (7.0), 6.3 > > Attachments: SOLR-5725-5x.patch, SOLR-5725-master.patch, > SOLR-5725.patch, SOLR-5725.patch, SOLR-5725.patch, SOLR-5725.patch, > SOLR-5725.patch, SOLR-5725.patch > > > Short version: > This improves performance for facet.method=enum when it's enough to know that > facet count>0, for example when you dynamically populate > filters on a search form. The new method checks whether two bitsets intersect instead of > counting the intersection size. > Long version: > We have a dataset containing hundreds of millions of records; we facet by > dozens of fields with many facet-excludes, and have a relatively small number > of unique values per field, in the thousands. > Before executing a search, users work with an "advanced search" form; our goal is > to populate dozens of filters with values which are applicable with the other > selected values, so basically this is a use case for facets with mincount=1, > but without the need for actual counts. > Our performance tests showed that facet.method=enum works much better than > fc\fcs, probably due to a specific ratio of "docset"\"unique terms count". > For example, average query execution time with method fc was 1500ms, fcs 2600ms, > and enum 280ms. Profiling indicated the majority of enum's time was spent > on intersecting docsets. > Here's a patch that introduces an extension to facet calculation for > method=enum. Basically it uses docSetA.intersects(docSetB) instead of > docSetA.intersectionSize(docSetB). 
> As a result we were able to reduce our average query time from 280ms to 60ms. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
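The intersects-versus-intersectionSize shortcut described above can be sketched language-agnostically. A simplified Python illustration on sorted doc-id lists (not Solr's actual DocSet implementation, which operates on bitsets and sorted int arrays):

```python
def intersection_size(a, b):
    """Count common elements of two sorted doc-id lists: what a normal
    facet count needs. Always scans to the end of at least one list."""
    i = j = n = 0
    while i < len(a) and j < len(b):
        if a[i] == b[j]:
            n += 1
            i += 1
            j += 1
        elif a[i] < b[j]:
            i += 1
        else:
            j += 1
    return n

def intersects(a, b):
    """Early-exit non-emptiness check: what facet.exists=true needs.
    The first common element is enough; no further scanning."""
    i = j = 0
    while i < len(a) and j < len(b):
        if a[i] == b[j]:
            return True  # count > 0 established -- stop immediately
        elif a[i] < b[j]:
            i += 1
        else:
            j += 1
    return False
```

The speedup reported in the issue comes from this early exit: for dense docsets a match is usually found near the front, so most of the merge walk is skipped.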
[jira] [Commented] (SOLR-9142) JSON Facet, add hash table method for terms
[ https://issues.apache.org/jira/browse/SOLR-9142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15455287#comment-15455287 ] Yonik Seeley commented on SOLR-9142: bq. Do you mean this?: Code that needs a fast set would be changed to work on a Bits interface, Yep... there are already too many places in the code that need/assume ordered sets. A utility method DocSetUtil.getBits(DocSet set)? could just unwrap BitDocSet if needed since OpenBitSet (err... FixedBitSet these days) implements Bits, or use a hash for SortedIntDocSet. bq. but the method-selection code doesn't conveniently have access to the Terms/DocValues to know the stats Yeah... we're going to have to figure out the best way to handle that. Oh, and as far as hashing, it will also make sense when using uif as well... I'll open a separate issue for that. > JSON Facet, add hash table method for terms > --- > > Key: SOLR-9142 > URL: https://issues.apache.org/jira/browse/SOLR-9142 > Project: Solr > Issue Type: Improvement > Components: Facet Module >Reporter: Varun Thacker >Assignee: David Smiley > Fix For: 6.3 > > Attachments: SOLR_9412_FacetFieldProcessorByHashDV.patch, > SOLR_9412_FacetFieldProcessorByHashDV.patch, > SOLR_9412_FacetFieldProcessorByHashDV.patch, > SOLR_9412_FacetFieldProcessorByHashDV.patch, > SOLR_9412_FacetFieldProcessorByHashDV.patch > > > I indexed a dataset of 2M docs > {{top_facet_s}} has a cardinality of 1000 which is the top level facet. > For nested facets it has two fields {{sub_facet_unique_s}} and > {{sub_facet_unique_td}} which are string and double and have cardinality 2M > The nested query for the double field returns in the 1s mark always. The > nested query for the string field takes roughly 10s to execute. 
> {code:title=nested string facet|borderStyle=solid} > q=*:*&rows=0&json.facet= > { > "top_facet_s": { > "type": "terms", > "limit": -1, > "field": "top_facet_s", > "mincount": 1, > "excludeTags": "ANY", > "facet": { > "sub_facet_unique_s": { > "type": "terms", > "limit": 1, > "field": "sub_facet_unique_s", > "mincount": 1 > } > } > } > } > {code} > {code:title=nested double facet|borderStyle=solid} > q=*:*&rows=0&json.facet= > { > "top_facet_s": { > "type": "terms", > "limit": -1, > "field": "top_facet_s", > "mincount": 1, > "excludeTags": "ANY", > "facet": { > "sub_facet_unique_s": { > "type": "terms", > "limit": 1, > "field": "sub_facet_unique_td", > "mincount": 1 > } > } > } > } > {code} > I tried to dig deeper to understand why string nested faceting is that slow > compared to a numeric field. > Since the top facet has a cardinality of 1000, we have to calculate sub-facets > on each of them. The key difference was in the implementation of the two. > For the string field, in {{FacetField#getFieldCacheCounts}} we call > {{createCollectAcc}} with nDocs=0 and numSlots=2M. This then initializes an > array of 2M. So we create a 2M array 1000 times for this one query, which from > what I understand makes this query slow. > For numeric fields, {{FacetFieldProcessorNumeric#calcFacets}} uses a > CountSlotAcc which doesn't assign a huge array. In this query it calls > {{createCollectAcc}} with numDocs=2k and numSlots=1024. > In string faceting, we create the 2M array because the cardinality is 2M and > we use the array position as the ordinal and the value as the count. If we could > improve on this, it would speed things up significantly. For sub-facets we > know the maximum cardinality can be at most the top-level bucket count. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
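The memory difference Varun describes (an ordinal-indexed count array sized by field cardinality versus an accumulator sized by what is actually collected) can be sketched in miniature. A hypothetical Python illustration, not the actual FacetFieldProcessor code:

```python
def counts_by_array(doc_ords, cardinality):
    """Ordinal-indexed counting: allocates O(cardinality) slots per bucket
    (the '2M array' per sub-facet bucket), even when only a handful of
    docs fall into that bucket."""
    counts = [0] * cardinality
    for o in doc_ords:
        counts[o] += 1
    # keep only the non-zero entries, mimicking mincount=1
    return {o: c for o, c in enumerate(counts) if c}

def counts_by_hash(doc_ords):
    """Hash-based counting: memory proportional to the distinct ordinals
    actually seen, independent of field cardinality."""
    counts = {}
    for o in doc_ords:
        counts[o] = counts.get(o, 0) + 1
    return counts
```

Both produce the same counts; the array variant pays its allocation cost per bucket regardless of how sparse the bucket is, which is why repeating it 1000 times dominates the nested string-facet query.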
[jira] [Updated] (SOLR-9466) During concurrency some Solr document are not seen even after soft and hard commit
[ https://issues.apache.org/jira/browse/SOLR-9466?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ganesh updated SOLR-9466: - Description: Solr cloud with 2 nodes, master master, with 5 collection and 2 shards in each collection. During concurrent usage of SOLR where both updates and search is sent to SOLR server, some of our updates / adding of new documents are getting lost. We could see that update hitting solr and we could see it in localhost_access file of tomcat, also in catalina.out. But still we couldn't see that record while searching. Following is the catalina.out logs for the document which is getting indexed properly. Sep 01, 2016 7:39:31 AM org.apache.solr.update.processor.LogUpdateProcessor processAdd FINE: PRE_UPDATE add{,id=CUA00439019223370564139207241C3LEA020769223370567404392838EXCC301} {{params(crid=CUA00439019223370564139207241C3LEA020769223370567404392838EXCC301),defaults(wt=xml)}} Sep 01, 2016 7:39:31 AM org.apache.solr.update.TransactionLog FINE: New TransactionLog file=/ebdata2/solrdata/IOB_shard1_replica1/data/tlog/tlog.0220856, exists=false, size=0, openExisting=false Sep 01, 2016 7:39:31 AM org.apache.solr.update.SolrCmdDistributor submit FINE: sending update to http://xx.xx.xx.xx:7070/solr/IOB_shard1_replica2/ retry:0 add{_version_=1544254202941800448,id=CUA00439019223370564139207241C3LEA020769223370567404392838EXCC301} params:update.distrib=FROMLEADER&distrib.from=http%3A%2F%2Fxx.xx.xx.xx%3A7070%2Fsolr%2FIOB_shard1_replica1%2F Sep 01, 2016 7:39:31 AM org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrServer$Runner run FINE: starting runner: org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrServer$Runner@3fb794b2 Sep 01, 2016 7:39:31 AM org.apache.solr.update.processor.LogUpdateProcessor finish FINE: PRE_UPDATE FINISH {{params(crid=CUA00439019223370564139207241C3LEA020769223370567404392838EXCC301),defaults(wt=xml)}} Sep 01, 2016 7:39:31 AM org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrServer$Runner run 
FINE: finished: org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrServer$Runner@3fb794b2 Sep 01, 2016 7:39:31 AM org.apache.solr.update.processor.LogUpdateProcessor finish INFO: [IOB_shard1_replica1] webapp=/solr path=/update params={crid=CUA00439019223370564139207241C3LEA020769223370567404392838EXCC301} {add=[CUA00439019223370564139207241C3LEA020769223370567404392838EXCC301 (1544254202941800448)]} 0 9 Sep 01, 2016 7:39:31 AM org.apache.solr.servlet.SolrDispatchFilter doFilter FINE: Closing out SolrRequest: {{params(crid=CUA00439019223370564139207241C3LEA020769223370567404392838EXCC301),defaults(wt=xml)}} For the one which document is not getting indexed, we could see only following log in catalina.out. Not sure whether it's getting added to SOLR. Sep 01, 2016 7:39:56 AM org.apache.solr.update.processor.LogUpdateProcessor finish FINE: PRE_UPDATE FINISH {{params(crid=CUA00439019223370564139182810C3LEA020179223370567061972057EXCC102),defaults(wt=xml)}} Sep 01, 2016 7:39:56 AM org.apache.solr.update.processor.LogUpdateProcessor finish INFO: [IOB_shard1_replica1] webapp=/solr path=/update params={crid=CUA00439019223370564139182810C3LEA020179223370567061972057EXCC102} {} 0 1 Sep 01, 2016 7:39:56 AM org.apache.solr.servlet.SolrDispatchFilter doFilter FINE: Closing out SolrRequest: {{params(crid=CUA00439019223370564139182810C3LEA020179223370567061972057EXCC102),defaults(wt=xml)}} We have set autosoftcommit to 1 seconds and autohardcommit to 30 seconds. We are not getting any errors or exceptions in the log. was: Solr cloud with 2 nodes, master master, with 5 collection and 2 shards in each collection. During concurrent usage of SOLR where both updates and search is sent to SOLR server, some of our updates / adding of new documents are getting lost. We could see that update hitting solr and we could see it in localhost_access file of tomcat, also in catalina.out. But still we couldn't see that record while searching. 
Following is the catalina.out logs for the document which is getting indexed properly. Sep 01, 2016 7:39:31 AM org.apache.solr.update.processor.LogUpdateProcessor processAdd FINE: PRE_UPDATE add{,id=CUA00439019223370564139207241C3LEA020769223370567404392838EXCC301} {{params(crid=CUA00439019223370564139207241C3LEA020769223370567404392838EXCC301),defaults(wt=xml)}} Sep 01, 2016 7:39:31 AM org.apache.solr.update.TransactionLog FINE: New TransactionLog file=/ebdata2/solrdata/IOB_shard1_replica1/data/tlog/tlog.0220856, exists=false, size=0, openExisting=false Sep 01, 2016 7:39:31 AM org.apache.solr.update.SolrCmdDistributor submit FINE: sending update to http://xx.xx.xx.xx:7070/solr/IOB_shard1_replica2/ retry:0 add{_version_
[jira] [Created] (SOLR-9466) During concurrency some Solr document are not seen even after soft and hard commit
Ganesh created SOLR-9466: Summary: During concurrency some Solr document are not seen even after soft and hard commit Key: SOLR-9466 URL: https://issues.apache.org/jira/browse/SOLR-9466 Project: Solr Issue Type: Bug Security Level: Public (Default Security Level. Issues are Public) Components: SolrCloud Affects Versions: 4.10.2 Environment: Cent OS Reporter: Ganesh Priority: Critical Solr cloud with 2 nodes, master master, with 5 collection and 2 shards in each collection. During concurrent usage of SOLR where both updates and search is sent to SOLR server, some of our updates / adding of new documents are getting lost. We could see that update hitting solr and we could see it in localhost_access file of tomcat, also in catalina.out. But still we couldn't see that record while searching. Following is the catalina.out logs for the document which is getting indexed properly. Sep 01, 2016 7:39:31 AM org.apache.solr.update.processor.LogUpdateProcessor processAdd FINE: PRE_UPDATE add{,id=CUA00439019223370564139207241C3LEA020769223370567404392838EXCC301} {{params(crid=CUA00439019223370564139207241C3LEA020769223370567404392838EXCC301),defaults(wt=xml)}} Sep 01, 2016 7:39:31 AM org.apache.solr.update.TransactionLog FINE: New TransactionLog file=/ebdata2/solrdata/IOB_shard1_replica1/data/tlog/tlog.0220856, exists=false, size=0, openExisting=false Sep 01, 2016 7:39:31 AM org.apache.solr.update.SolrCmdDistributor submit FINE: sending update to http://xx.xx.xx.xx:7070/solr/IOB_shard1_replica2/ retry:0 add{_version_=1544254202941800448,id=CUA00439019223370564139207241C3LEA020769223370567404392838EXCC301} params:update.distrib=FROMLEADER&distrib.from=http%3A%2F%2Fxx.xx.xx.xx%3A7070%2Fsolr%2FIOB_shard1_replica1%2F Sep 01, 2016 7:39:31 AM org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrServer$Runner run FINE: starting runner: org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrServer$Runner@3fb794b2 Sep 01, 2016 7:39:31 AM 
org.apache.solr.update.processor.LogUpdateProcessor finish FINE: PRE_UPDATE FINISH {{params(crid=CUA00439019223370564139207241C3LEA020769223370567404392838EXCC301),defaults(wt=xml)}} Sep 01, 2016 7:39:31 AM org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrServer$Runner run FINE: finished: org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrServer$Runner@3fb794b2 Sep 01, 2016 7:39:31 AM org.apache.solr.update.processor.LogUpdateProcessor finish INFO: [IOB_shard1_replica1] webapp=/solr path=/update params={crid=CUA00439019223370564139207241C3LEA020769223370567404392838EXCC301} {add=[CUA00439019223370564139207241C3LEA020769223370567404392838EXCC301 (1544254202941800448)]} 0 9 Sep 01, 2016 7:39:31 AM org.apache.solr.servlet.SolrDispatchFilter doFilter FINE: Closing out SolrRequest: {{params(crid=CUA00439019223370564139207241C3LEA020769223370567404392838EXCC301),defaults(wt=xml)}} For the document which is not getting indexed, we could see only the following log in catalina.out: Sep 01, 2016 7:39:56 AM org.apache.solr.update.processor.LogUpdateProcessor finish FINE: PRE_UPDATE FINISH {{params(crid=CUA00439019223370564139182810C3LEA020179223370567061972057EXCC102),defaults(wt=xml)}} Sep 01, 2016 7:39:56 AM org.apache.solr.update.processor.LogUpdateProcessor finish INFO: [IOB_shard1_replica1] webapp=/solr path=/update params={crid=CUA00439019223370564139182810C3LEA020179223370567061972057EXCC102} {} 0 1 Sep 01, 2016 7:39:56 AM org.apache.solr.servlet.SolrDispatchFilter doFilter FINE: Closing out SolrRequest: {{params(crid=CUA00439019223370564139182810C3LEA020179223370567061972057EXCC102),defaults(wt=xml)}} We have set autosoftcommit to 1 second and autohardcommit to 30 seconds. We are not getting any errors or exceptions in the log. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-9461) DELETENODE, REPLACENODE should pass down the 'async' param to subcommands
[ https://issues.apache.org/jira/browse/SOLR-9461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15455249#comment-15455249 ] ASF subversion and git services commented on SOLR-9461: --- Commit e0e72e64f27d28dff56f8124a8ec54d417164f55 in lucene-solr's branch refs/heads/branch_6x from [~noble.paul] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=e0e72e6 ] SOLR-9461: DELETENODE, REPLACENODE should pass down the 'async' param to subcommands > DELETENODE, REPLACENODE should pass down the 'async' param to subcommands > -- > > Key: SOLR-9461 > URL: https://issues.apache.org/jira/browse/SOLR-9461 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Noble Paul >Assignee: Noble Paul > > The {{async}} param is used to make async calls to core admin -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-7432) TestIndexWriterOnError.testCheckpoint fails on IBM J9
[ https://issues.apache.org/jira/browse/LUCENE-7432?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15455246#comment-15455246 ] Kevin Langman commented on LUCENE-7432: --- Can you tell me how to recreate the issue? Or, if that is too complex, would it be possible to have you recreate it using a JVM option that will generate a system dump when the exception is thrown? > TestIndexWriterOnError.testCheckpoint fails on IBM J9 > - > > Key: LUCENE-7432 > URL: https://issues.apache.org/jira/browse/LUCENE-7432 > Project: Lucene - Core > Issue Type: Bug >Reporter: Michael McCandless > Labels: IBM-J9 > > Not sure if this is a J9 issue or a Lucene issue, but using this version of > J9: > {noformat} > 09:26 $ java -version > java version "1.8.0" > Java(TM) SE Runtime Environment (build pxa6480sr3fp10-20160720_02(SR3fp10)) > IBM J9 VM (build 2.8, JRE 1.8.0 Linux amd64-64 Compressed References > 20160719_312156 (JIT enabled, AOT enabled) > J9VM - R28_Java8_SR3_20160719_1144_B312156 > JIT - tr.r14.java_20160629_120284.01 > GC - R28_Java8_SR3_20160719_1144_B312156_CMPRSS > J9CL - 20160719_312156) > JCL - 20160719_01 based on Oracle jdk8u101-b13 > {noformat} > This test failure seems to reproduce: > {noformat} >[junit4] 2> NOTE: reproduce with: ant test > -Dtestcase=TestIndexWriterOnVMError -Dtests.method=testCheckpoint > -Dtests.seed=FAB0DC147AFDBF4E -Dtests.nightly=true -Dtests.slow=true > -Dtests.locale=kn -Dtests.timezone=Australia/South -Dtests.asserts=true > -Dtests.file.encoding=UTF-8 >[junit4] ERROR196s | TestIndexWriterOnVMError.testCheckpoint <<< >[junit4]> Throwable #1: java.lang.RuntimeException: > MockDirectoryWrapper: cannot close: there are still 9 open files: > {_2_Asserting_0.pos=1, _2_Asserting_0.dvd=1, _2.fdt=1, _2_Asserting_0.doc=1, > _2_Asserting_0.tim=1, _2.nvd=1, _2.tvd=1, _3.cfs=1, _2.dim=1} >[junit4]> at > __randomizedtesting.SeedInfo.seed([FAB0DC147AFDBF4E:FBA18A7C5B16548D]:0) >[junit4]> at > 
org.apache.lucene.store.MockDirectoryWrapper.close(MockDirectoryWrapper.java:841) >[junit4]> at > org.apache.lucene.index.TestIndexWriterOnVMError.doTest(TestIndexWriterOnVMError.java:89) >[junit4]> at > org.apache.lucene.index.TestIndexWriterOnVMError.testCheckpoint(TestIndexWriterOnVMError.java:280) >[junit4]> at java.lang.Thread.run(Thread.java:785) >[junit4]> Caused by: java.lang.RuntimeException: unclosed IndexInput: > _2.dim >[junit4]> at > org.apache.lucene.store.MockDirectoryWrapper.addFileHandle(MockDirectoryWrapper.java:732) >[junit4]> at > org.apache.lucene.store.MockDirectoryWrapper.openInput(MockDirectoryWrapper.java:776) >[junit4]> at > org.apache.lucene.codecs.lucene60.Lucene60PointsReader.(Lucene60PointsReader.java:85) >[junit4]> at > org.apache.lucene.codecs.lucene60.Lucene60PointsFormat.fieldsReader(Lucene60PointsFormat.java:104) >[junit4]> at > org.apache.lucene.codecs.asserting.AssertingPointsFormat.fieldsReader(AssertingPointsFormat.java:66) >[junit4]> at > org.apache.lucene.index.SegmentCoreReaders.(SegmentCoreReaders.java:128) >[junit4]> at > org.apache.lucene.index.SegmentReader.(SegmentReader.java:74) >[junit4]> at > org.apache.lucene.index.ReadersAndUpdates.getReader(ReadersAndUpdates.java:145) >[junit4]> at > org.apache.lucene.index.ReadersAndUpdates.getReadOnlyClone(ReadersAndUpdates.java:197) >[junit4]> at > org.apache.lucene.index.StandardDirectoryReader.open(StandardDirectoryReader.java:103) >[junit4]> at > org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:460) >[junit4]> at > org.apache.lucene.index.DirectoryReader.open(DirectoryReader.java:103) >[junit4]> at > org.apache.lucene.index.TestIndexWriterOnVMError.doTest(TestIndexWriterOnVMError.java:175) >[junit4]> ... 
37 more >[junit4] 2> NOTE: leaving temporary files on disk at: > /l/trunk/lucene/build/core/test/J0/temp/lucene.index.TestIndexWriterOnVMError_FAB0DC147AFDBF4E-001 >[junit4] 2> NOTE: test params are: codec=Asserting(Lucene62), > sim=ClassicSimilarity, locale=kn, timezone=Australia/South >[junit4] 2> NOTE: Linux 4.4.0-34-generic amd64/IBM Corporation 1.8.0 > (64-bit)/cpus=8,threads=1,free=55483576,total=76742656 >[junit4] 2> NOTE: All tests run in this JVM: [TestIndexWriterOnVMError] > {noformat} > The test is quite stressful, provoking "unexpected" exceptions at tricky > times for {{IndexWriter}}. > When I run with Oracle's 1.8.0_101 with that same "reproduce with", the test > passes. > I see a similar failure for {{testUnknownError}}. -- This message was sen
[jira] [Commented] (SOLR-9461) DELETENODE, REPLACENODE should pass down the 'async' param to subcommands
[ https://issues.apache.org/jira/browse/SOLR-9461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15455244#comment-15455244 ] ASF subversion and git services commented on SOLR-9461: --- Commit e13f7aeafadb56bbf138213865e0d2bf4cd423b2 in lucene-solr's branch refs/heads/master from [~noble.paul] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=e13f7ae ] SOLR-9461: DELETENODE, REPLACENODE should pass down the 'async' param to subcommands > DELETENODE, REPLACENODE should pass down the 'async' param to subcommands > -- > > Key: SOLR-9461 > URL: https://issues.apache.org/jira/browse/SOLR-9461 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Noble Paul >Assignee: Noble Paul > > The {{async}} param is used to make async calls to core admin -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-NightlyTests-5.5 - Build # 2 - Still Failing
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.5/2/ 10 tests failed. FAILED: org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test Error Message: Test abandoned because suite timeout was reached. Stack Trace: java.lang.Exception: Test abandoned because suite timeout was reached. at __randomizedtesting.SeedInfo.seed([9B3584A7695CFF52]:0) FAILED: junit.framework.TestSuite.org.apache.solr.cloud.CollectionsAPIDistributedZkTest Error Message: Suite timeout exceeded (>= 720 msec). Stack Trace: java.lang.Exception: Suite timeout exceeded (>= 720 msec). at __randomizedtesting.SeedInfo.seed([9B3584A7695CFF52]:0) FAILED: junit.framework.TestSuite.org.apache.solr.cloud.CollectionsAPIDistributedZkTest Error Message: Captured an uncaught exception in thread: Thread[id=13683, name=collection0, state=RUNNABLE, group=TGRP-CollectionsAPIDistributedZkTest] Stack Trace: com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught exception in thread: Thread[id=13683, name=collection0, state=RUNNABLE, group=TGRP-CollectionsAPIDistributedZkTest] Caused by: org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at http://127.0.0.1:46953: collection already exists: awholynewstresscollection_collection0_5 at __randomizedtesting.SeedInfo.seed([9B3584A7695CFF52]:0) at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:577) at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:241) at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:230) at org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:372) at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:325) at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1121) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:891) at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:827) at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219) at org.apache.solr.cloud.AbstractFullDistribZkTestBase.createCollection(AbstractFullDistribZkTestBase.java:1575) at org.apache.solr.cloud.AbstractFullDistribZkTestBase.createCollection(AbstractFullDistribZkTestBase.java:1596) at org.apache.solr.cloud.CollectionsAPIDistributedZkTest$1CollectionThread.run(CollectionsAPIDistributedZkTest.java:984) FAILED: junit.framework.TestSuite.org.apache.solr.cloud.CollectionsAPIDistributedZkTest Error Message: Captured an uncaught exception in thread: Thread[id=13688, name=collection5, state=RUNNABLE, group=TGRP-CollectionsAPIDistributedZkTest] Stack Trace: com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught exception in thread: Thread[id=13688, name=collection5, state=RUNNABLE, group=TGRP-CollectionsAPIDistributedZkTest] Caused by: org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at http://127.0.0.1:46953: collection already exists: awholynewstresscollection_collection5_5 at __randomizedtesting.SeedInfo.seed([9B3584A7695CFF52]:0) at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:577) at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:241) at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:230) at org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:372) at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:325) at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1121) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:891) at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:827) at 
org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219) at org.apache.solr.cloud.AbstractFullDistribZkTestBase.createCollection(AbstractFullDistribZkTestBase.java:1575) at org.apache.solr.cloud.AbstractFullDistribZkTestBase.createCollection(AbstractFullDistribZkTestBase.java:1596) at org.apache.solr.cloud.CollectionsAPIDistributedZkTest$1CollectionThread.run(CollectionsAPIDistributedZkTest.java:984) FAILED: junit.framework.TestSuite.org.apache.solr.cloud.CollectionsAPIDistributedZkTest Error Message: Captured an uncaught exception in thread: Thread[id=13685, name=collection2, state=RUNNABLE, group=TGRP-CollectionsAPIDistributedZkTest] Stack Trace: com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught exception in thread: Thread[id=13685, n
[JENKINS] Lucene-Solr-5.5-Linux (64bit/jdk1.8.0_102) - Build # 362 - Still Unstable!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.5-Linux/362/ Java: 64bit/jdk1.8.0_102 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC 3 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.core.TestCoreDiscovery Error Message: ObjectTracker found 4 object(s) that were not released!!! [SolrCore, MockDirectoryWrapper, MockDirectoryWrapper, MDCAwareThreadPoolExecutor] Stack Trace: java.lang.AssertionError: ObjectTracker found 4 object(s) that were not released!!! [SolrCore, MockDirectoryWrapper, MockDirectoryWrapper, MDCAwareThreadPoolExecutor] at __randomizedtesting.SeedInfo.seed([53813ED6E4942F74]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.junit.Assert.assertNull(Assert.java:551) at org.apache.solr.SolrTestCaseJ4.afterClass(SolrTestCaseJ4.java:238) at sun.reflect.GeneratedMethodAccessor24.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:834) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at java.lang.Thread.run(Thread.java:745) FAILED: junit.framework.TestSuite.org.apache.solr.core.TestCoreDiscovery Error Message: 1 thread leaked from SUITE scope at org.apache.solr.core.TestCoreDiscovery: 1) Thread[id=79120, name=searcherExecutor-8492-thread-1, state=WAITING, group=TGRP-TestCoreDiscovery] at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) Stack Trace: com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE scope at org.apache.solr.core.TestCoreDiscovery: 1) Thread[id=79120, name=searcherExecutor-8492-thread-1, state=WAITING, 
group=TGRP-TestCoreDiscovery] at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745
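The leaked `searcherExecutor-8492-thread-1` above is parked in `LinkedBlockingQueue.take()`, which is exactly what an idle pool worker looks like when its `ExecutorService` was never shut down. A minimal sketch of the pattern (generic Java, not Solr's actual core-closing code):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ExecutorLeakDemo {
    // Returns true once the pool's worker threads have terminated.
    // Without shutdown(), an idle worker stays parked forever in
    // LinkedBlockingQueue.take(), which suite-scope thread-leak
    // checkers report as a leaked thread.
    public static boolean closePool(ExecutorService pool) throws InterruptedException {
        pool.shutdown();  // stop accepting tasks, let idle workers exit
        return pool.awaitTermination(5, TimeUnit.SECONDS);
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(1);
        pool.submit(() -> { /* quick task; the worker then idles in take() */ });
        System.out.println("terminated=" + closePool(pool));
    }
}
```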
[jira] [Updated] (SOLR-9381) Snitch for freedisk uses root path not Solr home
[ https://issues.apache.org/jira/browse/SOLR-9381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Tim Owen updated SOLR-9381:
---------------------------
    Attachment: SOLR-9381.patch

> Snitch for freedisk uses root path not Solr home
> ------------------------------------------------
>
>                 Key: SOLR-9381
>                 URL: https://issues.apache.org/jira/browse/SOLR-9381
>             Project: Solr
>          Issue Type: Bug
>      Security Level: Public (Default Security Level. Issues are Public)
>          Components: SolrCloud
>    Affects Versions: 6.1, master (7.0)
>            Reporter: Tim Owen
>            Assignee: Noble Paul
>         Attachments: SOLR-9381.patch, SOLR-9381.patch
>
> The path used for the freedisk snitch value is hardcoded to / whereas it
> should be using Solr home. It's fairly common to use hardware for Solr with
> multiple physical disks on different mount points, with multiple Solr
> instances running on the box, each pointing its Solr home to a different
> disk. In this case, the value reported for the freedisk snitch value is
> wrong, because it's based on the root filesystem space.
> Patch changes this to use Solr home from the CoreContainer.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org
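The attached patch itself isn't shown here, but the difference the issue describes is easy to illustrate: on a box with multiple mount points, free space measured for "/" can differ wildly from free space measured for the filesystem that actually holds the Solr home. A sketch using the standard NIO file-store API (class and method names below are illustrative, not the Solr snitch code):

```java
import java.io.IOException;
import java.nio.file.FileStore;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class FreeDiskSnitchSketch {
    // Free space (GB) of the filesystem that actually holds the given path.
    // Measuring Paths.get("/") instead of the Solr home is the bug described
    // above: on multi-mount boxes the two answers can be very different.
    public static long freeGb(Path path) throws IOException {
        FileStore store = Files.getFileStore(path.toRealPath());
        return store.getUsableSpace() / (1024L * 1024 * 1024);
    }

    public static void main(String[] args) throws IOException {
        System.out.println("root free GB: " + freeGb(Paths.get("/")));
        System.out.println("home free GB: " + freeGb(Paths.get(".")));  // stand-in for Solr home
    }
}
```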
[jira] [Commented] (LUCENE-7429) DelegatingAnalyzerWrapper should delegate normalization too
[ https://issues.apache.org/jira/browse/LUCENE-7429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15455008#comment-15455008 ]

Adrien Grand commented on LUCENE-7429:
--------------------------------------

bq. The issue here is mostly that we need to create a new TokenStream (StringTokenStream) for the normalization and we need to use the same attribute types.

Exactly. For instance if a term attribute produces utf-16 encoded tokens,

bq. Although this is sometimes broken for use-cases, where TokenStreams create binary tokens. But those would never be normalized, I think (!?)

Do you mean that you cannot think of any use-case for using both a non-default term attribute and token filters in the same analysis chain? I am wondering about CJK analyzers since I think UTF16 stores CJK characters a bit more efficiently on average than UTF8 (I may be completely wrong, in which case please let me know) so users might be tempted to use a different term attribute impl?

> DelegatingAnalyzerWrapper should delegate normalization too
> -----------------------------------------------------------
>
>                 Key: LUCENE-7429
>                 URL: https://issues.apache.org/jira/browse/LUCENE-7429
>             Project: Lucene - Core
>          Issue Type: Bug
>    Affects Versions: 6.2
>            Reporter: Adrien Grand
>            Priority: Minor
>         Attachments: LUCENE-7355.patch, LUCENE-7429.patch, LUCENE-7429.patch
>
> This is something that I overlooked in LUCENE-7355:
> (Delegating)AnalyzerWrapper uses the default implementation of
> initReaderForNormalization and normalize, meaning that by default the
> normalization is a no-op. It should delegate to the wrapped analyzer.
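The bug class here is a wrapper that forwards one method but silently inherits a no-op default for another. A generic sketch of that shape and its fix (hypothetical class names, deliberately not the real Lucene `Analyzer` API):

```java
// Generic sketch of the bug described above: a delegating wrapper that
// forwards tokenization but inherits a no-op normalize() default.
// All names are hypothetical, not Lucene's Analyzer API.
abstract class Normalizer {
    abstract String tokenize(String text);
    // No-op default -- easy for a wrapper to forget to override.
    String normalize(String term) { return term; }
}

class LowercaseNormalizer extends Normalizer {
    String tokenize(String text) { return text.trim(); }
    @Override String normalize(String term) { return term.toLowerCase(); }
}

class DelegatingWrapper extends Normalizer {
    private final Normalizer delegate;
    DelegatingWrapper(Normalizer delegate) { this.delegate = delegate; }

    String tokenize(String text) { return delegate.tokenize(text); }

    // The fix: without this override, normalize() falls back to the
    // superclass no-op and normalization is silently skipped.
    @Override String normalize(String term) { return delegate.normalize(term); }
}

public class DelegationDemo {
    public static void main(String[] args) {
        Normalizer wrapped = new DelegatingWrapper(new LowercaseNormalizer());
        System.out.println(wrapped.normalize("FOO"));  // "foo" once delegation is in place
    }
}
```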
[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 822 - Unstable!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/822/ Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseG1GC 4 tests failed. FAILED: org.apache.solr.cloud.BasicDistributedZk2Test.test Error Message: java.util.concurrent.TimeoutException: Could not connect to ZooKeeper 127.0.0.1:39555 within 3 ms Stack Trace: org.apache.solr.common.SolrException: java.util.concurrent.TimeoutException: Could not connect to ZooKeeper 127.0.0.1:39555 within 3 ms at __randomizedtesting.SeedInfo.seed([6E953035E71A70BF:E6C10FEF49E61D47]:0) at org.apache.solr.common.cloud.SolrZkClient.(SolrZkClient.java:181) at org.apache.solr.common.cloud.SolrZkClient.(SolrZkClient.java:115) at org.apache.solr.common.cloud.SolrZkClient.(SolrZkClient.java:110) at org.apache.solr.common.cloud.SolrZkClient.(SolrZkClient.java:97) at org.apache.solr.cloud.AbstractDistribZkTestBase.printLayout(AbstractDistribZkTestBase.java:295) at org.apache.solr.cloud.AbstractFullDistribZkTestBase.distribTearDown(AbstractFullDistribZkTestBase.java:1500) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:962) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at java.lang.Thread.run(Thread.java:745) Caused by: java.util.concurrent.TimeoutException: Could not connect to ZooKeeper 127.0.0.1:39555 within 3 ms at org.apache.solr.common.cloud.ConnectionManager.waitForConnected(ConnectionManager.java:235) at org.apache.solr.common.cloud.SolrZkClient.(SolrZkClient.java:173) ... 37 more FAILED: junit.framework.TestSuite.org.apache.solr.cloud.BasicDistributedZk2Test Error
[JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_102) - Build # 17739 - Still Unstable!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/17739/ Java: 32bit/jdk1.8.0_102 -server -XX:+UseSerialGC 1 tests failed. FAILED: org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.test Error Message: org.apache.solr.client.solrj.SolrServerException: No live SolrServers available to handle this request:[http://127.0.0.1:40652/av_/sd/c8n_1x3_lf_shard1_replica2] Stack Trace: org.apache.solr.client.solrj.SolrServerException: org.apache.solr.client.solrj.SolrServerException: No live SolrServers available to handle this request:[http://127.0.0.1:40652/av_/sd/c8n_1x3_lf_shard1_replica2] at __randomizedtesting.SeedInfo.seed([CB039F029D7B694A:4357A0D8338704B2]:0) at org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:769) at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1161) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1050) at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:992) at org.apache.solr.cloud.AbstractFullDistribZkTestBase.sendDocsWithRetry(AbstractFullDistribZkTestBase.java:753) at org.apache.solr.cloud.AbstractFullDistribZkTestBase.sendDocsWithRetry(AbstractFullDistribZkTestBase.java:741) at org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.testRf3WithLeaderFailover(LeaderFailoverAfterPartitionTest.java:178) at org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.test(LeaderFailoverAfterPartitionTest.java:57) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871) at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.ran
[jira] [Closed] (SOLR-9465) When creating collection with basic authentication enabled, some nodes get in recovery mode and are inaccessible
[ https://issues.apache.org/jira/browse/SOLR-9465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Michael Ioannidis closed SOLR-9465.
-----------------------------------
    Resolution: Duplicate

> When creating collection with basic authentication enabled, some nodes get
> in recovery mode and are inaccessible
> --------------------------------------------------------------------------
>
>                 Key: SOLR-9465
>                 URL: https://issues.apache.org/jira/browse/SOLR-9465
>             Project: Solr
>          Issue Type: Bug
>      Security Level: Public (Default Security Level. Issues are Public)
>          Components: Authentication
>    Affects Versions: 5.5.2, 6.2
>         Environment: Centos 7
>                      Java 8
>                      Zookeeper 3.4.8
>            Reporter: Michael Ioannidis
>            Assignee: Noble Paul
>            Priority: Critical
>              Labels: newbie, security
>
> Without Basic Authentication enabled, we are able to create as many
> collections as we want.
> Once we enable it, in every collection we create, 2 out of 3 nodes get locked
> and the collection is not accessible from the API. The leader is the one who
> stays active. The rest of the nodes are at first in recovery state and then
> in down state.
[JENKINS] Lucene-Solr-5.5-Linux (32bit/jdk1.7.0_80) - Build # 361 - Still Unstable!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.5-Linux/361/ Java: 32bit/jdk1.7.0_80 -client -XX:+UseG1GC 3 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.handler.TestSolrConfigHandlerCloud Error Message: ObjectTracker found 4 object(s) that were not released!!! [MockDirectoryWrapper, MDCAwareThreadPoolExecutor, MDCAwareThreadPoolExecutor, MockDirectoryWrapper] Stack Trace: java.lang.AssertionError: ObjectTracker found 4 object(s) that were not released!!! [MockDirectoryWrapper, MDCAwareThreadPoolExecutor, MDCAwareThreadPoolExecutor, MockDirectoryWrapper] at __randomizedtesting.SeedInfo.seed([1918B175AA10C50C]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.junit.Assert.assertNull(Assert.java:551) at org.apache.solr.SolrTestCaseJ4.afterClass(SolrTestCaseJ4.java:238) at sun.reflect.GeneratedMethodAccessor20.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:834) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at java.lang.Thread.run(Thread.java:745) FAILED: org.apache.lucene.spatial.util.TestGeoUtils.testGeoRelations Error Message: 1 incorrect hits (see above) Stack Trace: java.lang.AssertionError: 1 incorrect hits (see above) at __randomizedtesting.SeedInfo.seed([393E79DA9C1D8F90:FB1D6D6FE8EAF92E]:0) at org.junit.Assert.fail(Assert.java:93) at org.apache.lucene.spatial.util.TestGeoUtils.testGeoRelations(TestGeoUtils.java:543) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907) 
at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.Th
[jira] [Updated] (LUCENE-7432) TestIndexWriterOnError.testCheckpoint fails on IBM J9
[ https://issues.apache.org/jira/browse/LUCENE-7432?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Michael McCandless updated LUCENE-7432:
---------------------------------------
    Labels: IBM-J9  (was: )

> TestIndexWriterOnError.testCheckpoint fails on IBM J9
> -----------------------------------------------------
>
>                 Key: LUCENE-7432
>                 URL: https://issues.apache.org/jira/browse/LUCENE-7432
>             Project: Lucene - Core
>          Issue Type: Bug
>            Reporter: Michael McCandless
>              Labels: IBM-J9
>
> Not sure if this is a J9 issue or a Lucene issue, but using this version of J9:
> {noformat}
> 09:26 $ java -version
> java version "1.8.0"
> Java(TM) SE Runtime Environment (build pxa6480sr3fp10-20160720_02(SR3fp10))
> IBM J9 VM (build 2.8, JRE 1.8.0 Linux amd64-64 Compressed References
> 20160719_312156 (JIT enabled, AOT enabled)
> J9VM - R28_Java8_SR3_20160719_1144_B312156
> JIT - tr.r14.java_20160629_120284.01
> GC - R28_Java8_SR3_20160719_1144_B312156_CMPRSS
> J9CL - 20160719_312156)
> JCL - 20160719_01 based on Oracle jdk8u101-b13
> {noformat}
> This test failure seems to reproduce:
> {noformat}
>    [junit4] 2> NOTE: reproduce with: ant test -Dtestcase=TestIndexWriterOnVMError -Dtests.method=testCheckpoint -Dtests.seed=FAB0DC147AFDBF4E -Dtests.nightly=true -Dtests.slow=true -Dtests.locale=kn -Dtests.timezone=Australia/South -Dtests.asserts=true -Dtests.file.encoding=UTF-8
>    [junit4] ERROR   196s | TestIndexWriterOnVMError.testCheckpoint <<<
>    [junit4]    > Throwable #1: java.lang.RuntimeException: MockDirectoryWrapper: cannot close: there are still 9 open files: {_2_Asserting_0.pos=1, _2_Asserting_0.dvd=1, _2.fdt=1, _2_Asserting_0.doc=1, _2_Asserting_0.tim=1, _2.nvd=1, _2.tvd=1, _3.cfs=1, _2.dim=1}
>    [junit4]    > at __randomizedtesting.SeedInfo.seed([FAB0DC147AFDBF4E:FBA18A7C5B16548D]:0)
>    [junit4]    > at org.apache.lucene.store.MockDirectoryWrapper.close(MockDirectoryWrapper.java:841)
>    [junit4]    > at org.apache.lucene.index.TestIndexWriterOnVMError.doTest(TestIndexWriterOnVMError.java:89)
>    [junit4]    > at org.apache.lucene.index.TestIndexWriterOnVMError.testCheckpoint(TestIndexWriterOnVMError.java:280)
>    [junit4]    > at java.lang.Thread.run(Thread.java:785)
>    [junit4]    > Caused by: java.lang.RuntimeException: unclosed IndexInput: _2.dim
>    [junit4]    > at org.apache.lucene.store.MockDirectoryWrapper.addFileHandle(MockDirectoryWrapper.java:732)
>    [junit4]    > at org.apache.lucene.store.MockDirectoryWrapper.openInput(MockDirectoryWrapper.java:776)
>    [junit4]    > at org.apache.lucene.codecs.lucene60.Lucene60PointsReader.<init>(Lucene60PointsReader.java:85)
>    [junit4]    > at org.apache.lucene.codecs.lucene60.Lucene60PointsFormat.fieldsReader(Lucene60PointsFormat.java:104)
>    [junit4]    > at org.apache.lucene.codecs.asserting.AssertingPointsFormat.fieldsReader(AssertingPointsFormat.java:66)
>    [junit4]    > at org.apache.lucene.index.SegmentCoreReaders.<init>(SegmentCoreReaders.java:128)
>    [junit4]    > at org.apache.lucene.index.SegmentReader.<init>(SegmentReader.java:74)
>    [junit4]    > at org.apache.lucene.index.ReadersAndUpdates.getReader(ReadersAndUpdates.java:145)
>    [junit4]    > at org.apache.lucene.index.ReadersAndUpdates.getReadOnlyClone(ReadersAndUpdates.java:197)
>    [junit4]    > at org.apache.lucene.index.StandardDirectoryReader.open(StandardDirectoryReader.java:103)
>    [junit4]    > at org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:460)
>    [junit4]    > at org.apache.lucene.index.DirectoryReader.open(DirectoryReader.java:103)
>    [junit4]    > at org.apache.lucene.index.TestIndexWriterOnVMError.doTest(TestIndexWriterOnVMError.java:175)
>    [junit4]    > ... 37 more
>    [junit4] 2> NOTE: leaving temporary files on disk at: /l/trunk/lucene/build/core/test/J0/temp/lucene.index.TestIndexWriterOnVMError_FAB0DC147AFDBF4E-001
>    [junit4] 2> NOTE: test params are: codec=Asserting(Lucene62), sim=ClassicSimilarity, locale=kn, timezone=Australia/South
>    [junit4] 2> NOTE: Linux 4.4.0-34-generic amd64/IBM Corporation 1.8.0 (64-bit)/cpus=8,threads=1,free=55483576,total=76742656
>    [junit4] 2> NOTE: All tests run in this JVM: [TestIndexWriterOnVMError]
> {noformat}
> The test is quite stressful, provoking "unexpected" exceptions at tricky
> times for {{IndexWriter}}.
> When I run with Oracle's 1.8.0_101 with that same "reproduce with", the test
> passes.
> I see a similar failure for {{testUnknownError}}.
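The "cannot close: there are still 9 open files" failure above comes from MockDirectoryWrapper's bookkeeping: it counts every handle it opens and throws at close time if any remain. A minimal generic version of that leak-tracking idea (names are illustrative, not Lucene's actual implementation):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of MockDirectoryWrapper-style leak tracking: count every handle
// opened, decrement on close, and fail loudly at close time if handles
// remain. Names are illustrative, not Lucene's API.
public class HandleTracker {
    private final Map<String, Integer> open = new ConcurrentHashMap<>();

    public void opened(String name) { open.merge(name, 1, Integer::sum); }

    public void closed(String name) {
        // Remove the entry when its count drops to zero.
        open.compute(name, (k, v) -> (v == null || v == 1) ? null : v - 1);
    }

    // Mirrors the "cannot close: there are still N open files: {...}" check.
    public void assertAllClosed() {
        if (!open.isEmpty()) {
            throw new IllegalStateException("cannot close: still open: " + open);
        }
    }

    public static void main(String[] args) {
        HandleTracker tracker = new HandleTracker();
        tracker.opened("_2.dim");
        tracker.closed("_2.dim");
        tracker.assertAllClosed();  // passes only when every handle was released
        System.out.println("all handles closed");
    }
}
```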
[jira] [Commented] (LUCENE-7407) Explore switching doc values to an iterator API
[ https://issues.apache.org/jira/browse/LUCENE-7407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15454643#comment-15454643 ]

Michael McCandless commented on LUCENE-7407:
--------------------------------------------

Sorry, I don't think so [~otis]: this is a major change, I think it will only be for Lucene 7.0?

> Explore switching doc values to an iterator API
> -----------------------------------------------
>
>                 Key: LUCENE-7407
>                 URL: https://issues.apache.org/jira/browse/LUCENE-7407
>             Project: Lucene - Core
>          Issue Type: Improvement
>            Reporter: Michael McCandless
>            Assignee: Michael McCandless
>              Labels: docValues
>
> I think it could be compelling if we restricted doc values to use an
> iterator API at read time, instead of the more general random access
> API we have today:
> * It would make doc values disk usage more of a "you pay for what
>   you actually use", like postings, which is a compelling
>   reduction for sparse usage.
> * I think codecs could compress better and maybe speed up decoding
>   of doc values, even in the non-sparse case, since the read-time
>   API is more restrictive "forward only" instead of random access.
> * We could remove {{getDocsWithField}} entirely, since that's
>   implicit in the iteration, and the awkward "return 0 if the
>   document didn't have this field" would go away.
> * We can remove the annoying thread locals we must make today in
>   {{CodecReader}}, and close the trappy "I accidentally shared a
>   single XXXDocValues instance across threads", since an iterator is
>   inherently "use once".
> * We could maybe leverage the numerous optimizations we've done for
>   postings over time, since the two problems ("iterate over doc ids
>   and store something interesting for each") are very similar.
> This idea has come up many times in the past, e.g. LUCENE-7253 is a recent
> example, and very early iterations of doc values started with exactly
> this ;)
> However, it's a truly enormous change, likely 7.0 only. Or maybe we
> could have the new iterator APIs also ported to 6.x side by side with
> the deprecated existing random-access APIs.
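The API shift the issue proposes can be sketched in a few lines: instead of `values.get(docID)` random access, where every doc slot is addressable whether or not it has a value, a forward-only cursor visits only documents that carry a value, postings-style. The interface names below are illustrative only, not the eventual Lucene API:

```java
// Sketch of the forward-only doc values API discussed above. For a sparse
// field, iteration cost and storage are proportional to the number of docs
// that actually have a value ("pay for what you use"), and "missing" needs
// no sentinel value or separate getDocsWithField bitset.
public class DocValuesIteratorSketch {
    static final int NO_MORE_DOCS = Integer.MAX_VALUE;

    // Forward-only view over (docID, value) pairs, like a postings iterator.
    static class SparseNumericIterator {
        private final int[] docs;
        private final long[] values;
        private int pos = -1;

        SparseNumericIterator(int[] docs, long[] values) {
            this.docs = docs;
            this.values = values;
        }
        int nextDoc() { pos++; return pos < docs.length ? docs[pos] : NO_MORE_DOCS; }
        long value()  { return values[pos]; }
    }

    static long sum(SparseNumericIterator it) {
        long total = 0;
        for (int doc = it.nextDoc(); doc != NO_MORE_DOCS; doc = it.nextDoc()) {
            total += it.value();  // only docs with a value are ever visited
        }
        return total;
    }

    public static void main(String[] args) {
        // Three docs carry values in a hypothetical million-doc segment;
        // the loop touches three entries, not a million.
        SparseNumericIterator it =
            new SparseNumericIterator(new int[]{7, 4000, 999_999}, new long[]{10, 20, 12});
        System.out.println(sum(it));  // prints 42
    }
}
```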