[jira] [Commented] (SOLR-10107) CdcrReplicationDistributedZkTest fails far too often and is an extremely expensive test, even when compared to other nightlies.
[ https://issues.apache.org/jira/browse/SOLR-10107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15859168#comment-15859168 ]

ASF subversion and git services commented on SOLR-10107:

Commit aa20136bb1cfce195a417d576aa3dc4e578413d4 in lucene-solr's branch refs/heads/master from markrmiller
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=aa20136 ]

SOLR-10107: Add @BadApple

> CdcrReplicationDistributedZkTest fails far too often and is an extremely
> expensive test, even when compared to other nightlies.
> ---
>
> Key: SOLR-10107
> URL: https://issues.apache.org/jira/browse/SOLR-10107
> Project: Solr
> Issue Type: Test
> Security Level: Public (Default Security Level. Issues are Public)
> Reporter: Mark Miller
> Priority: Critical
>
> This is a Nightly test.
> During beasting this test takes 30 minutes per run. The next closest is 10 minutes.
> In the 3 beast test reports I've done, it failed 37%, 20%, and 43% of the time.
> I'm going to @BadApple this test; it's extremely heavy compared to the other tests around it, and it can't survive any kind of test beasting.

--
This message was sent by Atlassian JIRA (v6.3.15#6346)
-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-10107) CdcrReplicationDistributedZkTest fails far too often and is an extremely expensive test, even when compared to other nightlies.
[ https://issues.apache.org/jira/browse/SOLR-10107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15859165#comment-15859165 ]

Mark Miller commented on SOLR-10107:

I think ideally we would somehow split this test up, or heavily reduce its load or resource usage. In my test beasting, this test takes 3x longer than our other largest nightly tests. It should be brought in line with our other expensive nightlies or become a weekly. First it has to be hardened, though. I'm going to @BadApple it for now - its results will still show up in my test reports in SOLR-10032.

> CdcrReplicationDistributedZkTest fails far too often and is an extremely
> expensive test, even when compared to other nightlies.
> ---
>
> Key: SOLR-10107
> URL: https://issues.apache.org/jira/browse/SOLR-10107
> Project: Solr
> Issue Type: Test
> Security Level: Public (Default Security Level. Issues are Public)
> Reporter: Mark Miller
> Priority: Critical
>
> This is a Nightly test.
> During beasting this test takes 30 minutes per run. The next closest is 10 minutes.
> In the 3 beast test reports I've done, it failed 37%, 20%, and 43% of the time.
> I'm going to @BadApple this test; it's extremely heavy compared to the other tests around it, and it can't survive any kind of test beasting.
[jira] [Commented] (SOLR-10021) Cannot reload a core if it fails initialization.
[ https://issues.apache.org/jira/browse/SOLR-10021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15859147#comment-15859147 ]

Mike Drob commented on SOLR-10021:

I have a rough outline of a patch for this; still needs unit tests though. Will try to upload something by the end of the week.

> Cannot reload a core if it fails initialization.
> ---
>
> Key: SOLR-10021
> URL: https://issues.apache.org/jira/browse/SOLR-10021
> Project: Solr
> Issue Type: Improvement
> Security Level: Public (Default Security Level. Issues are Public)
> Reporter: Erick Erickson
> Assignee: Erick Erickson
>
> Once a core initialization fails, all calls to CoreContainer.getCore() throw an error forever, including the core admin RELOAD command.
> I think that RELOAD (and only RELOAD) should go ahead even after initialization failure since it is, after all, reloading everything. For any other core op, since you don't know why the core load failed in the first place, you couldn't rely on the state of the core to do anything, so failing is appropriate.
> However, the current structure of the code needs a SolrCore to get the CoreDescriptor, which you need to have to, well, reload the core. The work on SOLR-10007 and associated JIRAs _should_ make it possible to get the CoreDescriptor without having to have a core already. Once that's possible, RELOAD will have to distinguish between having a SolrCore already and using the present reload() method, or creating a new core.
> We could also consider a new core admin API command. It's always bugged me that there's an UNLOAD but no LOAD; we've kinda, sorta, maybe been able to use CREATE.
> I think I like making RELOAD smarter, though. Consider the scenario where you make a config change that you mess up. You'd have to switch to LOAD when RELOAD failed. I can be convinced otherwise, though.
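The getCore()-throws-forever behavior Erick describes can be sketched with a toy model (the class and method bodies below are made up for illustration; this is not Solr's actual CoreContainer code): once an init failure is recorded, every lookup rethrows it, so a RELOAD has to discard the recorded failure before re-running creation.

```java
import java.util.HashMap;
import java.util.Map;

// Toy stand-in for CoreContainer: a recorded init failure blocks every
// subsequent lookup until something clears it.
public class CoreRegistry {
    private final Map<String, String> cores = new HashMap<>();
    private final Map<String, Exception> initFailures = new HashMap<>();

    public void create(String name, boolean configBroken) {
        if (configBroken) {
            initFailures.put(name, new IllegalStateException("bad config for " + name));
        } else {
            initFailures.remove(name);
            cores.put(name, "core:" + name);
        }
    }

    // Mirrors the reported behavior of getCore(): an earlier init failure
    // is rethrown on every call, forever.
    public String getCore(String name) {
        Exception failure = initFailures.get(name);
        if (failure != null) {
            throw new IllegalStateException("core '" + name + "' failed to initialize", failure);
        }
        return cores.get(name);
    }

    // The proposed smarter RELOAD: don't consult getCore() first; drop the
    // recorded failure and re-run creation from the descriptor.
    public void reload(String name, boolean configBroken) {
        initFailures.remove(name);
        create(name, configBroken);
    }

    public static void main(String[] args) {
        CoreRegistry r = new CoreRegistry();
        r.create("c1", true);                 // broken config: failure recorded
        try {
            r.getCore("c1");                  // throws, and would forever
        } catch (IllegalStateException e) {
            System.out.println("getCore: " + e.getMessage());
        }
        r.reload("c1", false);                // fixed config: reload recovers
        System.out.println(r.getCore("c1"));  // prints core:c1
    }
}
```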
[jira] [Updated] (SOLR-10032) Create report to assess Solr test quality at a commit point.
[ https://issues.apache.org/jira/browse/SOLR-10032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Mark Miller updated SOLR-10032:

Description:
We have many Jenkins instances blasting tests - some official, some Policeman, and I and others have or had our own - and the email trail proves the power of the Jenkins cluster to find test fails. However, I still have a very hard time with some basic questions: what tests are flakey right now? Which test fails actually affect devs most? Did I break it? Was that test already flakey? Is that test still flakey? What are our worst tests right now? Is that test getting better or worse?

We really need a way to see exactly what tests are the problem - not because of OS or environmental issues, but more basic test quality issues. Which tests are flakey, and how flakey are they at any point in time?

Reports:
01/24/2017 - https://docs.google.com/spreadsheets/d/1JySta2j2s7A_p16wA1UO-l6c4GsUHBIb4FONS2EzW9k/edit?usp=sharing
02/01/2017 - https://docs.google.com/spreadsheets/d/1FndoyHmihaOVL2o_Zns5alpNdAJlNsEwQVoJ4XDWj3c/edit?usp=sharing

was:
We have many Jenkins instances blasting tests - some official, some Policeman, and I and others have or had our own - and the email trail proves the power of the Jenkins cluster to find test fails. However, I still have a very hard time with some basic questions: what tests are flakey right now? Which test fails actually affect devs most? Did I break it? Was that test already flakey? Is that test still flakey? What are our worst tests right now? Is that test getting better or worse?

We really need a way to see exactly what tests are the problem - not because of OS or environmental issues, but more basic test quality issues. Which tests are flakey, and how flakey are they at any point in time?

> Create report to assess Solr test quality at a commit point.
> ---
>
> Key: SOLR-10032
> URL: https://issues.apache.org/jira/browse/SOLR-10032
> Project: Solr
> Issue Type: Task
> Security Level: Public (Default Security Level. Issues are Public)
> Components: Tests
> Reporter: Mark Miller
> Assignee: Mark Miller
> Attachments: Lucene-Solr Master Test Beast Results 01-24-2017-9899cbd031dc3fc37a384b1f9e2b379e90a9a3a6 Level Medium- Running 30 iterations, 12 at a time.pdf, Lucene-Solr Master Test Beast Results 02-01-2017-bbc455de195c83d9f807980b510fa46018f33b1b Level Medium- Running 30 iterations, 10 at a time.pdf
>
> We have many Jenkins instances blasting tests - some official, some Policeman, and I and others have or had our own - and the email trail proves the power of the Jenkins cluster to find test fails.
> However, I still have a very hard time with some basic questions: what tests are flakey right now? Which test fails actually affect devs most? Did I break it? Was that test already flakey? Is that test still flakey? What are our worst tests right now? Is that test getting better or worse?
> We really need a way to see exactly what tests are the problem, not because of OS or environmental issues, but more basic test quality issues. Which tests are flakey, and how flakey are they at any point in time.
> Reports:
> 01/24/2017 - https://docs.google.com/spreadsheets/d/1JySta2j2s7A_p16wA1UO-l6c4GsUHBIb4FONS2EzW9k/edit?usp=sharing
> 02/01/2017 - https://docs.google.com/spreadsheets/d/1FndoyHmihaOVL2o_Zns5alpNdAJlNsEwQVoJ4XDWj3c/edit?usp=sharing
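The per-test flake rate this report is meant to surface is simple arithmetic over beast runs: failures divided by iterations. A minimal sketch, using invented pass/fail data rather than anything from the linked spreadsheets:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.TreeMap;

public class FlakeReport {
    // results: test name -> outcome of each beast iteration (true = passed).
    // Returns the failure percentage per test, sorted by test name.
    public static Map<String, Double> flakeRates(Map<String, boolean[]> results) {
        Map<String, Double> rates = new TreeMap<>();
        for (Map.Entry<String, boolean[]> e : results.entrySet()) {
            int failures = 0;
            for (boolean passed : e.getValue()) {
                if (!passed) failures++;
            }
            rates.put(e.getKey(), 100.0 * failures / e.getValue().length);
        }
        return rates;
    }

    public static void main(String[] args) {
        Map<String, boolean[]> run = new HashMap<>();
        // Hypothetical outcomes for a 5-iteration beast run.
        run.put("CdcrReplicationDistributedZkTest", new boolean[]{false, true, false, true, true});
        run.put("MBeansHandlerTest", new boolean[]{true, true, true, true, true});
        System.out.println(flakeRates(run));  // failure % per test
    }
}
```

Comparing the same rate at two commit points answers "is that test getting better or worse?" directly.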
[JENKINS] Lucene-Solr-5.5-Windows (32bit/jdk1.7.0_80) - Build # 133 - Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-5.5-Windows/133/ Java: 32bit/jdk1.7.0_80 -client -XX:+UseConcMarkSweepGC 1 tests failed. FAILED: org.apache.solr.handler.TestReplicationHandler.doTestIndexAndConfigAliasReplication Error Message: [C:\Users\jenkins\workspace\Lucene-Solr-5.5-Windows\solr\build\solr-core\test\J0\temp\solr.handler.TestReplicationHandler_1D55CD45350242B3-001\solr-instance-012\.\collection1\data\index.20170209010626201, C:\Users\jenkins\workspace\Lucene-Solr-5.5-Windows\solr\build\solr-core\test\J0\temp\solr.handler.TestReplicationHandler_1D55CD45350242B3-001\solr-instance-012\.\collection1\data\index.20170209010626547, C:\Users\jenkins\workspace\Lucene-Solr-5.5-Windows\solr\build\solr-core\test\J0\temp\solr.handler.TestReplicationHandler_1D55CD45350242B3-001\solr-instance-012\.\collection1\data\] expected:<2> but was:<3> Stack Trace: java.lang.AssertionError: [C:\Users\jenkins\workspace\Lucene-Solr-5.5-Windows\solr\build\solr-core\test\J0\temp\solr.handler.TestReplicationHandler_1D55CD45350242B3-001\solr-instance-012\.\collection1\data\index.20170209010626201, C:\Users\jenkins\workspace\Lucene-Solr-5.5-Windows\solr\build\solr-core\test\J0\temp\solr.handler.TestReplicationHandler_1D55CD45350242B3-001\solr-instance-012\.\collection1\data\index.20170209010626547, C:\Users\jenkins\workspace\Lucene-Solr-5.5-Windows\solr\build\solr-core\test\J0\temp\solr.handler.TestReplicationHandler_1D55CD45350242B3-001\solr-instance-012\.\collection1\data\] expected:<2> but was:<3> at __randomizedtesting.SeedInfo.seed([1D55CD45350242B3:EA26231DF3EAED55]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.failNotEquals(Assert.java:647) at org.junit.Assert.assertEquals(Assert.java:128) at org.junit.Assert.assertEquals(Assert.java:472) at org.apache.solr.handler.TestReplicationHandler.checkForSingleIndex(TestReplicationHandler.java:898) at 
org.apache.solr.handler.TestReplicationHandler.doTestIndexAndConfigAliasReplication(TestReplicationHandler.java:1328) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781) at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at
[jira] [Commented] (SOLR-9956) Solr java.lang.ArrayIndexOutOfBoundsException when indexed large amount of documents
[ https://issues.apache.org/jira/browse/SOLR-9956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15859124#comment-15859124 ]

Mike Drob commented on SOLR-9956:

Erick - the stack trace is included in the initial bug report. Looks like the culprit is something in DocValuesStats, but that code hasn't changed for years.

{noformat}
java.lang.ArrayIndexOutOfBoundsException: 28
	at org.apache.solr.request.DocValuesStats.accumMulti(DocValuesStats.java:213)
	at org.apache.solr.request.DocValuesStats.getCounts(DocValuesStats.java:135)
	at org.apache.solr.handler.component.StatsField.computeLocalStatsValues(StatsField.java:424)
	at org.apache.solr.handler.component.StatsComponent.process(StatsComponent.java:58)
	at org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:295)
	at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:153)
	at org.apache.solr.core.SolrCore.execute(SolrCore.java:2213)
{noformat}

True, there might be more in the solr logs, but this is definitely enough to start with. (I, unfortunately, have no idea what's going on in this code.)

> Solr java.lang.ArrayIndexOutOfBoundsException when indexed large amount of documents
> ---
>
> Key: SOLR-9956
> URL: https://issues.apache.org/jira/browse/SOLR-9956
> Project: Solr
> Issue Type: Bug
> Security Level: Public (Default Security Level. Issues are Public)
> Components: SolrCloud
> Affects Versions: 6.2.1, 6.3
> Environment: Ubuntu 14.04.4 LTS
> Reporter: Zhu JiaJun
> Priority: Critical
> Labels: query, solr, stats
>
> I'm using solr 6.3.0. I indexed a big amount of documents into one solr collection with one shard; it's 60G on disk, with around 2506889 documents.
> I frequently get the ArrayIndexOutOfBoundsException when I send a simple stats request, for example:
> http://localhost:8983/solr/staging-update/select?start=0&rows=0&version=2.2&q=*:*&stats=true&timeAllowed=6&wt=json&stats.field=asp_community_facet&stats.field=asp_group_facet
> The solr log captures the following exception, which also appears in the response like below:
> {code}
> {
>   "responseHeader": {
>     "zkConnected": true,
>     "status": 500,
>     "QTime": 11,
>     "params": {
>       "q": "*:*",
>       "stats": "true",
>       "start": "0",
>       "timeAllowed": "6",
>       "rows": "0",
>       "version": "2.2",
>       "wt": "json",
>       "stats.field": [
>         "asp_community_facet",
>         "asp_group_facet"
>       ]
>     }
>   },
>   "response": {
>     "numFound": 2506211,
>     "start": 0,
>     "docs": [ ]
>   },
>   "error": {
>     "msg": "28",
>     "trace": "java.lang.ArrayIndexOutOfBoundsException: 28\n\tat org.apache.solr.request.DocValuesStats.accumMulti(DocValuesStats.java:213)\n\tat org.apache.solr.request.DocValuesStats.getCounts(DocValuesStats.java:135)\n\tat org.apache.solr.handler.component.StatsField.computeLocalStatsValues(StatsField.java:424)\n\tat org.apache.solr.handler.component.StatsComponent.process(StatsComponent.java:58)\n\tat org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:295)\n\tat org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:153)\n\tat org.apache.solr.core.SolrCore.execute(SolrCore.java:2213)\n\tat org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:654)\n\tat org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:460)\n\tat org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:303)\n\tat org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:254)\n\tat org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)\n\tat org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)\n\tat org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)\n\tat org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)\n\tat org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)\n\tat org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1160)\n\tat org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)\n\tat org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)\n\tat org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1092)\n\tat org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)\n\tat
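One plausible shape for an ArrayIndexOutOfBoundsException like the one above - offered purely as a guess at the failure class, not as the actual DocValuesStats bug - is an accumulator array sized from an ordinal count that understates the real ordinal space, so that a larger ordinal (here, 28) indexes past the end:

```java
import java.util.Arrays;

public class OrdAccumDemo {
    // Count how often each ordinal occurs. If numOrds understates the real
    // ordinal space, any ordinal >= numOrds throws AIOOBE with that ordinal
    // as the message, matching the bare "28" in the report.
    public static int[] accumulate(int numOrds, int[] docOrds) {
        int[] counts = new int[numOrds];
        for (int ord : docOrds) {
            counts[ord]++;
        }
        return counts;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(accumulate(3, new int[]{0, 1, 2, 1})));
        try {
            accumulate(28, new int[]{28});   // ordinal equals the array length
        } catch (ArrayIndexOutOfBoundsException e) {
            System.out.println("AIOOBE: " + e.getMessage());
        }
    }
}
```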
[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-9-ea+155) - Build # 18929 - Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/18929/ Java: 64bit/jdk-9-ea+155 -XX:-UseCompressedOops -XX:+UseSerialGC 2 tests failed. FAILED: org.apache.solr.handler.admin.TestApiFramework.testFramework Error Message: Stack Trace: java.lang.ExceptionInInitializerError at __randomizedtesting.SeedInfo.seed([FE430D3F2A2802F6:E935C7182CFCEECB]:0) at net.sf.cglib.core.KeyFactory$Generator.generateClass(KeyFactory.java:166) at net.sf.cglib.core.DefaultGeneratorStrategy.generate(DefaultGeneratorStrategy.java:25) at net.sf.cglib.core.AbstractClassGenerator.create(AbstractClassGenerator.java:216) at net.sf.cglib.core.KeyFactory$Generator.create(KeyFactory.java:144) at net.sf.cglib.core.KeyFactory.create(KeyFactory.java:116) at net.sf.cglib.core.KeyFactory.create(KeyFactory.java:108) at net.sf.cglib.core.KeyFactory.create(KeyFactory.java:104) at net.sf.cglib.proxy.Enhancer.(Enhancer.java:69) at org.easymock.internal.ClassProxyFactory.createEnhancer(ClassProxyFactory.java:259) at org.easymock.internal.ClassProxyFactory.createProxy(ClassProxyFactory.java:174) at org.easymock.internal.MocksControl.createMock(MocksControl.java:60) at org.easymock.EasyMock.createMock(EasyMock.java:104) at org.apache.solr.handler.admin.TestCoreAdminApis.getCoreContainerMock(TestCoreAdminApis.java:76) at org.apache.solr.handler.admin.TestApiFramework.testFramework(TestApiFramework.java:59) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:543) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907) at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at
[jira] [Commented] (SOLR-9217) {!join score=..}.. should delay join to createWeight
[ https://issues.apache.org/jira/browse/SOLR-9217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15859114#comment-15859114 ]

Sachini Malindi commented on SOLR-9217:

Can I look at this issue?

> {!join score=..}.. should delay join to createWeight
> ---
>
> Key: SOLR-9217
> URL: https://issues.apache.org/jira/browse/SOLR-9217
> Project: Solr
> Issue Type: Improvement
> Components: query parsers
> Affects Versions: 6.1
> Reporter: Mikhail Khludnev
> Priority: Minor
> Labels: newbie, newdev
>
> {{ScoreJoinQParserPlugin.XxxCoreJoinQuery}} executes {{JoinUtil.createJoinQuery}} on {{rewrite()}}, but that's inefficient in {{filter(...)}} syntax or fq. It's better to do it in {{createWeight()}}, as is done in classic Solr {{JoinQuery}} and {{JoinQParserPlugin}}.
> All existing tests are enough; we just need to assert the rewrite behavior - it should rewrite on an enclosing range query or so, and shouldn't on a plain term query.
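The rewrite-vs-createWeight tradeoff behind SOLR-9217 can be sketched with toy classes (this is not Lucene's real Query API, and the caching `filter` path below is only a stand-in for Solr's filter/fq handling): work done in rewrite() is paid every time the query is prepared, even when a cached result makes the weight unnecessary, while work deferred to createWeight() runs only when a weight is actually built.

```java
import java.util.HashMap;
import java.util.Map;

public class LazyJoinDemo {
    static int joinCost = 0;   // counts how often the "expensive join" runs

    interface Query { Query rewrite(); String createWeight(); }

    // Eager: the join is computed inside rewrite().
    static class EagerJoin implements Query {
        public Query rewrite() { joinCost++; return this; }
        public String createWeight() { return "weight"; }
    }

    // Deferred: rewrite() is cheap; the join runs only in createWeight().
    static class LazyJoin implements Query {
        public Query rewrite() { return this; }
        public String createWeight() { joinCost++; return "weight"; }
    }

    // A filter path: always rewrites, but only builds a weight on a cache miss.
    static String filter(Query q, Map<Query, String> cache) {
        q = q.rewrite();
        return cache.computeIfAbsent(q, k -> k.createWeight());
    }

    public static void main(String[] args) {
        Map<Query, String> cache = new HashMap<>();
        Query lazy = new LazyJoin();
        filter(lazy, cache);
        filter(lazy, cache);          // cache hit: no second join
        System.out.println(joinCost); // 1

        joinCost = 0;
        cache.clear();
        Query eager = new EagerJoin();
        filter(eager, cache);
        filter(eager, cache);         // cache hit, but rewrite() joins anyway
        System.out.println(joinCost); // 2
    }
}
```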
Re: Datatype change in solr
On 2/8/2017 9:33 PM, Manjunath N S (mans3) wrote:
> I had defined one of my fields as string and indexed the data, but it is
> of type integer. Now when I try to change the field type to tint, to
> allow sorting to be performed on that field, I am getting an Async
> distributed error.
>
> I have deleted the document with that id.
>
> Is there a way to change the field types without deleting the index?

This question is more appropriate for the solr-user list. I will respond, but if this discussion needs to continue, it will need to be moved to the other list.

When you delete a document, it doesn't actually get deleted. That record in the index is *marked* as deleted, but it still exists until the index segment that contains it is merged into a new segment.

When you change the data type on a field, you must typically delete the entire index and rebuild it from scratch. If this is not done, then Solr will try to interpret the data saved in the index with the old configuration according to the new configuration, which frequently will result in exceptions.

https://wiki.apache.org/solr/HowToReindex

Thanks,
Shawn
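Shawn's point about old data being misread under a new schema can be shown with a contrived example (it has nothing to do with Lucene's actual on-disk codecs; the byte layouts here are invented for illustration): bytes written under one layout, read back under another, produce garbage or an exception rather than a converted value.

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class StaleSchemaDemo {
    // "Index" a value under the old schema: stored as a string's UTF-8 bytes.
    public static byte[] writeAsString(String value) {
        return value.getBytes(StandardCharsets.UTF_8);
    }

    // Read it back under a new schema that expects a 4-byte int payload.
    // The stored layout doesn't match, so this fails instead of converting.
    public static int readAsInt(byte[] stored) {
        return ByteBuffer.wrap(stored).getInt();
    }

    public static void main(String[] args) {
        byte[] stored = writeAsString("42");   // two bytes, not a 4-byte int
        try {
            readAsInt(stored);
        } catch (java.nio.BufferUnderflowException e) {
            System.out.println("misread stored bytes: " + e);
        }
    }
}
```

This is why the type change requires a full delete and reindex rather than an in-place conversion.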
Datatype change in solr
Hello,

I had defined one of my fields as string and indexed the data, but it is of type integer. Now when I try to change the field type to tint, to allow sorting to be performed on that field, I am getting an Async distributed error.

I have deleted the document with that id.

Is there a way to change the field types without deleting the index?

Thanks,
manjunath
[JENKINS] Lucene-Solr-6.x-Windows (64bit/jdk1.8.0_121) - Build # 717 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Windows/717/ Java: 64bit/jdk1.8.0_121 -XX:+UseCompressedOops -XX:+UseParallelGC 1 tests failed. FAILED: org.apache.solr.handler.admin.MBeansHandlerTest.testDiff Error Message: expected: but was: Stack Trace: org.junit.ComparisonFailure: expected: but was: at __randomizedtesting.SeedInfo.seed([EE1D4CF11451C5A4:2B0B886A04E7FDC4]:0) at org.junit.Assert.assertEquals(Assert.java:125) at org.junit.Assert.assertEquals(Assert.java:147) at org.apache.solr.handler.admin.MBeansHandlerTest.testDiff(MBeansHandlerTest.java:63) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.lang.Thread.run(Thread.java:745) Build Log: [...truncated 11695 lines...] [junit4] Suite: org.apache.solr.handler.admin.MBeansHandlerTest [junit4] 2> Creating dataDir: C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\build\solr-core\test\J1\temp\solr.handler.admin.MBeansHandlerTest_EE1D4CF11451C5A4-001\init-core-data-001
[JENKINS-EA] Lucene-Solr-6.x-Linux (64bit/jdk-9-ea+155) - Build # 2821 - Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/2821/ Java: 64bit/jdk-9-ea+155 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC 2 tests failed. FAILED: org.apache.solr.handler.admin.TestApiFramework.testFramework Error Message: Could not initialize class org.easymock.internal.ClassProxyFactory$2 Stack Trace: java.lang.NoClassDefFoundError: Could not initialize class org.easymock.internal.ClassProxyFactory$2 at __randomizedtesting.SeedInfo.seed([F31A272D1D74593F:E46CED0A1BA0B502]:0) at org.easymock.internal.ClassProxyFactory.createEnhancer(ClassProxyFactory.java:259) at org.easymock.internal.ClassProxyFactory.createProxy(ClassProxyFactory.java:174) at org.easymock.internal.MocksControl.createMock(MocksControl.java:60) at org.easymock.EasyMock.createMock(EasyMock.java:104) at org.apache.solr.handler.admin.TestCoreAdminApis.getCoreContainerMock(TestCoreAdminApis.java:76) at org.apache.solr.handler.admin.TestApiFramework.testFramework(TestApiFramework.java:59) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:543) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at
[jira] [Updated] (LUCENE-7662) Index with missing files should throw CorruptIndexException
[ https://issues.apache.org/jira/browse/LUCENE-7662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mike Drob updated LUCENE-7662: -- Attachment: LUCENE-7662.patch Thanks. That is frustrating. I ran it 10 times and somehow never hit that or a similar seed. When the test uses the compound format, since there is no {{.doc}} file to remove, the index doesn't get corrupted and correctly never throws the exception. I couldn't figure out how to disable compound format from the test, so instead we can attempt to delete the {{.doc}} or the {{.cfe}} file. I also made a change to check that we do delete something, otherwise the index would never be corrupt here. Since I can't imagine all possible future index file layouts, this seems prudent. > Index with missing files should throw CorruptIndexException > --- > > Key: LUCENE-7662 > URL: https://issues.apache.org/jira/browse/LUCENE-7662 > Project: Lucene - Core > Issue Type: Bug > Components: core/index >Affects Versions: 6.4 >Reporter: Mike Drob > Attachments: LUCENE-7662.patch, LUCENE-7662.patch, LUCENE-7662.patch > > > Similar to what we did in LUCENE-7592 for EOF, we should catch missing files > and rethrow those as CorruptIndexException. > If a particular codec can handle missing files, it should proactively check > for those optional files and not throw anything, so I think we can safely do > this at SegmentReader or SegmentCoreReaders level. 
> Stack trace copied from SOLR-10006: > {noformat} > Caused by: java.nio.file.NoSuchFileException: > /Users/Erick/apache/solrVersions/trunk/solr/example/cloud/node3/solr/eoe_shard1_replica1/data/index/_1_Lucene50_0.doc > at > sun.nio.fs.UnixException.translateToIOException(UnixException.java:86) > at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102) > at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107) > at > sun.nio.fs.UnixFileSystemProvider.newFileChannel(UnixFileSystemProvider.java:177) > at java.nio.channels.FileChannel.open(FileChannel.java:287) > at java.nio.channels.FileChannel.open(FileChannel.java:335) > at > org.apache.lucene.store.MMapDirectory.openInput(MMapDirectory.java:238) > at > org.apache.lucene.store.NRTCachingDirectory.openInput(NRTCachingDirectory.java:192) > at > org.apache.solr.core.MetricsDirectoryFactory$MetricsDirectory.openInput(MetricsDirectoryFactory.java:334) > at > org.apache.lucene.codecs.lucene50.Lucene50PostingsReader.(Lucene50PostingsReader.java:81) > at > org.apache.lucene.codecs.lucene50.Lucene50PostingsFormat.fieldsProducer(Lucene50PostingsFormat.java:442) > at > org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsReader.(PerFieldPostingsFormat.java:292) > at > org.apache.lucene.codecs.perfield.PerFieldPostingsFormat.fieldsProducer(PerFieldPostingsFormat.java:372) > at > org.apache.lucene.index.SegmentCoreReaders.(SegmentCoreReaders.java:109) > at org.apache.lucene.index.SegmentReader.(SegmentReader.java:74) > at > org.apache.lucene.index.ReadersAndUpdates.getReader(ReadersAndUpdates.java:143) > at > org.apache.lucene.index.ReadersAndUpdates.getReadOnlyClone(ReadersAndUpdates.java:195) > at > org.apache.lucene.index.StandardDirectoryReader.open(StandardDirectoryReader.java:103) > at org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:473) > at > org.apache.lucene.index.DirectoryReader.open(DirectoryReader.java:103) > at 
org.apache.lucene.index.DirectoryReader.open(DirectoryReader.java:79) > at > org.apache.solr.core.StandardIndexReaderFactory.newReader(StandardIndexReaderFactory.java:39) > at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:1958) > ... 12 more > {noformat} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
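The behavior LUCENE-7662 asks for — a missing index file surfacing as CorruptIndexException instead of a raw NoSuchFileException — can be sketched roughly as follows. This is a simplified, hypothetical stand-in, not the actual patch: the class, nested exception, and method names below are illustrative only.

```java
import java.io.IOException;
import java.nio.file.NoSuchFileException;

public class MissingFileWrap {

    // Stand-in for Lucene's CorruptIndexException (same idea, simplified shape).
    static class CorruptIndexException extends IOException {
        CorruptIndexException(String msg, String resource, Throwable cause) {
            super(msg + " (resource=" + resource + ")", cause);
        }
    }

    // Hypothetical open path: translate "file is gone" into "index is corrupt",
    // as the issue proposes doing at the SegmentReader/SegmentCoreReaders level.
    static void openSegmentFile(String name) throws IOException {
        try {
            // Simulate the directory openInput call failing on a deleted .doc file.
            throw new NoSuchFileException(name);
        } catch (NoSuchFileException e) {
            throw new CorruptIndexException("Problem reading index", name, e);
        }
    }

    public static void main(String[] args) {
        try {
            openSegmentFile("_1_Lucene50_0.doc");
        } catch (IOException e) {
            // The caller now sees a corruption error with the original cause attached.
            System.out.println(e.getClass().getSimpleName());
        }
    }
}
```

The key point is that the original NoSuchFileException stays attached as the cause, so the stack trace above would still be recoverable from the wrapped exception.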
[JENKINS] Lucene-Solr-Tests-5.5 - Build # 13 - Still Failing
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.5/13/ 2 tests failed. FAILED: org.apache.solr.cloud.overseer.ZkStateReaderTest.testStateFormatUpdateWithTimeDelay Error Message: Could not find collection : c1 Stack Trace: org.apache.solr.common.SolrException: Could not find collection : c1 at __randomizedtesting.SeedInfo.seed([601EE7047ECF1D19:1F80508117AD3093]:0) at org.apache.solr.common.cloud.ClusterState.getCollection(ClusterState.java:170) at org.apache.solr.cloud.overseer.ZkStateReaderTest.testStateFormatUpdate(ZkStateReaderTest.java:129) at org.apache.solr.cloud.overseer.ZkStateReaderTest.testStateFormatUpdateWithTimeDelay(ZkStateReaderTest.java:52) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at java.lang.Thread.run(Thread.java:745) FAILED: org.apache.solr.cloud.overseer.ZkStateReaderTest.testStateFormatUpdateWithExplicitRefresh Error Message: Could not find collection : c1 Stack Trace: org.apache.solr.common.SolrException: Could
[jira] [Commented] (SOLR-9956) Solr java.lang.ArrayIndexOutOfBoundsException when indexed large amount of documents
[ https://issues.apache.org/jira/browse/SOLR-9956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15858956#comment-15858956 ] Zhu JiaJun commented on SOLR-9956: -- Hi Erick, I uploaded the dump to Google Drive; you can download it here: https://drive.google.com/file/d/0Bx-GgfxzFCjteGVwN2sxTVd1bFU/view?usp=sharing JiaJun > Solr java.lang.ArrayIndexOutOfBoundsException when indexed large amount of > documents > > > Key: SOLR-9956 > URL: https://issues.apache.org/jira/browse/SOLR-9956 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: SolrCloud >Affects Versions: 6.2.1, 6.3 > Environment: Ubuntu 14.04.4 LTS >Reporter: Zhu JiaJun >Priority: Critical > Labels: query, solr, stats > > I'm using solr 6.3.0. I indexed a big amount of documents into one solr > collection with one shard, it's 60G on disk, which has around 2506889 > documents. > I frequently get the ArrayIndexOutOfBoundsException when I send a simple > stats request, for example: > http://localhost:8983/solr/staging-update/select?start=0=0=2.2=*:*=true=6=json=asp_community_facet=asp_group_facet > The solr log captures the following exception, which also appears in the > response, like below: > {code} > { > "responseHeader": { > "zkConnected": true, > "status": 500, > "QTime": 11, > "params": { > "q": "*:*", > "stats": "true", > "start": "0", > "timeAllowed": "6", > "rows": "0", > "version": "2.2", > "wt": "json", > "stats.field": [ > "asp_community_facet", > "asp_group_facet" > ] > } > }, > "response": { > "numFound": 2506211, > "start": 0, > "docs": [ ] > }, > "error": { > "msg": "28", > "trace": "java.lang.ArrayIndexOutOfBoundsException: 28\n\tat > org.apache.solr.request.DocValuesStats.accumMulti(DocValuesStats.java:213)\n\tat > > org.apache.solr.request.DocValuesStats.getCounts(DocValuesStats.java:135)\n\tat > > org.apache.solr.handler.component.StatsField.computeLocalStatsValues(StatsField.java:424)\n\tat > > 
org.apache.solr.handler.component.StatsComponent.process(StatsComponent.java:58)\n\tat > > org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:295)\n\tat > > org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:153)\n\tat > org.apache.solr.core.SolrCore.execute(SolrCore.java:2213)\n\tat > org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:654)\n\tat > org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:460)\n\tat > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:303)\n\tat > > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:254)\n\tat > > org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)\n\tat > > org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)\n\tat > > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)\n\tat > > org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)\n\tat > > org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)\n\tat > > org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1160)\n\tat > > org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)\n\tat > > org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)\n\tat > > org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1092)\n\tat > > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)\n\tat > > org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)\n\tat > > org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)\n\tat > > org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)\n\tat > org.eclipse.jetty.server.Server.handle(Server.java:518)\n\tat > 
org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:308)\n\tat > org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:244)\n\tat > > org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)\n\tat > org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)\n\tat > org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)\n\tat > >
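The stack trace points into an ordinal-accumulation loop ({{DocValuesStats.accumMulti}}), and the error "msg" is a bare array index ("28"). Generically, that is the failure mode when an ordinal exceeds the counts array it was sized for. The snippet below is only an illustration of that failure class, not the Solr code:

```java
public class OrdAccumDemo {
    public static void main(String[] args) {
        int[] counts = new int[28];   // sized for ordinals 0..27
        long[] ords = {3, 7, 28};     // an out-of-range ordinal of 28 slips in
        try {
            for (long ord : ords) {
                counts[(int) ord]++;  // counts[28] -> ArrayIndexOutOfBoundsException
            }
        } catch (ArrayIndexOutOfBoundsException e) {
            // The exception message names the offending index, which is why the
            // Solr error response above carries only "msg": "28".
            System.out.println("accumulate failed: " + e);
        }
    }
}
```

If the real bug follows this shape, the fix is to size (or bound-check) the counts array against the same ordinal space the loop iterates over.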
[jira] [Comment Edited] (SOLR-9956) Solr java.lang.ArrayIndexOutOfBoundsException when indexed large amount of documents
[ https://issues.apache.org/jira/browse/SOLR-9956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15829315#comment-15829315 ] Zhu JiaJun edited comment on SOLR-9956 at 2/9/17 3:48 AM: -- Hi Erick, Thanks for the response. I created a Java heap dump of the server after the exception was thrown. It's a bit big. I uploaded it to Google Drive; you can download it from the link below: https://drive.google.com/file/d/0Bx-GgfxzFCjteGVwN2sxTVd1bFU/view?usp=sharing JiaJun was (Author: jiajun): Hi Erick, Thanks for response. I created a java heap dump of the server after the exception throw. It's a bit big. I uploaded it to my baidu cloud. You can download it by clicking below link: https://pan.baidu.com/s/1mh9ymjm Please click on the "下载" (Download) button to download the file. JiaJun > Solr java.lang.ArrayIndexOutOfBoundsException when indexed large amount of > documents > > > Key: SOLR-9956 > URL: https://issues.apache.org/jira/browse/SOLR-9956 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: SolrCloud >Affects Versions: 6.2.1, 6.3 > Environment: Ubuntu 14.04.4 LTS >Reporter: Zhu JiaJun >Priority: Critical > Labels: query, solr, stats > > I'm using solr 6.3.0. I indexed a big amount of documents into one solr > collection with one shard, it's 60G on disk, which has around 2506889 > documents. 
> I frequently get the ArrayIndexOutOfBoundsException when I send a simple > stats request, for example: > http://localhost:8983/solr/staging-update/select?start=0=0=2.2=*:*=true=6=json=asp_community_facet=asp_group_facet > The solr log capture following exception as well as in the response like > below: > {code} > { > "responseHeader": { > "zkConnected": true, > "status": 500, > "QTime": 11, > "params": { > "q": "*:*", > "stats": "true", > "start": "0", > "timeAllowed": "6", > "rows": "0", > "version": "2.2", > "wt": "json", > "stats.field": [ > "asp_community_facet", > "asp_group_facet" > ] > } > }, > "response": { > "numFound": 2506211, > "start": 0, > "docs": [ ] > }, > "error": { > "msg": "28", > "trace": "java.lang.ArrayIndexOutOfBoundsException: 28\n\tat > org.apache.solr.request.DocValuesStats.accumMulti(DocValuesStats.java:213)\n\tat > > org.apache.solr.request.DocValuesStats.getCounts(DocValuesStats.java:135)\n\tat > > org.apache.solr.handler.component.StatsField.computeLocalStatsValues(StatsField.java:424)\n\tat > > org.apache.solr.handler.component.StatsComponent.process(StatsComponent.java:58)\n\tat > > org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:295)\n\tat > > org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:153)\n\tat > org.apache.solr.core.SolrCore.execute(SolrCore.java:2213)\n\tat > org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:654)\n\tat > org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:460)\n\tat > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:303)\n\tat > > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:254)\n\tat > > org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)\n\tat > > org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)\n\tat > > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)\n\tat > > 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)\n\tat > > org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)\n\tat > > org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1160)\n\tat > > org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)\n\tat > > org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)\n\tat > > org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1092)\n\tat > > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)\n\tat > > org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)\n\tat > > org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)\n\tat > > org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)\n\tat > org.eclipse.jetty.server.Server.handle(Server.java:518)\n\tat >
[JENKINS] Lucene-Solr-master-Windows (32bit/jdk1.8.0_121) - Build # 6383 - Failure!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/6383/ Java: 32bit/jdk1.8.0_121 -client -XX:+UseG1GC All tests passed Build Log: [...truncated 13480 lines...] [junit4] JVM J0: stdout was not empty, see: C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-solrj\test\temp\junit4-J0-20170209_033006_8265083955227108470926.sysout [junit4] >>> JVM J0 emitted unexpected output (verbatim) [junit4] # [junit4] # A fatal error has been detected by the Java Runtime Environment: [junit4] # [junit4] # EXCEPTION_ACCESS_VIOLATION (0xc005) at pc=0x019a4588, pid=2996, tid=0x0a3c [junit4] # [junit4] # JRE version: Java(TM) SE Runtime Environment (8.0_121-b13) (build 1.8.0_121-b13) [junit4] # Java VM: Java HotSpot(TM) Client VM (25.121-b13 mixed mode windows-x86 ) [junit4] # Problematic frame: [junit4] # j org.eclipse.jetty.io.ByteBufferPool$Bucket.queuePoll()Ljava/nio/ByteBuffer;+15 [junit4] # [junit4] # Failed to write core dump. Minidumps are not enabled by default on client versions of Windows [junit4] # [junit4] # An error report file with more information is saved as: [junit4] # C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-solrj\test\J0\hs_err_pid2996.log [junit4] # [junit4] # If you would like to submit a bug report, please visit: [junit4] # http://bugreport.java.com/bugreport/crash.jsp [junit4] # [junit4] <<< JVM J0: EOF [...truncated 182 lines...] 
[junit4] ERROR: JVM J0 ended with an exception, command line: C:\Users\jenkins\tools\java\32bit\jdk1.8.0_121\jre\bin\java.exe -client -XX:+UseG1GC -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\heapdumps -ea -esa -Dtests.prefix=tests -Dtests.seed=849375A014ED08F0 -Xmx512M -Dtests.iters= -Dtests.verbose=false -Dtests.infostream=false -Dtests.codec=random -Dtests.postingsformat=random -Dtests.docvaluesformat=random -Dtests.locale=random -Dtests.timezone=random -Dtests.directory=random -Dtests.linedocsfile=europarl.lines.txt.gz -Dtests.luceneMatchVersion=7.0.0 -Dtests.cleanthreads=perClass -Djava.util.logging.config.file=C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\tools\junit4\logging.properties -Dtests.nightly=false -Dtests.weekly=false -Dtests.monster=false -Dtests.slow=true -Dtests.asserts=true -Dtests.multiplier=1 -DtempDir=./temp -Djava.io.tmpdir=./temp -Djunit4.tempDir=C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-solrj\test\temp -Dcommon.dir=C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene -Dclover.db.dir=C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\clover\db -Djava.security.policy=C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\tools\junit4\solr-tests.policy -Dtests.LUCENE_VERSION=7.0.0 -Djetty.testMode=1 -Djetty.insecurerandom=1 -Dsolr.directoryFactory=org.apache.solr.core.MockDirectoryFactory -Djava.awt.headless=true -Djdk.map.althashing.threshold=0 -Dtests.src.home=C:\Users\jenkins\workspace\Lucene-Solr-master-Windows -Djunit4.childvm.cwd=C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-solrj\test\J0 -Djunit4.childvm.id=0 -Djunit4.childvm.count=2 -Dtests.leaveTemporary=false -Dtests.filterstacks=true -Dtests.disableHdfs=true -Djava.security.manager=org.apache.lucene.util.TestSecurityManager -Dfile.encoding=UTF-8 -classpath
[jira] [Created] (SOLR-10107) CdcrReplicationDistributedZkTest fails far too often and is an extremely expensive test, even when compared to other nightlies.
Mark Miller created SOLR-10107: -- Summary: CdcrReplicationDistributedZkTest fails far too often and is an extremely expensive test, even when compared to other nightlies. Key: SOLR-10107 URL: https://issues.apache.org/jira/browse/SOLR-10107 Project: Solr Issue Type: Test Security Level: Public (Default Security Level. Issues are Public) Reporter: Mark Miller Priority: Critical This is a Nightly test. During beasting this test takes 30 minutes per run. The next closest is 10 minutes. In the 3 beast test reports I've done, it failed 37%, 20%, and 43% of the time. I'm going to @BadApple this test; it's extremely heavy, out of line with our other nightly tests, and can't survive any kind of test beasting.
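The {{@BadApple}} mechanism referred to here comes from the randomizedtesting framework: the annotated suite stays in the tree but is skipped unless a system property ({{tests.badapples}}) opts back in. A minimal self-contained sketch of that behavior — the annotation and runner logic below are simplified stand-ins, not the actual framework code:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

public class BadAppleDemo {

    // Simplified stand-in for randomizedtesting's @BadApple annotation.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.TYPE)
    @interface BadApple {
        String bugUrl();
    }

    // A flakey suite marked the way SOLR-10107 proposes.
    @BadApple(bugUrl = "https://issues.apache.org/jira/browse/SOLR-10107")
    static class FlakeyTest { }

    // A runner would read the annotation reflectively and skip the suite
    // unless -Dtests.badapples=true was passed on the command line.
    static boolean shouldRun(Class<?> suite) {
        BadApple ba = suite.getAnnotation(BadApple.class);
        return ba == null || Boolean.getBoolean("tests.badapples");
    }

    public static void main(String[] args) {
        System.out.println("run FlakeyTest? " + shouldRun(FlakeyTest.class));
    }
}
```

The point of the design is that, unlike @Ignore, the test is still one property flag away from running, so beasting reports can keep measuring its flakiness.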
[jira] [Created] (SOLR-10108) bin/solr script recursive copy broken
Erick Erickson created SOLR-10108: - Summary: bin/solr script recursive copy broken Key: SOLR-10108 URL: https://issues.apache.org/jira/browse/SOLR-10108 Project: Solr Issue Type: Bug Security Level: Public (Default Security Level. Issues are Public) Reporter: Erick Erickson Assignee: Erick Erickson cp -r zk:/ fails with "cannot create //whatever".
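The doubled slash in "cannot create //whatever" is the classic symptom of naive path concatenation when the source is the ZooKeeper root. Purely as an illustration of that bug class — the {{ZkPathJoin}} helper below is hypothetical, not the bin/solr code:

```java
public class ZkPathJoin {

    // Join a parent znode path and a child name without ever producing "//",
    // which ZooKeeper rejects as an invalid path.
    static String join(String parent, String child) {
        if (parent.endsWith("/")) {
            parent = parent.substring(0, parent.length() - 1); // root "/" becomes ""
        }
        if (!child.startsWith("/")) {
            child = "/" + child;
        }
        return parent.isEmpty() ? child : parent + child;
    }

    public static void main(String[] args) {
        System.out.println(join("/", "whatever"));      // "/whatever", not "//whatever"
        System.out.println(join("/configs", "myconf")); // "/configs/myconf"
    }
}
```

Naive concatenation ({{parent + "/" + child}}) yields "//whatever" when parent is "/", which matches the reported error message.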
[jira] [Commented] (SOLR-10032) Create report to assess Solr test quality at a commit point.
[ https://issues.apache.org/jira/browse/SOLR-10032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15858902#comment-15858902 ] Mark Miller commented on SOLR-10032: I'd also been wondering why the test framework didn't bail on those hangs like I've seen it do many times with ant test. Finally dug it up and found the default when you run a single test is no timeout. I'll add an appropriate timeout. > Create report to assess Solr test quality at a commit point. > > > Key: SOLR-10032 > URL: https://issues.apache.org/jira/browse/SOLR-10032 > Project: Solr > Issue Type: Task > Security Level: Public(Default Security Level. Issues are Public) > Components: Tests >Reporter: Mark Miller >Assignee: Mark Miller > Attachments: Lucene-Solr Master Test Beast Results > 01-24-2017-9899cbd031dc3fc37a384b1f9e2b379e90a9a3a6 Level Medium- Running 30 > iterations, 12 at a time .pdf, Lucene-Solr Master Test Beasults > 02-01-2017-bbc455de195c83d9f807980b510fa46018f33b1b Level Medium- Running 30 > iterations, 10 at a time.pdf > > > We have many Jenkins instances blasting tests, some official, some policeman, > I and others have or had their own, and the email trail proves the power of > the Jenkins cluster to find test fails. > However, I still have a very hard time with some basic questions: > what tests are flakey right now? which test fails actually affect devs most? > did I break it? was that test already flakey? is that test still flakey? what > are our worst tests right now? is that test getting better or worse? > We really need a way to see exactly what tests are the problem, not because > of OS or environmental issues, but more basic test quality issues. Which > tests are flakey and how flakey are they at any point in time. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
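The missing-timeout behavior described above is easy to reproduce in miniature: without a bound on the wait, a single hung test blocks the whole run. A hedged sketch using a plain {{ExecutorService}} — not the framework's actual timeout machinery (in randomizedtesting a suite-level bound is set with the {{@TimeoutSuite}} annotation):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class TimeoutDemo {

    // Run a "test" with a hard deadline; a hang becomes a bounded failure
    // instead of blocking the run indefinitely.
    static String runWithTimeout(Callable<String> test, long millis) throws Exception {
        ExecutorService ex = Executors.newSingleThreadExecutor();
        Future<String> f = ex.submit(test);
        try {
            return f.get(millis, TimeUnit.MILLISECONDS);
        } catch (TimeoutException e) {
            f.cancel(true); // interrupt the hung task
            return "TIMED OUT";
        } finally {
            ex.shutdownNow();
        }
    }

    public static void main(String[] args) throws Exception {
        // A "test" that hangs: with no timeout this would block forever.
        System.out.println(runWithTimeout(() -> {
            Thread.sleep(60_000);
            return "ok";
        }, 200));
    }
}
```

With no timeout configured (the single-test default described above), the equivalent of {{f.get()}} with no deadline is what makes a hang invisible until someone notices the stalled machine.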
[jira] [Comment Edited] (SOLR-10032) Create report to assess Solr test quality at a commit point.
[ https://issues.apache.org/jira/browse/SOLR-10032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15858889#comment-15858889 ] Mark Miller edited comment on SOLR-10032 at 2/9/17 2:26 AM: For this next report I have switched to an 8 core machine from a 16 core machine. It looks like that may have made some of the more resource/env sensitive tests pop out a little more. The first report was created on a single machine, so I went with 16 cores just to try and generate it as fast as possible. 16 cores were not strictly needed; I run 10 tests at a time on my 6-core machine with similar results. It may even be a little too much CPU for our use case, even when running 10 instances of the test in parallel. I have moved on from just using one machine though. It actually took 2-3 days to generate the first report as I was still working out some speed issues. The first run had like 2 minutes and 40 seconds of 'build' overtime per test run for most of the report and just barely enough RAM to handle 10 tests at a time - for a few test fails on heavy tests (eg hdfs), not enough RAM as there is also no swap space on those machines. Anyway, beasting ~900 tests is time-consuming even in the best case. Two tests also hung and that slowed things up a bit. Now I am more on the lookout for that - I've @BadAppled a test method involved in producing one of the hangs, and for this report I locally @BadAppled the other. They both look like legit bugs to me. I should have used @Ignore for the second hang; the test report runs @BadApple and @AwaitsFix. Losing one machine for a long time when you are using 10 costs you a lot in report creation time. Now I at least know to pay attention to my email while running reports though. Luckily, these instances I'm using will auto-pause after 30 minutes of no real activity and I get an email, so now I can be a bit more vigilant while creating the report. 
It also helps that I've gotten the report down to about 4 hours to create. I used 5 16-core machines for the second report. I can't recall exactly how long that took, but it was still in the realm of an all-night job. For this third report I am using 10 8-core machines. I think we should be using those annotations like this:
* @AwaitsFix - we basically know something key is broken and it's fairly clear what the issue is - we are waiting for someone to fix it - you don't expect this to be run regularly, but you can just pass a system property to run them.
* @BadApple - test is too flakey, fails too much for unknown or varied reasons - you do expect that some test runs would still or could still include these tests and give some useful coverage information - flakiness in many more integration type tests can be the result of unrelated issues and clear up over time. Or get worse.
* @Ignore - test is never run, it can hang, OOM, or does something negative to other tests.
I'll put up another report soon. I probably won't do another one until I have tackled the above flakey rating issues, hoping that's just a couple to a few weeks at most, but that may be wishful. was (Author: markrmil...@gmail.com): For this next report I have switched to an 8 core machine from a 16 core machine. It looks like that may have made some of the more resource/env sensitive tests pop out a little more. The first report was created on a single machine, so I went with 16 cores just to try and generate it as fast as possible. 16-cores was not strictly needed, I run 10 at a time on my 6-core machine with similar results. It may even be a little too much CPU for our use case, even when running 10 instances of the test in parallel. I have moved on from just using one machine though. It actually basically took 2-3 days to generate the first report as I was still working out some speed issues. 
The First run had like 2 minutes and 40 seconds of 'build' overtime per test run for most of the report and just barely enough RAM to handle 10 tests at a time - for a few test fails on heavy tests (eg hdfs), not enough RAM as there is also no swap space on those machines. Anyway, beasting ~900 tests is time consuming even in the best case. Two tests also hung and that slowed things up a bit. Now I am more on the lookout for that - I've @BadAppled a test method involved in producing one of the hangs, and for this report I locally @BadAppled the other. They both look like legit bugs to me. I should have done @Ignore for the second hang, the test report runs @BadApple and @AwaitFix. Losing one machine for a long time when you are using 10 costs you a lot in report creation time. Now I at least know to pay attention to my email while running reports though. Luckily, these instance I'm using will auto pause after 30 minutes of no real activity and I get an email, so I now I can be a bit more vigilant while creating the report. Also helps
[jira] [Commented] (SOLR-10032) Create report to assess Solr test quality at a commit point.
[ https://issues.apache.org/jira/browse/SOLR-10032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15858889#comment-15858889 ] Mark Miller commented on SOLR-10032: For this next report I have switched to an 8-core machine from a 16-core machine. It looks like that may have made some of the more resource/env-sensitive tests pop out a little more. The first report was created on a single machine, so I went with 16 cores just to try and generate it as fast as possible. 16 cores were not strictly needed - I run 10 at a time on my 6-core machine with similar results. It may even be a little too much CPU for our use case, even when running 10 instances of the test in parallel. I have moved on from just using one machine though. It took about 2-3 days to generate the first report, as I was still working out some speed issues. The first run had about 2 minutes and 40 seconds of 'build' overhead per test run for most of the report, and just barely enough RAM to handle 10 tests at a time - for a few test fails on heavy tests (e.g. hdfs), not enough RAM, as there is also no swap space on those machines. Anyway, beasting ~900 tests is time consuming even in the best case. Two tests also hung, and that slowed things down a bit. Now I am more on the lookout for that - I've @BadAppled a test method involved in producing one of the hangs, and for this report I locally @BadAppled the other. They both look like legit bugs to me. I should have done @Ignore for the second hang, as the test report runs @BadApple and @AwaitsFix tests. Losing one machine for a long time when you are using 10 costs you a lot in report creation time. Now I at least know to pay attention to my email while running reports though. Luckily, the instances I'm using will auto-pause after 30 minutes of no real activity and I get an email, so now I can be a bit more vigilant while creating the report. Also helps that I've gotten down to about 4 hours to create the report. I used 5 16-core machines for the second report.
I can't recall exactly how long that took, but it was still in the realm of an all-night job. For this third report I am using 10 8-core machines. I think we should be using those annotations like this:
* @AwaitsFix - we basically know something key is broken and it's fairly clear what the issue is - we are waiting for someone to fix it. You don't expect this to be run regularly, but you can just pass a system property to run them.
* @BadApple - the test is too flakey, failing too much for unknown or varied reasons. You do expect that some test runs would or could still include these tests and give some useful coverage information - flakiness in many of the more integration-type tests can be the result of unrelated issues and clear up over time. Or get worse.
* @Ignore - the test is never run; it can hang, OOM, or does something negative to other tests.
I'll put up another report soon. I probably won't do another one after that until I have tackled the above flakey rating issues - hoping that's just a couple to a few weeks at most, but that may be wishful thinking. > Create report to assess Solr test quality at a commit point. > > > Key: SOLR-10032 > URL: https://issues.apache.org/jira/browse/SOLR-10032 > Project: Solr > Issue Type: Task > Security Level: Public(Default Security Level. Issues are Public) > Components: Tests >Reporter: Mark Miller >Assignee: Mark Miller > Attachments: Lucene-Solr Master Test Beast Results > 01-24-2017-9899cbd031dc3fc37a384b1f9e2b379e90a9a3a6 Level Medium- Running 30 > iterations, 12 at a time .pdf, Lucene-Solr Master Test Beasults > 02-01-2017-bbc455de195c83d9f807980b510fa46018f33b1b Level Medium- Running 30 > iterations, 10 at a time.pdf > > > We have many Jenkins instances blasting tests, some official, some policeman, > I and others have or had their own, and the email trail proves the power of > the Jenkins cluster to find test fails. > However, I still have a very hard time with some basic questions: > what tests are flakey right now?
which test fails actually affect devs most? > did I break it? was that test already flakey? is that test still flakey? what > are our worst tests right now? is that test getting better or worse? > We really need a way to see exactly what tests are the problem, not because > of OS or environmental issues, but more basic test quality issues. Which > tests are flakey and how flakey are they at any point in time. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
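The @AwaitsFix/@BadApple/@Ignore policy laid out in the comment above can be sketched as a small annotation gate. These are toy stand-ins: the real annotations live in Lucene's test framework (LuceneTestCase's @BadApple and @AwaitsFix, which carry a bugUrl), and the system-property names used here (tests.badapples, tests.awaitsfix) are assumptions modeled on its build, not verified against it:

```java
import java.lang.annotation.*;
import java.lang.reflect.Method;

public class AnnotationGate {
    // Toy stand-ins for the framework annotations described above.
    @Retention(RetentionPolicy.RUNTIME) @interface BadApple {}
    @Retention(RetentionPolicy.RUNTIME) @interface AwaitsFix {}

    static class SomeTests {
        public void testStable() {}
        @BadApple public void testFlakey() {}
        @AwaitsFix public void testKnownBroken() {}
    }

    public static void main(String[] args) {
        // Gated tests only run when explicitly enabled, e.g. by passing
        // -Dtests.badapples=true on the command line (property names assumed).
        boolean runBadApples = Boolean.getBoolean("tests.badapples");
        boolean runAwaitsFix = Boolean.getBoolean("tests.awaitsfix");
        for (Method m : SomeTests.class.getDeclaredMethods()) {
            boolean skip = (m.isAnnotationPresent(BadApple.class) && !runBadApples)
                        || (m.isAnnotationPresent(AwaitsFix.class) && !runAwaitsFix);
            System.out.println(m.getName() + (skip ? ": SKIPPED" : ": RUN"));
        }
    }
}
```

With no properties set, only testStable runs; the report described above would flip tests.badapples and tests.awaitsfix on so the gated tests still get scored.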
[JENKINS] Lucene-Solr-Tests-master - Build # 1650 - Unstable
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/1650/ 1 tests failed. FAILED: org.apache.solr.cloud.PeerSyncReplicationTest.test Error Message: timeout waiting to see all nodes active Stack Trace: java.lang.AssertionError: timeout waiting to see all nodes active at __randomizedtesting.SeedInfo.seed([226EC28B13EE4D70:AA3AFD51BD122088]:0) at org.junit.Assert.fail(Assert.java:93) at org.apache.solr.cloud.PeerSyncReplicationTest.waitTillNodesActive(PeerSyncReplicationTest.java:326) at org.apache.solr.cloud.PeerSyncReplicationTest.bringUpDeadNodeAndEnsureNoReplication(PeerSyncReplicationTest.java:277) at org.apache.solr.cloud.PeerSyncReplicationTest.forceNodeFailureAndDoPeerSync(PeerSyncReplicationTest.java:259) at org.apache.solr.cloud.PeerSyncReplicationTest.test(PeerSyncReplicationTest.java:138) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at
[JENKINS] Lucene-Solr-6.x-Solaris (64bit/jdk1.8.0) - Build # 656 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Solaris/656/ Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC 4 tests failed. FAILED: org.apache.solr.core.TestLazyCores.testNoCommit Error Message: Exception during query Stack Trace: java.lang.RuntimeException: Exception during query at __randomizedtesting.SeedInfo.seed([237540F834A1B7C4:FC15E129FF86D461]:0) at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:821) at org.apache.solr.core.TestLazyCores.check10(TestLazyCores.java:794) at org.apache.solr.core.TestLazyCores.testNoCommit(TestLazyCores.java:776) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.lang.Thread.run(Thread.java:745) Caused by: java.lang.RuntimeException: REQUEST FAILED: xpath=//result[@numFound='10'] xml response was: 0 10 *:* request was:q=*:* at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:814) ... 41 more FAILED:
[jira] [Commented] (LUCENE-7662) Index with missing files should throw CorruptIndexException
[ https://issues.apache.org/jira/browse/LUCENE-7662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15858828#comment-15858828 ] Michael McCandless commented on LUCENE-7662: Hmm something is still angry: {noformat} [junit4] Suite: org.apache.lucene.index.TestMissingIndexFiles [junit4] 2> NOTE: reproduce with: ant test -Dtestcase=TestMissingIndexFiles -Dtests.method=testMissingDoc -Dtests.seed=4D7CBCD6B337257 -Dtests.locale=de-CH -Dtests.timezone=Etc/GMT-10 -Dtests.asserts=true -Dtests.file.encoding=US-ASCII [junit4] FAILURE 0.04s J2 | TestMissingIndexFiles.testMissingDoc <<< [junit4]> Throwable #1: junit.framework.AssertionFailedError: Expected exception CorruptIndexException [junit4]>at __randomizedtesting.SeedInfo.seed([4D7CBCD6B337257:6405AAA9369B3658]:0) [junit4]>at org.apache.lucene.util.LuceneTestCase.expectThrows(LuceneTestCase.java:2703) [junit4]>at org.apache.lucene.index.TestMissingIndexFiles.testMissingDoc(TestMissingIndexFiles.java:52) [junit4]>at java.lang.Thread.run(Thread.java:745) [junit4]>Suppressed: java.lang.RuntimeException: MockDirectoryWrapper: cannot close: there are still 1 open files: {_0.cfs=1} [junit4]>at org.apache.lucene.store.MockDirectoryWrapper.close(MockDirectoryWrapper.java:841) [junit4]>at org.apache.lucene.index.TestMissingIndexFiles.testMissingDoc(TestMissingIndexFiles.java:53) [junit4]>... 
36 more [junit4]>Caused by: java.lang.RuntimeException: unclosed IndexInput: _0.cfs [junit4]>at org.apache.lucene.store.MockDirectoryWrapper.addFileHandle(MockDirectoryWrapper.java:732) [junit4]>at org.apache.lucene.store.MockDirectoryWrapper.openInput(MockDirectoryWrapper.java:776) [junit4]>at org.apache.lucene.codecs.lucene50.Lucene50CompoundReader.<init>(Lucene50CompoundReader.java:78) [junit4]>at org.apache.lucene.codecs.lucene50.Lucene50CompoundFormat.getCompoundReader(Lucene50CompoundFormat.java:71) [junit4]>at org.apache.lucene.index.SegmentCoreReaders.<init>(SegmentCoreReaders.java:99) [junit4]>at org.apache.lucene.index.SegmentReader.<init>(SegmentReader.java:74) [junit4]>at org.apache.lucene.index.StandardDirectoryReader$1.doBody(StandardDirectoryReader.java:62) [junit4]>at org.apache.lucene.index.StandardDirectoryReader$1.doBody(StandardDirectoryReader.java:54) [junit4]>at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:666) [junit4]>at org.apache.lucene.index.StandardDirectoryReader.open(StandardDirectoryReader.java:77) [junit4]>at org.apache.lucene.index.DirectoryReader.open(DirectoryReader.java:63) [junit4]>at org.apache.lucene.index.TestMissingIndexFiles.lambda$testMissingDoc$0(TestMissingIndexFiles.java:52) [junit4]>at org.apache.lucene.util.LuceneTestCase.expectThrows(LuceneTestCase.java:2694) [junit4]>at org.apache.lucene.index.TestMissingIndexFiles.testMissingDoc(TestMissingIndexFiles.java:52) [junit4]>... 36 more {noformat} > Index with missing files should throw CorruptIndexException > --- > > Key: LUCENE-7662 > URL: https://issues.apache.org/jira/browse/LUCENE-7662 > Project: Lucene - Core > Issue Type: Bug > Components: core/index >Affects Versions: 6.4 >Reporter: Mike Drob > Attachments: LUCENE-7662.patch, LUCENE-7662.patch > > > Similar to what we did in LUCENE-7592 for EOF, we should catch missing files > and rethrow those as CorruptIndexException.
> If a particular codec can handle missing files, it should proactively check > for those optional files and not throw anything, so I think we can safely do > this at SegmentReader or SegmentCoreReaders level. > Stack trace copied from SOLR-10006: > {noformat} > Caused by: java.nio.file.NoSuchFileException: > /Users/Erick/apache/solrVersions/trunk/solr/example/cloud/node3/solr/eoe_shard1_replica1/data/index/_1_Lucene50_0.doc > at > sun.nio.fs.UnixException.translateToIOException(UnixException.java:86) > at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102) > at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107) > at > sun.nio.fs.UnixFileSystemProvider.newFileChannel(UnixFileSystemProvider.java:177) > at java.nio.channels.FileChannel.open(FileChannel.java:287) > at java.nio.channels.FileChannel.open(FileChannel.java:335) > at >
[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 1120 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1120/ Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseSerialGC 1 tests failed. FAILED: org.apache.solr.cloud.PeerSyncReplicationTest.test Error Message: timeout waiting to see all nodes active Stack Trace: java.lang.AssertionError: timeout waiting to see all nodes active at __randomizedtesting.SeedInfo.seed([1D62A7D3F00C88E4:953698095EF0E51C]:0) at org.junit.Assert.fail(Assert.java:93) at org.apache.solr.cloud.PeerSyncReplicationTest.waitTillNodesActive(PeerSyncReplicationTest.java:326) at org.apache.solr.cloud.PeerSyncReplicationTest.bringUpDeadNodeAndEnsureNoReplication(PeerSyncReplicationTest.java:277) at org.apache.solr.cloud.PeerSyncReplicationTest.forceNodeFailureAndDoPeerSync(PeerSyncReplicationTest.java:259) at org.apache.solr.cloud.PeerSyncReplicationTest.test(PeerSyncReplicationTest.java:138) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at
[JENKINS] Lucene-Solr-NightlyTests-6.x - Build # 277 - Still unstable
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-6.x/277/ 4 tests failed. FAILED: org.apache.solr.cloud.CdcrReplicationDistributedZkTest.testReplicationAfterRestart Error Message: Timeout waiting for CDCR replication to complete @source_collection:shard2 Stack Trace: java.lang.RuntimeException: Timeout waiting for CDCR replication to complete @source_collection:shard2 at __randomizedtesting.SeedInfo.seed([C4CB6E434FA11BEE:98D689C28181AFD8]:0) at org.apache.solr.cloud.BaseCdcrDistributedZkTest.waitForReplicationToComplete(BaseCdcrDistributedZkTest.java:795) at org.apache.solr.cloud.CdcrReplicationDistributedZkTest.testReplicationAfterRestart(CdcrReplicationDistributedZkTest.java:236) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at
[jira] [Commented] (LUCENE-7662) Index with missing files should throw CorruptIndexException
[ https://issues.apache.org/jira/browse/LUCENE-7662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15858752#comment-15858752 ] Michael McCandless commented on LUCENE-7662: Thanks [~mdrob], the new patch looks great, and +1 to do that test cleanup here. I'll push soon! > Index with missing files should throw CorruptIndexException > --- > > Key: LUCENE-7662 > URL: https://issues.apache.org/jira/browse/LUCENE-7662 > Project: Lucene - Core > Issue Type: Bug > Components: core/index >Affects Versions: 6.4 >Reporter: Mike Drob > Attachments: LUCENE-7662.patch, LUCENE-7662.patch > > > Similar to what we did in LUCENE-7592 for EOF, we should catch missing files > and rethrow those as CorruptIndexException. > If a particular codec can handle missing files, it should proactively check > for those optional files and not throw anything, so I think we can safely do > this at SegmentReader or SegmentCoreReaders level. > Stack trace copied from SOLR-10006: > {noformat} > Caused by: java.nio.file.NoSuchFileException: > /Users/Erick/apache/solrVersions/trunk/solr/example/cloud/node3/solr/eoe_shard1_replica1/data/index/_1_Lucene50_0.doc > at > sun.nio.fs.UnixException.translateToIOException(UnixException.java:86) > at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102) > at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107) > at > sun.nio.fs.UnixFileSystemProvider.newFileChannel(UnixFileSystemProvider.java:177) > at java.nio.channels.FileChannel.open(FileChannel.java:287) > at java.nio.channels.FileChannel.open(FileChannel.java:335) > at > org.apache.lucene.store.MMapDirectory.openInput(MMapDirectory.java:238) > at > org.apache.lucene.store.NRTCachingDirectory.openInput(NRTCachingDirectory.java:192) > at > org.apache.solr.core.MetricsDirectoryFactory$MetricsDirectory.openInput(MetricsDirectoryFactory.java:334) > at > org.apache.lucene.codecs.lucene50.Lucene50PostingsReader.<init>(Lucene50PostingsReader.java:81) > at >
org.apache.lucene.codecs.lucene50.Lucene50PostingsFormat.fieldsProducer(Lucene50PostingsFormat.java:442) > at > org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsReader.<init>(PerFieldPostingsFormat.java:292) > at > org.apache.lucene.codecs.perfield.PerFieldPostingsFormat.fieldsProducer(PerFieldPostingsFormat.java:372) > at > org.apache.lucene.index.SegmentCoreReaders.<init>(SegmentCoreReaders.java:109) > at org.apache.lucene.index.SegmentReader.<init>(SegmentReader.java:74) > at > org.apache.lucene.index.ReadersAndUpdates.getReader(ReadersAndUpdates.java:143) > at > org.apache.lucene.index.ReadersAndUpdates.getReadOnlyClone(ReadersAndUpdates.java:195) > at > org.apache.lucene.index.StandardDirectoryReader.open(StandardDirectoryReader.java:103) > at org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:473) > at > org.apache.lucene.index.DirectoryReader.open(DirectoryReader.java:103) > at org.apache.lucene.index.DirectoryReader.open(DirectoryReader.java:79) > at > org.apache.solr.core.StandardIndexReaderFactory.newReader(StandardIndexReaderFactory.java:39) > at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:1958) > ... 12 more > {noformat} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
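The change discussed in this issue - catch a missing-file error while opening segment files and rethrow it as CorruptIndexException - can be sketched as follows. Everything here is a self-contained stand-in (including the exception class and the openSegmentFile helper), not Lucene's actual SegmentCoreReaders code:

```java
import java.io.IOException;
import java.nio.file.NoSuchFileException;

public class MissingFileWrap {
    // Stand-in for Lucene's CorruptIndexException, for illustration only.
    static class CorruptIndexException extends IOException {
        CorruptIndexException(String msg, Throwable cause) { super(msg, cause); }
    }

    // Hypothetical helper: a missing file during segment open becomes a
    // corrupt-index error instead of leaking NoSuchFileException to callers.
    static void openSegmentFile(String name) throws IOException {
        try {
            throw new NoSuchFileException(name); // simulate the missing index file
        } catch (NoSuchFileException e) {
            throw new CorruptIndexException("missing file: " + name, e);
        }
    }

    public static void main(String[] args) {
        try {
            openSegmentFile("_1_Lucene50_0.doc");
        } catch (IOException e) {
            // prints: CorruptIndexException: missing file: _1_Lucene50_0.doc
            System.out.println(e.getClass().getSimpleName() + ": " + e.getMessage());
        }
    }
}
```

The original cause stays attached, so the stack trace quoted above would still be visible under the CorruptIndexException.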
The Wild World of Solr Tests
Just an FYI on a plan I have to try and improve Solr's test suite:

Many years back, a few changes happened that made getting a handle on Solr's tests quite difficult. We started running tests in parallel. We added a random testing framework and mentality. We started running a few Jenkins instances all the time rather than one or a few runs per day. We added a lot of distributed code and tests with timeouts and many complicated interactions. The results have been great on many fronts, but the number of fails produced and the current Jenkins reporting have made the Solr test suite quite hard to get a handle on.

It is much too hard to tell even some basic things: What tests are flakey right now? Which test fails actually affect devs most (fails that happen even in a clean, well-resourced environment)? Did I break it? Was that test already flakey? Is that test still flakey? What are our worst tests right now? Is that test getting better or worse? Is it a bad test, or just a bad test under low resources? How many tests are flakey?

The stream of email fails is easy to ignore and hard to follow. Even if you do follow it, the information is an always-changing stream that is difficult to summarize. (Though we can do things here too - a while back I sent a couple of test reports built by running simple regexes over the emails and counting up test fails.) Our nightly tests also get little to no visibility due to all the fails. Sometimes some of those tests are simply broken because changes were made that didn't account for them. We need a way to highlight @Nightly-only test fails.

I, like many others, have spent a lot of time trying to improve our test stability over the years. But it's whack-a-mole, and it often feels like it's hard to make real progress on hardening the whole suite. Part of the issue is that Solr has almost 900 tests. If even just 10-20 of them fail 1 out of 30 or 1 out of 60 runs, that is going to produce plenty of 'ant test' fails when many Jenkins machines are blasting all day.
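To make that last point concrete, here is a back-of-the-envelope sketch (my numbers, not from the original post): with 15 independent tests that each fail 1 run in 30, a fully clean suite run only happens about 60% of the time.

```java
public class FlakyMath {
    public static void main(String[] args) {
        int flakyTests = 15;          // hypothetical count of flaky tests
        double failRate = 1.0 / 30;   // each fails 1 out of 30 runs
        // Probability that every flaky test passes in one full suite run,
        // assuming the failures are independent: (29/30)^15.
        double cleanRun = Math.pow(1.0 - failRate, flakyTests);
        System.out.printf("P(clean run) = %.3f%n", cleanRun);    // ~0.60
        System.out.printf("P(>=1 fail)  = %.3f%n", 1 - cleanRun); // ~0.40
    }
}
```

Under these assumptions roughly 2 out of every 5 full suite runs contain at least one flaky failure, which matches the volume of Jenkins fail mail described above.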
For individual tests, one way we have found that works well for hardening is to 'beast' the test: run it many times, ideally in parallel. It's been in my head a long time, but I have finally built upon that strategy and extended it out to a full 'test run' of beasting. By beasting every test, I am able to generate a test report, based on a single commit point, that objectively scores each test. Currently I am doing 30 runs per test, 10 at a time. Eventually I'd like to up that to at least 50, and of course I can also just go higher for the now-growing list of known flakey tests. We will also have some easy-to-reference history: tests will carry a checkable reputation. Also, unlike many Jenkins failures, reproducing is usually quite simple: just beast it again. Sometimes you might have to do it more than once or for more runs, but generally it's easy to pop the failure up again or to test a fix. Over the years, I have never hit a failure while beasting that I couldn't hit again, even very rare chaos monkey failures that took 100-300 runs to hit. Anyway, I'm working on this strategy in SOLR-10032 (Create report to assess Solr test quality at a commit point). I am about to put up my third report, and I'm going to start summarizing these reports soon and pinging the dev list with the results. We will see where it goes, but I think we have few enough troublesome tests that we can make a very significant improvement over the next couple of months. The report nicely highlights the @Nightly-only test results and which tests are Ignored, BadApple'd, and AwaitsFix'd; new tests that enter the report as failing will also be highlighted and can be pushed back against. The cleaner we get things, the stricter we can try to be. Right now I'm mainly working on the ugliest tests. There are only a handful.
Once all of the remaining tests are merely flakey (failing in fewer than 10% of the 30 runs), I'm going to start pushing harder on those individual tests and will try to encourage their authors to help.

- Mark

--
Mark
about.me/markrmiller
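The per-test 'beast' loop described above can be sketched as a small shell function: run the same test N times, P at a time, and tally non-zero exits. This is only a sketch; the `ant test -Dtests.class=...` invocation in the example call is illustrative and not the exact harness behind the SOLR-10032 reports.

```shell
# Beast one test: run it $1 times, $2 in parallel, tallying failed runs.
# Remaining arguments are the command that runs the test once.
beast() {
  local runs=$1 par=$2
  shift 2
  local fails=0 i j pid pids
  for ((i = 0; i < runs; i += par)); do
    pids=()
    for ((j = 0; j < par && i + j < runs; j++)); do
      "$@" >/dev/null 2>&1 &        # one run in the background
      pids+=("$!")
    done
    for pid in "${pids[@]}"; do     # count each run that exited non-zero
      wait "$pid" || fails=$((fails + 1))
    done
  done
  echo "$fails of $runs runs failed"
}

# Example call (illustrative command, adapt to your build):
# beast 30 10 ant test -Dtests.class='*.CdcrReplicationDistributedZkTest'
```

A failure count out of 30 runs is exactly the per-test score the report is built from: a test failing 3 of 30 runs is the "flakey" tier, while 10+ of 30 is the kind of result that earns a @BadApple.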
Re: [VOTE] Release Lucene/Solr 5.5.4 RC1
On Thu, Feb 9, 2017 at 00:26, Adrien Grand wrote: > Please vote for release candidate 1 for Lucene/Solr 6.4.1. I meant 5.5.4.
[VOTE] Release Lucene/Solr 5.5.4 RC1
Please vote for release candidate 1 for Lucene/Solr 6.4.1. The artifacts can be downloaded from: https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-5.5.4-RC1-rev31012120ebbd93744753eb37f1dbc5e654628291/ You can run the smoke tester directly with this command: python3 -u dev-tools/scripts/smokeTestRelease.py \ https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-5.5.4-RC1-rev31012120ebbd93744753eb37f1dbc5e654628291/ Here's my +1 SUCCESS! [0:37:28.105298]
[jira] [Commented] (SOLR-6944) ReplicationFactorTest and HttpPartitionTest both fail with org.apache.http.NoHttpResponseException: The target server failed to respond
[ https://issues.apache.org/jira/browse/SOLR-6944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15858690#comment-15858690 ] Mark Miller commented on SOLR-6944: --- On 7x at least, this test appears to do all right now. It's past 30x10 beasting for me 3 times now. I'll update when I move to a report for 6x. > ReplicationFactorTest and HttpPartitionTest both fail with > org.apache.http.NoHttpResponseException: The target server failed to respond > --- > > Key: SOLR-6944 > URL: https://issues.apache.org/jira/browse/SOLR-6944 > Project: Solr > Issue Type: Test >Reporter: Mark Miller >Assignee: Mark Miller > Attachments: SOLR-6944.patch > > > Our only recourse is to do a client side retry on such errors. I have some > retry code for this from SOLR-4509 that I will pull over here. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
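The "client side retry" approach from the quoted issue description can be sketched as a small helper: when a request dies with a dropped connection, simply attempt it again up to a small cap. This is an illustration only; TransientHttpException stands in for HttpClient's NoHttpResponseException, and this is not Solr's actual retry code from SOLR-4509.

```java
import java.util.concurrent.Callable;

// Sketch of a client-side retry for "the target server failed to respond"
// style errors. TransientHttpException is a stand-in for
// org.apache.http.NoHttpResponseException to keep the sketch dependency-free.
public class RetryOnNoResponse {
  static class TransientHttpException extends Exception {}

  static <T> T withRetries(Callable<T> request, int maxAttempts) throws Exception {
    Exception last = null;
    for (int attempt = 1; attempt <= maxAttempts; attempt++) {
      try {
        return request.call();
      } catch (TransientHttpException e) {
        last = e;                 // transient failure: try again
      }
    }
    // exhausted the retry budget (or maxAttempts < 1)
    throw last != null ? last : new IllegalArgumentException("maxAttempts must be >= 1");
  }
}
```

Only the transient exception type is retried; any other exception from the request propagates immediately, which is the usual shape of such a wrapper.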
[jira] [Updated] (SOLR-9978) Reduce collapse query memory usage
[ https://issues.apache.org/jira/browse/SOLR-9978?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Varun Thacker updated SOLR-9978: Attachment: SOLR-9978.patch New patch with test cases. The patch adds two new collectors: IntAnyCollector and OrdAnyCollector. These two collectors don't care about scoring and pick the first occurrence of a value in the collapsed set. Hence they can make memory optimizations, as scoring is not needed. With this patch a query like this will automatically select the OrdAnyCollector {code} fq={!collapse field=collapseField_s}&sort=id desc {code} Any string or numeric field which is used for collapse, with no "min" or "max" specified to select the group head, and with a top-level sort, will use the *AnyCollector. Maybe we can expose this as an external parameter as well? Any suggestions on what the name for this could be? > Reduce collapse query memory usage > -- > > Key: SOLR-9978 > URL: https://issues.apache.org/jira/browse/SOLR-9978 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Varun Thacker >Assignee: Varun Thacker > Attachments: SOLR-9978.patch, SOLR-9978.patch > > > - Single shard test with one replica > - 10M documents and 9M of those documents are unique. Test was for string > - Collapse query parser creates two arrays : > - int array for unique documents ( 9M in this case ) > - float array for the corresponding scores ( 9M in this case ) > - It goes through all documents and puts the document in the array if the > score is better than the previously existing score. 
> - So collapse creates a lot of garbage when the total number of documents is > high and there are very few duplicates > - Even for a query like this {{q={!cache=false}*:*&fq={!collapse > field=collapseField_s cache=false}&sort=id desc}} > which has a top-level sort, the collapse query parser creates the score > array and scores every document > Indexing script used to generate dummy data: > {code} > // Index 10M documents, with every 1/10th document a duplicate. > List<SolrInputDocument> docs = new ArrayList<>(1000); > for (int i = 0; i < 1000*1000*10; i++) { > SolrInputDocument doc = new SolrInputDocument(); > doc.addField("id", i); > if (i % 10 == 0 && i != 0) { > doc.addField("collapseField_s", i-1); > } else { > doc.addField("collapseField_s", i); > } > docs.add(doc); > if (docs.size() == 1000) { > client.add("ct", docs); > docs.clear(); > } > } > client.commit("ct"); > {code} > Query: > {{q=\{!cache=false\}*:*&fq=\{!collapse field=collapseField_s > cache=false\}&sort=id desc}} > Improvements > - We currently default to the SCORE implementation if no min|max|sort param > is provided in the collapse query. Check if a global sort is provided and > don't score documents, picking the first occurrence of each unique value. > - Instead of creating an array for unique documents, use a bitset -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
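The "first occurrence wins" idea in the improvement notes above can be sketched in a few lines: when no min/max/score criterion is needed to pick a group head, one bit per unique collapse value is enough, replacing both the int doc array and the float score array. The class and method names here are invented for illustration; this is not Solr's CollapsingQParserPlugin code.

```java
import java.util.BitSet;

// Sketch: collapse by keeping the first document seen for each unique
// collapse value (its "ord"), using one bit per value instead of a
// 9M-entry int array plus a 9M-entry float score array.
public class FirstOccurrenceCollapse {
  private final BitSet seenOrds;          // one bit per unique collapse value

  public FirstOccurrenceCollapse(int maxOrd) {
    seenOrds = new BitSet(maxOrd);
  }

  /** Returns true if the doc with this collapse ord is the group head. */
  public boolean collect(int ord) {
    if (seenOrds.get(ord)) {
      return false;                       // later duplicate: collapsed away
    }
    seenOrds.set(ord);
    return true;                          // first doc seen for this value
  }
}
```

For the 9M-unique-values scenario in the issue, a bitset costs roughly 1 bit per value versus 32 bits (doc) plus 32 bits (score) per value in the SCORE implementation, and no per-document score comparisons are needed.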
[JENKINS] Lucene-Solr-6.x-MacOSX (64bit/jdk1.8.0) - Build # 685 - Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-MacOSX/685/ Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseParallelGC 5 tests failed. FAILED: org.apache.solr.cloud.ReplaceNodeTest.test Error Message: Stack Trace: java.lang.AssertionError at __randomizedtesting.SeedInfo.seed([55FEE0C8A2E5D76:8D0BD1D624D2308E]:0) at org.junit.Assert.fail(Assert.java:92) at org.junit.Assert.assertTrue(Assert.java:43) at org.junit.Assert.assertFalse(Assert.java:68) at org.junit.Assert.assertFalse(Assert.java:79) at org.apache.solr.cloud.ReplaceNodeTest.test(ReplaceNodeTest.java:76) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.lang.Thread.run(Thread.java:745) FAILED: org.apache.solr.cloud.TestCloudPivotFacet.test Error Message: Failed to list contents of /Users/jenkins/workspace/Lucene-Solr-6.x-MacOSX/solr/core/src/test-files/solr Stack Trace: java.io.IOException: Failed to list contents of
[jira] [Updated] (LUCENE-7662) Index with missing files should throw CorruptIndexException
[ https://issues.apache.org/jira/browse/LUCENE-7662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mike Drob updated LUCENE-7662: -- Attachment: LUCENE-7662.patch Updated patch with some test clean up. > Index with missing files should throw CorruptIndexException > --- > > Key: LUCENE-7662 > URL: https://issues.apache.org/jira/browse/LUCENE-7662 > Project: Lucene - Core > Issue Type: Bug > Components: core/index >Affects Versions: 6.4 >Reporter: Mike Drob > Attachments: LUCENE-7662.patch, LUCENE-7662.patch > > > Similar to what we did in LUCENE-7592 for EOF, we should catch missing files > and rethrow those as CorruptIndexException. > If a particular codec can handle missing files, it should be proactive check > for those optional files and not throw anything, so I think we can safely do > this at SegmentReader or SegmentCoreReaders level. > Stack trace copied from SOLR-10006: > {noformat} > Caused by: java.nio.file.NoSuchFileException: > /Users/Erick/apache/solrVersions/trunk/solr/example/cloud/node3/solr/eoe_shard1_replica1/data/index/_1_Lucene50_0.doc > at > sun.nio.fs.UnixException.translateToIOException(UnixException.java:86) > at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102) > at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107) > at > sun.nio.fs.UnixFileSystemProvider.newFileChannel(UnixFileSystemProvider.java:177) > at java.nio.channels.FileChannel.open(FileChannel.java:287) > at java.nio.channels.FileChannel.open(FileChannel.java:335) > at > org.apache.lucene.store.MMapDirectory.openInput(MMapDirectory.java:238) > at > org.apache.lucene.store.NRTCachingDirectory.openInput(NRTCachingDirectory.java:192) > at > org.apache.solr.core.MetricsDirectoryFactory$MetricsDirectory.openInput(MetricsDirectoryFactory.java:334) > at > org.apache.lucene.codecs.lucene50.Lucene50PostingsReader.(Lucene50PostingsReader.java:81) > at > 
org.apache.lucene.codecs.lucene50.Lucene50PostingsFormat.fieldsProducer(Lucene50PostingsFormat.java:442) > at > org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsReader.(PerFieldPostingsFormat.java:292) > at > org.apache.lucene.codecs.perfield.PerFieldPostingsFormat.fieldsProducer(PerFieldPostingsFormat.java:372) > at > org.apache.lucene.index.SegmentCoreReaders.(SegmentCoreReaders.java:109) > at org.apache.lucene.index.SegmentReader.(SegmentReader.java:74) > at > org.apache.lucene.index.ReadersAndUpdates.getReader(ReadersAndUpdates.java:143) > at > org.apache.lucene.index.ReadersAndUpdates.getReadOnlyClone(ReadersAndUpdates.java:195) > at > org.apache.lucene.index.StandardDirectoryReader.open(StandardDirectoryReader.java:103) > at org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:473) > at > org.apache.lucene.index.DirectoryReader.open(DirectoryReader.java:103) > at org.apache.lucene.index.DirectoryReader.open(DirectoryReader.java:79) > at > org.apache.solr.core.StandardIndexReaderFactory.newReader(StandardIndexReaderFactory.java:39) > at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:1958) > ... 12 more > {noformat} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Created] (SOLR-10106) Avoid deserializing SolrInputDocument if the node does not index the document
Noble Paul created SOLR-10106: - Summary: Avoid deserializing SolrInputDocument if the node does not index the document Key: SOLR-10106 URL: https://issues.apache.org/jira/browse/SOLR-10106 Project: Solr Issue Type: Improvement Security Level: Public (Default Security Level. Issues are Public) Reporter: Noble Paul If the document is only going to be persisted to the tlog, the whole SolrInputDocument doesn't need to be deserialized, only the id needs to be read. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
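The idea behind SOLR-10106, reading only the id out of a serialized document instead of deserializing the whole thing, can be illustrated with a toy reader. The count-prefixed (name, value) wire format below is invented for this sketch; it is NOT Solr's javabin format, and a real implementation would work against that.

```java
import java.io.DataInputStream;
import java.io.IOException;

// Sketch: pull the "id" field's value out of a serialized document while
// skipping (not decoding) every other field value. Fields are written as
// an int count followed by writeUTF(name), writeUTF(value) pairs.
public class IdOnlyReader {
  /** Returns the "id" field's value, or null if no id field is present. */
  public static String readIdOnly(DataInputStream in) throws IOException {
    int numFields = in.readInt();
    for (int i = 0; i < numFields; i++) {
      String name = in.readUTF();
      if ("id".equals(name)) {
        // Decode only the id's value. (A real reader would still skip the
        // remaining fields to leave the stream positioned after the doc.)
        return in.readUTF();
      }
      int len = in.readUnsignedShort();   // writeUTF's 2-byte length prefix
      in.skipBytes(len);                  // skip the value without decoding
    }
    return null;
  }
}
```

The saving is that non-id values are never turned into Strings or added to a SolrInputDocument, which is exactly what a node that only appends the update to its tlog would want.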
[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 3820 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/3820/ Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseSerialGC 2 tests failed. FAILED: org.apache.solr.cloud.CustomCollectionTest.testCustomCollectionsAPI Error Message: Could not find collection : implicitcoll Stack Trace: org.apache.solr.common.SolrException: Could not find collection : implicitcoll at __randomizedtesting.SeedInfo.seed([D62FC493DF153A9E:BCCE4AF8E28F8CE6]:0) at org.apache.solr.common.cloud.ClusterState.getCollection(ClusterState.java:194) at org.apache.solr.cloud.SolrCloudTestCase.getCollectionState(SolrCloudTestCase.java:245) at org.apache.solr.cloud.CustomCollectionTest.testCustomCollectionsAPI(CustomCollectionTest.java:68) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.lang.Thread.run(Thread.java:745) FAILED: org.apache.solr.cloud.PeerSyncReplicationTest.test Error Message: timeout waiting to see all nodes active Stack Trace: java.lang.AssertionError:
[jira] [Commented] (SOLR-10087) StreamHandler should be able to use runtimeLib jars
[ https://issues.apache.org/jira/browse/SOLR-10087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15858547#comment-15858547 ] Noble Paul commented on SOLR-10087: --- [~risdenk] opened SOLR-10105 > StreamHandler should be able to use runtimeLib jars > --- > > Key: SOLR-10087 > URL: https://issues.apache.org/jira/browse/SOLR-10087 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Kevin Risden >Assignee: Kevin Risden >Priority: Minor > Fix For: 6.5, master (7.0) > > Attachments: SOLR-10087.patch > > > StreamHandler currently can't uses jars that via the runtimeLib and Blob > Store api. This is because the StreamHandler uses core.getResourceLoader() > instead of core.getMemClassLoader() for loading classes. > An example of this working with the fix is here: > https://github.com/risdenk/solr_custom_streaming_expressions > Steps: > {code} > # Inspired by > https://cwiki.apache.org/confluence/display/solr/Adding+Custom+Plugins+in+SolrCloud+Mode > # Start Solr with enabling Blob Store > ./bin/solr start -c -f -a "-Denable.runtime.lib=true" > # Create test collection > ./bin/solr create -c test > # Create .system collection > curl 'http://localhost:8983/solr/admin/collections?action=CREATE=.system' > # Build custom streaming expression jar > (cd custom-streaming-expression && mvn clean package) > # Upload jar to .system collection using Blob Store API > (https://cwiki.apache.org/confluence/display/solr/Blob+Store+API) > curl -X POST -H 'Content-Type: application/octet-stream' --data-binary > @custom-streaming-expression/target/custom-streaming-expression-1.0-SNAPSHOT.jar > 'http://localhost:8983/solr/.system/blob/test' > # List all blobs that are stored > curl 'http://localhost:8983/solr/.system/blob?omitHeader=true' > # Add the jar to the runtime lib > curl 'http://localhost:8983/solr/test/config' -H > 'Content-type:application/json' -d '{ >"add-runtimelib": { "name":"test", 
"version":1 } > }' > # Create custom streaming expression using work from SOLR-9103 > # Patch from SOLR-10087 is required for StreamHandler to load the runtimeLib > jar > curl 'http://localhost:8983/solr/test/config' -H > 'Content-type:application/json' -d '{ > "create-expressible": { > "name": "customstreamingexpression", > "class": "com.test.solr.CustomStreamingExpression", > "runtimeLib": true > } > }' > # Test the custom streaming expression > curl 'http://localhost:8983/solr/test/stream?expr=customstreamingexpression()' > {code} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Created] (SOLR-10105) JDBCStream should be able to load driver from runtime lib
Noble Paul created SOLR-10105: - Summary: JDBCStream should be able to load driver from runtime lib Key: SOLR-10105 URL: https://issues.apache.org/jira/browse/SOLR-10105 Project: Solr Issue Type: Improvement Security Level: Public (Default Security Level. Issues are Public) Components: Parallel SQL Reporter: Noble Paul Priority: Minor -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-10087) StreamHandler should be able to use runtimeLib jars
[ https://issues.apache.org/jira/browse/SOLR-10087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15858536#comment-15858536 ] Kevin Risden commented on SOLR-10087: - [~noble.paul] - Can you open a new JIRA issue for that? > StreamHandler should be able to use runtimeLib jars > --- > > Key: SOLR-10087 > URL: https://issues.apache.org/jira/browse/SOLR-10087 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Kevin Risden >Assignee: Kevin Risden >Priority: Minor > Fix For: 6.5, master (7.0) > > Attachments: SOLR-10087.patch > > > StreamHandler currently can't uses jars that via the runtimeLib and Blob > Store api. This is because the StreamHandler uses core.getResourceLoader() > instead of core.getMemClassLoader() for loading classes. > An example of this working with the fix is here: > https://github.com/risdenk/solr_custom_streaming_expressions > Steps: > {code} > # Inspired by > https://cwiki.apache.org/confluence/display/solr/Adding+Custom+Plugins+in+SolrCloud+Mode > # Start Solr with enabling Blob Store > ./bin/solr start -c -f -a "-Denable.runtime.lib=true" > # Create test collection > ./bin/solr create -c test > # Create .system collection > curl 'http://localhost:8983/solr/admin/collections?action=CREATE=.system' > # Build custom streaming expression jar > (cd custom-streaming-expression && mvn clean package) > # Upload jar to .system collection using Blob Store API > (https://cwiki.apache.org/confluence/display/solr/Blob+Store+API) > curl -X POST -H 'Content-Type: application/octet-stream' --data-binary > @custom-streaming-expression/target/custom-streaming-expression-1.0-SNAPSHOT.jar > 'http://localhost:8983/solr/.system/blob/test' > # List all blobs that are stored > curl 'http://localhost:8983/solr/.system/blob?omitHeader=true' > # Add the jar to the runtime lib > curl 'http://localhost:8983/solr/test/config' -H > 'Content-type:application/json' -d '{ 
>"add-runtimelib": { "name":"test", "version":1 } > }' > # Create custom streaming expression using work from SOLR-9103 > # Patch from SOLR-10087 is required for StreamHandler to load the runtimeLib > jar > curl 'http://localhost:8983/solr/test/config' -H > 'Content-type:application/json' -d '{ > "create-expressible": { > "name": "customstreamingexpression", > "class": "com.test.solr.CustomStreamingExpression", > "runtimeLib": true > } > }' > # Test the custom streaming expression > curl 'http://localhost:8983/solr/test/stream?expr=customstreamingexpression()' > {code} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-7662) Index with missing files should throw CorruptIndexException
[ https://issues.apache.org/jira/browse/LUCENE-7662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15858534#comment-15858534 ] Mike Drob commented on LUCENE-7662: --- Those are good suggestions, I'll get them into the next version of this patch. Looking at the code in MockDirectoryWrapper, some of the "a random IOException" stuff looks really hackish, especially where we are checking for string messages to match. I'm uncomfortable with how brittle some of that is. We already have FakeIOException available and I think it would be good to use that instead in several places. Do you think we should handle that here, or should I file a new issue for it? > Index with missing files should throw CorruptIndexException > --- > > Key: LUCENE-7662 > URL: https://issues.apache.org/jira/browse/LUCENE-7662 > Project: Lucene - Core > Issue Type: Bug > Components: core/index >Affects Versions: 6.4 >Reporter: Mike Drob > Attachments: LUCENE-7662.patch > > > Similar to what we did in LUCENE-7592 for EOF, we should catch missing files > and rethrow those as CorruptIndexException. > If a particular codec can handle missing files, it should proactively check > for those optional files and not throw anything, so I think we can safely do > this at SegmentReader or SegmentCoreReaders level. 
> Stack trace copied from SOLR-10006: > {noformat} > Caused by: java.nio.file.NoSuchFileException: > /Users/Erick/apache/solrVersions/trunk/solr/example/cloud/node3/solr/eoe_shard1_replica1/data/index/_1_Lucene50_0.doc > at > sun.nio.fs.UnixException.translateToIOException(UnixException.java:86) > at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102) > at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107) > at > sun.nio.fs.UnixFileSystemProvider.newFileChannel(UnixFileSystemProvider.java:177) > at java.nio.channels.FileChannel.open(FileChannel.java:287) > at java.nio.channels.FileChannel.open(FileChannel.java:335) > at > org.apache.lucene.store.MMapDirectory.openInput(MMapDirectory.java:238) > at > org.apache.lucene.store.NRTCachingDirectory.openInput(NRTCachingDirectory.java:192) > at > org.apache.solr.core.MetricsDirectoryFactory$MetricsDirectory.openInput(MetricsDirectoryFactory.java:334) > at > org.apache.lucene.codecs.lucene50.Lucene50PostingsReader.(Lucene50PostingsReader.java:81) > at > org.apache.lucene.codecs.lucene50.Lucene50PostingsFormat.fieldsProducer(Lucene50PostingsFormat.java:442) > at > org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsReader.(PerFieldPostingsFormat.java:292) > at > org.apache.lucene.codecs.perfield.PerFieldPostingsFormat.fieldsProducer(PerFieldPostingsFormat.java:372) > at > org.apache.lucene.index.SegmentCoreReaders.(SegmentCoreReaders.java:109) > at org.apache.lucene.index.SegmentReader.(SegmentReader.java:74) > at > org.apache.lucene.index.ReadersAndUpdates.getReader(ReadersAndUpdates.java:143) > at > org.apache.lucene.index.ReadersAndUpdates.getReadOnlyClone(ReadersAndUpdates.java:195) > at > org.apache.lucene.index.StandardDirectoryReader.open(StandardDirectoryReader.java:103) > at org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:473) > at > org.apache.lucene.index.DirectoryReader.open(DirectoryReader.java:103) > at 
org.apache.lucene.index.DirectoryReader.open(DirectoryReader.java:79) > at > org.apache.solr.core.StandardIndexReaderFactory.newReader(StandardIndexReaderFactory.java:39) > at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:1958) > ... 12 more > {noformat} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-9997) Enable configuring SolrHttpClientBuilder via java system property
[ https://issues.apache.org/jira/browse/SOLR-9997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hrishikesh Gadre updated SOLR-9997: --- Attachment: SOLR-9997_6x.patch [~markrmil...@gmail.com] Here is the patch for branch_6x. Please take a look and let me know if anything needed from my side. > Enable configuring SolrHttpClientBuilder via java system property > - > > Key: SOLR-9997 > URL: https://issues.apache.org/jira/browse/SOLR-9997 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Affects Versions: 6.3 >Reporter: Hrishikesh Gadre >Assignee: Mark Miller > Attachments: SOLR-9997_6x.patch > > > Currently SolrHttpClientBuilder needs to be configured via invoking > HttpClientUtil#setHttpClientBuilder(...) API. On the other hand SolrCLI > attempts to support configuring SolrHttpClientBuilder via Java system > property. > https://github.com/apache/lucene-solr/blob/9f58b6cd177f72b226c83adbb965cfe08d61d2fb/solr/core/src/java/org/apache/solr/util/SolrCLI.java#L265 > But after changes for SOLR-4509, this is no longer working. This is because > we need to configure HttpClientBuilderFactory which can provide appropriate > SolrHttpClientBuilder instance (e.g. Krb5HttpClientBuilder). I verified that > SolrCLI does not work in a kerberos enabled cluster. During the testing I > also found that SolrCLI is hardcoded to use basic authentication, > https://github.com/apache/lucene-solr/blob/9f58b6cd177f72b226c83adbb965cfe08d61d2fb/solr/core/src/java/org/apache/solr/util/SolrCLI.java#L156 > This jira is to add support for configuring HttpClientBuilderFactory as a > java system property so that SolrCLI as well as other Solr clients can also > benefit this. Also we should provide a HttpClientBuilderFactory which support > configuring preemptive basic authentication so that we can remove the > hardcoded basic auth usage in SolrCLI (and enable it work with kerberos). 
-- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
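The system-property mechanism proposed above might look roughly like this. This is a hedged sketch only: the property name, the reflective lookup, and the null fallback are assumptions for illustration, not the final Solr API.

```java
// Sketch: resolve a builder-factory class named by a Java system property.
// The property name below is hypothetical, not the property Solr ships with.
public class BuilderFactoryResolver {
    public static final String FACTORY_PROP = "solr.httpclient.builder.factory";

    // Returns an instance of the configured factory class, or null when the
    // property is unset so the caller can fall back to the default builder.
    public static Object createFactory() {
        String className = System.getProperty(FACTORY_PROP);
        if (className == null) {
            return null;
        }
        try {
            return Class.forName(className).getDeclaredConstructor().newInstance();
        } catch (ReflectiveOperationException e) {
            throw new IllegalArgumentException("bad factory class: " + className, e);
        }
    }
}
```

A client (SolrCLI or otherwise) would then pick up the factory without any code change, just by passing `-D` on the command line.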
[jira] [Commented] (SOLR-10104) BlockDirectoryCache release hooks do not work with multiple directories
[ https://issues.apache.org/jira/browse/SOLR-10104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15858490#comment-15858490 ] Mark Miller commented on SOLR-10104: Great catch! > BlockDirectoryCache release hooks do not work with multiple directories > --- > > Key: SOLR-10104 > URL: https://issues.apache.org/jira/browse/SOLR-10104 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: hdfs >Affects Versions: 6.4 >Reporter: Mike Drob >Assignee: Mark Miller > > https://github.com/apache/lucene-solr/blob/5738c293f0c3f346b3e3e52c937183060d59cba1/solr/core/src/java/org/apache/solr/store/blockcache/BlockDirectoryCache.java#L53
> {code}
> if (releaseBlocks) {
>   keysToRelease = Collections.newSetFromMap(new ConcurrentHashMap<BlockCacheKey,Boolean>(1024, 0.75f, 512));
>   blockCache.setOnRelease(new OnRelease() {
>     @Override
>     public void release(BlockCacheKey key) {
>       keysToRelease.remove(key);
>     }
>   });
> }
> {code}
> If we're using the global block cache option and create multiple directories using the same factory, we will lose the release hook for the first directory. I think we can verify that by creating a server with multiple cores.
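One possible direction for a fix, sketched with simplified stand-in types (BlockCacheKey reduced to String; the multi-listener cache class below is hypothetical, not the actual BlockDirectoryCache API): register release hooks additively so that a second directory created against the same global cache cannot clobber the first directory's hook.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Mirrors the single-listener interface from the quoted snippet,
// with BlockCacheKey simplified to String for this sketch.
interface OnRelease {
    void release(String key);
}

// Hypothetical fix: keep a list of listeners instead of one field, so every
// directory sharing the global cache retains its release hook.
class MultiListenerBlockCache {
    private final List<OnRelease> listeners = new ArrayList<>();

    // Additive registration replaces the overwriting setOnRelease(...).
    void addOnRelease(OnRelease listener) {
        listeners.add(listener);
    }

    void release(String key) {
        for (OnRelease l : listeners) {
            l.release(key);
        }
    }
}
```

With the original overwriting setter, only the second directory's `keysToRelease` set would be cleaned up; here both hooks fire.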
[jira] [Assigned] (SOLR-10104) BlockDirectoryCache release hooks do not work with multiple directories
[ https://issues.apache.org/jira/browse/SOLR-10104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Miller reassigned SOLR-10104: -- Assignee: Mark Miller > BlockDirectoryCache release hooks do not work with multiple directories > --- > > Key: SOLR-10104 > URL: https://issues.apache.org/jira/browse/SOLR-10104 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: hdfs >Affects Versions: 6.4 >Reporter: Mike Drob >Assignee: Mark Miller > > https://github.com/apache/lucene-solr/blob/5738c293f0c3f346b3e3e52c937183060d59cba1/solr/core/src/java/org/apache/solr/store/blockcache/BlockDirectoryCache.java#L53
> {code}
> if (releaseBlocks) {
>   keysToRelease = Collections.newSetFromMap(new ConcurrentHashMap<BlockCacheKey,Boolean>(1024, 0.75f, 512));
>   blockCache.setOnRelease(new OnRelease() {
>     @Override
>     public void release(BlockCacheKey key) {
>       keysToRelease.remove(key);
>     }
>   });
> }
> {code}
> If we're using the global block cache option and create multiple directories using the same factory, we will lose the release hook for the first directory. I think we can verify that by creating a server with multiple cores.
[jira] [Comment Edited] (SOLR-9284) The HDFS BlockDirectoryCache should not let it's keysToRelease or names maps grow indefinitely.
[ https://issues.apache.org/jira/browse/SOLR-9284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15858470#comment-15858470 ] Mike Drob edited comment on SOLR-9284 at 2/8/17 8:17 PM: - https://github.com/apache/lucene-solr/blob/5738c293f0c3f346b3e3e52c937183060d59cba1/solr/core/src/java/org/apache/solr/store/blockcache/BlockDirectoryCache.java#L53
{code}
if (releaseBlocks) {
  keysToRelease = Collections.newSetFromMap(new ConcurrentHashMap<BlockCacheKey,Boolean>(1024, 0.75f, 512));
  blockCache.setOnRelease(new OnRelease() {
    @Override
    public void release(BlockCacheKey key) {
      keysToRelease.remove(key);
    }
  });
}
{code}
If we're using the global block cache option and create multiple directories using the same factory, we will lose the release hook for the first directory. I think we can verify that by creating a server with multiple cores. Edit: Filed SOLR-10104
was (Author: mdrob): https://github.com/apache/lucene-solr/blob/5738c293f0c3f346b3e3e52c937183060d59cba1/solr/core/src/java/org/apache/solr/store/blockcache/BlockDirectoryCache.java#L53
{code}
if (releaseBlocks) {
  keysToRelease = Collections.newSetFromMap(new ConcurrentHashMap<BlockCacheKey,Boolean>(1024, 0.75f, 512));
  blockCache.setOnRelease(new OnRelease() {
    @Override
    public void release(BlockCacheKey key) {
      keysToRelease.remove(key);
    }
  });
}
{code}
If we're using the global block cache option and create multiple directories using the same factory, we will lose the release hook for the first directory. I think we can verify that by creating a server with multiple cores.
> The HDFS BlockDirectoryCache should not let it's keysToRelease or names maps > grow indefinitely. > --- > > Key: SOLR-9284 > URL: https://issues.apache.org/jira/browse/SOLR-9284 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. 
Issues are Public) > Components: hdfs >Reporter: Mark Miller >Assignee: Mark Miller > Fix For: 6.4, master (7.0) > > Attachments: SOLR-9284.patch, SOLR-9284.patch > > > https://issues.apache.org/jira/browse/SOLR-9284
[jira] [Created] (SOLR-10104) BlockDirectoryCache release hooks do not work with multiple directories
Mike Drob created SOLR-10104: Summary: BlockDirectoryCache release hooks do not work with multiple directories Key: SOLR-10104 URL: https://issues.apache.org/jira/browse/SOLR-10104 Project: Solr Issue Type: Bug Security Level: Public (Default Security Level. Issues are Public) Components: hdfs Affects Versions: 6.4 Reporter: Mike Drob https://github.com/apache/lucene-solr/blob/5738c293f0c3f346b3e3e52c937183060d59cba1/solr/core/src/java/org/apache/solr/store/blockcache/BlockDirectoryCache.java#L53
{code}
if (releaseBlocks) {
  keysToRelease = Collections.newSetFromMap(new ConcurrentHashMap<BlockCacheKey,Boolean>(1024, 0.75f, 512));
  blockCache.setOnRelease(new OnRelease() {
    @Override
    public void release(BlockCacheKey key) {
      keysToRelease.remove(key);
    }
  });
}
{code}
If we're using the global block cache option and create multiple directories using the same factory, we will lose the release hook for the first directory. I think we can verify that by creating a server with multiple cores.
[JENKINS] Lucene-Solr-5.5-Linux (64bit/jdk1.7.0_80) - Build # 443 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-5.5-Linux/443/ Java: 64bit/jdk1.7.0_80 -XX:-UseCompressedOops -XX:+UseSerialGC 1 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.handler.TestSolrConfigHandlerCloud Error Message: 1 thread leaked from SUITE scope at org.apache.solr.handler.TestSolrConfigHandlerCloud: 1) Thread[id=15079, name=Thread-4558, state=TIMED_WAITING, group=TGRP-TestSolrConfigHandlerCloud] at java.lang.Thread.sleep(Native Method) at org.apache.solr.cloud.ZkSolrResourceLoader.openResource(ZkSolrResourceLoader.java:101) at org.apache.solr.core.SolrResourceLoader.openSchema(SolrResourceLoader.java:355) at org.apache.solr.schema.IndexSchemaFactory.create(IndexSchemaFactory.java:48) at org.apache.solr.schema.IndexSchemaFactory.buildIndexSchema(IndexSchemaFactory.java:70) at org.apache.solr.core.ConfigSetService.createIndexSchema(ConfigSetService.java:108) at org.apache.solr.core.ConfigSetService.getConfig(ConfigSetService.java:79) at org.apache.solr.core.CoreContainer.reload(CoreContainer.java:920) at org.apache.solr.core.SolrCore$11.run(SolrCore.java:2623) at org.apache.solr.cloud.ZkController$5.run(ZkController.java:2480) Stack Trace: com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE scope at org.apache.solr.handler.TestSolrConfigHandlerCloud: 1) Thread[id=15079, name=Thread-4558, state=TIMED_WAITING, group=TGRP-TestSolrConfigHandlerCloud] at java.lang.Thread.sleep(Native Method) at org.apache.solr.cloud.ZkSolrResourceLoader.openResource(ZkSolrResourceLoader.java:101) at org.apache.solr.core.SolrResourceLoader.openSchema(SolrResourceLoader.java:355) at org.apache.solr.schema.IndexSchemaFactory.create(IndexSchemaFactory.java:48) at org.apache.solr.schema.IndexSchemaFactory.buildIndexSchema(IndexSchemaFactory.java:70) at org.apache.solr.core.ConfigSetService.createIndexSchema(ConfigSetService.java:108) at org.apache.solr.core.ConfigSetService.getConfig(ConfigSetService.java:79) at 
org.apache.solr.core.CoreContainer.reload(CoreContainer.java:920) at org.apache.solr.core.SolrCore$11.run(SolrCore.java:2623) at org.apache.solr.cloud.ZkController$5.run(ZkController.java:2480) at __randomizedtesting.SeedInfo.seed([2B9EF7EA22768682]:0) Build Log: [...truncated 12185 lines...] [junit4] Suite: org.apache.solr.handler.TestSolrConfigHandlerCloud [junit4] 2> Creating dataDir: /home/jenkins/workspace/Lucene-Solr-5.5-Linux/solr/build/solr-core/test/J1/temp/solr.handler.TestSolrConfigHandlerCloud_2B9EF7EA22768682-001/init-core-data-001 [junit4] 2> 1923615 INFO (SUITE-TestSolrConfigHandlerCloud-seed#[2B9EF7EA22768682]-worker) [] o.a.s.SolrTestCaseJ4 Randomized ssl (false) and clientAuth (false) [junit4] 2> 1923615 INFO (SUITE-TestSolrConfigHandlerCloud-seed#[2B9EF7EA22768682]-worker) [] o.a.s.BaseDistributedSearchTestCase Setting hostContext system property: / [junit4] 2> 1923618 INFO (TEST-TestSolrConfigHandlerCloud.test-seed#[2B9EF7EA22768682]) [] o.a.s.c.ZkTestServer STARTING ZK TEST SERVER [junit4] 2> 1923618 INFO (Thread-4364) [] o.a.s.c.ZkTestServer client port:0.0.0.0/0.0.0.0:0 [junit4] 2> 1923618 INFO (Thread-4364) [] o.a.s.c.ZkTestServer Starting server [junit4] 2> 1923718 INFO (TEST-TestSolrConfigHandlerCloud.test-seed#[2B9EF7EA22768682]) [] o.a.s.c.ZkTestServer start zk server on port:40269 [junit4] 2> 1923718 INFO (TEST-TestSolrConfigHandlerCloud.test-seed#[2B9EF7EA22768682]) [] o.a.s.c.c.SolrZkClient Using default ZkCredentialsProvider [junit4] 2> 1923719 INFO (TEST-TestSolrConfigHandlerCloud.test-seed#[2B9EF7EA22768682]) [] o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper [junit4] 2> 1923721 INFO (zkCallback-2533-thread-1) [] o.a.s.c.c.ConnectionManager Watcher org.apache.solr.common.cloud.ConnectionManager@22e96e57 name:ZooKeeperConnection Watcher:127.0.0.1:40269 got event WatchedEvent state:SyncConnected type:None path:null path:null type:None [junit4] 2> 1923721 INFO 
(TEST-TestSolrConfigHandlerCloud.test-seed#[2B9EF7EA22768682]) [] o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper [junit4] 2> 1923721 INFO (TEST-TestSolrConfigHandlerCloud.test-seed#[2B9EF7EA22768682]) [] o.a.s.c.c.SolrZkClient Using default ZkACLProvider [junit4] 2> 1923721 INFO (TEST-TestSolrConfigHandlerCloud.test-seed#[2B9EF7EA22768682]) [] o.a.s.c.c.SolrZkClient makePath: /solr [junit4] 2> 1923722 INFO (TEST-TestSolrConfigHandlerCloud.test-seed#[2B9EF7EA22768682]) [] o.a.s.c.c.SolrZkClient Using default ZkCredentialsProvider [junit4] 2> 1923722 INFO (TEST-TestSolrConfigHandlerCloud.test-seed#[2B9EF7EA22768682]) [] o.a.s.c.c.ConnectionManager Waiting for client
[jira] [Commented] (SOLR-9284) The HDFS BlockDirectoryCache should not let it's keysToRelease or names maps grow indefinitely.
[ https://issues.apache.org/jira/browse/SOLR-9284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15858470#comment-15858470 ] Mike Drob commented on SOLR-9284: - https://github.com/apache/lucene-solr/blob/5738c293f0c3f346b3e3e52c937183060d59cba1/solr/core/src/java/org/apache/solr/store/blockcache/BlockDirectoryCache.java#L53
{code}
if (releaseBlocks) {
  keysToRelease = Collections.newSetFromMap(new ConcurrentHashMap<BlockCacheKey,Boolean>(1024, 0.75f, 512));
  blockCache.setOnRelease(new OnRelease() {
    @Override
    public void release(BlockCacheKey key) {
      keysToRelease.remove(key);
    }
  });
}
{code}
If we're using the global block cache option and create multiple directories using the same factory, we will lose the release hook for the first directory. I think we can verify that by creating a server with multiple cores.
> The HDFS BlockDirectoryCache should not let it's keysToRelease or names maps > grow indefinitely. > --- > > Key: SOLR-9284 > URL: https://issues.apache.org/jira/browse/SOLR-9284 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: hdfs >Reporter: Mark Miller >Assignee: Mark Miller > Fix For: 6.4, master (7.0) > > Attachments: SOLR-9284.patch, SOLR-9284.patch > > > https://issues.apache.org/jira/browse/SOLR-9284
[JENKINS] Lucene-Solr-NightlyTests-master - Build # 1232 - Still Unstable
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1232/ 5 tests failed. FAILED: org.apache.solr.cloud.CdcrReplicationDistributedZkTest.testBatchBoundaries Error Message: Timeout waiting for CDCR replication to complete @source_collection:shard1 Stack Trace: java.lang.RuntimeException: Timeout waiting for CDCR replication to complete @source_collection:shard1 at __randomizedtesting.SeedInfo.seed([CB2E38BC6F8B6B3E:E901E1E9162FAAD9]:0) at org.apache.solr.cloud.BaseCdcrDistributedZkTest.waitForReplicationToComplete(BaseCdcrDistributedZkTest.java:795) at org.apache.solr.cloud.CdcrReplicationDistributedZkTest.testBatchBoundaries(CdcrReplicationDistributedZkTest.java:557) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at
[jira] [Commented] (SOLR-10020) CoreAdminHandler silently swallows some errors
[ https://issues.apache.org/jira/browse/SOLR-10020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15858409#comment-15858409 ] Erick Erickson commented on SOLR-10020: --- Had a chance to look and this looks fine. We now get a response that shows the FileNotFound error for the three commands I'd eyeballed. +1 and thanks! > CoreAdminHandler silently swallows some errors > -- > > Key: SOLR-10020 > URL: https://issues.apache.org/jira/browse/SOLR-10020 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Erick Erickson > Attachments: SOLR-10020.patch > > > With the setup on SOLR-10006, after removing some index files and starting > that Solr instance I tried issuing a REQUESTRECOVERY command and it came back > as a success even though nothing actually happened. When the core is > accessed, a core init exception is returned by subsequent calls to getCore(). > There is no catch block after the try so no error is returned. > Looking through the code I see several other commands that have a similar > pattern: > FORCEPREPAREFORLEADERSHIP_OP > LISTSNAPSHOTS_OP > getCoreStatus > and perhaps others. getCore() can throw an exception; about the only explicit > one it does throw is if the core has an initialization error.
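The missing error path described above can be illustrated with a hedged sketch. The names below (CoreProvider, handleCommand) are simplified stand-ins, not the CoreAdminHandler API; the point is only that a try with no catch turns a core-initialization failure into a spurious success.

```java
// Sketch of surfacing getCore() failures instead of swallowing them.
// CoreProvider and handleCommand are illustrative names, not Solr's API.
class CoreAdminErrorSketch {
    interface CoreProvider {
        Object getCore(String name); // may throw on core init failure
    }

    static String handleCommand(CoreProvider cores, String name) {
        try {
            cores.getCore(name);
            return "success";
        } catch (RuntimeException e) {
            // Report the failure in the response instead of silently
            // returning success, as the issue proposes.
            return "error: " + e.getMessage();
        }
    }
}
```

With the original pattern (no catch block), the `"error: …"` branch simply does not exist, so the caller sees success regardless.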
[jira] [Resolved] (SOLR-10077) TestManagedFeatureStore extends LuceneTestCase, but has no tests and just hosts a static method.
[ https://issues.apache.org/jira/browse/SOLR-10077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Christine Poerschke resolved SOLR-10077. Resolution: Fixed Fix Version/s: master (7.0) 6.x > TestManagedFeatureStore extends LuceneTestCase, but has no tests and just > hosts a static method. > > > Key: SOLR-10077 > URL: https://issues.apache.org/jira/browse/SOLR-10077 > Project: Solr > Issue Type: Test > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Mark Miller >Assignee: Christine Poerschke >Priority: Minor > Fix For: 6.x, master (7.0) > > Attachments: SOLR-10077.patch > > > We should probably just put this static method somewhere else?
[jira] [Commented] (LUCENE-7683) FilterScorer to override more super-class methods
[ https://issues.apache.org/jira/browse/LUCENE-7683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15858404#comment-15858404 ] Adrien Grand commented on LUCENE-7683: -- I think it would make sense to make Scorer.getWeight final and FilterScorer delegate getChildren. > FilterScorer to override more super-class methods > - > > Key: LUCENE-7683 > URL: https://issues.apache.org/jira/browse/LUCENE-7683 > Project: Lucene - Core > Issue Type: Bug >Reporter: Christine Poerschke >Priority: Minor > > [Scorer.java|https://github.com/apache/lucene-solr/blob/master/lucene/core/src/java/org/apache/lucene/search/Scorer.java] > has non-abstract non-final non-private non-static methods (getChildren, > getWeight) which the > [FilterScorer.java|https://github.com/apache/lucene-solr/blob/master/lucene/core/src/java/org/apache/lucene/search/FilterScorer.java] > class does not override. > Proposed changes: > * Option 1: Add the missing methods. > * Option 2: Make the missing methods {{final}} in the non-Filter base class. > * Either way, add {{TestFilterScorer.java}} class similar to > [TestFilterWeight.java|https://github.com/apache/lucene-solr/blob/master/lucene/core/src/test/org/apache/lucene/search/TestFilterWeight.java] > class. > Optional bonus (as a separate patch?): > * TestFilterWeight, TestFilterCodecReader, TestMergePolicyWrapper and > possibly other tests all have {{implTestDeclaredMethodsOverridden(superClass, > subClass, excusedMethods)}} logic and some sort of [lucene/test-framework > util|https://github.com/apache/lucene-solr/tree/master/lucene/test-framework/src/java/org/apache/lucene/util] > FilterTestUtils.java class with a static implTestDeclaredMethodsOverridden > method could perhaps be factored out.
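The two options in the comment (make getWeight final; have FilterScorer delegate getChildren) can be sketched with simplified stand-in classes. The real Lucene signatures differ; ScorerSketch and FilterScorerSketch below are illustrations only.

```java
import java.util.Collection;
import java.util.Collections;

// Stand-in for Lucene's Scorer (real signatures differ).
abstract class ScorerSketch {
    // Option 2: a final getWeight needs no override in Filter subclasses.
    public final String getWeight() { return "weight"; }

    public Collection<ScorerSketch> getChildren() {
        return Collections.emptyList();
    }
}

// Stand-in for FilterScorer.
class FilterScorerSketch extends ScorerSketch {
    protected final ScorerSketch in;

    FilterScorerSketch(ScorerSketch in) { this.in = in; }

    // Option 1: delegate the remaining non-final method to the wrapped scorer,
    // so the filter does not silently report an empty child list.
    @Override
    public Collection<ScorerSketch> getChildren() {
        return in.getChildren();
    }
}
```

Without the delegating override, a FilterScorer wrapping a scorer that has children would report none, which is exactly the kind of drift TestFilterWeight-style tests are meant to catch.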
[jira] [Resolved] (LUCENE-7676) FilterCodecReader to override more super-class methods
[ https://issues.apache.org/jira/browse/LUCENE-7676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Christine Poerschke resolved LUCENE-7676. - Resolution: Fixed Fix Version/s: master (7.0) 6.x Thank you both for the reviews. > FilterCodecReader to override more super-class methods > -- > > Key: LUCENE-7676 > URL: https://issues.apache.org/jira/browse/LUCENE-7676 > Project: Lucene - Core > Issue Type: Bug >Reporter: Christine Poerschke >Assignee: Christine Poerschke >Priority: Minor > Fix For: 6.x, master (7.0) > > Attachments: LUCENE-7676.patch > >
[jira] [Resolved] (SOLR-10083) Fix instanceof check in ConstDoubleSource.equals
[ https://issues.apache.org/jira/browse/SOLR-10083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Christine Poerschke resolved SOLR-10083. Resolution: Fixed Fix Version/s: master (7.0) 6.x Thanks [~praste]! > Fix instanceof check in ConstDoubleSource.equals > > > Key: SOLR-10083 > URL: https://issues.apache.org/jira/browse/SOLR-10083 > Project: Solr > Issue Type: Sub-task > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Christine Poerschke >Assignee: Christine Poerschke >Priority: Minor > Fix For: 6.x, master (7.0) > > Attachments: SOLR-10083.patch > > > Splitting this out from the parent task for potential inclusion in 6.4.1 > (though it might have just missed the train looks like, sorry).
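For readers unfamiliar with the bug class being fixed here: an `equals` whose `instanceof` tests the wrong type (for example a sibling ValueSource class) can report equality across unrelated sources. A minimal illustration of the corrected pattern, with a simplified stand-in for ConstDoubleSource:

```java
// Simplified stand-in for ConstDoubleSource; not the actual Solr class.
class ConstDoubleSketch {
    final double constant;

    ConstDoubleSketch(double constant) { this.constant = constant; }

    @Override
    public boolean equals(Object o) {
        // Correct: the instanceof check must name this class itself,
        // not a related type.
        if (!(o instanceof ConstDoubleSketch)) return false;
        return this.constant == ((ConstDoubleSketch) o).constant;
    }

    @Override
    public int hashCode() { return Double.hashCode(constant); }
}
```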
[JENKINS-EA] Lucene-Solr-master-Linux (32bit/jdk-9-ea+155) - Build # 18926 - Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/18926/ Java: 32bit/jdk-9-ea+155 -client -XX:+UseSerialGC 2 tests failed. FAILED: org.apache.solr.handler.admin.TestApiFramework.testFramework Error Message: Stack Trace: java.lang.ExceptionInInitializerError at __randomizedtesting.SeedInfo.seed([5C8C3057D018B576:4BFAFA70D6CC594B]:0) at net.sf.cglib.core.KeyFactory$Generator.generateClass(KeyFactory.java:166) at net.sf.cglib.core.DefaultGeneratorStrategy.generate(DefaultGeneratorStrategy.java:25) at net.sf.cglib.core.AbstractClassGenerator.create(AbstractClassGenerator.java:216) at net.sf.cglib.core.KeyFactory$Generator.create(KeyFactory.java:144) at net.sf.cglib.core.KeyFactory.create(KeyFactory.java:116) at net.sf.cglib.core.KeyFactory.create(KeyFactory.java:108) at net.sf.cglib.core.KeyFactory.create(KeyFactory.java:104) at net.sf.cglib.proxy.Enhancer.(Enhancer.java:69) at org.easymock.internal.ClassProxyFactory.createEnhancer(ClassProxyFactory.java:259) at org.easymock.internal.ClassProxyFactory.createProxy(ClassProxyFactory.java:174) at org.easymock.internal.MocksControl.createMock(MocksControl.java:60) at org.easymock.EasyMock.createMock(EasyMock.java:104) at org.apache.solr.handler.admin.TestCoreAdminApis.getCoreContainerMock(TestCoreAdminApis.java:76) at org.apache.solr.handler.admin.TestApiFramework.testFramework(TestApiFramework.java:59) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:543) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907) at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at
[jira] [Commented] (SOLR-10103) Admin UI -- display thread statistics in the dashboard
[ https://issues.apache.org/jira/browse/SOLR-10103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15858381#comment-15858381 ] Shawn Heisey commented on SOLR-10103: - Detailed information about Solr's operation and environment that is readily available to the code (easy/fast to obtain) really ought to be available. Perhaps it might go in a far corner of the admin UI so things that are considered critical are not lost in the noise. For those of us that provide public support for Solr, some of the more arcane details about a user's environment can provide insight into unusual problems. > Admin UI -- display thread statistics in the dashboard > -- > > Key: SOLR-10103 > URL: https://issues.apache.org/jira/browse/SOLR-10103 > Project: Solr > Issue Type: Wish > Security Level: Public(Default Security Level. Issues are Public) > Components: UI >Reporter: Shawn Heisey >Priority: Minor > > The admin UI should display any available thread statistics in the dashboard. > The most important number is probably active threads, but if other stats > like total threads are available, they could be displayed too. > Alternatively, the numbers could be shown on the thread dump tab. I'm > surprised they aren't there now.
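The statistics the wish asks for are indeed cheap to obtain: the JDK's ThreadMXBean exposes live, peak, daemon, and total-started thread counts directly. A sketch of collecting them (the summary string format is our own, not anything Solr emits):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

// Collect the JVM thread counts a dashboard could display.
public class ThreadStats {
    public static String summary() {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        return String.format("live=%d peak=%d daemon=%d totalStarted=%d",
                threads.getThreadCount(),
                threads.getPeakThreadCount(),
                threads.getDaemonThreadCount(),
                threads.getTotalStartedThreadCount());
    }
}
```

These calls are in-memory reads of JVM counters, so surfacing them in the dashboard or the thread dump tab adds essentially no load.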
[jira] [Commented] (SOLR-10083) Fix instanceof check in ConstDoubleSource.equals
[ https://issues.apache.org/jira/browse/SOLR-10083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15858375#comment-15858375 ] ASF subversion and git services commented on SOLR-10083: Commit ea6eca0a88b57cbaf1072980eb74f7eb62b5b12b in lucene-solr's branch refs/heads/branch_6x from [~cpoerschke] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=ea6eca0 ] SOLR-10083: Fix instanceof check in ConstDoubleSource.equals (Pushkar Raste via Christine Poerschke) > Fix instanceof check in ConstDoubleSource.equals > > > Key: SOLR-10083 > URL: https://issues.apache.org/jira/browse/SOLR-10083 > Project: Solr > Issue Type: Sub-task > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Christine Poerschke >Assignee: Christine Poerschke >Priority: Minor > Attachments: SOLR-10083.patch > > > Splitting this out from the parent task for potential inclusion in 6.4.1 > (though it might have just missed the train looks like, sorry).
[jira] [Commented] (LUCENE-7676) FilterCodecReader to override more super-class methods
[ https://issues.apache.org/jira/browse/LUCENE-7676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15858376#comment-15858376 ] ASF subversion and git services commented on LUCENE-7676: - Commit 05e0250ee02816c1d0b8387dbaa47bfeb6f0 in lucene-solr's branch refs/heads/branch_6x from [~cpoerschke] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=05e0250 ] LUCENE-7676: Fixed FilterCodecReader to override more super-class methods. Also added TestFilterCodecReader class. > FilterCodecReader to override more super-class methods > -- > > Key: LUCENE-7676 > URL: https://issues.apache.org/jira/browse/LUCENE-7676 > Project: Lucene - Core > Issue Type: Bug >Reporter: Christine Poerschke >Assignee: Christine Poerschke >Priority: Minor > Attachments: LUCENE-7676.patch > >
[jira] [Commented] (SOLR-10098) HdfsThreadLeakTest and HdfsRecoverLeaseTest can leak threads
[ https://issues.apache.org/jira/browse/SOLR-10098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15858374#comment-15858374 ] ASF subversion and git services commented on SOLR-10098: Commit 5738c293f0c3f346b3e3e52c937183060d59cba1 in lucene-solr's branch refs/heads/master from markrmiller [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=5738c29 ] SOLR-10098: Keep netty from using secure random on startup in tests. > HdfsThreadLeakTest and HdfsRecoverLeaseTest can leak threads > > > Key: SOLR-10098 > URL: https://issues.apache.org/jira/browse/SOLR-10098 > Project: Solr > Issue Type: Test > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Mark Miller >Assignee: Mark Miller > Attachments: stdout > >
[jira] [Updated] (LUCENE-7683) FilterScorer to override more super-class methods
[ https://issues.apache.org/jira/browse/LUCENE-7683?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Christine Poerschke updated LUCENE-7683: Description: [Scorer.java|https://github.com/apache/lucene-solr/blob/master/lucene/core/src/java/org/apache/lucene/search/Scorer.java] has non-abstract non-final non-private non-static methods (getChildren, getWeight) which the [FilterScorer.java|https://github.com/apache/lucene-solr/blob/master/lucene/core/src/java/org/apache/lucene/search/FilterScorer.java] class does not override. Proposed changes: * Option 1: Add the missing methods. * Option 2: Make the missing methods {{final}} in the non-Filter base class. * Either way, add {{TestFilterScorer.java}} class similar to [TestFilterWeight.java|https://github.com/apache/lucene-solr/blob/master/lucene/core/src/test/org/apache/lucene/search/TestFilterWeight.java] class. Optional bonus (as a separate patch?): * TestFilterWeight, TestFilterCodecReader, TestMergePolicyWrapper and possibly other tests all have {{implTestDeclaredMethodsOverridden(superClass, subClass, excusedMethods)}} logic and some sort of [lucene/test-framework util|https://github.com/apache/lucene-solr/tree/master/lucene/test-framework/src/java/org/apache/lucene/util] FilterTestUtils.java class with a static implTestDeclaredMethodsOverridden method could perhaps be factored out. was: [Scorer.java|https://github.com/apache/lucene-solr/blob/master/lucene/core/src/java/org/apache/lucene/search/Scorer.java] has non-abstract non-final non-private non-static methods (getChildren, getWeight, twoPhaseIterator) which the [FilterScorer.java|https://github.com/apache/lucene-solr/blob/master/lucene/core/src/java/org/apache/lucene/search/FilterScorer.java] class does not override. Proposed changes: * Option 1: Add the missing methods. * Option 2: Make the missing methods {{final}} in the non-Filter base class. 
* Either way, add {{TestFilterScorer.java}} class similar to [TestFilterWeight.java|https://github.com/apache/lucene-solr/blob/master/lucene/core/src/test/org/apache/lucene/search/TestFilterWeight.java] class. Optional bonus (as a separate patch?): * TestFilterWeight, TestFilterCodecReader, TestMergePolicyWrapper and possibly other tests all have {{implTestDeclaredMethodsOverridden(superClass, subClass, excusedMethods)}} logic and some sort of [lucene/test-framework util|https://github.com/apache/lucene-solr/tree/master/lucene/test-framework/src/java/org/apache/lucene/util] FilterTestUtils.java class with a static implTestDeclaredMethodsOverridden method could perhaps be factored out. > FilterScorer to override more super-class methods > - > > Key: LUCENE-7683 > URL: https://issues.apache.org/jira/browse/LUCENE-7683 > Project: Lucene - Core > Issue Type: Bug >Reporter: Christine Poerschke >Priority: Minor > > [Scorer.java|https://github.com/apache/lucene-solr/blob/master/lucene/core/src/java/org/apache/lucene/search/Scorer.java] > has non-abstract non-final non-private non-static methods (getChildren, > getWeight) which the > [FilterScorer.java|https://github.com/apache/lucene-solr/blob/master/lucene/core/src/java/org/apache/lucene/search/FilterScorer.java] > class does not override. > Proposed changes: > * Option 1: Add the missing methods. > * Option 2: Make the missing methods {{final}} in the non-Filter base class. > * Either way, add {{TestFilterScorer.java}} class similar to > [TestFilterWeight.java|https://github.com/apache/lucene-solr/blob/master/lucene/core/src/test/org/apache/lucene/search/TestFilterWeight.java] > class. 
> Optional bonus (as a separate patch?): > * TestFilterWeight, TestFilterCodecReader, TestMergePolicyWrapper and > possibly other tests all have {{implTestDeclaredMethodsOverridden(superClass, > subClass, excusedMethods)}} logic and some sort of [lucene/test-framework > util|https://github.com/apache/lucene-solr/tree/master/lucene/test-framework/src/java/org/apache/lucene/util] > FilterTestUtils.java class with a static implTestDeclaredMethodsOverridden > method could perhaps be factored out. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
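The "optional bonus" above describes factoring out a reflection utility that verifies a Filter-style subclass overrides every overridable method declared on its base class. A minimal self-contained sketch of that idea (the toy `Base`/`FilterBase` classes and the method names here are illustrative stand-ins, not the Lucene classes):

```java
import java.lang.reflect.Method;
import java.lang.reflect.Modifier;
import java.util.HashSet;
import java.util.Set;

public class OverrideCheck {

    // Toy stand-ins for a base class and a delegating "Filter" subclass.
    static class Base {
        public String name() { return "base"; }
        public int size() { return 0; }
        public final String info() { return "fixed"; } // final: no override expected
    }

    static class FilterBase extends Base {
        @Override public String name() { return "filter"; }
        @Override public int size() { return 1; }
    }

    /**
     * Returns the names of non-final, non-private, non-static, non-abstract
     * methods declared on superClass that subClass does not itself declare
     * (minus any explicitly excused method names).
     */
    static Set<String> missingOverrides(Class<?> superClass, Class<?> subClass,
                                        Set<String> excused) {
        Set<String> missing = new HashSet<>();
        for (Method m : superClass.getDeclaredMethods()) {
            int mod = m.getModifiers();
            if (Modifier.isFinal(mod) || Modifier.isPrivate(mod)
                || Modifier.isStatic(mod) || Modifier.isAbstract(mod)) {
                continue; // these either cannot or need not be overridden
            }
            if (excused.contains(m.getName())) continue;
            try {
                subClass.getDeclaredMethod(m.getName(), m.getParameterTypes());
            } catch (NoSuchMethodException e) {
                missing.add(m.getName());
            }
        }
        return missing;
    }

    public static void main(String[] args) {
        Set<String> missing =
            missingOverrides(Base.class, FilterBase.class, new HashSet<>());
        if (!missing.isEmpty()) {
            throw new AssertionError("subclass forgot to override: " + missing);
        }
        System.out.println("all overridable methods are overridden");
    }
}
```

This is the same shape as the `implTestDeclaredMethodsOverridden(superClass, subClass, excusedMethods)` logic the issue says is duplicated across TestFilterWeight, TestFilterCodecReader and TestMergePolicyWrapper.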
[jira] [Created] (LUCENE-7683) FilterScorer to override more super-class methods
Christine Poerschke created LUCENE-7683: --- Summary: FilterScorer to override more super-class methods Key: LUCENE-7683 URL: https://issues.apache.org/jira/browse/LUCENE-7683 Project: Lucene - Core Issue Type: Bug Reporter: Christine Poerschke Priority: Minor [Scorer.java|https://github.com/apache/lucene-solr/blob/master/lucene/core/src/java/org/apache/lucene/search/Scorer.java] has non-abstract non-final non-private non-static methods (getChildren, getWeight, twoPhaseIterator) which the [FilterScorer.java|https://github.com/apache/lucene-solr/blob/master/lucene/core/src/java/org/apache/lucene/search/FilterScorer.java] class does not override. Proposed changes: * Option 1: Add the missing methods. * Option 2: Make the missing methods {{final}} in the non-Filter base class. * Either way, add {{TestFilterScorer.java}} class similar to [TestFilterWeight.java|https://github.com/apache/lucene-solr/blob/master/lucene/core/src/test/org/apache/lucene/search/TestFilterWeight.java] class. Optional bonus (as a separate patch?): * TestFilterWeight, TestFilterCodecReader, TestMergePolicyWrapper and possibly other tests all have {{implTestDeclaredMethodsOverridden(superClass, subClass, excusedMethods)}} logic and some sort of [lucene/test-framework util|https://github.com/apache/lucene-solr/tree/master/lucene/test-framework/src/java/org/apache/lucene/util] FilterTestUtils.java class with a static implTestDeclaredMethodsOverridden method could perhaps be factored out. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
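Option 1 in the proposal above amounts to plain delegation: the filter forwards each non-final accessor to the wrapped instance instead of inheriting the base class's defaults. A simplified stand-alone sketch (Lucene's FilterScorer holds its delegate in a field named `in`, but the classes below are toy stand-ins, not the real API):

```java
public class FilterDelegation {

    // Toy "Scorer"-like base class with overridable accessors.
    static class Scorer {
        private final String weight;
        Scorer(String weight) { this.weight = weight; }
        public String getWeight() { return weight; }
        public int docID() { return -1; }
    }

    // A filter that forwards every overridable accessor to the wrapped
    // instance; without these overrides, callers would see the filter's
    // own (wrong) state instead of the delegate's.
    static class FilterScorerSketch extends Scorer {
        private final Scorer in;
        FilterScorerSketch(Scorer in) { super("unused"); this.in = in; }
        @Override public String getWeight() { return in.getWeight(); }
        @Override public int docID() { return in.docID(); }
    }

    public static void main(String[] args) {
        Scorer filter = new FilterScorerSketch(new Scorer("w1"));
        if (!filter.getWeight().equals("w1")) {
            throw new AssertionError("filter did not delegate getWeight()");
        }
        System.out.println("delegation works");
    }
}
```

Option 2 (making the methods `final` in the base class) removes the need for such forwarding at the cost of forbidding all subclasses, not just filters, from overriding them.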
[jira] [Comment Edited] (SOLR-8593) Integrate Apache Calcite into the SQLHandler
[ https://issues.apache.org/jira/browse/SOLR-8593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15858327#comment-15858327 ] Joel Bernstein edited comment on SOLR-8593 at 2/8/17 6:16 PM: -- It was a bit of an odyssey but I was able to push down the HAVING clause. I pushed the commits out with my latest work to: https://github.com/apache/lucene-solr/tree/jira/solr-8593 was (Author: joel.bernstein): It was a bit of an odyssey but the I was able to push down the HAVING clause. I pushed the commits out with my latest work to: https://github.com/apache/lucene-solr/tree/jira/solr-8593 > Integrate Apache Calcite into the SQLHandler > > > Key: SOLR-8593 > URL: https://issues.apache.org/jira/browse/SOLR-8593 > Project: Solr > Issue Type: Improvement > Components: Parallel SQL >Reporter: Joel Bernstein >Assignee: Joel Bernstein > Fix For: 6.5, master (7.0) > > Attachments: SOLR-8593.patch, SOLR-8593.patch, SOLR-8593.patch > > >The Presto SQL Parser was perfect for phase one of the SQLHandler. It was > nicely split off from the larger Presto project and it did everything that > was needed for the initial implementation. > Phase two of the SQL work though will require an optimizer. Here is where > Apache Calcite comes into play. It has a battle tested cost based optimizer > and has been integrated into Apache Drill and Hive. > This work can begin in trunk following the 6.0 release. The final query plans > will continue to be translated to Streaming API objects (TupleStreams), so > continued work on the JDBC driver should plug in nicely with the Calcite work. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Comment Edited] (SOLR-8593) Integrate Apache Calcite into the SQLHandler
[ https://issues.apache.org/jira/browse/SOLR-8593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15858327#comment-15858327 ] Joel Bernstein edited comment on SOLR-8593 at 2/8/17 6:10 PM: -- It was a bit of an odyssey but the I was able to push down the HAVING clause. I pushed the commits out with my latest work to: https://github.com/apache/lucene-solr/tree/jira/solr-8593 was (Author: joel.bernstein): It was a bit of an odyssey but the I was able to push down the HAVING clause. I pushed the commits out with my latest work: https://github.com/apache/lucene-solr/tree/jira/solr-8593 > Integrate Apache Calcite into the SQLHandler > > > Key: SOLR-8593 > URL: https://issues.apache.org/jira/browse/SOLR-8593 > Project: Solr > Issue Type: Improvement > Components: Parallel SQL >Reporter: Joel Bernstein >Assignee: Joel Bernstein > Fix For: 6.5, master (7.0) > > Attachments: SOLR-8593.patch, SOLR-8593.patch, SOLR-8593.patch > > >The Presto SQL Parser was perfect for phase one of the SQLHandler. It was > nicely split off from the larger Presto project and it did everything that > was needed for the initial implementation. > Phase two of the SQL work though will require an optimizer. Here is where > Apache Calcite comes into play. It has a battle tested cost based optimizer > and has been integrated into Apache Drill and Hive. > This work can begin in trunk following the 6.0 release. The final query plans > will continue to be translated to Streaming API objects (TupleStreams), so > continued work on the JDBC driver should plug in nicely with the Calcite work. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Comment Edited] (SOLR-8593) Integrate Apache Calcite into the SQLHandler
[ https://issues.apache.org/jira/browse/SOLR-8593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15858327#comment-15858327 ] Joel Bernstein edited comment on SOLR-8593 at 2/8/17 6:08 PM: -- It was a bit of an odyssey but the I was able to push down the HAVING clause. I pushed the commits out with my latest work: https://github.com/apache/lucene-solr/tree/jira/solr-8593 was (Author: joel.bernstein): It was a bit of an odyssey but the I was able to push down the HAVING clause. I pushed the commits out to with my latest work: https://github.com/apache/lucene-solr/tree/jira/solr-8593 > Integrate Apache Calcite into the SQLHandler > > > Key: SOLR-8593 > URL: https://issues.apache.org/jira/browse/SOLR-8593 > Project: Solr > Issue Type: Improvement > Components: Parallel SQL >Reporter: Joel Bernstein >Assignee: Joel Bernstein > Fix For: 6.5, master (7.0) > > Attachments: SOLR-8593.patch, SOLR-8593.patch, SOLR-8593.patch > > >The Presto SQL Parser was perfect for phase one of the SQLHandler. It was > nicely split off from the larger Presto project and it did everything that > was needed for the initial implementation. > Phase two of the SQL work though will require an optimizer. Here is where > Apache Calcite comes into play. It has a battle tested cost based optimizer > and has been integrated into Apache Drill and Hive. > This work can begin in trunk following the 6.0 release. The final query plans > will continue to be translated to Streaming API objects (TupleStreams), so > continued work on the JDBC driver should plug in nicely with the Calcite work. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-8593) Integrate Apache Calcite into the SQLHandler
[ https://issues.apache.org/jira/browse/SOLR-8593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15858327#comment-15858327 ] Joel Bernstein commented on SOLR-8593: -- It was a bit of an odyssey but I was able to push down the HAVING clause. I pushed the commits out with my latest work to: https://github.com/apache/lucene-solr/tree/jira/solr-8593 > Integrate Apache Calcite into the SQLHandler > > > Key: SOLR-8593 > URL: https://issues.apache.org/jira/browse/SOLR-8593 > Project: Solr > Issue Type: Improvement > Components: Parallel SQL >Reporter: Joel Bernstein >Assignee: Joel Bernstein > Fix For: 6.5, master (7.0) > > Attachments: SOLR-8593.patch, SOLR-8593.patch, SOLR-8593.patch > > >The Presto SQL Parser was perfect for phase one of the SQLHandler. It was > nicely split off from the larger Presto project and it did everything that > was needed for the initial implementation. > Phase two of the SQL work though will require an optimizer. Here is where > Apache Calcite comes into play. It has a battle tested cost based optimizer > and has been integrated into Apache Drill and Hive. > This work can begin in trunk following the 6.0 release. The final query plans > will continue to be translated to Streaming API objects (TupleStreams), so > continued work on the JDBC driver should plug in nicely with the Calcite work.
[jira] [Commented] (SOLR-10083) Fix instanceof check in ConstDoubleSource.equals
[ https://issues.apache.org/jira/browse/SOLR-10083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15858322#comment-15858322 ] ASF subversion and git services commented on SOLR-10083: Commit c20853bf098a7daaf243997241b633f9997950c5 in lucene-solr's branch refs/heads/master from [~cpoerschke] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=c20853b ] SOLR-10083: Fix instanceof check in ConstDoubleSource.equals (Pushkar Raste via Christine Poerschke) > Fix instanceof check in ConstDoubleSource.equals > > > Key: SOLR-10083 > URL: https://issues.apache.org/jira/browse/SOLR-10083 > Project: Solr > Issue Type: Sub-task > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Christine Poerschke >Assignee: Christine Poerschke >Priority: Minor > Attachments: SOLR-10083.patch > > > Splitting this out from the parent task for potential inclusion in 6.4.1 > (though it looks like it might have just missed the train, sorry).
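The patch itself is not reproduced in this digest, but the class of bug named in the summary — an `equals` whose `instanceof` test names the wrong class, which often arises from copy-pasting a sibling class — and the corrected pattern look like this generic sketch (class and field names are illustrative, not Solr's):

```java
public class EqualsCheck {

    // Generic value-source-like class holding a single double constant.
    static class ConstSource {
        final double constant;
        ConstSource(double c) { this.constant = c; }

        @Override public boolean equals(Object o) {
            // The instanceof test must name THIS class, not a sibling it was
            // copied from; otherwise unrelated types can compare equal, or
            // the cast below throws ClassCastException.
            if (!(o instanceof ConstSource)) return false;
            ConstSource other = (ConstSource) o;
            return this.constant == other.constant;
        }

        // equals and hashCode must change together to keep the contract.
        @Override public int hashCode() { return Double.hashCode(constant); }
    }

    public static void main(String[] args) {
        if (!new ConstSource(1.5).equals(new ConstSource(1.5))
            || new ConstSource(1.5).equals(new ConstSource(2.5))
            || new ConstSource(1.5).equals("not a source")) {
            throw new AssertionError("equals contract violated");
        }
        System.out.println("equals contract holds");
    }
}
```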
[jira] [Commented] (LUCENE-7676) FilterCodecReader to override more super-class methods
[ https://issues.apache.org/jira/browse/LUCENE-7676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15858323#comment-15858323 ] ASF subversion and git services commented on LUCENE-7676: - Commit ae68e6cebc363e51cd68aee83830b5cb427b4799 in lucene-solr's branch refs/heads/master from [~cpoerschke] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=ae68e6c ] LUCENE-7676: Fixed FilterCodecReader to override more super-class methods. Also added TestFilterCodecReader class. > FilterCodecReader to override more super-class methods > -- > > Key: LUCENE-7676 > URL: https://issues.apache.org/jira/browse/LUCENE-7676 > Project: Lucene - Core > Issue Type: Bug >Reporter: Christine Poerschke >Assignee: Christine Poerschke >Priority: Minor > Attachments: LUCENE-7676.patch > >
[jira] [Commented] (SOLR-10087) StreamHandler should be able to use runtimeLib jars
[ https://issues.apache.org/jira/browse/SOLR-10087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15858310#comment-15858310 ] Noble Paul commented on SOLR-10087: --- [~risdenk] The {{JDBCStream}} class loads drivers from outside. It would be nice if you could load them from runtimeLib jars as well > StreamHandler should be able to use runtimeLib jars > --- > > Key: SOLR-10087 > URL: https://issues.apache.org/jira/browse/SOLR-10087 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Kevin Risden >Assignee: Kevin Risden >Priority: Minor > Fix For: 6.5, master (7.0) > > Attachments: SOLR-10087.patch > > > StreamHandler currently can't use jars added via the runtimeLib and Blob > Store API. This is because the StreamHandler uses core.getResourceLoader() > instead of core.getMemClassLoader() for loading classes. > An example of this working with the fix is here: > https://github.com/risdenk/solr_custom_streaming_expressions > Steps: > {code} > # Inspired by > https://cwiki.apache.org/confluence/display/solr/Adding+Custom+Plugins+in+SolrCloud+Mode > # Start Solr with Blob Store enabled > ./bin/solr start -c -f -a "-Denable.runtime.lib=true" > # Create test collection > ./bin/solr create -c test > # Create .system collection > curl 'http://localhost:8983/solr/admin/collections?action=CREATE=.system' > # Build custom streaming expression jar > (cd custom-streaming-expression && mvn clean package) > # Upload jar to .system collection using Blob Store API > (https://cwiki.apache.org/confluence/display/solr/Blob+Store+API) > curl -X POST -H 'Content-Type: application/octet-stream' --data-binary > @custom-streaming-expression/target/custom-streaming-expression-1.0-SNAPSHOT.jar > 'http://localhost:8983/solr/.system/blob/test' > # List all blobs that are stored > curl 'http://localhost:8983/solr/.system/blob?omitHeader=true' > # Add the jar to the runtime lib > curl
'http://localhost:8983/solr/test/config' -H > 'Content-type:application/json' -d '{ >"add-runtimelib": { "name":"test", "version":1 } > }' > # Create custom streaming expression using work from SOLR-9103 > # Patch from SOLR-10087 is required for StreamHandler to load the runtimeLib > jar > curl 'http://localhost:8983/solr/test/config' -H > 'Content-type:application/json' -d '{ > "create-expressible": { > "name": "customstreamingexpression", > "class": "com.test.solr.CustomStreamingExpression", > "runtimeLib": true > } > }' > # Test the custom streaming expression > curl 'http://localhost:8983/solr/test/stream?expr=customstreamingexpression()' > {code} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
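The fix described in the issue swaps `core.getResourceLoader()` for `core.getMemClassLoader()`; those are Solr APIs, but the underlying mechanism is simply resolving classes against an explicitly supplied ClassLoader instead of a hard-coded one. A generic sketch (not Solr code):

```java
public class LoaderSketch {

    // Resolve a class through an explicitly supplied ClassLoader rather than
    // a fixed one. A handler that hard-codes one loader cannot see classes
    // that only a different loader (e.g. a runtime-lib loader) knows about.
    static Class<?> resolve(String className, ClassLoader loader) {
        try {
            return Class.forName(className, true, loader);
        } catch (ClassNotFoundException e) {
            throw new IllegalArgumentException(
                "class not visible to this loader: " + className, e);
        }
    }

    public static void main(String[] args) {
        Class<?> c = resolve("java.util.ArrayList",
                             ClassLoader.getSystemClassLoader());
        System.out.println(c.getName()); // prints "java.util.ArrayList"
    }
}
```

In Solr's case the handler just needs to ask for the loader that also knows about blob-store jars; the resolution call itself is unchanged.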
[JENKINS] Lucene-Solr-6.x-Windows (32bit/jdk1.8.0_121) - Build # 716 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Windows/716/ Java: 32bit/jdk1.8.0_121 -client -XX:+UseG1GC 1 tests failed. FAILED: org.apache.solr.handler.admin.MBeansHandlerTest.testDiff Error Message: expected: but was: Stack Trace: org.junit.ComparisonFailure: expected: but was: at __randomizedtesting.SeedInfo.seed([92E3B00F43178456:57F5749453A1BC36]:0) at org.junit.Assert.assertEquals(Assert.java:125) at org.junit.Assert.assertEquals(Assert.java:147) at org.apache.solr.handler.admin.MBeansHandlerTest.testDiff(MBeansHandlerTest.java:63) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.lang.Thread.run(Thread.java:745) Build Log: [...truncated 11426 lines...] [junit4] Suite: org.apache.solr.handler.admin.MBeansHandlerTest [junit4] 2> Creating dataDir: C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\build\solr-core\test\J1\temp\solr.handler.admin.MBeansHandlerTest_92E3B00F43178456-001\init-core-data-001 [junit4] 2>
[JENKINS-EA] Lucene-Solr-6.x-Linux (64bit/jdk-9-ea+155) - Build # 2818 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/2818/ Java: 64bit/jdk-9-ea+155 -XX:-UseCompressedOops -XX:+UseParallelGC 4 tests failed. FAILED: org.apache.solr.handler.TestReqParamsAPI.test Error Message: Could not get expected value 'A val' for path 'params/a' full output: { "responseHeader":{ "status":0, "QTime":0}, "params":{ "wt":"json", "useParams":""}, "context":{ "webapp":"/solr", "path":"/dump0", "httpMethod":"GET"}}, from server: http://127.0.0.1:43116/solr/collection1_shard1_replica2 Stack Trace: java.lang.AssertionError: Could not get expected value 'A val' for path 'params/a' full output: { "responseHeader":{ "status":0, "QTime":0}, "params":{ "wt":"json", "useParams":""}, "context":{ "webapp":"/solr", "path":"/dump0", "httpMethod":"GET"}}, from server: http://127.0.0.1:43116/solr/collection1_shard1_replica2 at __randomizedtesting.SeedInfo.seed([BFC57DF9E83B5F76:3791422346C7328E]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.apache.solr.core.TestSolrConfigHandler.testForResponseElement(TestSolrConfigHandler.java:556) at org.apache.solr.handler.TestReqParamsAPI.testReqParams(TestReqParamsAPI.java:127) at org.apache.solr.handler.TestReqParamsAPI.test(TestReqParamsAPI.java:69) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:543) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957) at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at
[jira] [Commented] (SOLR-10098) HdfsThreadLeakTest and HdfsRecoverLeaseTest can leak threads
[ https://issues.apache.org/jira/browse/SOLR-10098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15858236#comment-15858236 ] Mark Miller commented on SOLR-10098: Some info about possible workarounds: https://github.com/netty/netty/issues/3419 > HdfsThreadLeakTest and HdfsRecoverLeaseTest can leak threads > > > Key: SOLR-10098 > URL: https://issues.apache.org/jira/browse/SOLR-10098 > Project: Solr > Issue Type: Test > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Mark Miller >Assignee: Mark Miller > Attachments: stdout > >
[jira] [Commented] (SOLR-9952) S3BackupRepository
[ https://issues.apache.org/jira/browse/SOLR-9952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15858169#comment-15858169 ] Hrishikesh Gadre commented on SOLR-9952: [~alexey_sup...@epam.com] I reviewed your patch and it looks good. Just one minor suggestion - instead of testing the core-level backup/restore, we should test Solr cloud backup and restore (since it will test the end-to-end scenario, including core-level backup). You may want to take a look at https://github.com/apache/lucene-solr/blob/0e0821fdc17052fa2b53ac7d3dd3038270d5ca64/solr/core/src/test/org/apache/solr/cloud/AbstractCloudBackupRestoreTestCase.java BTW have you tried running precommit for your patch? AFAIK for each new dependency you need to add LICENSE, NOTICE and sha files for precommit to succeed. Also we should check the license compatibility of each of these dependencies. > S3BackupRepository > -- > > Key: SOLR-9952 > URL: https://issues.apache.org/jira/browse/SOLR-9952 > Project: Solr > Issue Type: New Feature > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Mikhail Khludnev > Attachments: > 0001-SOLR-9952-Added-dependencies-for-hadoop-amazon-integ.patch, > 0002-SOLR-9952-Added-integration-test-for-checking-backup.patch, > core-site.xml.template, Running Solr on S3.pdf > > > I'd like to have a backup repository implementation that allows snapshotting to AWS > S3
[JENKINS] Lucene-Solr-5.5-Linux (64bit/jdk1.7.0_80) - Build # 442 - Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-5.5-Linux/442/ Java: 64bit/jdk1.7.0_80 -XX:+UseCompressedOops -XX:+UseSerialGC 2 tests failed. FAILED: org.apache.solr.handler.TestReplicationHandler.doTestStressReplication Error Message: timed out waiting for collection1 startAt time to exceed: Wed Feb 08 16:26:22 CET 2017 Stack Trace: java.lang.AssertionError: timed out waiting for collection1 startAt time to exceed: Wed Feb 08 16:26:22 CET 2017 at __randomizedtesting.SeedInfo.seed([8E047F608AE06072:55AF7FA68FC809C1]:0) at org.junit.Assert.fail(Assert.java:93) at org.apache.solr.handler.TestReplicationHandler.watchCoreStartAt(TestReplicationHandler.java:1518) at org.apache.solr.handler.TestReplicationHandler.doTestStressReplication(TestReplicationHandler.java:854) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at java.lang.Thread.run(Thread.java:745) FAILED: org.apache.solr.schema.TestManagedSchemaAPI.test Error Message: Error from server at
[jira] [Commented] (LUCENE-7680) Never cache term filters
[ https://issues.apache.org/jira/browse/LUCENE-7680?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15858153#comment-15858153 ] David Smiley commented on LUCENE-7680: -- +1 Nice documentation too. > Never cache term filters > > > Key: LUCENE-7680 > URL: https://issues.apache.org/jira/browse/LUCENE-7680 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Adrien Grand >Priority: Minor > Attachments: LUCENE-7680.patch, LUCENE-7680.patch > > > Currently we just require term filters to be used a lot in order to cache > them. Maybe instead we should look into never caching them. This should not > hurt performance since term filters are plenty fast, and would make other > filters more likely to be cached since we would not "pollute" the history > with filters that are not worth caching. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-10098) HdfsThreadLeakTest and HdfsRecoverLeaseTest can leak threads
[ https://issues.apache.org/jira/browse/SOLR-10098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15858148#comment-15858148 ] Mark Miller commented on SOLR-10098: I think this is because so many tests are running, some of the hdfs test stuff fires up Netty each time, and there seems to be some entropy exhaustion due to its secure random use. {noformat} [junit4]> Throwable #1: com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE scope at org.apache.solr.cloud.hdfs.HdfsThreadLeakTest: [junit4]> 1) Thread[id=112, name=initialSeedUniquifierGenerator, state=RUNNABLE, group=TGRP-HdfsThreadLeakTest] [junit4]> at java.io.FileInputStream.readBytes(Native Method) [junit4]> at java.io.FileInputStream.read(FileInputStream.java:255) [junit4]> at sun.security.provider.NativePRNG$RandomIO.readFully(NativePRNG.java:424) [junit4]> at sun.security.provider.NativePRNG$RandomIO.implGenerateSeed(NativePRNG.java:441) [junit4]> at sun.security.provider.NativePRNG$RandomIO.access$500(NativePRNG.java:331) [junit4]> at sun.security.provider.NativePRNG.engineGenerateSeed(NativePRNG.java:226) [junit4]> at java.security.SecureRandom.generateSeed(SecureRandom.java:533) [junit4]> at io.netty.util.internal.ThreadLocalRandom$1.run(ThreadLocalRandom.java:91) {noformat} > HdfsThreadLeakTest and HdfsRecoverLeaseTest can leak threads > > > Key: SOLR-10098 > URL: https://issues.apache.org/jira/browse/SOLR-10098 > Project: Solr > Issue Type: Test > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Mark Miller >Assignee: Mark Miller > Attachments: stdout > >
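The entropy exhaustion Mark describes (Netty's initialSeedUniquifierGenerator thread blocking in SecureRandom.generateSeed) is commonly worked around on Linux test hosts by pointing the JVM at the non-blocking urandom device. This is a hedged sketch of that conventional workaround, not something the issue itself prescribes:

```shell
# Common workaround for SecureRandom entropy exhaustion in test JVMs:
# the egd override makes NativePRNG seed from the non-blocking urandom
# pool instead of /dev/random. The "/dev/./urandom" spelling is the
# conventional form used so that JDKs which special-case the plain
# "file:/dev/urandom" URL still honor the override.
JAVA_OPTS="-Djava.security.egd=file:/dev/./urandom"
echo "$JAVA_OPTS"
```

The flag would typically be appended to the test runner's JVM arguments rather than exported globally.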
[JENKINS] Lucene-Solr-SmokeRelease-5.5 - Build # 8 - Still Failing
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-5.5/8/ No tests ran. Build Log: [...truncated 39721 lines...] prepare-release-no-sign: [mkdir] Created dir: /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.5/lucene/build/smokeTestRelease/dist [copy] Copying 461 files to /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.5/lucene/build/smokeTestRelease/dist/lucene [copy] Copying 245 files to /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.5/lucene/build/smokeTestRelease/dist/solr [smoker] Java 1.7 JAVA_HOME=/home/jenkins/tools/java/latest1.7 [smoker] Java 1.8 JAVA_HOME=/home/jenkins/tools/java/latest1.8 [smoker] NOTE: output encoding is UTF-8 [smoker] [smoker] Load release URL "file:/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.5/lucene/build/smokeTestRelease/dist/"... [smoker] [smoker] Test Lucene... [smoker] test basics... [smoker] get KEYS [smoker] 0.2 MB in 0.01 sec (35.2 MB/sec) [smoker] check changes HTML... [smoker] download lucene-5.5.4-src.tgz... [smoker] 28.8 MB in 0.02 sec (1162.8 MB/sec) [smoker] verify md5/sha1 digests [smoker] download lucene-5.5.4.tgz... [smoker] 63.3 MB in 0.05 sec (1171.6 MB/sec) [smoker] verify md5/sha1 digests [smoker] download lucene-5.5.4.zip... [smoker] 73.2 MB in 0.06 sec (1152.0 MB/sec) [smoker] verify md5/sha1 digests [smoker] unpack lucene-5.5.4.tgz... [smoker] verify JAR metadata/identity/no javax.* or java.* classes... [smoker] test demo with 1.7... [smoker] got 6190 hits for query "lucene" [smoker] checkindex with 1.7... [smoker] test demo with 1.8... [smoker] got 6190 hits for query "lucene" [smoker] checkindex with 1.8... [smoker] check Lucene's javadoc JAR [smoker] unpack lucene-5.5.4.zip... [smoker] verify JAR metadata/identity/no javax.* or java.* classes... [smoker] test demo with 1.7... [smoker] got 6190 hits for query "lucene" [smoker] checkindex with 1.7... [smoker] test demo with 1.8... 
[smoker] got 6190 hits for query "lucene" [smoker] checkindex with 1.8... [smoker] check Lucene's javadoc JAR [smoker] unpack lucene-5.5.4-src.tgz... [smoker] make sure no JARs/WARs in src dist... [smoker] run "ant validate" [smoker] run tests w/ Java 7 and testArgs='-Dtests.slow=false'... [smoker] test demo with 1.7... [smoker] got 221 hits for query "lucene" [smoker] checkindex with 1.7... [smoker] generate javadocs w/ Java 7... [smoker] [smoker] Crawl/parse... [smoker] [smoker] Verify... [smoker] run tests w/ Java 8 and testArgs='-Dtests.slow=false'... [smoker] test demo with 1.8... [smoker] got 221 hits for query "lucene" [smoker] checkindex with 1.8... [smoker] generate javadocs w/ Java 8... [smoker] [smoker] Crawl/parse... [smoker] [smoker] Verify... [smoker] confirm all releases have coverage in TestBackwardsCompatibility [smoker] find all past Lucene releases... [smoker] run TestBackwardsCompatibility.. [smoker] Backcompat testing not required for release 6.0.1 because it's not less than 5.5.4 [smoker] Backcompat testing not required for release 6.0.0 because it's not less than 5.5.4 [smoker] Backcompat testing not required for release 6.4.1 because it's not less than 5.5.4 [smoker] Backcompat testing not required for release 6.4.0 because it's not less than 5.5.4 [smoker] Backcompat testing not required for release 6.1.0 because it's not less than 5.5.4 [smoker] Backcompat testing not required for release 6.2.1 because it's not less than 5.5.4 [smoker] Backcompat testing not required for release 6.2.0 because it's not less than 5.5.4 [smoker] Backcompat testing not required for release 6.3.0 because it's not less than 5.5.4 [smoker] success! [smoker] [smoker] Test Solr... [smoker] test basics... [smoker] get KEYS [smoker] 0.2 MB in 0.01 sec (18.2 MB/sec) [smoker] check changes HTML... [smoker] download solr-5.5.4-src.tgz... [smoker] 37.7 MB in 0.04 sec (1007.0 MB/sec) [smoker] verify md5/sha1 digests [smoker] download solr-5.5.4.tgz... 
[smoker] 130.4 MB in 0.12 sec (1117.2 MB/sec) [smoker] verify md5/sha1 digests [smoker] download solr-5.5.4.zip... [smoker] 138.0 MB in 0.12 sec (1117.0 MB/sec) [smoker] verify md5/sha1 digests [smoker] unpack solr-5.5.4.tgz... [smoker] verify JAR metadata/identity/no javax.* or java.* classes... [smoker] unpack lucene-5.5.4.tgz... [smoker] **WARNING**: skipping check of
Re: How would you architect solr/lucene if you were starting from scratch for them to be 10X+ faster/efficient ?
Once you filter out the JIRA messages, the forum is very strong and alive. It is just very focused on its purpose - building Solr, Lucene, and Elasticsearch. As to "perfection" - nothing is perfect; you can just look at the list of open JIRAs for Lucene and/or Solr to confirm that. But there is constant improvement and an ever-deepening of the features and performance. You can also look at Elasticsearch for inspiration, as they build on Lucene (and are contributing to it) and had a chance to rebuild the layers above it.

On your question specifically, I think it is hard to answer it well. Partially because I am not sure your assumptions are all that thought out. For example:

1) Different language than Java - Solr relies on ZooKeeper, Tika, and other libraries. All of those are in Java. A language change implies a full change of the dependencies and ecosystem, and - without looking - I doubt there is a comprehensive open-source MS Word parser in C++/Rust.

2) Algolia radix? Lucene uses pre-compiled DFAs (deterministic finite automata). Are you sure the graph Algolia chose because it wants to run on the phone is an improvement on the DFA?

3) Document distribution is already customizable with the _route_ key, though obviously the Maguro algorithm is beyond a single key's reach. On the other hand, I am not sure Maguro is designed for good faceting, streaming, enumerations, or the other features Lucene/Solr has in its core.

As to the rest (GPU!, FPGA), we accept contributions. Including large, complex, interesting contributions (streams, learning to rank, docvalues, etc.). And, long term, it is probably more effective to innovate within the well-established framework than to reinvent things from scratch. After all, even Twitter and LinkedIn built their internal implementations on top of Lucene rather than reinventing absolutely everything. Still, Elasticsearch had a - very successful - go at the "Innovator's Dilemma" situation.
If you want to create a team trying to rebuild/improve the approaches completely from scratch, I am sure you will find a lot of us looking at your efforts with interest. I, for one, would be happy to point out a new radically-different approach to search engine implementation on my Solr Start mailing list. Regards and good luck, Alex. http://www.solr-start.com/ - Resources for Solr users, new and experienced On 8 February 2017 at 03:39, Dorian Hoxha wrote: > So, am I asking too much (maybe), is this forum dead (then where to ask ? > there is extreme noise here), is lucene perfect(of course not) ? > > > On Wed, Jan 25, 2017 at 5:01 PM, Dorian Hoxha > wrote: >> >> Was thinking also how bing doesn't use posting lists and also compiling >> queries ! >> About the queries, I would've think it wouldn't be as high overhead as >> queries in in rdbms since those apply on each row while on search they apply >> on each bitset. >> >> >> On Mon, Jan 23, 2017 at 6:04 PM, Jeff Wartes >> wrote: >>> >>> >>> >>> I’ve had some curiosity about this question too. >>> >>> >>> >>> For a while, I watched for a seastar-like library for the JVM, but >>> https://github.com/bestwpw/windmill was the only one I came across, and it >>> doesn’t seem to be going anywhere. Since one of the points of the JVM is to >>> abstract away the platform, I certainly wonder if the JVM will ever get the >>> kinds of machine affinity these other projects see. Your one-shard-per-core >>> could probably be faked with multiple JVMs and numactl - could be an >>> interesting experiment. >>> >>> >>> >>> That said, I’m aware that a phenomenal amount of optimization effort has >>> gone into Lucene, and I’d also be interested in hearing about things that >>> worked well. 
>>> >>> >>> >>> >>> >>> From: Dorian Hoxha >>> Reply-To: "dev@lucene.apache.org" >>> Date: Friday, January 20, 2017 at 8:12 AM >>> To: "dev@lucene.apache.org" >>> Subject: How would you architect solr/lucene if you were starting from >>> scratch for them to be 10X+ faster/efficient ? >>> >>> >>> >>> Hi friends, >>> >>> I was thinking how scylladb architecture works compared to cassandra >>> which gives them 10x+ performance and lower latency. If you were starting >>> lucene and solr from scratch what would you do to achieve something similar >>> ? >>> >>> Different language (rust/c++?) for better SIMD ? >>> >>> Use a GPU with a SSD for posting-list intersection ?(not out yet) >>> >>> Make it in-memory and use better data structures? >>> >>> Shard on cores like scylladb (so 1 shard for each core on the machine) ? >>> >>> External cache (like keeping n redis-servers with big ram/network & slow >>> cpu/disk just for cache) ?? >>> >>> Use better data structures (like algolia autocomplete radix ) >>> >>> Distributing documents by term instead of id ? >>> >>> Using ASIC / FPGA ? >>> >>> >>> >>> Regards, >>>
[jira] [Updated] (LUCENE-7680) Never cache term filters
[ https://issues.apache.org/jira/browse/LUCENE-7680?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adrien Grand updated LUCENE-7680: - Attachment: LUCENE-7680.patch Here is an updated patch. David, does it work better for you? > Never cache term filters > > > Key: LUCENE-7680 > URL: https://issues.apache.org/jira/browse/LUCENE-7680 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Adrien Grand >Priority: Minor > Attachments: LUCENE-7680.patch, LUCENE-7680.patch > > > Currently we just require term filters to be used a lot in order to cache > them. Maybe instead we should look into never caching them. This should not > hurt performance since term filters are plenty fast, and would make other > filters more likely to be cached since we would not "pollute" the history > with filters that are not worth caching. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
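The idea behind LUCENE-7680 can be illustrated with a small self-contained sketch. Note this is not Lucene's actual QueryCachingPolicy API; the class and method names below are hypothetical, and the point is only the decision logic: keep tracking usage as before, but short-circuit cheap term filters so they are never cached and never crowd out filters that are worth caching.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of a usage-tracking cache policy (names are
// illustrative, not Lucene's). Expensive filters are cached once they
// have been used often enough; single-term filters are never cached,
// since the term lookup itself is already fast.
public class NeverCacheTermFilters {
    private final Map<String, Integer> useCounts = new HashMap<>();
    private final int minUses;

    public NeverCacheTermFilters(int minUses) {
        this.minUses = minUses;
    }

    // Record one use of a filter, identified here by its string form.
    public void onUse(String filter) {
        useCounts.merge(filter, 1, Integer::sum);
    }

    // Cache only filters that are not term filters and are reused often.
    public boolean shouldCache(String filter, boolean isTermFilter) {
        if (isTermFilter) {
            return false; // plenty fast already; caching buys little
        }
        return useCounts.getOrDefault(filter, 0) >= minUses;
    }
}
```

With a policy like this, the usage history only ever contains candidates that could plausibly be cached, which is exactly the "pollution" argument made in the issue description.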
FINAL REMINDER: CFP for ApacheCon closes February 11th
Dear Apache Enthusiast, This is your FINAL reminder that the Call for Papers (CFP) for ApacheCon Miami is closing this weekend - February 11th. This is your final opportunity to submit a talk for consideration at this event. This year, we are running several mini conferences in conjunction with the main event, so if you're submitting for one of those events, please pay attention to the instructions below. Apache: Big Data * Event information: http://events.linuxfoundation.org/events/apache-big-data-north-america * CFP: http://events.linuxfoundation.org/events/apache-big-data-north-america/program/cfp Apache: IoT (Internet of Things) * Event Information: http://us.apacheiot.org/ * CFP - http://events.linuxfoundation.org/events/apachecon-north-america/program/cfp (Indicate 'IoT' in the Target Audience field) CloudStack Collaboration Conference * Event information: http://us.cloudstackcollab.org/ * CFP - http://events.linuxfoundation.org/events/apachecon-north-america/program/cfp (Indicate 'CloudStack' in the Target Audience field) FlexJS Summit * Event information - http://us.apacheflexjs.org/ * CFP - http://events.linuxfoundation.org/events/apachecon-north-america/program/cfp (Indicate 'Flex' in the Target Audience field) TomcatCon * Event information - https://tomcat.apache.org/conference.html * CFP - http://events.linuxfoundation.org/events/apachecon-north-america/program/cfp (Indicate 'Tomcat' in the Target Audience field) All other topics and projects * Event information - http://events.linuxfoundation.org/events/apachecon-north-america/program/about * CFP - http://events.linuxfoundation.org/events/apachecon-north-america/program/cfp Admission to any of these events also grants you access to all of the others. Thanks, and we look forward to seeing you in Miami! 
-- Rich Bowen VP Conferences, Apache Software Foundation rbo...@apache.org Twitter: @apachecon (You are receiving this email because you are subscribed to a dev@ or users@ list of some Apache Software Foundation project. If you do not wish to receive email from these lists any more, you must follow that list's unsubscription procedure. View the headers of this message for unsubscription instructions.)
[jira] [Commented] (SOLR-8029) Modernize and standardize Solr APIs
[ https://issues.apache.org/jira/browse/SOLR-8029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15858019#comment-15858019 ] Noble Paul commented on SOLR-8029: -- sure [~romseygeek] . I'll fix it > Modernize and standardize Solr APIs > --- > > Key: SOLR-8029 > URL: https://issues.apache.org/jira/browse/SOLR-8029 > Project: Solr > Issue Type: Improvement >Affects Versions: 6.0 >Reporter: Noble Paul >Assignee: Noble Paul > Labels: API, EaseOfUse > Fix For: 6.0 > > Attachments: SOLR-8029.patch, SOLR-8029.patch, SOLR-8029.patch, > SOLR-8029.patch, SOLR-8029.patch > > > Solr APIs have organically evolved and they are sometimes inconsistent with > each other or not in sync with the widely followed conventions of HTTP > protocol. Trying to make incremental changes to make them modern is like > applying band-aid. So, we have done a complete rethink of what the APIs > should be. The most notable aspects of the API are as follows: > The new set of APIs will be placed under a new path {{/solr2}}. The legacy > APIs will continue to work under the {{/solr}} path as they used to and they > will be eventually deprecated. > There are 4 types of requests in the new API > * {{/v2//*}} : Hit a collection directly or manage > collections/shards/replicas > * {{/v2//*}} : Hit a core directly or manage cores > * {{/v2/cluster/*}} : Operations on cluster not pertaining to any collection > or core. e.g: security, overseer ops etc > This will be released as part of a major release. Check the link given below > for the full specification. Your comments are welcome > [Solr API version 2 Specification | http://bit.ly/1JYsBMQ] -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-6.x-Linux (64bit/jdk1.8.0_121) - Build # 2817 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/2817/ Java: 64bit/jdk1.8.0_121 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC 1 tests failed. FAILED: org.apache.solr.handler.admin.MBeansHandlerTest.testDiff Error Message: expected: but was: Stack Trace: org.junit.ComparisonFailure: expected: but was: at __randomizedtesting.SeedInfo.seed([BCCED55663FB19AE:79D811CD734D21CE]:0) at org.junit.Assert.assertEquals(Assert.java:125) at org.junit.Assert.assertEquals(Assert.java:147) at org.apache.solr.handler.admin.MBeansHandlerTest.testDiff(MBeansHandlerTest.java:63) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.lang.Thread.run(Thread.java:745) Build Log: [...truncated 12201 lines...] [junit4] Suite: org.apache.solr.handler.admin.MBeansHandlerTest [junit4] 2> Creating dataDir: /home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/build/solr-core/test/J1/temp/solr.handler.admin.MBeansHandlerTest_BCCED55663FB19AE-001/init-core-data-001
[JENKINS] Lucene-Solr-6.x-Solaris (64bit/jdk1.8.0) - Build # 655 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Solaris/655/ Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseG1GC 3 tests failed. FAILED: org.apache.solr.cloud.TestCloudRecovery.leaderRecoverFromLogOnStartupTest Error Message: expected:<0> but was:<5> Stack Trace: java.lang.AssertionError: expected:<0> but was:<5> at __randomizedtesting.SeedInfo.seed([84C64433359F72D9:F036A57671BB3556]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.failNotEquals(Assert.java:647) at org.junit.Assert.assertEquals(Assert.java:128) at org.junit.Assert.assertEquals(Assert.java:472) at org.junit.Assert.assertEquals(Assert.java:456) at org.apache.solr.cloud.TestCloudRecovery.leaderRecoverFromLogOnStartupTest(TestCloudRecovery.java:100) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.lang.Thread.run(Thread.java:745) FAILED: junit.framework.TestSuite.org.apache.solr.cloud.hdfs.HdfsRecoveryZkTest Error Message:
[jira] [Updated] (SOLR-9952) S3BackupRepository
[ https://issues.apache.org/jira/browse/SOLR-9952?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexey Suprun updated SOLR-9952: Attachment: core-site.xml.template > S3BackupRepository > -- > > Key: SOLR-9952 > URL: https://issues.apache.org/jira/browse/SOLR-9952 > Project: Solr > Issue Type: New Feature > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Mikhail Khludnev > Attachments: > 0001-SOLR-9952-Added-dependencies-for-hadoop-amazon-integ.patch, > 0002-SOLR-9952-Added-integration-test-for-checking-backup.patch, > core-site.xml.template, Running Solr on S3.pdf > > > I'd like to have a backup repository implementation allows to snapshot to AWS > S3 -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-9952) S3BackupRepository
[ https://issues.apache.org/jira/browse/SOLR-9952?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexey Suprun updated SOLR-9952: Attachment: 0002-SOLR-9952-Added-integration-test-for-checking-backup.patch 0001-SOLR-9952-Added-dependencies-for-hadoop-amazon-integ.patch To launch the test you should set these properties: solr.s3.bucket.name=s3n:/// (it is important that the value ends with a slash) and solr.s3.confdir=. I attached the core-site.xml template. > S3BackupRepository > -- > > Key: SOLR-9952 > URL: https://issues.apache.org/jira/browse/SOLR-9952 > Project: Solr > Issue Type: New Feature > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Mikhail Khludnev > Attachments: > 0001-SOLR-9952-Added-dependencies-for-hadoop-amazon-integ.patch, > 0002-SOLR-9952-Added-integration-test-for-checking-backup.patch, Running Solr > on S3.pdf > > > I'd like to have a backup repository implementation allows to snapshot to AWS > S3 -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
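The launch instructions above could be sketched as a command line. This is a hypothetical invocation: the test-class name and property values are placeholders (the originals are partly elided in the archive), and it assumes the build forwards these system properties to the test JVM, as `ant test -Dtestcase=...` does elsewhere in this thread.

```shell
# Hypothetical: <bucket> and <confdir> are placeholders, not values from the comment.
# Per the comment above, solr.s3.bucket.name must end with a slash.
ant test -Dtestcase=TestS3BackupRepositoryIntegration \
    -Dsolr.s3.bucket.name=s3n://<bucket>/ \
    -Dsolr.s3.confdir=<confdir>
```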
Re: [JENKINS] Lucene-Solr-NightlyTests-5.5 - Build # 6 - Still Failing
Thanks Mike! On Wed, Feb 8, 2017 at 13:03, Michael McCandless wrote: > I pushed a fix. > > Mike McCandless > > http://blog.mikemccandless.com > > > On Wed, Feb 8, 2017 at 6:44 AM, Apache Jenkins Server > wrote: > > Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.5/6/ > > > > No tests ran. > > > > Build Log: > > [...truncated 95 lines...] > > [javac] Compiling 739 source files to > /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.5/checkout/lucene/build/core/classes/java > > [javac] > /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.5/checkout/lucene/core/src/java/org/apache/lucene/codecs/MultiLevelSkipListReader.java:157: > error: cannot find symbol > > [javac] if (Integer.compareUnsigned(numSkipped[level], docCount) > > 0) { > > [javac]^ > > [javac] symbol: method compareUnsigned(int,int) > > [javac] location: class Integer > > [javac] Note: Some input files use or override a deprecated API. > > [javac] Note: Recompile with -Xlint:deprecation for details. 
> > [javac] 1 error > > > > BUILD FAILED > > > /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.5/checkout/build.xml:757: > The following error occurred while executing this line: > > > /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.5/checkout/build.xml:694: > The following error occurred while executing this line: > > > /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.5/checkout/build.xml:59: > The following error occurred while executing this line: > > > /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.5/checkout/lucene/build.xml:50: > The following error occurred while executing this line: > > > /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.5/checkout/lucene/common-build.xml:546: > The following error occurred while executing this line: > > > /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.5/checkout/lucene/common-build.xml:1989: > Compile failed; see the compiler error output for details. > > > > Total time: 7 seconds > > Build step 'Invoke Ant' marked build as failure > > Archiving artifacts > > No prior successful build to compare, so performing full copy of > artifacts > > Recording test results > > ERROR: Step ‘Publish JUnit test result report’ failed: No test report > files were found. Configuration error? > > Email was triggered for: Failure - Any > > Sending email for trigger: Failure - Any > > > > > > > > > > > > - > > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org > > For additional commands, e-mail: dev-h...@lucene.apache.org > > - > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org > For additional commands, e-mail: dev-h...@lucene.apache.org > >
[jira] [Updated] (SOLR-9952) S3BackupRepository
[ https://issues.apache.org/jira/browse/SOLR-9952?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexey Suprun updated SOLR-9952: Attachment: (was: SOLR-9952.patch) > S3BackupRepository > -- > > Key: SOLR-9952 > URL: https://issues.apache.org/jira/browse/SOLR-9952 > Project: Solr > Issue Type: New Feature > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Mikhail Khludnev > Attachments: Running Solr on S3.pdf > > > I'd like to have a backup repository implementation allows to snapshot to AWS > S3 -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
Re: [JENKINS] Lucene-Solr-NightlyTests-5.5 - Build # 6 - Still Failing
I pushed a fix. Mike McCandless http://blog.mikemccandless.com On Wed, Feb 8, 2017 at 6:44 AM, Apache Jenkins Serverwrote: > Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.5/6/ > > No tests ran. > > Build Log: > [...truncated 95 lines...] > [javac] Compiling 739 source files to > /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.5/checkout/lucene/build/core/classes/java > [javac] > /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.5/checkout/lucene/core/src/java/org/apache/lucene/codecs/MultiLevelSkipListReader.java:157: > error: cannot find symbol > [javac] if (Integer.compareUnsigned(numSkipped[level], docCount) > 0) > { > [javac]^ > [javac] symbol: method compareUnsigned(int,int) > [javac] location: class Integer > [javac] Note: Some input files use or override a deprecated API. > [javac] Note: Recompile with -Xlint:deprecation for details. > [javac] 1 error > > BUILD FAILED > /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.5/checkout/build.xml:757: > The following error occurred while executing this line: > /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.5/checkout/build.xml:694: > The following error occurred while executing this line: > /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.5/checkout/build.xml:59: > The following error occurred while executing this line: > /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.5/checkout/lucene/build.xml:50: > The following error occurred while executing this line: > /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.5/checkout/lucene/common-build.xml:546: > The following error occurred while executing this line: > /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.5/checkout/lucene/common-build.xml:1989: > Compile failed; see the compiler error output for details. 
> > Total time: 7 seconds > Build step 'Invoke Ant' marked build as failure > Archiving artifacts > No prior successful build to compare, so performing full copy of artifacts > Recording test results > ERROR: Step ‘Publish JUnit test result report’ failed: No test report files > were found. Configuration error? > Email was triggered for: Failure - Any > Sending email for trigger: Failure - Any > > > > > > - > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org > For additional commands, e-mail: dev-h...@lucene.apache.org - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-NightlyTests-5.5 - Build # 6 - Still Failing
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.5/6/ No tests ran. Build Log: [...truncated 95 lines...] [javac] Compiling 739 source files to /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.5/checkout/lucene/build/core/classes/java [javac] /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.5/checkout/lucene/core/src/java/org/apache/lucene/codecs/MultiLevelSkipListReader.java:157: error: cannot find symbol [javac] if (Integer.compareUnsigned(numSkipped[level], docCount) > 0) { [javac]^ [javac] symbol: method compareUnsigned(int,int) [javac] location: class Integer [javac] Note: Some input files use or override a deprecated API. [javac] Note: Recompile with -Xlint:deprecation for details. [javac] 1 error BUILD FAILED /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.5/checkout/build.xml:757: The following error occurred while executing this line: /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.5/checkout/build.xml:694: The following error occurred while executing this line: /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.5/checkout/build.xml:59: The following error occurred while executing this line: /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.5/checkout/lucene/build.xml:50: The following error occurred while executing this line: /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.5/checkout/lucene/common-build.xml:546: The following error occurred while executing this line: /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.5/checkout/lucene/common-build.xml:1989: Compile failed; see the compiler error output for details. Total time: 7 seconds Build step 'Invoke Ant' marked build as failure Archiving artifacts No prior successful build to compare, so performing full copy of artifacts Recording test results ERROR: Step ‘Publish JUnit test result report’ failed: No test report files were found. Configuration error? 
Email was triggered for: Failure - Any Sending email for trigger: Failure - Any - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
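The `cannot find symbol` failures above stem from `Integer.compareUnsigned`, which was only added in Java 8, while the 5.5 branch still compiles against Java 7. A Java 7-compatible equivalent (a sketch of the general technique, not necessarily the exact fix that was pushed) flips the sign bit of both operands so that a signed compare yields the unsigned ordering:

```java
public class UnsignedCompare {
    // Java 7-compatible stand-in for Java 8's Integer.compareUnsigned:
    // XOR-ing both operands with Integer.MIN_VALUE flips the sign bit,
    // mapping the unsigned ordering onto the signed one.
    static int compareUnsigned(int a, int b) {
        return Integer.compare(a ^ Integer.MIN_VALUE, b ^ Integer.MIN_VALUE);
    }

    public static void main(String[] args) {
        // -1 is 0xFFFFFFFF, the largest unsigned 32-bit value
        System.out.println(compareUnsigned(-1, 1) > 0); // true
        System.out.println(compareUnsigned(5, 7) < 0);  // true
    }
}
```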
[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 1119 - Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1119/ Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC 1 tests failed. FAILED: org.apache.solr.cloud.PeerSyncReplicationTest.test Error Message: timeout waiting to see all nodes active Stack Trace: java.lang.AssertionError: timeout waiting to see all nodes active at __randomizedtesting.SeedInfo.seed([1C1DFEE49C2FFB38:9449C13E32D396C0]:0) at org.junit.Assert.fail(Assert.java:93) at org.apache.solr.cloud.PeerSyncReplicationTest.waitTillNodesActive(PeerSyncReplicationTest.java:326) at org.apache.solr.cloud.PeerSyncReplicationTest.bringUpDeadNodeAndEnsureNoReplication(PeerSyncReplicationTest.java:277) at org.apache.solr.cloud.PeerSyncReplicationTest.forceNodeFailureAndDoPeerSync(PeerSyncReplicationTest.java:259) at org.apache.solr.cloud.PeerSyncReplicationTest.test(PeerSyncReplicationTest.java:138) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at
[JENKINS] Lucene-Solr-Tests-5.5 - Build # 12 - Still Failing
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.5/12/ No tests ran. Build Log: [...truncated 112 lines...] [javac] Compiling 739 source files to /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.5/lucene/build/core/classes/java [javac] /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.5/lucene/core/src/java/org/apache/lucene/codecs/MultiLevelSkipListReader.java:157: error: cannot find symbol [javac] if (Integer.compareUnsigned(numSkipped[level], docCount) > 0) { [javac]^ [javac] symbol: method compareUnsigned(int,int) [javac] location: class Integer [javac] Note: Some input files use or override a deprecated API. [javac] Note: Recompile with -Xlint:deprecation for details. [javac] 1 error BUILD FAILED /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.5/build.xml:750: The following error occurred while executing this line: /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.5/build.xml:694: The following error occurred while executing this line: /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.5/build.xml:59: The following error occurred while executing this line: /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.5/lucene/build.xml:50: The following error occurred while executing this line: /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.5/lucene/common-build.xml:546: The following error occurred while executing this line: /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.5/lucene/common-build.xml:1989: Compile failed; see the compiler error output for details. Total time: 8 seconds Build step 'Invoke Ant' marked build as failure Archiving artifacts Recording test results ERROR: Step ‘Publish JUnit test result report’ failed: No test report files were found. Configuration error? Email was triggered for: Failure - Any Sending email for trigger: Failure - Any - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-SmokeRelease-6.x - Build # 254 - Failure
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-6.x/254/ No tests ran. Build Log: [...truncated 41956 lines...] prepare-release-no-sign: [mkdir] Created dir: /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/lucene/build/smokeTestRelease/dist [copy] Copying 476 files to /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/lucene/build/smokeTestRelease/dist/lucene [copy] Copying 260 files to /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/lucene/build/smokeTestRelease/dist/solr [smoker] Java 1.8 JAVA_HOME=/home/jenkins/tools/java/latest1.8 [smoker] NOTE: output encoding is UTF-8 [smoker] [smoker] Load release URL "file:/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/lucene/build/smokeTestRelease/dist/"... [smoker] [smoker] Test Lucene... [smoker] test basics... [smoker] get KEYS [smoker] 0.2 MB in 0.01 sec (22.2 MB/sec) [smoker] check changes HTML... [smoker] download lucene-6.5.0-src.tgz... [smoker] 30.7 MB in 0.03 sec (926.1 MB/sec) [smoker] verify md5/sha1 digests [smoker] download lucene-6.5.0.tgz... [smoker] 65.2 MB in 0.07 sec (957.1 MB/sec) [smoker] verify md5/sha1 digests [smoker] download lucene-6.5.0.zip... [smoker] 75.6 MB in 0.07 sec (1010.4 MB/sec) [smoker] verify md5/sha1 digests [smoker] unpack lucene-6.5.0.tgz... [smoker] verify JAR metadata/identity/no javax.* or java.* classes... [smoker] test demo with 1.8... [smoker] got 6228 hits for query "lucene" [smoker] checkindex with 1.8... [smoker] check Lucene's javadoc JAR [smoker] unpack lucene-6.5.0.zip... [smoker] verify JAR metadata/identity/no javax.* or java.* classes... [smoker] test demo with 1.8... [smoker] got 6228 hits for query "lucene" [smoker] checkindex with 1.8... [smoker] check Lucene's javadoc JAR [smoker] unpack lucene-6.5.0-src.tgz... [smoker] make sure no JARs/WARs in src dist... [smoker] run "ant validate" [smoker] run tests w/ Java 8 and testArgs='-Dtests.slow=false'... [smoker] test demo with 1.8... 
[smoker] got 230 hits for query "lucene" [smoker] checkindex with 1.8... [smoker] generate javadocs w/ Java 8... [smoker] [smoker] Crawl/parse... [smoker] [smoker] Verify... [smoker] confirm all releases have coverage in TestBackwardsCompatibility [smoker] find all past Lucene releases... [smoker] run TestBackwardsCompatibility.. [smoker] success! [smoker] [smoker] Test Solr... [smoker] test basics... [smoker] get KEYS [smoker] 0.2 MB in 0.00 sec (247.0 MB/sec) [smoker] check changes HTML... [smoker] download solr-6.5.0-src.tgz... [smoker] 40.3 MB in 0.04 sec (1043.6 MB/sec) [smoker] verify md5/sha1 digests [smoker] download solr-6.5.0.tgz... [smoker] 140.8 MB in 0.13 sec (1050.0 MB/sec) [smoker] verify md5/sha1 digests [smoker] download solr-6.5.0.zip... [smoker] 149.9 MB in 0.14 sec (1088.4 MB/sec) [smoker] verify md5/sha1 digests [smoker] unpack solr-6.5.0.tgz... [smoker] verify JAR metadata/identity/no javax.* or java.* classes... [smoker] unpack lucene-6.5.0.tgz... [smoker] **WARNING**: skipping check of /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/lucene/build/smokeTestRelease/tmp/unpack/solr-6.5.0/contrib/dataimporthandler-extras/lib/javax.mail-1.5.1.jar: it has javax.* classes [smoker] **WARNING**: skipping check of /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/lucene/build/smokeTestRelease/tmp/unpack/solr-6.5.0/contrib/dataimporthandler-extras/lib/activation-1.1.1.jar: it has javax.* classes [smoker] copying unpacked distribution for Java 8 ... [smoker] test solr example w/ Java 8... [smoker] start Solr instance (log=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/lucene/build/smokeTestRelease/tmp/unpack/solr-6.5.0-java8/solr-example.log)... 
[smoker] No process found for Solr node running on port 8983 [smoker] Running techproducts example on port 8983 from /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/lucene/build/smokeTestRelease/tmp/unpack/solr-6.5.0-java8 [smoker] Creating Solr home directory /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/lucene/build/smokeTestRelease/tmp/unpack/solr-6.5.0-java8/example/techproducts/solr [smoker] [smoker] Starting up Solr on port 8983 using command: [smoker] bin/solr start -p 8983 -s "example/techproducts/solr" [smoker] [smoker] Waiting up to 180 seconds to see Solr running on port 8983 [|] [/] [-] [\] [smoker] Started Solr server on port 8983 (pid=16014). Happy searching! [smoker] [smoker]
[JENKINS-EA] Lucene-Solr-master-Linux (32bit/jdk-9-ea+155) - Build # 18924 - Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/18924/ Java: 32bit/jdk-9-ea+155 -client -XX:+UseConcMarkSweepGC 3 tests failed. FAILED: org.apache.solr.cloud.CollectionsAPISolrJTest.testSplitShard Error Message: Expected 5 slices to be active null Last available state: DocCollection(solrj_test_splitshard//collections/solrj_test_splitshard/state.json/26)={ "replicationFactor":"1", "shards":{ "shard1":{ "range":"8000-", "state":"inactive", "replicas":{"core_node2":{ "core":"solrj_test_splitshard_shard1_replica1", "base_url":"https://127.0.0.1:45689/solr;, "node_name":"127.0.0.1:45689_solr", "state":"active", "leader":"true"}}}, "shard2":{ "range":"0-7fff", "state":"active", "replicas":{"core_node1":{ "core":"solrj_test_splitshard_shard2_replica1", "base_url":"https://127.0.0.1:42530/solr;, "node_name":"127.0.0.1:42530_solr", "state":"active", "leader":"true"}}}, "shard1_0":{ "range":"8000-95dd", "state":"active", "replicas":{"core_node3":{ "core":"solrj_test_splitshard_shard1_0_replica1", "base_url":"https://127.0.0.1:45689/solr;, "node_name":"127.0.0.1:45689_solr", "state":"active", "leader":"true"}}}, "shard1_1":{ "range":"95de-95de", "state":"active", "replicas":{"core_node4":{ "core":"solrj_test_splitshard_shard1_1_replica1", "base_url":"https://127.0.0.1:45689/solr;, "node_name":"127.0.0.1:45689_solr", "state":"active", "leader":"true"}}}, "shard1_2":{ "range":"95df-", "state":"active", "replicas":{"core_node5":{ "core":"solrj_test_splitshard_shard1_2_replica1", "base_url":"https://127.0.0.1:45689/solr;, "node_name":"127.0.0.1:45689_solr", "state":"active", "leader":"true", "router":{"name":"compositeId"}, "maxShardsPerNode":"1", "autoAddReplicas":"false"} Stack Trace: java.lang.AssertionError: Expected 5 slices to be active null Last available state: DocCollection(solrj_test_splitshard//collections/solrj_test_splitshard/state.json/26)={ "replicationFactor":"1", "shards":{ "shard1":{ "range":"8000-", "state":"inactive", "replicas":{"core_node2":{ 
"core":"solrj_test_splitshard_shard1_replica1", "base_url":"https://127.0.0.1:45689/solr;, "node_name":"127.0.0.1:45689_solr", "state":"active", "leader":"true"}}}, "shard2":{ "range":"0-7fff", "state":"active", "replicas":{"core_node1":{ "core":"solrj_test_splitshard_shard2_replica1", "base_url":"https://127.0.0.1:42530/solr;, "node_name":"127.0.0.1:42530_solr", "state":"active", "leader":"true"}}}, "shard1_0":{ "range":"8000-95dd", "state":"active", "replicas":{"core_node3":{ "core":"solrj_test_splitshard_shard1_0_replica1", "base_url":"https://127.0.0.1:45689/solr;, "node_name":"127.0.0.1:45689_solr", "state":"active", "leader":"true"}}}, "shard1_1":{ "range":"95de-95de", "state":"active", "replicas":{"core_node4":{ "core":"solrj_test_splitshard_shard1_1_replica1", "base_url":"https://127.0.0.1:45689/solr;, "node_name":"127.0.0.1:45689_solr", "state":"active", "leader":"true"}}}, "shard1_2":{ "range":"95df-", "state":"active", "replicas":{"core_node5":{ "core":"solrj_test_splitshard_shard1_2_replica1", "base_url":"https://127.0.0.1:45689/solr;, "node_name":"127.0.0.1:45689_solr", "state":"active", "leader":"true", "router":{"name":"compositeId"}, "maxShardsPerNode":"1", "autoAddReplicas":"false"} at __randomizedtesting.SeedInfo.seed([4460F2037148B3A2:9F6A5F6F6FBD8F1D]:0) at org.junit.Assert.fail(Assert.java:93) at org.apache.solr.cloud.SolrCloudTestCase.waitForState(SolrCloudTestCase.java:265) at org.apache.solr.cloud.CollectionsAPISolrJTest.testSplitShard(CollectionsAPISolrJTest.java:172) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:543) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713) at
[jira] [Commented] (LUCENE-7662) Index with missing files should throw CorruptIndexException
[ https://issues.apache.org/jira/browse/LUCENE-7662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15857826#comment-15857826 ] Michael McCandless commented on LUCENE-7662: Thanks [~mdrob]; I think this patch looks good, except it makes some tests angry, e.g.: {noformat} [junit4] 2> NOTE: reproduce with: ant test -Dtestcase=TestLucene62SegmentInfoFormat -Dtests.method=testRandomExceptions -Dtests.seed=F65CD1D4D104665D -Dtests.locale=zh -Dtests.timezone=Asia/Khandyga -Dtests.asserts=true -Dtests.file.encoding=US-ASCII [junit4] ERROR 0.03s J3 | TestLucene62SegmentInfoFormat.testRandomExceptions <<< [junit4]> Throwable #1: org.apache.lucene.index.CorruptIndexException: Problem reading index. (resource=a random IOException (_e.cfe)) [junit4]>at __randomizedtesting.SeedInfo.seed([F65CD1D4D104665D:9E73BF104F0A3FFD]:0) [junit4]>at org.apache.lucene.index.SegmentCoreReaders.(SegmentCoreReaders.java:142) [junit4]>at org.apache.lucene.index.SegmentReader.(SegmentReader.java:74) [junit4]>at org.apache.lucene.index.ReadersAndUpdates.getReader(ReadersAndUpdates.java:143) [junit4]>at org.apache.lucene.index.ReadersAndUpdates.getReadOnlyClone(ReadersAndUpdates.java:195) [junit4]>at org.apache.lucene.index.StandardDirectoryReader.open(StandardDirectoryReader.java:103) [junit4]>at org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:473) [junit4]>at org.apache.lucene.index.DirectoryReader.open(DirectoryReader.java:103) [junit4]>at org.apache.lucene.index.BaseIndexFileFormatTestCase.testRandomExceptions(BaseIndexFileFormatTestCase.java:563) [junit4]>at org.apache.lucene.index.BaseSegmentInfoFormatTestCase.testRandomExceptions(BaseSegmentInfoFormatTestCase.java:50) [junit4]>at java.lang.Thread.run(Thread.java:745) [junit4]> Caused by: java.nio.file.NoSuchFileException: a random IOException (_e.cfe) [junit4]>at org.apache.lucene.store.MockDirectoryWrapper.maybeThrowIOExceptionOnOpen(MockDirectoryWrapper.java:575) [junit4]>at 
org.apache.lucene.store.MockDirectoryWrapper.openInput(MockDirectoryWrapper.java:744) [junit4]>at org.apache.lucene.store.Directory.openChecksumInput(Directory.java:119) [junit4]>at org.apache.lucene.store.MockDirectoryWrapper.openChecksumInput(MockDirectoryWrapper.java:1072) [junit4]>at org.apache.lucene.codecs.lucene50.Lucene50CompoundReader.readEntries(Lucene50CompoundReader.java:105) [junit4]>at org.apache.lucene.codecs.lucene50.Lucene50CompoundReader.(Lucene50CompoundReader.java:69) [junit4]>at org.apache.lucene.codecs.lucene50.Lucene50CompoundFormat.getCompoundReader(Lucene50CompoundFormat.java:71) [junit4]>at org.apache.lucene.index.SegmentCoreReaders.(SegmentCoreReaders.java:99) [junit4]>... 44 more {noformat} Maybe we just need to relax that base test case to accept the new {{CorruptIndexException}} as well, and look at its cause to check the exception message? Also, I think it'd be a bit better to use our {{expectThrows}} method in the test case, wrapped around the one line where you try to open an index reader, instead of the {{@Test(expected = ...)}}, which would pass if {{CorruptIndexException}} was hit anywhere in that test case? > Index with missing files should throw CorruptIndexException > --- > > Key: LUCENE-7662 > URL: https://issues.apache.org/jira/browse/LUCENE-7662 > Project: Lucene - Core > Issue Type: Bug > Components: core/index >Affects Versions: 6.4 >Reporter: Mike Drob > Attachments: LUCENE-7662.patch > > > Similar to what we did in LUCENE-7592 for EOF, we should catch missing files > and rethrow those as CorruptIndexException. > If a particular codec can handle missing files, it should proactively check > for those optional files and not throw anything, so I think we can safely do > this at SegmentReader or SegmentCoreReaders level. 
> Stack trace copied from SOLR-10006: > {noformat} > Caused by: java.nio.file.NoSuchFileException: > /Users/Erick/apache/solrVersions/trunk/solr/example/cloud/node3/solr/eoe_shard1_replica1/data/index/_1_Lucene50_0.doc > at > sun.nio.fs.UnixException.translateToIOException(UnixException.java:86) > at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102) > at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107) > at > sun.nio.fs.UnixFileSystemProvider.newFileChannel(UnixFileSystemProvider.java:177) > at java.nio.channels.FileChannel.open(FileChannel.java:287) > at
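Michael's suggestion — asserting the exception on exactly one statement rather than on the whole test method — can be illustrated with a self-contained sketch. `ExpectThrowsDemo` below is a hypothetical standalone helper that mirrors the idiom of Lucene's `expectThrows`, not the actual `LuceneTestCase` implementation:

```java
public class ExpectThrowsDemo {
    interface ThrowingRunnable { void run() throws Throwable; }

    // Fail unless the single wrapped statement throws the expected exception
    // type, then return it so the caller can inspect its message or cause.
    static <T extends Throwable> T expectThrows(Class<T> expected, ThrowingRunnable r) {
        try {
            r.run();
        } catch (Throwable t) {
            if (expected.isInstance(t)) {
                return expected.cast(t);
            }
            throw new AssertionError("Unexpected exception type: " + t, t);
        }
        throw new AssertionError("Expected " + expected.getSimpleName() + " but nothing was thrown");
    }

    public static void main(String[] args) {
        // Unlike @Test(expected = ...), only this one statement may throw:
        IllegalStateException e = expectThrows(IllegalStateException.class,
            () -> { throw new IllegalStateException("corrupt"); });
        System.out.println(e.getMessage()); // corrupt
    }
}
```

The returned exception makes it easy to then "look at its cause to check the exception message", as suggested above.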
[jira] [Commented] (LUCENE-7440) Document skipping on large indexes is broken
[ https://issues.apache.org/jira/browse/LUCENE-7440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15857817#comment-15857817 ] Adrien Grand commented on LUCENE-7440: -- For the record, I verified that Test2BPostings passes on the 5.5 branch. > Document skipping on large indexes is broken > > > Key: LUCENE-7440 > URL: https://issues.apache.org/jira/browse/LUCENE-7440 > Project: Lucene - Core > Issue Type: Bug > Components: core/search >Affects Versions: 2.2 >Reporter: Yonik Seeley >Assignee: Yonik Seeley >Priority: Critical > Fix For: master (7.0), 6.3, 5.5.4, 6.2.1 > > Attachments: LUCENE-7440.patch, LUCENE-7440.patch > > > Large skips on large indexes fail. > Anything that uses skips (such as a boolean query, filtered queries, faceted > queries, join queries, etc) can trigger this bug on a sufficiently large > index. > The bug is a numeric overflow in MultiLevelSkipList that has been present > since inception (Lucene 2.2). It may not manifest until one has a single > segment with more than ~1.8B documents, and a large skip is performed on that > segment. 
> Typical stack trace on Lucene7-dev: > {code} > java.lang.ArrayIndexOutOfBoundsException: 110 > at > org.apache.lucene.codecs.MultiLevelSkipListReader$SkipBuffer.readByte(MultiLevelSkipListReader.java:297) > at org.apache.lucene.store.DataInput.readVInt(DataInput.java:125) > at > org.apache.lucene.codecs.lucene50.Lucene50SkipReader.readSkipData(Lucene50SkipReader.java:180) > at > org.apache.lucene.codecs.MultiLevelSkipListReader.loadNextSkip(MultiLevelSkipListReader.java:163) > at > org.apache.lucene.codecs.MultiLevelSkipListReader.skipTo(MultiLevelSkipListReader.java:133) > at > org.apache.lucene.codecs.lucene50.Lucene50PostingsReader$BlockDocsEnum.advance(Lucene50PostingsReader.java:421) > at YCS_skip7$1.testSkip(YCS_skip7.java:307) > {code} > Typical stack trace on Lucene4.10.3: > {code} > 6-08-31 18:57:17,460 ERROR org.apache.solr.servlet.SolrDispatchFilter: > null:java.lang.ArrayIndexOutOfBoundsException: 75 > at > org.apache.lucene.codecs.MultiLevelSkipListReader$SkipBuffer.readByte(MultiLevelSkipListReader.java:301) > at org.apache.lucene.store.DataInput.readVInt(DataInput.java:122) > at > org.apache.lucene.codecs.lucene41.Lucene41SkipReader.readSkipData(Lucene41SkipReader.java:194) > at > org.apache.lucene.codecs.MultiLevelSkipListReader.loadNextSkip(MultiLevelSkipListReader.java:168) > at > org.apache.lucene.codecs.MultiLevelSkipListReader.skipTo(MultiLevelSkipListReader.java:138) > at > org.apache.lucene.codecs.lucene41.Lucene41PostingsReader$BlockDocsEnum.advance(Lucene41PostingsReader.java:506) > at org.apache.lucene.search.TermScorer.advance(TermScorer.java:85) > [...] > at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:621) > [...] > at org.apache.solr.core.SolrCore.execute(SolrCore.java:2004) > {code} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
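The overflow described above is easy to reproduce in isolation: once a skip counter passes ~2.1B it wraps negative, so a signed comparison against the doc count silently inverts. This is a simplified sketch of the failure mode, not Lucene's actual skip-list code:

```java
public class SkipOverflowDemo {
    public static void main(String[] args) {
        int docCount = 2_000_000_000;            // ~2B documents: still a valid int
        int numSkipped = docCount + 500_000_000; // exceeds 2^31 - 1 and wraps negative

        // The signed check is fooled by the wraparound:
        System.out.println(numSkipped > docCount);                             // false
        // Comparing the same bits as unsigned gives the intended answer (Java 8+):
        System.out.println(Integer.compareUnsigned(numSkipped, docCount) > 0); // true
    }
}
```

This is why the fix seen earlier in this digest uses `Integer.compareUnsigned(numSkipped[level], docCount) > 0` rather than a plain `>`.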
[jira] [Commented] (LUCENE-7570) Tragic events during merges can lead to deadlock
[ https://issues.apache.org/jira/browse/LUCENE-7570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15857815#comment-15857815 ] ASF subversion and git services commented on LUCENE-7570: - Commit 7a9b568bda29b74333bfb74c7420b4511562253f in lucene-solr's branch refs/heads/branch_5_5 from Mike McCandless [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=7a9b568 ] LUCENE-7570: fix IndexWriter deadlock when a tragic merge exception is hit while too many merges are running > Tragic events during merges can lead to deadlock > > > Key: LUCENE-7570 > URL: https://issues.apache.org/jira/browse/LUCENE-7570 > Project: Lucene - Core > Issue Type: Bug > Components: core/index >Affects Versions: 5.5, master (7.0) >Reporter: Joey Echeverria >Assignee: Michael McCandless > Fix For: master (7.0), 5.5.4, 6.4 > > Attachments: LUCENE-7570.patch, thread_dump.txt > > > When an {{IndexWriter#commit()}} is stalled due to too many pending merges, > you can get a deadlock if the currently active merge thread hits a tragic > event. > # The thread performing the commit synchronizes on the {{commitLock}} in > {{commitInternal}}. > # The thread goes on to call {{ConcurrentMergeScheduler#doStall()}}, which > {{waits()}} on the {{ConcurrentMergeScheduler}} object. This releases the > merge scheduler's monitor lock, but not the {{commitLock}} in {{IndexWriter}}. > # Sometime after this wait begins, the merge thread gets a tragic exception > and calls {{IndexWriter#tragicEvent()}}, which in turn calls > {{IndexWriter#rollbackInternal()}}. > # {{IndexWriter#rollbackInternal()}} synchronizes on the {{commitLock}}, > which is still held by the committing thread from (1) above, which is waiting > on the merge(s) to complete. Hence, deadlock. > We hit this bug with Lucene 5.5, but I looked at the code in the master > branch and it looks like the deadlock still exists there as well. 
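The four numbered steps above describe a classic cross-lock deadlock: {{Object.wait()}} releases only the monitor it is called on, while a separately held lock stays held. A minimal runnable sketch of that shape, using hypothetical stand-ins ({{commitLock}}, {{scheduler}}) rather than Lucene's actual classes, and a timed {{tryLock}} so the demo itself cannot hang:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

// Hedged sketch of the deadlock shape from the report; names are
// illustrative stand-ins, not Lucene's IndexWriter/ConcurrentMergeScheduler.
public class MergeDeadlockSketch {
    // Returns true if the "rollback" side could not take commitLock while
    // the committing thread was stalled in wait() -- the reported deadlock.
    static boolean rollbackBlocked() throws InterruptedException {
        ReentrantLock commitLock = new ReentrantLock();
        Object scheduler = new Object();
        CountDownLatch stalled = new CountDownLatch(1);

        // Steps 1-2: take commitLock, then stall on the scheduler's monitor.
        // wait() releases the scheduler monitor only; commitLock stays held.
        Thread committer = new Thread(() -> {
            commitLock.lock();
            try {
                synchronized (scheduler) {
                    stalled.countDown();
                    scheduler.wait();
                }
            } catch (InterruptedException expected) {
                // woken up by the interrupt below so the demo can finish
            } finally {
                commitLock.unlock();
            }
        });
        committer.start();
        stalled.await();

        // Steps 3-4: a rollback on the merge thread needs commitLock, which
        // the stalled committer still holds; in real code this blocks forever.
        boolean acquired = commitLock.tryLock(200, TimeUnit.MILLISECONDS);
        if (acquired) {
            commitLock.unlock();
        }
        committer.interrupt(); // unblock the committer; the demo never deadlocks
        committer.join();
        return !acquired;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("rollback blocked on commitLock: " + rollbackBlocked());
    }
}
```

The committed fix referenced above breaks the cycle on the Lucene side; this sketch only shows why waiting on one monitor does nothing to release the other lock.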
[jira] [Resolved] (LUCENE-7570) Tragic events during merges can lead to deadlock
[ https://issues.apache.org/jira/browse/LUCENE-7570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael McCandless resolved LUCENE-7570. Resolution: Fixed Fix Version/s: 5.5.4
[jira] [Reopened] (LUCENE-7440) Document skipping on large indexes is broken
[ https://issues.apache.org/jira/browse/LUCENE-7440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adrien Grand reopened LUCENE-7440: -- Reopen for backport to 5.5.4.
[jira] [Updated] (LUCENE-7440) Document skipping on large indexes is broken
[ https://issues.apache.org/jira/browse/LUCENE-7440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adrien Grand updated LUCENE-7440: - Fix Version/s: 5.5.4