[jira] [Commented] (LUCENE-6508) Simplify Directory/lock api
[ https://issues.apache.org/jira/browse/LUCENE-6508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14563816#comment-14563816 ]

Robert Muir commented on LUCENE-6508:
-------------------------------------

There is no hurry. I will prototype some stuff and see where I get. This is exactly why I moved this out of LUCENE-6507, so we can take our time and make this work better in the future. And that is separate from fixing completely broken bugs like LUCENE-6507, which really are release blockers :)

> Simplify Directory/lock api
> ---------------------------
>
>                 Key: LUCENE-6508
>                 URL: https://issues.apache.org/jira/browse/LUCENE-6508
>             Project: Lucene - Core
>          Issue Type: Bug
>            Reporter: Robert Muir
>            Assignee: Uwe Schindler
>
> See LUCENE-6507 for some background. In general it would be great if you could just acquire an immutable lock (or get a failure) and then close it to release it. Today the API might be more than what IW needs.

--
This message was sent by Atlassian JIRA (v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-6508) Simplify Directory/lock api
[ https://issues.apache.org/jira/browse/LUCENE-6508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14563830#comment-14563830 ]

Uwe Schindler commented on LUCENE-6508:
---------------------------------------

I may have been too aggressive, sorry for that. I am fine with the given proposals; my intent is just to get this right this time - it was a large step forward last September to remove the ability for LockFactory and Directory to work on different directories at all. I already removed lots of outdated APIs, like forceful unlocking. In this issue, too, I still want to keep the LockFactory and Directory separation alive, because it makes configuration much easier. But I know we agree on this :-) I will help on that issue; maybe we should open a branch? I know from last time that this is a horrible amount of code to touch...
[jira] [Commented] (LUCENE-6507) NativeFSLock.close() can invalidate other locks
[ https://issues.apache.org/jira/browse/LUCENE-6507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14563837#comment-14563837 ]

ASF subversion and git services commented on LUCENE-6507:
---------------------------------------------------------

Commit 1682352 from [~rcmuir] in branch 'dev/trunk'
[ https://svn.apache.org/r1682352 ]

LUCENE-6507: fix test bug to not double-obtain. testDoubleObtain already tests that

> NativeFSLock.close() can invalidate other locks
> -----------------------------------------------
>
>                 Key: LUCENE-6507
>                 URL: https://issues.apache.org/jira/browse/LUCENE-6507
>             Project: Lucene - Core
>          Issue Type: Bug
>            Reporter: Simon Willnauer
>            Priority: Blocker
>             Fix For: 4.10.5, 5.2
>
>         Attachments: LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch
>
> The lock API in Lucene is super trappy: the lock we return from this API must first be obtained, and if we can't obtain it, the lock should not be closed, since closing might e.g. close the underlying channel in the NativeLock case, which releases all locks for this file on some operating systems. I think the makeLock method should try to obtain and only return a lock if we successfully obtained it. Not sure if that's possible everywhere, but we should at least make the documentation clear here.
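The trap described in the issue can be sketched in a few lines. This is a toy model with illustrative names, not Lucene source: a per-path flag stands in for the OS lock state, mimicking POSIX-style fcntl behavior where closing any descriptor on a file drops every lock the process holds on that file.

```java
import java.util.HashMap;
import java.util.Map;

// Toy model (hypothetical names, not Lucene's actual classes) of the bug:
// a lock object whose close() releases the OS-level lock for the path even
// when this instance never obtained it, invalidating another holder's lock.
public class LockTrap {
    // path -> locked; simulates the process-wide fcntl lock table.
    static final Map<String, Boolean> osLockTable = new HashMap<>();

    public static final class NativeLock {
        final String path;
        boolean obtained = false;

        NativeLock(String path) { this.path = path; }

        // Two-step API: the lock object exists before it is obtained.
        public boolean obtain() {
            if (Boolean.TRUE.equals(osLockTable.get(path))) {
                return false; // someone else already holds it
            }
            osLockTable.put(path, true);
            obtained = true;
            return true;
        }

        // The trap: close() drops the OS lock unconditionally, whether or
        // not this particular instance ever obtained it.
        public void close() { osLockTable.put(path, false); }

        public boolean isValid() {
            return obtained && Boolean.TRUE.equals(osLockTable.get(path));
        }
    }
}
```

With this model, obtaining lock A, failing to obtain lock B on the same path, and then "cleaning up" B with close() silently invalidates A — exactly the failure mode the issue title names, and why the fix is to only hand out locks that are already obtained.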
[jira] [Commented] (LUCENE-6507) NativeFSLock.close() can invalidate other locks
[ https://issues.apache.org/jira/browse/LUCENE-6507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14563844#comment-14563844 ]

ASF subversion and git services commented on LUCENE-6507:
---------------------------------------------------------

Commit 1682353 from [~rcmuir] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1682353 ]

LUCENE-6507: fix test bug to not double-obtain. testDoubleObtain already tests that
[jira] [Commented] (LUCENE-6507) NativeFSLock.close() can invalidate other locks
[ https://issues.apache.org/jira/browse/LUCENE-6507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14563852#comment-14563852 ]

ASF subversion and git services commented on LUCENE-6507:
---------------------------------------------------------

Commit 1682354 from [~rcmuir] in branch 'dev/branches/lucene_solr_5_2'
[ https://svn.apache.org/r1682354 ]

LUCENE-6507: fix test bug to not double-obtain. testDoubleObtain already tests that
[jira] [Commented] (LUCENE-6508) Simplify Directory/lock api
[ https://issues.apache.org/jira/browse/LUCENE-6508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14563862#comment-14563862 ]

Robert Muir commented on LUCENE-6508:
-------------------------------------

Agreed. I will make a branch; it will take time. I still want to improve the tests around this as well. Some new tests from LUCENE-6507 are better, but to me it's still not ideal. I think we should move to a kind of BaseLFTestCase, like our other important classes, where each impl has a subclass with additional tests for its own peculiarities.
[JENKINS] Lucene-Solr-NightlyTests-5.x - Build # 861 - Still Failing
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.x/861/

1 tests failed.

REGRESSION: org.apache.solr.cloud.TestCloudPivotFacet.test

Error Message:
init query failed: {main(facet=truefacet.pivot=%7B%21stats%3Dst3%7Dpivot_dfacet.pivot=dense_pivot_ti%2Cdense_pivot_i%2Cpivot_b1facet.limit=13facet.offset=6facet.pivot.mincount=188),extra(rows=0q=*%3A*fq=id%3A%5B*+TO+894%5Dstats=truestats.field=%7B%21key%3Dsk1+tag%3Dst1%2Cst2%7Dpivot_tlstats.field=%7B%21key%3Dsk2+tag%3Dst2%2Cst3%7Dpivot_i1stats.field=%7B%21key%3Dsk3+tag%3Dst3%2Cst4%7Dpivot_z_s_test_min=188)}: No live SolrServers available to handle this request:[http://127.0.0.1:58686/collection1, http://127.0.0.1:58613/collection1, http://127.0.0.1:42129/collection1, http://127.0.0.1:35398/collection1]

Stack Trace:
java.lang.RuntimeException: init query failed: {main(facet=truefacet.pivot=%7B%21stats%3Dst3%7Dpivot_dfacet.pivot=dense_pivot_ti%2Cdense_pivot_i%2Cpivot_b1facet.limit=13facet.offset=6facet.pivot.mincount=188),extra(rows=0q=*%3A*fq=id%3A%5B*+TO+894%5Dstats=truestats.field=%7B%21key%3Dsk1+tag%3Dst1%2Cst2%7Dpivot_tlstats.field=%7B%21key%3Dsk2+tag%3Dst2%2Cst3%7Dpivot_i1stats.field=%7B%21key%3Dsk3+tag%3Dst3%2Cst4%7Dpivot_z_s_test_min=188)}: No live SolrServers available to handle this request:[http://127.0.0.1:58686/collection1, http://127.0.0.1:58613/collection1, http://127.0.0.1:42129/collection1, http://127.0.0.1:35398/collection1]
	at __randomizedtesting.SeedInfo.seed([22F85D14F0CCB183:AAAC62CE5E30DC7B]:0)
	at org.apache.solr.cloud.TestCloudPivotFacet.assertPivotCountsAreCorrect(TestCloudPivotFacet.java:254)
	at org.apache.solr.cloud.TestCloudPivotFacet.test(TestCloudPivotFacet.java:228)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
	at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:960)
	at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:935)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
	at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
	at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
	at
[jira] [Commented] (LUCENE-6507) NativeFSLock.close() can invalidate other locks
[ https://issues.apache.org/jira/browse/LUCENE-6507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14563869#comment-14563869 ]

Mark Miller commented on LUCENE-6507:
-------------------------------------

bq. I can reproduce the same issue too.

Hit this while creating the RC. Just a change in API behavior: previously a double obtain was returning false, and now it's throwing an exception.
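The behavioral change Mark describes can be sketched side by side. This is a minimal illustration with hypothetical stand-in types (not Lucene's real Lock classes): code that tested obtain() for false must now be prepared for an exception when the lock is already held.

```java
// Sketch of the semantic change: the old API returned a boolean that callers
// could silently ignore; the new style reports a double obtain as an
// exception, so broken callers fail loudly. Names here are illustrative.
public class DoubleObtain {
    // Stand-in for an obtain-failure exception; Lucene has its own type.
    public static class LockObtainFailedException extends RuntimeException {
        public LockObtainFailedException(String msg) { super(msg); }
    }

    private static boolean held = false;

    // Old style: a boolean result, easy to misuse.
    public static boolean obtainOld() {
        if (held) return false;
        held = true;
        return true;
    }

    // New style: success returns normally; a second obtain throws,
    // so the mistake can never go unnoticed.
    public static void obtainNew() {
        if (held) throw new LockObtainFailedException("lock already obtained");
        held = true;
    }

    public static void release() { held = false; }
}
```

Under the old contract a test that double-obtained simply saw `false`; under the new contract the same test trips the exception, which is why the test had to be fixed rather than the behavior.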
Re: [VOTE] 5.2.0 RC1
+1

SUCCESS! [2:28:30.356211]

On Thu, May 28, 2015 at 11:12 PM, Anshum Gupta <ans...@anshumgupta.net> wrote:
> +1 Mike.
>
> On Thu, May 28, 2015 at 10:24 AM, Michael McCandless <luc...@mikemccandless.com> wrote:
>> If we are going to respin I'd like to backport LUCENE-6505 too...
>>
>> Mike McCandless
>> http://blog.mikemccandless.com
>>
>> On Thu, May 28, 2015 at 1:07 PM, Anshum Gupta <ans...@anshumgupta.net> wrote:
>>> Sure, I'll re-spin once you get it into the branch. Thanks for fixing this!
>>>
>>> On Thu, May 28, 2015 at 7:16 AM, Robert Muir <rcm...@gmail.com> wrote:
>>>> I think we should respin due to https://issues.apache.org/jira/browse/LUCENE-6507. NativeFSLockFactory has race conditions, which can cause valid locks to become invalidated by another thread in some situations. We already have a test + fix but JIRA is extremely slow and the issue needs more review and testing on different operating systems.
>>>>
>>>> On Wed, May 27, 2015 at 4:04 PM, Anshum Gupta <ans...@anshumgupta.net> wrote:
>>>>> Please vote for the first release candidate for Lucene/Solr 5.2.0
>>>>>
>>>>> The artifacts can be downloaded from:
>>>>> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-5.2.0-RC1-rev1682085
>>>>>
>>>>> You can run the smoke tester directly with this command:
>>>>> python3 -u dev-tools/scripts/smokeTestRelease.py https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-5.2.0-RC1-rev1682085
>>>>>
>>>>> I've run the test suite a few times, the smoke tester, basic collection creation, startup, indexing, and query. Here's my +1. SUCCESS! [0:31:23.943785]
>>>>>
>>>>> P.S: I hit failure in MultiThreadedOCPTest 2 times while creating the RC, so I'm looking at what's triggering it in parallel to make sure that we're not overlooking a problem. As it has been failing on Jenkins frequently, I've created SOLR-7602 to track this.
>>>>>
>>>>> --
>>>>> Anshum Gupta
[jira] [Commented] (SOLR-7602) Frequent MultiThreadedOCPTest failures on Jenkins
[ https://issues.apache.org/jira/browse/SOLR-7602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14563625#comment-14563625 ]

ASF subversion and git services commented on SOLR-7602:
-------------------------------------------------------

Commit 1682323 from [~anshumg] in branch 'dev/trunk'
[ https://svn.apache.org/r1682323 ]

SOLR-7602: Check if SolrCore object is already closed before trying to close it in case of an exception during Core creation.

> Frequent MultiThreadedOCPTest failures on Jenkins
> -------------------------------------------------
>
>                 Key: SOLR-7602
>                 URL: https://issues.apache.org/jira/browse/SOLR-7602
>             Project: Solr
>          Issue Type: Bug
>            Reporter: Anshum Gupta
>         Attachments: SOLR-7602.patch, SOLR-7602.patch
>
> The number of failed MultiThreadedOCPTest runs on Jenkins has gone up drastically since Apr 30, 2015.
> {code}
> REGRESSION: org.apache.solr.cloud.MultiThreadedOCPTest.test
>
> Error Message:
> Captured an uncaught exception in thread: Thread[id=6313, name=parallelCoreAdminExecutor-1988-thread-15, state=RUNNABLE, group=TGRP-MultiThreadedOCPTest]
>
> Stack Trace:
> com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught exception in thread: Thread[id=6313, name=parallelCoreAdminExecutor-1988-thread-15, state=RUNNABLE, group=TGRP-MultiThreadedOCPTest]
>         at __randomizedtesting.SeedInfo.seed([1FD11A82D96D185B:97852558779175A3]:0)
> Caused by: java.lang.AssertionError: Too many closes on SolrCore
>         at __randomizedtesting.SeedInfo.seed([1FD11A82D96D185B]:0)
>         at org.apache.solr.core.SolrCore.close(SolrCore.java:1138)
>         at org.apache.solr.common.util.IOUtils.closeQuietly(IOUtils.java:31)
>         at org.apache.solr.core.CoreContainer.create(CoreContainer.java:535)
>         at org.apache.solr.core.CoreContainer.create(CoreContainer.java:494)
>         at org.apache.solr.handler.admin.CoreAdminHandler.handleCreateAction(CoreAdminHandler.java:598)
>         at org.apache.solr.handler.admin.CoreAdminHandler.handleRequestInternal(CoreAdminHandler.java:212)
>         at org.apache.solr.handler.admin.CoreAdminHandler$ParallelCoreAdminHandlerThread.run(CoreAdminHandler.java:1219)
>         at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$1.run(ExecutorUtil.java:148)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>         at java.lang.Thread.run(Thread.java:745)
> {code}
> Last failure: Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/12665/
[jira] [Commented] (LUCENE-6507) NativeFSLock.close() can invalidate other locks
[ https://issues.apache.org/jira/browse/LUCENE-6507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14563672#comment-14563672 ]

ASF subversion and git services commented on LUCENE-6507:
---------------------------------------------------------

Commit 1682335 from [~mikemccand] in branch 'dev/branches/lucene_solr_5_2'
[ https://svn.apache.org/r1682335 ]

LUCENE-6507: don't let NativeFSLock.close release other locks
[jira] [Commented] (LUCENE-6507) NativeFSLock.close() can invalidate other locks
[ https://issues.apache.org/jira/browse/LUCENE-6507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14563790#comment-14563790 ]

Steve Rowe commented on LUCENE-6507:
------------------------------------

I see an HdfsLockFactoryTest failure on 5.2 after this commit: [http://jenkins.sarowe.net/job/Lucene-Solr-tests-5.2-Java8/3/].

{noformat}
[junit4] Suite: org.apache.solr.store.hdfs.HdfsLockFactoryTest
[junit4]   2> Creating dataDir: /var/lib/jenkins/jobs/Lucene-Solr-tests-5.2-Java8/workspace/solr/build/solr-core/test/J5/temp/solr.store.hdfs.HdfsLockFactoryTest B48BC404BF6BB3F1-001/init-core-data-001
[junit4]   2> 123149 T2061 oas.SolrTestCaseJ4.buildSSLConfig Randomized ssl (false) and clientAuth (false)
[junit4]   2> 124356 T2061 oahu.NativeCodeLoader.clinit WARN Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[junit4]   1> Formatting using clusterid: testClusterID
[junit4]   2> 125343 T2061 oahmi.MetricsConfig.loadFirst WARN Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties
[junit4]   2> 125705 T2061 oml.Slf4jLog.info Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
[junit4]   2> 125714 T2061 oahh.HttpRequestLog.getRequestLog WARN Jetty request log can only be enabled using Log4j
[junit4]   2> 126082 T2061 oml.Slf4jLog.info jetty-6.1.26
[junit4]   2> 126200 T2061 oml.Slf4jLog.info Extract jar:file:/var/lib/jenkins/.ivy2/cache/org.apache.hadoop/hadoop-hdfs/tests/hadoop-hdfs-2.6.0-tests.jar!/webapps/hdfs to ./temp/Jetty_localhost_45038_hdfsjfnzfi/webapp
[junit4]   2> 126577 T2061 oml.Slf4jLog.info NO JSP Support for /, did not find org.apache.jasper.servlet.JspServlet
[junit4]   2> 127704 T2061 oml.Slf4jLog.info Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:45038
[junit4]   2> 129764 T2061 oahh.HttpRequestLog.getRequestLog WARN Jetty request log can only be enabled using Log4j
[junit4]   2> 129777 T2061 oml.Slf4jLog.info jetty-6.1.26
[junit4]   2> 129841 T2061 oml.Slf4jLog.info Extract jar:file:/var/lib/jenkins/.ivy2/cache/org.apache.hadoop/hadoop-hdfs/tests/hadoop-hdfs-2.6.0-tests.jar!/webapps/datanode to ./temp/Jetty_localhost_60765_datanodefxr31f/webapp
[junit4]   2> 130228 T2061 oml.Slf4jLog.info NO JSP Support for /, did not find org.apache.jasper.servlet.JspServlet
[junit4]   2> 131028 T2061 oml.Slf4jLog.info Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:60765
[junit4]   2> 131767 T2061 oahh.HttpRequestLog.getRequestLog WARN Jetty request log can only be enabled using Log4j
[junit4]   2> 131769 T2061 oml.Slf4jLog.info jetty-6.1.26
[junit4]   2> 131799 T2061 oml.Slf4jLog.info Extract jar:file:/var/lib/jenkins/.ivy2/cache/org.apache.hadoop/hadoop-hdfs/tests/hadoop-hdfs-2.6.0-tests.jar!/webapps/datanode to ./temp/Jetty_localhost_45594_datanodexb8eu/webapp
[junit4]   2> 132058 T2061 oml.Slf4jLog.info NO JSP Support for /, did not find org.apache.jasper.servlet.JspServlet
[junit4]   2> 132856 T2061 oml.Slf4jLog.info Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:45594
[junit4]   2> 135493 T2088 oahhsb.BlockManager.processReport BLOCK* processReport: from storage DS-c7286d21-0c75-425a-b32a-cda888b89811 node DatanodeRegistration(127.0.0.1, datanodeUuid=183b04a7-dc21-4d3e-ad8c-52569c645c0d, infoPort=60765, ipcPort=36559, storageInfo=lv=-56;cid=testClusterID;nsid=349984326;c=0), blocks: 0, hasStaleStorages: true, processing time: 2 msecs
[junit4]   2> 135501 T2088 oahhsb.BlockManager.processReport BLOCK* processReport: from storage DS-2cbc54c3-5182-44c8-b80a-a4af3d3bea02 node DatanodeRegistration(127.0.0.1, datanodeUuid=183b04a7-dc21-4d3e-ad8c-52569c645c0d, infoPort=60765, ipcPort=36559, storageInfo=lv=-56;cid=testClusterID;nsid=349984326;c=0), blocks: 0, hasStaleStorages: false, processing time: 0 msecs
[junit4]   2> 135493 T2097 oahhsb.BlockManager.processReport BLOCK* processReport: from storage DS-3c1bd938-8f62-4608-a6d4-63f576e970cd node DatanodeRegistration(127.0.0.1, datanodeUuid=3e441a70-defc-4b2f-bf7f-95c351d97a39, infoPort=45594, ipcPort=58299, storageInfo=lv=-56;cid=testClusterID;nsid=349984326;c=0), blocks: 0, hasStaleStorages: true, processing time: 1 msecs
[junit4]   2> 135511 T2097 oahhsb.BlockManager.processReport BLOCK* processReport: from storage DS-7b0f0027-583c-462f-9b9c-e6f13ca8a160 node DatanodeRegistration(127.0.0.1, datanodeUuid=3e441a70-defc-4b2f-bf7f-95c351d97a39, infoPort=45594, ipcPort=58299, storageInfo=lv=-56;cid=testClusterID;nsid=349984326;c=0), blocks: 0, hasStaleStorages: false, processing time: 0 msecs
[junit4]   2> 135786 T2061 oas.SolrTestCaseJ4.setUp ###Starting testBasic
[junit4]   2> 135956 T2061 oassh.HdfsDirectory.init WARN The NameNode is in SafeMode - Solr will wait 5 seconds and try again.
[junit4]   2> 141161 T2061
[jira] [Commented] (LUCENE-6507) NativeFSLock.close() can invalidate other locks
[ https://issues.apache.org/jira/browse/LUCENE-6507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14563874#comment-14563874 ]

Robert Muir commented on LUCENE-6507:
-------------------------------------

I wouldn't go that far: it was discussed here, see the comments above. Any code doing this is really broken/stupid (example: the test in question). There is not a use case. Previously you already had to be prepared for obtain() to throw IOException anyway for other stupid cases (the test did not do this), so it's not a problem that we detect this and give you a helpful exception that your code is broken.
[jira] [Commented] (LUCENE-6504) implement norms with random access API
[ https://issues.apache.org/jira/browse/LUCENE-6504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14563675#comment-14563675 ]

Adrien Grand commented on LUCENE-6504:
--------------------------------------

+1

> implement norms with random access API
> --------------------------------------
>
>                 Key: LUCENE-6504
>                 URL: https://issues.apache.org/jira/browse/LUCENE-6504
>             Project: Lucene - Core
>          Issue Type: Improvement
>            Reporter: Robert Muir
>         Attachments: LUCENE-6504.patch
>
> We added this API in LUCENE-5729, but we never explored implementing norms with it. Norms are generally the largest consumer of heap memory and often a real hassle for users.
[jira] [Commented] (SOLR-7602) Frequent MultiThreadedOCPTest failures on Jenkins
[ https://issues.apache.org/jira/browse/SOLR-7602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14563757#comment-14563757 ]

ASF subversion and git services commented on SOLR-7602:
-------------------------------------------------------

Commit 1682346 from [~anshumg] in branch 'dev/branches/lucene_solr_5_2'
[ https://svn.apache.org/r1682346 ]

SOLR-7602: Check if SolrCore object is already closed before trying to close it in case of an exception during Core creation. (merge from branch_5x)
[jira] [Commented] (LUCENE-6507) NativeFSLock.close() can invalidate other locks
[ https://issues.apache.org/jira/browse/LUCENE-6507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14563792#comment-14563792 ]

Steve Rowe commented on LUCENE-6507:
------------------------------------

Also, the seed repros for me on OS X.
[jira] [Commented] (LUCENE-6508) Simplify Directory/lock api
[ https://issues.apache.org/jira/browse/LUCENE-6508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14563793#comment-14563793 ] Uwe Schindler commented on LUCENE-6508: --- In LockFactory we need the following: BaseDirectory.makeLock() currently delegates directly to the LockFactory (it's a final method). So we should rename this method in LockFactory, too, and make it return a lock only after acquire. Therefore, the LockFactory would do what Robert proposed. Otherwise I like the proposal. I will work on it over the next days (I already started to rename some stuff). The lock cannot be completely immutable, because the Closeable interface should still be implemented correctly: close() must be idempotent, so we still need that state. But it is immutable in the sense that you cannot re-obtain the lock. Simplify Directory/lock api --- Key: LUCENE-6508 URL: https://issues.apache.org/jira/browse/LUCENE-6508 Project: Lucene - Core Issue Type: Bug Reporter: Robert Muir Assignee: Uwe Schindler See LUCENE-6507 for some background. In general it would be great if you can just acquire an immutable lock (or you get a failure) and then you close that to release it. Today the API might be too much for what is needed by IW. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
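The shape Uwe describes — a factory that only hands out already-obtained locks, plus an idempotent `close()` — could be sketched like this (class and method names such as `obtainLock` are illustrative, not the final API):

```java
import java.io.Closeable;
import java.io.IOException;

// Acquire-or-fail: the factory either returns a lock that is already
// held, or throws — callers never see an un-obtained lock they could
// mistakenly close.
abstract class SketchLockFactory {
    public abstract SketchLock obtainLock(String lockName) throws IOException;
}

abstract class SketchLock implements Closeable {
    private boolean closed; // the only mutable state the lock needs

    // Per the Closeable contract close() must be idempotent, so the lock
    // cannot be *completely* immutable — but once closed it can never be
    // re-obtained, which is the immutability described above.
    @Override
    public final void close() throws IOException {
        if (closed) {
            return; // second and later closes are no-ops
        }
        closed = true;
        release();
    }

    // Actually give the lock back (e.g. release an OS-level file lock).
    protected abstract void release() throws IOException;
}
```

Making `close()` final with `release()` as the single extension point keeps the idempotency logic in one place, so no subclass can accidentally release twice.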
[jira] [Commented] (LUCENE-6507) NativeFSLock.close() can invalidate other locks
[ https://issues.apache.org/jira/browse/LUCENE-6507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14563878#comment-14563878 ] Mark Miller commented on LUCENE-6507: - Nope, I stand by that assessment :) NativeFSLock.close() can invalidate other locks --- Key: LUCENE-6507 URL: https://issues.apache.org/jira/browse/LUCENE-6507 Project: Lucene - Core Issue Type: Bug Reporter: Simon Willnauer Priority: Blocker Fix For: 4.10.5, 5.2 Attachments: LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch the lock API in Lucene is super trappy since the lock that we return form this API must first be obtained and if we can't obtain it the lock should not be closed since we might ie. close the underlying channel in the NativeLock case which releases all lock for this file on some operating systems. I think the makeLock method should try to obtain and only return a lock if we successfully obtained it. Not sure if it's possible everywhere but we should at least make the documentation clear here. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-6507) NativeFSLock.close() can invalidate other locks
[ https://issues.apache.org/jira/browse/LUCENE-6507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14563637#comment-14563637 ] Michael McCandless commented on LUCENE-6507: Thanks guys, I'll commit backport... NativeFSLock.close() can invalidate other locks --- Key: LUCENE-6507 URL: https://issues.apache.org/jira/browse/LUCENE-6507 Project: Lucene - Core Issue Type: Bug Reporter: Simon Willnauer Priority: Blocker Fix For: 4.10.5, 5.2 Attachments: LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch the lock API in Lucene is super trappy since the lock that we return form this API must first be obtained and if we can't obtain it the lock should not be closed since we might ie. close the underlying channel in the NativeLock case which releases all lock for this file on some operating systems. I think the makeLock method should try to obtain and only return a lock if we successfully obtained it. Not sure if it's possible everywhere but we should at least make the documentation clear here. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-6507) NativeFSLock.close() can invalidate other locks
[ https://issues.apache.org/jira/browse/LUCENE-6507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14563649#comment-14563649 ] ASF subversion and git services commented on LUCENE-6507: - Commit 1682329 from [~mikemccand] in branch 'dev/branches/branch_5x' [ https://svn.apache.org/r1682329 ] LUCENE-6507: don't let NativeFSLock.close release other locks NativeFSLock.close() can invalidate other locks --- Key: LUCENE-6507 URL: https://issues.apache.org/jira/browse/LUCENE-6507 Project: Lucene - Core Issue Type: Bug Reporter: Simon Willnauer Priority: Blocker Fix For: 4.10.5, 5.2 Attachments: LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch the lock API in Lucene is super trappy since the lock that we return form this API must first be obtained and if we can't obtain it the lock should not be closed since we might ie. close the underlying channel in the NativeLock case which releases all lock for this file on some operating systems. I think the makeLock method should try to obtain and only return a lock if we successfully obtained it. Not sure if it's possible everywhere but we should at least make the documentation clear here. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-6512) ToParentBlockJoinQuery fails with AIOOBE under certain circumstances
[ https://issues.apache.org/jira/browse/LUCENE-6512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14563690#comment-14563690 ] Mikhail Khludnev commented on LUCENE-6512: -- bq. This was clearly a problem in the index, but due to Solr's buggy implementation of parent/child documents (you have to set the parent flag in contrast to Elasticsearch on your own - which is stupid!!!) this was not detected at indexing time. We should open an issue in Solr to fix this bad behaviour and make solr automatically add the parent field (it only adds a _root_ field automatically, maybe it should also add a _parent_ field automatically). There is SOLR-5211, but I can't propose a viable way. Do you mean to add {_parent_=true} and {_root_=PK} by default always? without any killswitch? ToParentBlockJoinQuery fails with AIOOBE under certain circumstances Key: LUCENE-6512 URL: https://issues.apache.org/jira/browse/LUCENE-6512 Project: Lucene - Core Issue Type: Bug Components: modules/join Affects Versions: 4.10.4 Reporter: Uwe Schindler Assignee: Uwe Schindler Attachments: LUCENE-6512.patch I had a customer using BlockJoin with Solr. 
He executed a block join query and the following appeared in Solr's logs: {noformat} 28 May 2015 17:19:20 ERROR (SolrException.java:131) - java.lang.ArrayIndexOutOfBoundsException: -1 at org.apache.lucene.codecs.lucene40.BitVector.get(BitVector.java:149) at org.apache.lucene.search.join.ToParentBlockJoinQuery$BlockJoinScorer.nextDoc(ToParentBlockJoinQuery.java:293) at org.apache.lucene.search.Weight$DefaultBulkScorer.scoreAll(Weight.java:192) at org.apache.lucene.search.Weight$DefaultBulkScorer.score(Weight.java:163) at org.apache.lucene.search.BulkScorer.score(BulkScorer.java:35) at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:621) at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:297) at org.apache.solr.search.SolrIndexSearcher.buildAndRunCollectorChain(SolrIndexSearcher.java:209) at org.apache.solr.search.SolrIndexSearcher.getDocListNC(SolrIndexSearcher.java:1619) at org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1433) at org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:514) at org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:484) at org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:218) at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135) at org.apache.solr.core.SolrCore.execute(SolrCore.java:1976) at org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:777) at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:418) at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:207) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233) at 
org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127) at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102) at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298) at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:852) at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:588) at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489) at java.lang.Thread.run(Thread.java:745) {noformat} I debugged this stuff and found out when this happens: The last block of documents was not followed by a parent. If one of the child documents without a parent at the end of the index matches the inner query, the scorer calls nextSetBit() to find the next parent document. This returns -1. There is an assert afterwards that checks for -1, but in production code this is of course never executed. If the index has deletions, the false -1 is passed to acceptDocs and then triggers the above problem. We should change the assert to an IllegalStateException() which is used to notify the
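The described failure — `nextSetBit()` returning -1 for a trailing child block with no parent, guarded only by an assert that production code never runs — boils down to the following hardened check (a simplified sketch, not the actual `ToParentBlockJoinQuery` code):

```java
import java.util.BitSet;

// Simplified model of the parent-seek step in a block-join scorer:
// parentBits marks parent documents, childDoc is the current matching child.
final class ParentSeek {
    static int nextParent(BitSet parentBits, int childDoc) {
        int parentDoc = parentBits.nextSetBit(childDoc + 1);
        // With only an assert here, production code let the -1 from a
        // trailing, parent-less child block flow into acceptDocs.get(-1),
        // surfacing as ArrayIndexOutOfBoundsException. An explicit check
        // fails fast with a diagnosable message instead:
        if (parentDoc == -1) {
            throw new IllegalStateException(
                "child document " + childDoc + " matched, but no parent "
                + "document follows it; every block must end with a parent");
        }
        return parentDoc;
    }
}
```

This is the classic argument for not relying on `assert` to validate index invariants: asserts are disabled in production, so a corrupt or mis-built index must be caught by a real exception.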
[jira] [Commented] (SOLR-7602) Frequent MultiThreadedOCPTest failures on Jenkins
[ https://issues.apache.org/jira/browse/SOLR-7602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14563697#comment-14563697 ] ASF subversion and git services commented on SOLR-7602: --- Commit 1682336 from [~anshumg] in branch 'dev/branches/branch_5x' [ https://svn.apache.org/r1682336 ] SOLR-7602: Check if SolrCore object is already closed before trying to close it in case of an exception during Core creation.(merge from trunk) Frequent MultiThreadedOCPTest failures on Jenkins - Key: SOLR-7602 URL: https://issues.apache.org/jira/browse/SOLR-7602 Project: Solr Issue Type: Bug Reporter: Anshum Gupta Attachments: SOLR-7602.patch, SOLR-7602.patch The number of failed MultiThreadedOCPTest runs on Jenkins has gone up drastically since Apr 30, 2015. {code} REGRESSION: org.apache.solr.cloud.MultiThreadedOCPTest.test Error Message: Captured an uncaught exception in thread: Thread[id=6313, name=parallelCoreAdminExecutor-1988-thread-15, state=RUNNABLE, group=TGRP-MultiThreadedOCPTest] Stack Trace: com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught exception in thread: Thread[id=6313, name=parallelCoreAdminExecutor-1988-thread-15, state=RUNNABLE, group=TGRP-MultiThreadedOCPTest] at __randomizedtesting.SeedInfo.seed([1FD11A82D96D185B:97852558779175A3]:0) Caused by: java.lang.AssertionError: Too many closes on SolrCore at __randomizedtesting.SeedInfo.seed([1FD11A82D96D185B]:0) at org.apache.solr.core.SolrCore.close(SolrCore.java:1138) at org.apache.solr.common.util.IOUtils.closeQuietly(IOUtils.java:31) at org.apache.solr.core.CoreContainer.create(CoreContainer.java:535) at org.apache.solr.core.CoreContainer.create(CoreContainer.java:494) at org.apache.solr.handler.admin.CoreAdminHandler.handleCreateAction(CoreAdminHandler.java:598) at org.apache.solr.handler.admin.CoreAdminHandler.handleRequestInternal(CoreAdminHandler.java:212) at 
org.apache.solr.handler.admin.CoreAdminHandler$ParallelCoreAdminHandlerThread.run(CoreAdminHandler.java:1219) at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$1.run(ExecutorUtil.java:148) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) {code} Last failure: Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/12665/ -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Issue Comment Deleted] (LUCENE-6512) ToParentBlockJoinQuery fails with AIOOBE under certain circumstances
[ https://issues.apache.org/jira/browse/LUCENE-6512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mikhail Khludnev updated LUCENE-6512: - Comment: was deleted (was: bq. This was clearly a problem in the index, but due to Solr's buggy implementation of parent/child documents (you have to set the parent flag in contrast to Elasticsearch on your own - which is stupid!!!) this was not detected at indexing time. We should open an issue in Solr to fix this bad behaviour and make solr automatically add the parent field (it only adds a _root_ field automatically, maybe it should also add a _parent_ field automatically). There is SOLR-5211, but I can't propose a viable way. Do you mean to add {_parent_=true} and {_root_=PK} by default always? without any killswitch? ) ToParentBlockJoinQuery fails with AIOOBE under certain circumstances Key: LUCENE-6512 URL: https://issues.apache.org/jira/browse/LUCENE-6512 Project: Lucene - Core Issue Type: Bug Components: modules/join Affects Versions: 4.10.4 Reporter: Uwe Schindler Assignee: Uwe Schindler Attachments: LUCENE-6512.patch I had a customer using BlockJoin with Solr. 
He executed a block join query and the following appeared in Solr's logs: {noformat} 28 May 2015 17:19:20 ERROR (SolrException.java:131) - java.lang.ArrayIndexOutOfBoundsException: -1 at org.apache.lucene.codecs.lucene40.BitVector.get(BitVector.java:149) at org.apache.lucene.search.join.ToParentBlockJoinQuery$BlockJoinScorer.nextDoc(ToParentBlockJoinQuery.java:293) at org.apache.lucene.search.Weight$DefaultBulkScorer.scoreAll(Weight.java:192) at org.apache.lucene.search.Weight$DefaultBulkScorer.score(Weight.java:163) at org.apache.lucene.search.BulkScorer.score(BulkScorer.java:35) at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:621) at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:297) at org.apache.solr.search.SolrIndexSearcher.buildAndRunCollectorChain(SolrIndexSearcher.java:209) at org.apache.solr.search.SolrIndexSearcher.getDocListNC(SolrIndexSearcher.java:1619) at org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1433) at org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:514) at org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:484) at org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:218) at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135) at org.apache.solr.core.SolrCore.execute(SolrCore.java:1976) at org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:777) at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:418) at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:207) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233) at 
org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127) at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102) at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298) at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:852) at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:588) at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489) at java.lang.Thread.run(Thread.java:745) {noformat} I debugged this stuff and found out when this happens: The last block of documents was not followed by a parent. If one of the child documents without a parent at the end of the index matches the inner query, the scorer calls nextSetBit() to find the next parent document. This returns -1. There is an assert afterwards that checks for -1, but in production code this is of course never executed. If the index has deletions, the false -1 is passed to acceptDocs and then triggers the above problem. We should change the assert to an IllegalStateException() which is used to notify the user if the orthogonality
[jira] [Commented] (SOLR-7599) Remove cruft from SolrCloud tests
[ https://issues.apache.org/jira/browse/SOLR-7599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14563726#comment-14563726 ] ASF subversion and git services commented on SOLR-7599: --- Commit 1682340 from sha...@apache.org in branch 'dev/trunk' [ https://svn.apache.org/r1682340 ] SOLR-7599: Inline startCloudJetty method into ShardRoutingCustomTest Remove cruft from SolrCloud tests - Key: SOLR-7599 URL: https://issues.apache.org/jira/browse/SOLR-7599 Project: Solr Issue Type: Task Components: SolrCloud, Tests Reporter: Shalin Shekhar Mangar Assignee: Shalin Shekhar Mangar Fix For: Trunk, 5.3 Attachments: SOLR-7599.patch I see many tests which blindly have distribSetUp and distribTearDown methods setting a variety of options and system properties that aren't required anymore. This is because some base test classes have been refactored such that these options are redundant. In other cases, people have copied the structure of tests blindly instead of understanding what each parameter does. Let's try to remove the unnecessary config params from such tests. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-5.x-MacOSX (64bit/jdk1.7.0) - Build # 2309 - Still Failing!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-MacOSX/2309/ Java: 64bit/jdk1.7.0 -XX:-UseCompressedOops -XX:+UseG1GC 1 tests failed. FAILED: org.apache.solr.cloud.MultiThreadedOCPTest.test Error Message: Captured an uncaught exception in thread: Thread[id=13104, name=parallelCoreAdminExecutor-5594-thread-5, state=RUNNABLE, group=TGRP-MultiThreadedOCPTest] Stack Trace: com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught exception in thread: Thread[id=13104, name=parallelCoreAdminExecutor-5594-thread-5, state=RUNNABLE, group=TGRP-MultiThreadedOCPTest] at __randomizedtesting.SeedInfo.seed([30902326EB573028:B8C41CFC45AB5DD0]:0) Caused by: java.lang.AssertionError: Too many closes on SolrCore at __randomizedtesting.SeedInfo.seed([30902326EB573028]:0) at org.apache.solr.core.SolrCore.close(SolrCore.java:1157) at org.apache.solr.common.util.IOUtils.closeQuietly(IOUtils.java:31) at org.apache.solr.core.CoreContainer.create(CoreContainer.java:689) at org.apache.solr.core.CoreContainer.create(CoreContainer.java:648) at org.apache.solr.handler.admin.CoreAdminHandler.handleCreateAction(CoreAdminHandler.java:628) at org.apache.solr.handler.admin.CoreAdminHandler.handleRequestInternal(CoreAdminHandler.java:213) at org.apache.solr.handler.admin.CoreAdminHandler$ParallelCoreAdminHandlerThread.run(CoreAdminHandler.java:1249) at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$1.run(ExecutorUtil.java:156) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) Build Log: [...truncated 10695 lines...] 
[junit4] Suite: org.apache.solr.cloud.MultiThreadedOCPTest [junit4] 2 Creating dataDir: /Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/solr/build/solr-core/test/J1/temp/solr.cloud.MultiThreadedOCPTest 30902326EB573028-001/init-core-data-001 [junit4] 2 2923966 INFO (SUITE-MultiThreadedOCPTest-seed#[30902326EB573028]-worker) [] o.a.s.SolrTestCaseJ4 Randomized ssl (false) and clientAuth (false) [junit4] 2 2923966 INFO (SUITE-MultiThreadedOCPTest-seed#[30902326EB573028]-worker) [] o.a.s.BaseDistributedSearchTestCase Setting hostContext system property: / [junit4] 2 2923974 INFO (TEST-MultiThreadedOCPTest.test-seed#[30902326EB573028]) [] o.a.s.c.ZkTestServer STARTING ZK TEST SERVER [junit4] 2 2923975 INFO (Thread-3753) [] o.a.s.c.ZkTestServer client port:0.0.0.0/0.0.0.0:0 [junit4] 2 2923975 INFO (Thread-3753) [] o.a.s.c.ZkTestServer Starting server [junit4] 2 2924076 INFO (TEST-MultiThreadedOCPTest.test-seed#[30902326EB573028]) [] o.a.s.c.ZkTestServer start zk server on port:49971 [junit4] 2 2924076 INFO (TEST-MultiThreadedOCPTest.test-seed#[30902326EB573028]) [] o.a.s.c.c.SolrZkClient Using default ZkCredentialsProvider [junit4] 2 2924077 INFO (TEST-MultiThreadedOCPTest.test-seed#[30902326EB573028]) [] o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper [junit4] 2 2924158 INFO (zkCallback-2399-thread-1) [] o.a.s.c.c.ConnectionManager Watcher org.apache.solr.common.cloud.ConnectionManager@e1c972c name:ZooKeeperConnection Watcher:127.0.0.1:49971 got event WatchedEvent state:SyncConnected type:None path:null path:null type:None [junit4] 2 2924158 INFO (TEST-MultiThreadedOCPTest.test-seed#[30902326EB573028]) [] o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper [junit4] 2 2924159 INFO (TEST-MultiThreadedOCPTest.test-seed#[30902326EB573028]) [] o.a.s.c.c.SolrZkClient Using default ZkACLProvider [junit4] 2 2924159 INFO (TEST-MultiThreadedOCPTest.test-seed#[30902326EB573028]) [] o.a.s.c.c.SolrZkClient makePath: /solr [junit4] 2 2924166 INFO 
(TEST-MultiThreadedOCPTest.test-seed#[30902326EB573028]) [] o.a.s.c.c.SolrZkClient Using default ZkCredentialsProvider [junit4] 2 2924167 INFO (TEST-MultiThreadedOCPTest.test-seed#[30902326EB573028]) [] o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper [junit4] 2 2924169 INFO (zkCallback-2400-thread-1) [] o.a.s.c.c.ConnectionManager Watcher org.apache.solr.common.cloud.ConnectionManager@4792f774 name:ZooKeeperConnection Watcher:127.0.0.1:49971/solr got event WatchedEvent state:SyncConnected type:None path:null path:null type:None [junit4] 2 2924169 INFO (TEST-MultiThreadedOCPTest.test-seed#[30902326EB573028]) [] o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper [junit4] 2 2924169 INFO (TEST-MultiThreadedOCPTest.test-seed#[30902326EB573028]) [] o.a.s.c.c.SolrZkClient Using default ZkACLProvider [junit4] 2 2924170 INFO (TEST-MultiThreadedOCPTest.test-seed#[30902326EB573028]) []
[jira] [Commented] (LUCENE-6507) NativeFSLock.close() can invalidate other locks
[ https://issues.apache.org/jira/browse/LUCENE-6507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14563644#comment-14563644 ] ASF subversion and git services commented on LUCENE-6507: - Commit 1682327 from [~mikemccand] in branch 'dev/trunk' [ https://svn.apache.org/r1682327 ] LUCENE-6507: don't let NativeFSLock.close release other locks NativeFSLock.close() can invalidate other locks --- Key: LUCENE-6507 URL: https://issues.apache.org/jira/browse/LUCENE-6507 Project: Lucene - Core Issue Type: Bug Reporter: Simon Willnauer Priority: Blocker Fix For: 4.10.5, 5.2 Attachments: LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch the lock API in Lucene is super trappy since the lock that we return form this API must first be obtained and if we can't obtain it the lock should not be closed since we might ie. close the underlying channel in the NativeLock case which releases all lock for this file on some operating systems. I think the makeLock method should try to obtain and only return a lock if we successfully obtained it. Not sure if it's possible everywhere but we should at least make the documentation clear here. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Assigned] (LUCENE-6508) Simplify Directory/lock api
[ https://issues.apache.org/jira/browse/LUCENE-6508?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Uwe Schindler reassigned LUCENE-6508: - Assignee: Uwe Schindler Simplify Directory/lock api --- Key: LUCENE-6508 URL: https://issues.apache.org/jira/browse/LUCENE-6508 Project: Lucene - Core Issue Type: Bug Reporter: Robert Muir Assignee: Uwe Schindler See LUCENE-6507 for some background. In general it would be great if you can just acquire an immutable lock (or you get a failure) and then you close that to release it. Today the API might be too much for what is needed by IW. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-6507) NativeFSLock.close() can invalidate other locks
[ https://issues.apache.org/jira/browse/LUCENE-6507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14563809#comment-14563809 ] Anshum Gupta commented on LUCENE-6507: -- I can reproduce the same issue too. Hit this while creating the RC. NativeFSLock.close() can invalidate other locks --- Key: LUCENE-6507 URL: https://issues.apache.org/jira/browse/LUCENE-6507 Project: Lucene - Core Issue Type: Bug Reporter: Simon Willnauer Priority: Blocker Fix For: 4.10.5, 5.2 Attachments: LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch the lock API in Lucene is super trappy since the lock that we return form this API must first be obtained and if we can't obtain it the lock should not be closed since we might ie. close the underlying channel in the NativeLock case which releases all lock for this file on some operating systems. I think the makeLock method should try to obtain and only return a lock if we successfully obtained it. Not sure if it's possible everywhere but we should at least make the documentation clear here. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-6504) implement norms with random access API
[ https://issues.apache.org/jira/browse/LUCENE-6504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14563636#comment-14563636 ] Adrien Grand commented on LUCENE-6504: -- +1 implement norms with random access API -- Key: LUCENE-6504 URL: https://issues.apache.org/jira/browse/LUCENE-6504 Project: Lucene - Core Issue Type: Improvement Reporter: Robert Muir Attachments: LUCENE-6504.patch We added this api in LUCENE-5729 but we never explored implementing norms with it. These are generally the largest consumer of heap memory and often a real hassle for users. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-6512) ToParentBlockJoinQuery fails with AIOOBE under certain circumstances
[ https://issues.apache.org/jira/browse/LUCENE-6512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14563735#comment-14563735 ] Uwe Schindler commented on LUCENE-6512: --- {{_root_=PK}} is already on by default. I would propose to do the same for a new {{_parent_=true}} field. This would be consistent. Is there a killswitch for {{_root_}}? I did not find one, because to me this field is useless - unless you want to delete documents (with it you can delete the parent, and all children will be deleted, too). But the {{_parent_}} field should be mandatory (and automatic), because block join queries would never work without it! In any case, the missing check that led to this issue was only detected because of this :-) But I would hope that Solr fixes this issue (maybe as described above). ToParentBlockJoinQuery fails with AIOOBE under certain circumstances Key: LUCENE-6512 URL: https://issues.apache.org/jira/browse/LUCENE-6512 Project: Lucene - Core Issue Type: Bug Components: modules/join Affects Versions: 4.10.4 Reporter: Uwe Schindler Assignee: Uwe Schindler Attachments: LUCENE-6512.patch I had a customer using BlockJoin with Solr. 
He executed a block join query and the following appeared in Solr's logs: {noformat} 28 May 2015 17:19:20 ERROR (SolrException.java:131) - java.lang.ArrayIndexOutOfBoundsException: -1 at org.apache.lucene.codecs.lucene40.BitVector.get(BitVector.java:149) at org.apache.lucene.search.join.ToParentBlockJoinQuery$BlockJoinScorer.nextDoc(ToParentBlockJoinQuery.java:293) at org.apache.lucene.search.Weight$DefaultBulkScorer.scoreAll(Weight.java:192) at org.apache.lucene.search.Weight$DefaultBulkScorer.score(Weight.java:163) at org.apache.lucene.search.BulkScorer.score(BulkScorer.java:35) at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:621) at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:297) at org.apache.solr.search.SolrIndexSearcher.buildAndRunCollectorChain(SolrIndexSearcher.java:209) at org.apache.solr.search.SolrIndexSearcher.getDocListNC(SolrIndexSearcher.java:1619) at org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1433) at org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:514) at org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:484) at org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:218) at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135) at org.apache.solr.core.SolrCore.execute(SolrCore.java:1976) at org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:777) at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:418) at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:207) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233) at 
org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127) at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102) at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298) at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:852) at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:588) at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489) at java.lang.Thread.run(Thread.java:745) {noformat} I debugged this stuff and found out when this happens: The last block of documents was not followed by a parent. If one of the child documents without a parent at the end of the index matches the inner query, the scorer calls nextSetBit() to find the next parent document. This returns -1. There is an assert afterwards that checks for -1, but in production code this is of course never executed. If the index has deletions, the false -1 is passed to acceptDocs and then triggers the above problem. We should change the assert to an IllegalStateException() which is used to notify
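Uwe's argument rests on the block-join indexing contract: children first, parent last, with a marker field identifying the parents. A tiny sketch of the index-time validation that an automatic `_parent_` marker would enable (hypothetical — the boolean "is parent" flag below stands in for the proposed field; Solr has no such automatic field today):

```java
import java.util.List;

// A block-join segment is a sequence of blocks, each a run of child
// documents terminated by exactly one parent. The trap described in this
// issue is a block of trailing children with no closing parent, so the
// cheapest check is: the block must never end with a child.
final class BlockCheck {
    static void validateBlock(List<Boolean> isParent) {
        if (!isParent.isEmpty() && !isParent.get(isParent.size() - 1)) {
            throw new IllegalArgumentException(
                "last document in the block is a child; a block must end "
                + "with its parent document");
        }
    }
}
```

Enforcing this at indexing time would have turned the query-time ArrayIndexOutOfBoundsException into an immediate, explainable indexing error.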
[jira] [Commented] (LUCENE-6508) Simplify Directory/lock api
[ https://issues.apache.org/jira/browse/LUCENE-6508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14563797#comment-14563797 ] Uwe Schindler commented on LUCENE-6508: --- In any case, we should not hurry this. We should iterate the API several times. I hope more people look into this this time. Last year when I refactored this for the first time, the interest was quite low. Simplify Directory/lock api --- Key: LUCENE-6508 URL: https://issues.apache.org/jira/browse/LUCENE-6508 Project: Lucene - Core Issue Type: Bug Reporter: Robert Muir Assignee: Uwe Schindler See LUCENE-6507 for some background. In general it would be great if you can just acquire an immutable lock (or you get a failure) and then you close that to release it. Today the API might be too much for what is needed by IW. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-6507) NativeFSLock.close() can invalidate other locks
[ https://issues.apache.org/jira/browse/LUCENE-6507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14563630#comment-14563630 ] Robert Muir commented on LUCENE-6507: - Thanks for the additional cleanups, Mike! +1 from me. NativeFSLock.close() can invalidate other locks --- Key: LUCENE-6507 URL: https://issues.apache.org/jira/browse/LUCENE-6507 Project: Lucene - Core Issue Type: Bug Reporter: Simon Willnauer Priority: Blocker Fix For: 4.10.5, 5.2 Attachments: LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch the lock API in Lucene is super trappy since the lock that we return from this API must first be obtained, and if we can't obtain it the lock should not be closed, since we might e.g. close the underlying channel in the NativeLock case, which releases all locks for this file on some operating systems. I think the makeLock method should try to obtain and only return a lock if we successfully obtained it. Not sure if it's possible everywhere, but we should at least make the documentation clear here. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
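A minimal sketch of the acquire-or-fail pattern proposed in this issue, using plain java.nio file locks. The class and method names here are illustrative, not Lucene's actual API; the point is that the caller never receives a lock object unless the lock was really obtained, so closing it can never release someone else's lock through a shared channel:

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Sketch of "obtain or fail, never hand out an unobtained lock".
final class ObtainedLock implements AutoCloseable {
    private final FileChannel channel;
    private final FileLock lock;

    private ObtainedLock(FileChannel channel, FileLock lock) {
        this.channel = channel;
        this.lock = lock;
    }

    // Only returns once the lock is actually held. On failure the channel is
    // closed right here, while it is guaranteed to hold no lock, so the close
    // cannot invalidate locks held through other channels.
    static ObtainedLock obtain(Path path) throws IOException {
        FileChannel ch = FileChannel.open(path,
                StandardOpenOption.CREATE, StandardOpenOption.WRITE);
        FileLock lk = null;
        try {
            lk = ch.tryLock();
        } finally {
            if (lk == null) {
                ch.close();  // safe: we never held the lock
            }
        }
        if (lk == null) {
            throw new IOException("lock held elsewhere: " + path);
        }
        return new ObtainedLock(ch, lk);
    }

    @Override
    public void close() throws IOException {
        try {
            lock.release();
        } finally {
            channel.close();
        }
    }

    public static void main(String[] args) throws IOException {
        Path p = Files.createTempFile("demo", ".lock");
        try (ObtainedLock l = ObtainedLock.obtain(p)) {
            System.out.println("locked " + p);
        }
        Files.deleteIfExists(p);
    }
}
```

The design choice this encodes: the trappy two-step makeLock/obtain API becomes a single step, so there is no window in which an unobtained lock object exists to be mis-closed.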
[jira] [Commented] (LUCENE-6512) ToParentBlockJoinQuery fails with AIOOBE under certain circumstances
[ https://issues.apache.org/jira/browse/LUCENE-6512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14563692#comment-14563692 ] Mikhail Khludnev commented on LUCENE-6512: -- bq. This was clearly a problem in the index, but due to Solr's buggy implementation of parent/child documents (you have to set the parent flag in contrast to Elasticsearch on your own - which is stupid!!!) this was not detected at indexing time. We should open an issue in Solr to fix this bad behaviour and make solr automatically add the parent field (it only adds a _root_ field automatically, maybe it should also add a _parent_ field automatically). There is SOLR-5211, but I can't propose a viable way. Do you mean to add {_parent_=true} and {_root_=PK} by default always? without any killswitch? ToParentBlockJoinQuery fails with AIOOBE under certain circumstances Key: LUCENE-6512 URL: https://issues.apache.org/jira/browse/LUCENE-6512 Project: Lucene - Core Issue Type: Bug Components: modules/join Affects Versions: 4.10.4 Reporter: Uwe Schindler Assignee: Uwe Schindler Attachments: LUCENE-6512.patch I had a customer using BlockJoin with Solr. 
He executed a block join query and the following appeared in Solr's logs: {noformat} 28 May 2015 17:19:20 ERROR (SolrException.java:131) - java.lang.ArrayIndexOutOfBoundsException: -1 at org.apache.lucene.codecs.lucene40.BitVector.get(BitVector.java:149) at org.apache.lucene.search.join.ToParentBlockJoinQuery$BlockJoinScorer.nextDoc(ToParentBlockJoinQuery.java:293) at org.apache.lucene.search.Weight$DefaultBulkScorer.scoreAll(Weight.java:192) at org.apache.lucene.search.Weight$DefaultBulkScorer.score(Weight.java:163) at org.apache.lucene.search.BulkScorer.score(BulkScorer.java:35) at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:621) at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:297) at org.apache.solr.search.SolrIndexSearcher.buildAndRunCollectorChain(SolrIndexSearcher.java:209) at org.apache.solr.search.SolrIndexSearcher.getDocListNC(SolrIndexSearcher.java:1619) at org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1433) at org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:514) at org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:484) at org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:218) at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135) at org.apache.solr.core.SolrCore.execute(SolrCore.java:1976) at org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:777) at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:418) at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:207) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233) at 
org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127) at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102) at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298) at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:852) at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:588) at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489) at java.lang.Thread.run(Thread.java:745) {noformat} I debugged this and found out when it happens: the last block of documents was not followed by a parent. If one of the child documents without a parent at the end of the index matches the inner query, the scorer calls nextSetBit() to find the next parent document. This returns -1. There is an assert afterwards that checks for -1, but in production code this is of course never executed. If the index has deletions, the bogus -1 is passed to acceptDocs and then triggers the above problem. We should change the assert to an IllegalStateException() which is used to notify the user if the orthogonality is broken.
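The failure mode can be sketched with java.util.BitSet standing in for Lucene's parent filter (illustrative code, not the real ToParentBlockJoinQuery). Parent bits mark the last document of each block; a trailing child with no parent bit after it makes nextSetBit() return -1, and the sketch below replaces the production-disabled assert with the IllegalStateException proposed here:

```java
import java.util.BitSet;

// Minimal sketch of the bug: trailing children with no parent produce -1 from
// nextSetBit(), which must not leak onward (e.g. into acceptDocs).
public class OrphanChildDemo {

    static int nextParent(BitSet parentBits, int childDoc) {
        int parent = parentBits.nextSetBit(childDoc);
        if (parent == -1) {
            // The proposed fix: fail loudly in production too, instead of
            // letting the bogus -1 reach deleted-docs checks and throw AIOOBE.
            throw new IllegalStateException(
                "child document " + childDoc + " has no parent; index is broken");
        }
        return parent;
    }

    public static void main(String[] args) {
        BitSet parentBits = new BitSet();
        parentBits.set(3);  // docs 0..2 are children of parent 3; 4..5 are orphans
        System.out.println(nextParent(parentBits, 1));  // 3
        try {
            nextParent(parentBits, 4);  // orphan at the end of the segment
        } catch (IllegalStateException e) {
            System.out.println("caught: " + e.getMessage());
        }
    }
}
```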
[jira] [Updated] (LUCENE-6505) NRT readers don't always reflect last commit
[ https://issues.apache.org/jira/browse/LUCENE-6505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael McCandless updated LUCENE-6505: --- Fix Version/s: (was: 5.3) 5.2 I'd like to backport for 5.2.0 RC2... NRT readers don't always reflect last commit Key: LUCENE-6505 URL: https://issues.apache.org/jira/browse/LUCENE-6505 Project: Lucene - Core Issue Type: Bug Reporter: Michael McCandless Assignee: Michael McCandless Fix For: Trunk, 5.2 Attachments: LUCENE-6505.patch Two cases here: * When I pull an NRT reader from IW, IR.getIndexCommit().getSegmentsFileName() should reflect what was last committed, but doesn't now * If I call IW.commit(), or IW.setCommitData(), but make no other changes, and then open a new NRT reader, I think it should reflect the new commit, but doesn't now -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Issue Comment Deleted] (SOLR-7602) Frequent MultiThreadedOCPTest failures on Jenkins
[ https://issues.apache.org/jira/browse/SOLR-7602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anshum Gupta updated SOLR-7602: --- Comment: was deleted (was: hmmm, I'm looking at other code paths now. Plan to get this into the next 5.2 RC.) Frequent MultiThreadedOCPTest failures on Jenkins - Key: SOLR-7602 URL: https://issues.apache.org/jira/browse/SOLR-7602 Project: Solr Issue Type: Bug Reporter: Anshum Gupta The number of failed MultiThreadedOCPTest runs on Jenkins has gone up drastically since Apr 30, 2015. {code} REGRESSION: org.apache.solr.cloud.MultiThreadedOCPTest.test Error Message: Captured an uncaught exception in thread: Thread[id=6313, name=parallelCoreAdminExecutor-1988-thread-15, state=RUNNABLE, group=TGRP-MultiThreadedOCPTest] Stack Trace: com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught exception in thread: Thread[id=6313, name=parallelCoreAdminExecutor-1988-thread-15, state=RUNNABLE, group=TGRP-MultiThreadedOCPTest] at __randomizedtesting.SeedInfo.seed([1FD11A82D96D185B:97852558779175A3]:0) Caused by: java.lang.AssertionError: Too many closes on SolrCore at __randomizedtesting.SeedInfo.seed([1FD11A82D96D185B]:0) at org.apache.solr.core.SolrCore.close(SolrCore.java:1138) at org.apache.solr.common.util.IOUtils.closeQuietly(IOUtils.java:31) at org.apache.solr.core.CoreContainer.create(CoreContainer.java:535) at org.apache.solr.core.CoreContainer.create(CoreContainer.java:494) at org.apache.solr.handler.admin.CoreAdminHandler.handleCreateAction(CoreAdminHandler.java:598) at org.apache.solr.handler.admin.CoreAdminHandler.handleRequestInternal(CoreAdminHandler.java:212) at org.apache.solr.handler.admin.CoreAdminHandler$ParallelCoreAdminHandlerThread.run(CoreAdminHandler.java:1219) at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$1.run(ExecutorUtil.java:148) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) {code} Last failure: Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/12665/ -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (LUCENE-6507) NativeFSLock.close() can invalidate other locks
[ https://issues.apache.org/jira/browse/LUCENE-6507?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael McCandless updated LUCENE-6507: --- Attachment: LUCENE-6507.patch Another iteration: * Also throw an exception on double obtain in HdfsLockFactory * Put back an accidental test change from my last patch * Other minor cleanups I think the patch is ready; that's a good catch in VerifyingLockFactory: it should NOT be trusting the LockFactory's isLocked impl... NativeFSLock.close() can invalidate other locks --- Key: LUCENE-6507 URL: https://issues.apache.org/jira/browse/LUCENE-6507 Project: Lucene - Core Issue Type: Bug Reporter: Simon Willnauer Priority: Blocker Fix For: 4.10.5, 5.2 Attachments: LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch the lock API in Lucene is super trappy since the lock that we return from this API must first be obtained, and if we can't obtain it the lock should not be closed, since we might e.g. close the underlying channel in the NativeLock case, which releases all locks for this file on some operating systems. I think the makeLock method should try to obtain and only return a lock if we successfully obtained it. Not sure if it's possible everywhere, but we should at least make the documentation clear here. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-7602) Frequent MultiThreadedOCPTest failures on Jenkins
[ https://issues.apache.org/jira/browse/SOLR-7602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anshum Gupta updated SOLR-7602: --- Attachment: SOLR-7602.patch I think just checking whether a core is already closed before calling close() on it should be a good solution. The problem here is that registerCore fails because the coreContainer is closed. This exception is then caught and an attempt is made to close the core, even though the core is already closed by this time, causing the ref count to drop to -1. Frequent MultiThreadedOCPTest failures on Jenkins - Key: SOLR-7602 URL: https://issues.apache.org/jira/browse/SOLR-7602 Project: Solr Issue Type: Bug Reporter: Anshum Gupta Attachments: SOLR-7602.patch The number of failed MultiThreadedOCPTest runs on Jenkins has gone up drastically since Apr 30, 2015. {code} REGRESSION: org.apache.solr.cloud.MultiThreadedOCPTest.test Error Message: Captured an uncaught exception in thread: Thread[id=6313, name=parallelCoreAdminExecutor-1988-thread-15, state=RUNNABLE, group=TGRP-MultiThreadedOCPTest] Stack Trace: com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught exception in thread: Thread[id=6313, name=parallelCoreAdminExecutor-1988-thread-15, state=RUNNABLE, group=TGRP-MultiThreadedOCPTest] at __randomizedtesting.SeedInfo.seed([1FD11A82D96D185B:97852558779175A3]:0) Caused by: java.lang.AssertionError: Too many closes on SolrCore at __randomizedtesting.SeedInfo.seed([1FD11A82D96D185B]:0) at org.apache.solr.core.SolrCore.close(SolrCore.java:1138) at org.apache.solr.common.util.IOUtils.closeQuietly(IOUtils.java:31) at org.apache.solr.core.CoreContainer.create(CoreContainer.java:535) at org.apache.solr.core.CoreContainer.create(CoreContainer.java:494) at org.apache.solr.handler.admin.CoreAdminHandler.handleCreateAction(CoreAdminHandler.java:598) at org.apache.solr.handler.admin.CoreAdminHandler.handleRequestInternal(CoreAdminHandler.java:212) at 
org.apache.solr.handler.admin.CoreAdminHandler$ParallelCoreAdminHandlerThread.run(CoreAdminHandler.java:1219) at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$1.run(ExecutorUtil.java:148) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) {code} Last failure: Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/12665/ -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-6481) Improve GeoPointField type to only visit high precision boundary terms
[ https://issues.apache.org/jira/browse/LUCENE-6481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14563508#comment-14563508 ] Nicholas Knize commented on LUCENE-6481: Note: this is a diff off the LUCENE-6481 branch. Improve GeoPointField type to only visit high precision boundary terms --- Key: LUCENE-6481 URL: https://issues.apache.org/jira/browse/LUCENE-6481 Project: Lucene - Core Issue Type: Improvement Components: core/index Reporter: Nicholas Knize Attachments: LUCENE-6481.patch, LUCENE-6481.patch, LUCENE-6481.patch, LUCENE-6481.patch, LUCENE-6481.patch, LUCENE-6481.patch, LUCENE-6481.patch, LUCENE-6481_WIP.patch Current GeoPointField [LUCENE-6450 | https://issues.apache.org/jira/browse/LUCENE-6450] computes a set of ranges along the space-filling curve that represent a provided bounding box. This determines which terms to visit in the terms dictionary and which to skip. This is suboptimal for large bounding boxes as we may end up visiting all terms (which could be quite large). This incremental improvement is to improve GeoPointField to only visit high precision terms in boundary ranges and use the postings list for ranges that are completely within the target bounding box. A separate improvement is to switch over to auto-prefix and build an Automaton representing the bounding box. That can be tracked in a separate issue. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
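The boundary-range idea can be illustrated on a toy Z-order grid (a sketch of the concept, not GeoPointField's actual encoding). Terms ordered by Morton code make an aligned 2x2 quad one contiguous term range: a quad fully inside the query box can accept its whole postings range, and only quads crossing the box edge need per-term, high-precision checks:

```java
// Toy 8x8 grid: classify aligned 2x2 quads against a query rectangle.
public class BoundaryRanges {

    // Interleave 3-bit x and y into a 6-bit Morton (Z-order) code.
    static int morton(int x, int y) {
        int code = 0;
        for (int i = 0; i < 3; i++) {
            code |= ((x >> i) & 1) << (2 * i);
            code |= ((y >> i) & 1) << (2 * i + 1);
        }
        return code;
    }

    enum Relation { INSIDE, CROSSES, OUTSIDE }

    // Relation of the 2x2 quad at (qx, qy) to [minX, maxX] x [minY, maxY],
    // all in cell coordinates.
    static Relation relateQuad(int qx, int qy,
                               int minX, int maxX, int minY, int maxY) {
        int lox = qx * 2, hix = lox + 1, loy = qy * 2, hiy = loy + 1;
        if (hix < minX || lox > maxX || hiy < minY || loy > maxY) {
            return Relation.OUTSIDE;
        }
        if (lox >= minX && hix <= maxX && loy >= minY && hiy <= maxY) {
            return Relation.INSIDE;   // accept the whole postings range
        }
        return Relation.CROSSES;      // boundary range: visit high-precision terms
    }

    public static void main(String[] args) {
        int inside = 0, crosses = 0;
        for (int qx = 0; qx < 4; qx++) {
            for (int qy = 0; qy < 4; qy++) {
                Relation r = relateQuad(qx, qy, 1, 6, 1, 6);
                if (r == Relation.INSIDE) inside++;
                if (r == Relation.CROSSES) crosses++;
            }
        }
        // Quad (1,1) spans Morton codes 12..15: one contiguous term range.
        System.out.println("quad(1,1) codes " + morton(2, 2) + ".." + morton(3, 3));
        System.out.println(inside + " interior quads, " + crosses + " boundary quads");
    }
}
```

For large bounding boxes most quads are interior, which is exactly why skipping per-term visits there pays off.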
Re: [VOTE] 5.2.0 RC1
If we are going to respin I'd like to backport LUCENE-6505 too... Mike McCandless http://blog.mikemccandless.com On Thu, May 28, 2015 at 1:07 PM, Anshum Gupta ans...@anshumgupta.net wrote: Sure, I'll re-spin once you get it into the branch. Thanks for fixing this! On Thu, May 28, 2015 at 7:16 AM, Robert Muir rcm...@gmail.com wrote: I think we should respin due to https://issues.apache.org/jira/browse/LUCENE-6507. NativeFSLockFactory has race conditions, which can cause valid locks to become invalidated by another thread in some situations. We already have a test + fix but JIRA is extremely slow and the issue needs more review and testing on different operating systems. On Wed, May 27, 2015 at 4:04 PM, Anshum Gupta ans...@anshumgupta.net wrote: Please vote for the first release candidate for Lucene/Solr 5.2.0 The artifacts can be downloaded from: https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-5.2.0-RC1-rev1682085 You can run the smoke tester directly with this command: python3 -u dev-tools/scripts/smokeTestRelease.py https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-5.2.0-RC1-rev1682085 I've run the test suite a few times, the smoke tester, basic collection creation, startup, indexing, and query. Here's my +1. SUCCESS! [0:31:23.943785] P.S: I hit failure in MultiThreadedOCPTest 2 times while creating the RC, so I'm looking at what's triggering it in parallel to make sure that we're not overlooking a problem. As it has been failing on Jenkins frequently, I've created SOLR-7602 to track this. -- Anshum Gupta - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org -- Anshum Gupta - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-6512) ToParentBlockJoinQuery fails with AIOOBE under certain circumstances
[ https://issues.apache.org/jira/browse/LUCENE-6512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14563287#comment-14563287 ] Uwe Schindler commented on LUCENE-6512: --- In trunk and 5.x the same happens, just that -1 is replaced by DocIdSetIterator.NO_MORE_DOCS, but it leads to the same problem. ToParentBlockJoinQuery fails with AIOOBE under certain circumstances Key: LUCENE-6512 URL: https://issues.apache.org/jira/browse/LUCENE-6512 Project: Lucene - Core Issue Type: Bug Components: modules/join Affects Versions: 4.10.4 Reporter: Uwe Schindler I had a customer using BlockJoin with Solr. He executed a block join query and the following appeared in Solr's logs: {noformat} 28 May 2015 17:19:20 ERROR (SolrException.java:131) - java.lang.ArrayIndexOutOfBoundsException: -1 at org.apache.lucene.codecs.lucene40.BitVector.get(BitVector.java:149) at org.apache.lucene.search.join.ToParentBlockJoinQuery$BlockJoinScorer.nextDoc(ToParentBlockJoinQuery.java:293) at org.apache.lucene.search.Weight$DefaultBulkScorer.scoreAll(Weight.java:192) at org.apache.lucene.search.Weight$DefaultBulkScorer.score(Weight.java:163) at org.apache.lucene.search.BulkScorer.score(BulkScorer.java:35) at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:621) at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:297) at org.apache.solr.search.SolrIndexSearcher.buildAndRunCollectorChain(SolrIndexSearcher.java:209) at org.apache.solr.search.SolrIndexSearcher.getDocListNC(SolrIndexSearcher.java:1619) at org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1433) at org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:514) at org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:484) at org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:218) at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135) at 
org.apache.solr.core.SolrCore.execute(SolrCore.java:1976) at org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:777) at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:418) at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:207) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127) at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102) at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298) at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:852) at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:588) at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489) at java.lang.Thread.run(Thread.java:745) {noformat} I debugged this and found out when it happens: the last block of documents was not followed by a parent. If one of the child documents without a parent at the end of the index matches the inner query, the scorer calls nextSetBit() to find the next parent document. This returns -1. There is an assert afterwards that checks for -1, but in production code this is of course never executed. If the index has deletions, the bogus -1 is passed to acceptDocs and then triggers the above problem. We should change the assert to an IllegalStateException() which is used to notify the user if the orthogonality is broken. 
That way the user gets the information that his index is broken and contains child documents without a parent at the very end of a segment. I have seen this on 4.10.4. Maybe that's already fixed in 5.0, but I just open this here for investigation. This was clearly a problem in the index, but due to Solr's buggy implementation of parent/child documents (you have to set the parent flag on your own, in contrast to Elasticsearch - which is stupid!!!) this was not detected at indexing time. We should open an issue in Solr to fix this bad behaviour and make Solr automatically add the parent field.
[jira] [Commented] (SOLR-6820) The sync on the VersionInfo bucket in DistributedUpdateProcesser#addDocument appears to be a large bottleneck when using replication.
[ https://issues.apache.org/jira/browse/SOLR-6820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14563361#comment-14563361 ] ASF subversion and git services commented on SOLR-6820: --- Commit 1682293 from [~thelabdude] in branch 'dev/branches/lucene_solr_5_2' [ https://svn.apache.org/r1682293 ] SOLR-6820: fix numVersionBuckets name attribute in configsets The sync on the VersionInfo bucket in DistributedUpdateProcesser#addDocument appears to be a large bottleneck when using replication. - Key: SOLR-6820 URL: https://issues.apache.org/jira/browse/SOLR-6820 Project: Solr Issue Type: Sub-task Components: SolrCloud Reporter: Mark Miller Assignee: Timothy Potter Fix For: Trunk, 5.2 Attachments: SOLR-6820.patch, threads.png -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
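The numVersionBuckets attribute fixed in this commit controls how many buckets the version tracking is striped across, so updates no longer all serialize on one monitor. A generic sketch of the striping idea follows; the names are hypothetical and this is not Solr's actual VersionInfo implementation:

```java
import java.util.concurrent.locks.ReentrantLock;

// Illustrative striped version buckets: hash each doc id to one of N buckets
// and lock only that bucket, so only updates that collide on a bucket contend.
final class VersionBuckets {
    private final ReentrantLock[] locks;
    private final long[] highestVersion;

    VersionBuckets(int numBuckets) {
        locks = new ReentrantLock[numBuckets];
        highestVersion = new long[numBuckets];
        for (int i = 0; i < numBuckets; i++) {
            locks[i] = new ReentrantLock();
        }
    }

    private int bucketOf(String docId) {
        return (docId.hashCode() & 0x7fffffff) % locks.length;
    }

    // Record the version if it advances the bucket's high-water mark.
    boolean recordIfNewer(String docId, long version) {
        int b = bucketOf(docId);
        locks[b].lock();
        try {
            if (version > highestVersion[b]) {
                highestVersion[b] = version;
                return true;
            }
            return false;
        } finally {
            locks[b].unlock();
        }
    }

    public static void main(String[] args) {
        VersionBuckets buckets = new VersionBuckets(16);
        System.out.println(buckets.recordIfNewer("doc1", 5));  // true: advances
        System.out.println(buckets.recordIfNewer("doc1", 4));  // false: stale
    }
}
```

More buckets means less contention at the cost of coarser high-water marks per bucket, which is why the count is configurable.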
[jira] [Closed] (LUCENE-6510) TestContextQuery.testRandomContextQueryScoring failure
[ https://issues.apache.org/jira/browse/LUCENE-6510?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Areek Zillur closed LUCENE-6510. Resolution: Fixed TestContextQuery.testRandomContextQueryScoring failure -- Key: LUCENE-6510 URL: https://issues.apache.org/jira/browse/LUCENE-6510 Project: Lucene - Core Issue Type: Bug Components: modules/spellchecker Reporter: Michael McCandless Assignee: Areek Zillur Fix For: Trunk, 5.3 {noformat} [junit4] Started J0 PID(8355@localhost). [junit4] Suite: org.apache.lucene.search.suggest.document.TestContextQuery [junit4] 2 NOTE: reproduce with: ant test -Dtestcase=TestContextQuery -Dtests.method=testRandomContextQueryScoring -Dtests.seed=F3A3A7E94AC9CB6D -Dtests.slow=true -Dtests.locale=es_ES -Dtests.timezone=Zulu -Dtests.asserts=true -Dtests.file.encoding=UTF-8 [junit4] ERROR 0.74s | TestContextQuery.testRandomContextQueryScoring [junit4] Throwable #1: java.lang.AssertionError: Expected: key:sugg_yafiszhkyq2 score:859398.0 context:evoyj6 Actual: key:sugg_mfbt11 score:841758.0 context:evoyj6 [junit4] Expected: sugg_yafiszhkyq2 [junit4] got: sugg_mfbt11 [junit4] at org.apache.lucene.search.suggest.document.TestSuggestField.assertSuggestions(TestSuggestField.java:608) [junit4] at org.apache.lucene.search.suggest.document.TestContextQuery.testRandomContextQueryScoring(TestContextQuery.java:528) [junit4] at java.lang.Thread.run(Thread.java:745)Throwable #2: java.lang.RuntimeException: MockDirectoryWrapper: cannot close: there are still open files: {_0.cfs=1} [junit4] at org.apache.lucene.store.MockDirectoryWrapper.close(MockDirectoryWrapper.java:749) [junit4] at org.apache.lucene.search.suggest.document.TestContextQuery.after(TestContextQuery.java:56) [junit4] at java.lang.Thread.run(Thread.java:745) [junit4] Caused by: java.lang.RuntimeException: unclosed IndexInput: _0.cfs [junit4] at org.apache.lucene.store.MockDirectoryWrapper.addFileHandle(MockDirectoryWrapper.java:624) [junit4] at 
org.apache.lucene.store.MockDirectoryWrapper.openInput(MockDirectoryWrapper.java:668) [junit4] at org.apache.lucene.codecs.lucene50.Lucene50CompoundReader.init(Lucene50CompoundReader.java:71) [junit4] at org.apache.lucene.codecs.lucene50.Lucene50CompoundFormat.getCompoundReader(Lucene50CompoundFormat.java:71) [junit4] at org.apache.lucene.index.SegmentCoreReaders.init(SegmentCoreReaders.java:93) [junit4] at org.apache.lucene.index.SegmentReader.init(SegmentReader.java:65) [junit4] at org.apache.lucene.index.ReadersAndUpdates.getReader(ReadersAndUpdates.java:132) [junit4] at org.apache.lucene.index.ReadersAndUpdates.getReadOnlyClone(ReadersAndUpdates.java:184) [junit4] at org.apache.lucene.index.StandardDirectoryReader.open(StandardDirectoryReader.java:99) [junit4] at org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:433) [junit4] at org.apache.lucene.index.RandomIndexWriter.getReader(RandomIndexWriter.java:342) [junit4] at org.apache.lucene.index.RandomIndexWriter.getReader(RandomIndexWriter.java:279) [junit4] at org.apache.lucene.search.suggest.document.TestContextQuery.testRandomContextQueryScoring(TestContextQuery.java:521) [junit4] ... 28 more [junit4] 2 NOTE: test params are: codec=Asserting(Lucene50), sim=RandomSimilarityProvider(queryNorm=false,coord=crazy): {suggest_field=DFR GBZ(0.3)}, locale=es_ES, timezone=Zulu [junit4] 2 NOTE: Linux 3.13.0-46-generic amd64/Oracle Corporation 1.8.0_40 (64-bit)/cpus=8,threads=1,free=388652544,total=504889344 [junit4] 2 NOTE: All tests run in this JVM: [TestContextQuery] [junit4] Completed [1/1] in 1.14s, 1 test, 1 error FAILURES! {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-7602) Frequent MultiThreadedOCPTest failures on Jenkins
[ https://issues.apache.org/jira/browse/SOLR-7602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anshum Gupta updated SOLR-7602: --- Attachment: SOLR-7602.patch Added a null check. Frequent MultiThreadedOCPTest failures on Jenkins - Key: SOLR-7602 URL: https://issues.apache.org/jira/browse/SOLR-7602 Project: Solr Issue Type: Bug Reporter: Anshum Gupta Attachments: SOLR-7602.patch, SOLR-7602.patch The number of failed MultiThreadedOCPTest runs on Jenkins has gone up drastically since Apr 30, 2015. {code} REGRESSION: org.apache.solr.cloud.MultiThreadedOCPTest.test Error Message: Captured an uncaught exception in thread: Thread[id=6313, name=parallelCoreAdminExecutor-1988-thread-15, state=RUNNABLE, group=TGRP-MultiThreadedOCPTest] Stack Trace: com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught exception in thread: Thread[id=6313, name=parallelCoreAdminExecutor-1988-thread-15, state=RUNNABLE, group=TGRP-MultiThreadedOCPTest] at __randomizedtesting.SeedInfo.seed([1FD11A82D96D185B:97852558779175A3]:0) Caused by: java.lang.AssertionError: Too many closes on SolrCore at __randomizedtesting.SeedInfo.seed([1FD11A82D96D185B]:0) at org.apache.solr.core.SolrCore.close(SolrCore.java:1138) at org.apache.solr.common.util.IOUtils.closeQuietly(IOUtils.java:31) at org.apache.solr.core.CoreContainer.create(CoreContainer.java:535) at org.apache.solr.core.CoreContainer.create(CoreContainer.java:494) at org.apache.solr.handler.admin.CoreAdminHandler.handleCreateAction(CoreAdminHandler.java:598) at org.apache.solr.handler.admin.CoreAdminHandler.handleRequestInternal(CoreAdminHandler.java:212) at org.apache.solr.handler.admin.CoreAdminHandler$ParallelCoreAdminHandlerThread.run(CoreAdminHandler.java:1219) at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$1.run(ExecutorUtil.java:148) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) {code} Last failure: Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/12665/ -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Resolved] (SOLR-6820) The sync on the VersionInfo bucket in DistributedUpdateProcesser#addDocument appears to be a large bottleneck when using replication.
[ https://issues.apache.org/jira/browse/SOLR-6820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Timothy Potter resolved SOLR-6820. -- Resolution: Fixed Fixed solrconfig.xmls - ready to go for 5.2 The sync on the VersionInfo bucket in DistributedUpdateProcesser#addDocument appears to be a large bottleneck when using replication. - Key: SOLR-6820 URL: https://issues.apache.org/jira/browse/SOLR-6820 Project: Solr Issue Type: Sub-task Components: SolrCloud Reporter: Mark Miller Assignee: Timothy Potter Fix For: Trunk, 5.2 Attachments: SOLR-6820.patch, threads.png
[jira] [Commented] (LUCENE-6507) NativeFSLock.close() can invalidate other locks
[ https://issues.apache.org/jira/browse/LUCENE-6507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14563293#comment-14563293 ] Michael McCandless commented on LUCENE-6507: 114 iterations of all Lucene core+module tests and no failures ... NativeFSLock.close() can invalidate other locks --- Key: LUCENE-6507 URL: https://issues.apache.org/jira/browse/LUCENE-6507 Project: Lucene - Core Issue Type: Bug Reporter: Simon Willnauer Priority: Blocker Fix For: 4.10.5, 5.2 Attachments: LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch The lock API in Lucene is super trappy: the lock that we return from this API must first be obtained, and if we can't obtain it, the lock should not be closed, since we might e.g. close the underlying channel in the NativeLock case, which releases all locks for this file on some operating systems. I think the makeLock method should try to obtain and only return a lock if we successfully obtained it. Not sure if it's possible everywhere, but we should at least make the documentation clear here.
[jira] [Updated] (LUCENE-6512) ToParentBlockJoinQuery fails with AIOOBE under certain circumstances
[ https://issues.apache.org/jira/browse/LUCENE-6512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Uwe Schindler updated LUCENE-6512: -- Attachment: LUCENE-6512.patch Patch that solves the issue in trunk and 5.x. In 4.10.x, we must replace the NO_MORE_DOCS by -1. ToParentBlockJoinQuery fails with AIOOBE under certain circumstances Key: LUCENE-6512 URL: https://issues.apache.org/jira/browse/LUCENE-6512 Project: Lucene - Core Issue Type: Bug Components: modules/join Affects Versions: 4.10.4 Reporter: Uwe Schindler Attachments: LUCENE-6512.patch I had a customer using BlockJoin with Solr. He executed a block join query and the following appeared in Solr's logs: {noformat} 28 May 2015 17:19:20 ERROR (SolrException.java:131) - java.lang.ArrayIndexOutOfBoundsException: -1 at org.apache.lucene.codecs.lucene40.BitVector.get(BitVector.java:149) at org.apache.lucene.search.join.ToParentBlockJoinQuery$BlockJoinScorer.nextDoc(ToParentBlockJoinQuery.java:293) at org.apache.lucene.search.Weight$DefaultBulkScorer.scoreAll(Weight.java:192) at org.apache.lucene.search.Weight$DefaultBulkScorer.score(Weight.java:163) at org.apache.lucene.search.BulkScorer.score(BulkScorer.java:35) at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:621) at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:297) at org.apache.solr.search.SolrIndexSearcher.buildAndRunCollectorChain(SolrIndexSearcher.java:209) at org.apache.solr.search.SolrIndexSearcher.getDocListNC(SolrIndexSearcher.java:1619) at org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1433) at org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:514) at org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:484) at org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:218) at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135) at 
org.apache.solr.core.SolrCore.execute(SolrCore.java:1976) at org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:777) at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:418) at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:207) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127) at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102) at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298) at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:852) at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:588) at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489) at java.lang.Thread.run(Thread.java:745) {noformat} I debugged this stuff and found out when this happens: The last block of documents was not followed by a parent. If one of the child documents without a parent at the end of the index match the inner query, scorer calls nextSetBit() to find next parent document. This returns -1. There is an assert afterwards that checks for -1, but in production code, this is of course never executed. If the index has deletetions the false -1 is passed to acceptDocs and then triggers the above problem. We should change the assert to another IllegalStateException() which is used to notify the user if the orthogonality is broken. 
By that, the user gets the information that his index is broken and contains child documents without a parent at the very end of a segment. I have seen this on 4.10.4. Maybe that's already fixed in 5.0, but I just opened this here for investigation. This was clearly a problem in the index, but due to Solr's buggy implementation of parent/child documents (you have to set the parent flag on your own, in contrast to Elasticsearch - which is stupid!!!) this was not detected at indexing time. We should open an issue in Solr to fix
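The fix described above - fail loudly instead of letting the `-1` from nextSetBit() escape into `acceptDocs` when asserts are disabled - can be sketched roughly as follows. This is a hypothetical illustration using `java.util.BitSet`, not the actual ToParentBlockJoinQuery code; the class and method names are invented:

```java
import java.util.BitSet;

class ParentCheck {
    // Hypothetical sketch: find the next parent doc at or after childDoc.
    // An assert only fires with -ea enabled, so in production a corrupt
    // index (child docs with no trailing parent) would pass -1 downstream
    // and trigger the AIOOBE; an explicit check reports the corruption.
    static int nextParent(BitSet parents, int childDoc) {
        int parent = parents.nextSetBit(childDoc); // -1 if no parent follows
        if (parent < 0) {
            throw new IllegalStateException(
                "child document " + childDoc
                + " has no parent document; the index is broken");
        }
        return parent;
    }
}
```

With a parent at doc 5, `nextParent(parents, 3)` returns 5; asking for a parent past the last set bit throws instead of silently returning -1.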
[jira] [Commented] (LUCENE-6507) NativeFSLock.close() can invalidate other locks
[ https://issues.apache.org/jira/browse/LUCENE-6507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14563379#comment-14563379 ] Robert Muir commented on LUCENE-6507: - {quote} I also had to fix/relax MockDirectoryWrapper.AssertingLock's behavior if you call .obtain twice on a single lock ... it was clearing its obtained member, but I don't think it should. {quote} IMO we should deliver an exception if you do this. There is no need for leniency that returns false. NativeFSLock.close() can invalidate other locks --- Key: LUCENE-6507 URL: https://issues.apache.org/jira/browse/LUCENE-6507 Project: Lucene - Core Issue Type: Bug Reporter: Simon Willnauer Priority: Blocker Fix For: 4.10.5, 5.2 Attachments: LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch The lock API in Lucene is super trappy: the lock that we return from this API must first be obtained, and if we can't obtain it, the lock should not be closed, since we might e.g. close the underlying channel in the NativeLock case, which releases all locks for this file on some operating systems. I think the makeLock method should try to obtain and only return a lock if we successfully obtained it. Not sure if it's possible everywhere, but we should at least make the documentation clear here.
[jira] [Commented] (LUCENE-6512) ToParentBlockJoinQuery fails with AIOOBE under certain circumstances
[ https://issues.apache.org/jira/browse/LUCENE-6512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14563574#comment-14563574 ] Adrien Grand commented on LUCENE-6512: -- +1 ToParentBlockJoinQuery fails with AIOOBE under certain circumstances Key: LUCENE-6512 URL: https://issues.apache.org/jira/browse/LUCENE-6512 Project: Lucene - Core Issue Type: Bug Components: modules/join Affects Versions: 4.10.4 Reporter: Uwe Schindler Assignee: Uwe Schindler Attachments: LUCENE-6512.patch I had a customer using BlockJoin with Solr. He executed a block join query and the following appeared in Solr's logs: {noformat} 28 May 2015 17:19:20 ERROR (SolrException.java:131) - java.lang.ArrayIndexOutOfBoundsException: -1 at org.apache.lucene.codecs.lucene40.BitVector.get(BitVector.java:149) at org.apache.lucene.search.join.ToParentBlockJoinQuery$BlockJoinScorer.nextDoc(ToParentBlockJoinQuery.java:293) at org.apache.lucene.search.Weight$DefaultBulkScorer.scoreAll(Weight.java:192) at org.apache.lucene.search.Weight$DefaultBulkScorer.score(Weight.java:163) at org.apache.lucene.search.BulkScorer.score(BulkScorer.java:35) at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:621) at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:297) at org.apache.solr.search.SolrIndexSearcher.buildAndRunCollectorChain(SolrIndexSearcher.java:209) at org.apache.solr.search.SolrIndexSearcher.getDocListNC(SolrIndexSearcher.java:1619) at org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1433) at org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:514) at org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:484) at org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:218) at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135) at org.apache.solr.core.SolrCore.execute(SolrCore.java:1976) at 
org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:777) at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:418) at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:207) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127) at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102) at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298) at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:852) at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:588) at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489) at java.lang.Thread.run(Thread.java:745) {noformat} I debugged this stuff and found out when this happens: The last block of documents was not followed by a parent. If one of the child documents without a parent at the end of the index match the inner query, scorer calls nextSetBit() to find next parent document. This returns -1. There is an assert afterwards that checks for -1, but in production code, this is of course never executed. If the index has deletetions the false -1 is passed to acceptDocs and then triggers the above problem. We should change the assert to another IllegalStateException() which is used to notify the user if the orthogonality is broken. 
By that the user gets the information that his index is broken and contains child documents without a parent at the very end of a segment. I have seen this on 4.10.4. Maybe thats already fixed in 5.0, but I just open this here for investigation. This was clearly a problem in the index, but due to Solr's buggy implementation of parent/child documents (you have to set the parent flag in contrast to Elasticsearch on your own - which is stupid!!!) this was not detected at indexing time. We should open an issue in Solr to fix this bad behaviour and make solr
[jira] [Commented] (LUCENE-6508) Simplify Directory/lock api
[ https://issues.apache.org/jira/browse/LUCENE-6508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14563326#comment-14563326 ] Robert Muir commented on LUCENE-6508: - Random ideas to make this better:
* remove timeouts
* remove Lock.isLocked(), Lock.obtain(), IndexWriter.isLocked(Dir), etc.
* just have Directory.obtain(), which either succeeds and gives you a Closeable, or throws IOException
* obtain() should return an immutable thing; that will simplify a lot here.
* maybe Directory should know of the lock and check it on each createOutput, delete, rename, etc. This would give more safety.
* maybe add a method Lock.isValid(). For network filesystems, things like disconnected nodes can cause locks to be lost. Look into things like FileLock.isValid and see if they are useful. (SimpleFS can implement this with Files.exists.)
Simplify Directory/lock api --- Key: LUCENE-6508 URL: https://issues.apache.org/jira/browse/LUCENE-6508 Project: Lucene - Core Issue Type: Bug Reporter: Robert Muir See LUCENE-6507 for some background. In general it would be great if you could just acquire an immutable lock (or get a failure) and then close it to release it. Today the API might be too much for what is needed by IW.
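The "obtain() either succeeds or throws" idea proposed in that comment can be sketched as below. This is a hypothetical single-JVM illustration of the shape of the API (obtain-or-throw, immutable handle, close-to-release), not the eventual Lucene implementation; the class name and the AtomicBoolean-based bookkeeping are invented for the example:

```java
import java.io.Closeable;
import java.io.IOException;
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical sketch of the proposed lock API: you never hold an
// un-obtained Lock object. obtain() either returns an already-held,
// immutable lock or throws IOException; close() is the only way to
// release it. There is no isLocked()/obtain() lifecycle to misuse.
final class SimpleLock implements Closeable {
    private static final AtomicBoolean HELD = new AtomicBoolean(false);

    private SimpleLock() {}

    static SimpleLock obtain() throws IOException {
        // Atomically flip free -> held; losing the race means failure.
        if (!HELD.compareAndSet(false, true)) {
            throw new IOException("lock is already held");
        }
        return new SimpleLock();
    }

    @Override
    public void close() {
        HELD.set(false);
    }
}
```

A caller would use try-with-resources: `try (SimpleLock lock = SimpleLock.obtain()) { ... }`; a second concurrent obtain() fails with an exception rather than returning false, matching the "no leniency" point in the later LUCENE-6507 comment.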
[jira] [Updated] (LUCENE-6507) NativeFSLock.close() can invalidate other locks
[ https://issues.apache.org/jira/browse/LUCENE-6507?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael McCandless updated LUCENE-6507: --- Attachment: LUCENE-6507.patch New patch, fixing SingleInstanceLF to not set obtained to false if you try to obtain it twice, plus a failing test. I also had to fix/relax MockDirectoryWrapper.AssertingLock's behavior if you call .obtain twice on a single lock ... it was clearing its obtained member, but I don't think it should. NativeFSLock.close() can invalidate other locks --- Key: LUCENE-6507 URL: https://issues.apache.org/jira/browse/LUCENE-6507 Project: Lucene - Core Issue Type: Bug Reporter: Simon Willnauer Priority: Blocker Fix For: 4.10.5, 5.2 Attachments: LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch The lock API in Lucene is super trappy: the lock that we return from this API must first be obtained, and if we can't obtain it, the lock should not be closed, since we might e.g. close the underlying channel in the NativeLock case, which releases all locks for this file on some operating systems. I think the makeLock method should try to obtain and only return a lock if we successfully obtained it. Not sure if it's possible everywhere, but we should at least make the documentation clear here.
[jira] [Commented] (LUCENE-6505) NRT readers don't always reflect last commit
[ https://issues.apache.org/jira/browse/LUCENE-6505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14563388#comment-14563388 ] ASF subversion and git services commented on LUCENE-6505: - Commit 1682296 from [~mikemccand] in branch 'dev/trunk' [ https://svn.apache.org/r1682296 ] LUCENE-6505: NRT readers now reflect prior commit metadata NRT readers don't always reflect last commit Key: LUCENE-6505 URL: https://issues.apache.org/jira/browse/LUCENE-6505 Project: Lucene - Core Issue Type: Bug Reporter: Michael McCandless Assignee: Michael McCandless Fix For: Trunk, 5.2 Attachments: LUCENE-6505.patch Two cases here: * When I pull an NRT reader from IW, IR.getIndexCommit().getSegmentsFileName() should reflect what was last committed, but doesn't now * If I call IW.commit(), or IW.setCommitData(), but make no other changes, and then open a new NRT reader, I think it should reflect the new commit, but doesn't now
[jira] [Issue Comment Deleted] (SOLR-7602) Frequent MultiThreadedOCPTest failures on Jenkins
[ https://issues.apache.org/jira/browse/SOLR-7602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anshum Gupta updated SOLR-7602: --- Comment: was deleted (was: hmmm, I'm looking at other code paths now. Plan to get this into the next 5.2 RC.) Frequent MultiThreadedOCPTest failures on Jenkins - Key: SOLR-7602 URL: https://issues.apache.org/jira/browse/SOLR-7602 Project: Solr Issue Type: Bug Reporter: Anshum Gupta The number of failed MultiThreadedOCPTest runs on Jenkins has gone up drastically since Apr 30, 2015. {code} REGRESSION: org.apache.solr.cloud.MultiThreadedOCPTest.test Error Message: Captured an uncaught exception in thread: Thread[id=6313, name=parallelCoreAdminExecutor-1988-thread-15, state=RUNNABLE, group=TGRP-MultiThreadedOCPTest] Stack Trace: com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught exception in thread: Thread[id=6313, name=parallelCoreAdminExecutor-1988-thread-15, state=RUNNABLE, group=TGRP-MultiThreadedOCPTest] at __randomizedtesting.SeedInfo.seed([1FD11A82D96D185B:97852558779175A3]:0) Caused by: java.lang.AssertionError: Too many closes on SolrCore at __randomizedtesting.SeedInfo.seed([1FD11A82D96D185B]:0) at org.apache.solr.core.SolrCore.close(SolrCore.java:1138) at org.apache.solr.common.util.IOUtils.closeQuietly(IOUtils.java:31) at org.apache.solr.core.CoreContainer.create(CoreContainer.java:535) at org.apache.solr.core.CoreContainer.create(CoreContainer.java:494) at org.apache.solr.handler.admin.CoreAdminHandler.handleCreateAction(CoreAdminHandler.java:598) at org.apache.solr.handler.admin.CoreAdminHandler.handleRequestInternal(CoreAdminHandler.java:212) at org.apache.solr.handler.admin.CoreAdminHandler$ParallelCoreAdminHandlerThread.run(CoreAdminHandler.java:1219) at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$1.run(ExecutorUtil.java:148) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) {code} Last failure: Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/12665/
[jira] [Created] (LUCENE-6512) ToParentBlockJoinQuery fails with AIOOBE under certain circumstances
Uwe Schindler created LUCENE-6512: - Summary: ToParentBlockJoinQuery fails with AIOOBE under certain circumstances Key: LUCENE-6512 URL: https://issues.apache.org/jira/browse/LUCENE-6512 Project: Lucene - Core Issue Type: Bug Components: modules/join Affects Versions: 4.10.4 Reporter: Uwe Schindler I had a customer using BlockJoin with Solr. He executed a block join query and the following appeared in Solr's logs: {noformat} 28 May 2015 17:19:20 ERROR (SolrException.java:131) - java.lang.ArrayIndexOutOfBoundsException: -1 at org.apache.lucene.codecs.lucene40.BitVector.get(BitVector.java:149) at org.apache.lucene.search.join.ToParentBlockJoinQuery$BlockJoinScorer.nextDoc(ToParentBlockJoinQuery.java:293) at org.apache.lucene.search.Weight$DefaultBulkScorer.scoreAll(Weight.java:192) at org.apache.lucene.search.Weight$DefaultBulkScorer.score(Weight.java:163) at org.apache.lucene.search.BulkScorer.score(BulkScorer.java:35) at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:621) at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:297) at org.apache.solr.search.SolrIndexSearcher.buildAndRunCollectorChain(SolrIndexSearcher.java:209) at org.apache.solr.search.SolrIndexSearcher.getDocListNC(SolrIndexSearcher.java:1619) at org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1433) at org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:514) at org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:484) at org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:218) at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135) at org.apache.solr.core.SolrCore.execute(SolrCore.java:1976) at org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:777) at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:418) at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:207) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127) at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102) at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298) at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:852) at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:588) at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489) at java.lang.Thread.run(Thread.java:745) {noformat} I debugged this stuff and found out when this happens: The last block of documents was not followed by a parent. If one of the child documents without a parent at the end of the index match the inner query, scorer calls nextSetBit() to find next parent document. This returns -1. There is an assert afterwards that checks for -1, but in production code, this is of course never executed. If the index has deletetions the false -1 is passed to acceptDocs and then triggers the above problem. We should change the assert to another IllegalStateException() which is used to notify the user if the orthogonality is broken. By that the user gets the information that his index is broken and contains child documents without a parent at the very end of a segment. I have seen this on 4.10.4. Maybe thats already fixed in 5.0, but I just open this here for investigation. 
This was clearly a problem in the index, but due to Solr's buggy implementation of parent/child documents (you have to set the parent flag on your own, in contrast to Elasticsearch - which is stupid!!!) this was not detected at indexing time. We should open an issue in Solr to fix this bad behaviour and make Solr automatically add the parent field (it only adds a _root_ field automatically; maybe it should also add a _parent_ field automatically).
[jira] [Commented] (SOLR-6743) Support deploying SolrCloud on YARN
[ https://issues.apache.org/jira/browse/SOLR-6743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14563281#comment-14563281 ] Yonik Seeley commented on SOLR-6743: Great job Tim! Support deploying SolrCloud on YARN --- Key: SOLR-6743 URL: https://issues.apache.org/jira/browse/SOLR-6743 Project: Solr Issue Type: New Feature Components: Hadoop Integration, SolrCloud Reporter: Timothy Potter Assignee: Timothy Potter We're seeing Solr running with Hadoop more and more and YARN allows us to deploy and manage distributed applications across a cluster of machines. This feature will provide support for deploying SolrCloud in YARN. Currently, the code is implemented in an open-source project hosted on Lucidworks github, see: https://github.com/LucidWorks/yarn-proto We'd like to submit this to the Apache Solr project as a contrib so it is easier to run Solr on YARN right out-of-the-box. There are a few hurdles to get over though: 1) Overall approach: There are various options for supporting YARN, such as Apache Slider, but I opted to just use the YARN client API directly which simply invokes the bin/solr start script under the covers. The YARN specific code is quite simple and most of the code is just handling command line options/parsing. I'm curious what others think about having a simple native solution that ships with Solr (similar to the HdfsDirectoryFactory) vs. something more heavy-weight that requires 3rd party tools to be involved. 2) Unit testing - Solr on YARN relies on putting a full Solr bundle into HDFS (which you can see how that might work in the SolrYarnTestIT test case). This obviously has problems in the Solr build as there is no bundle of Solr available during unit testing. I'm thinking about having a mock bundle that simulates starting Solr but that limits what we can verify on the cluster once it's up. 
3) Shutdown - In order to support an orderly shutdown of Solr when the application is stopped by the ResourceManager, we need a shutdown handler in Jetty/Solr that allows a remote application to request shutdown. The built-in Jetty shutdown handler requires the stop request to come from localhost. To work around this, I've introduced a custom ShutdownHandler that can be configured using System properties at startup to allow a remote host to request shutdown. When YARN starts Solr nodes, I register the address of the SolrMaster node with a secret key that will allow the SolrMaster to shut down Solr gracefully. This seems secure since only the SolrMaster can request shutdown using the correct key. Other ideas on how to handle graceful shutdown? 4) Additional features: The current implementation is useful for starting/stopping SolrCloud nodes in YARN. My thinking is that you'll provision the cluster using YARN and then just interact with Solr directly using Solr's API, so the YARN layer is quite thin. Other features needed?
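The secret-key check from point 3 of the message above could look roughly like the following. This is a hedged sketch, not the actual proposed ShutdownHandler from the yarn-proto project; the class name and constructor are invented for illustration. It uses MessageDigest.isEqual for a constant-time comparison so the key cannot be guessed byte-by-byte via timing:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

// Hypothetical sketch of the remote-shutdown authorization described above:
// the handler honors a stop request only if the caller presents the secret
// key that was registered with this node at startup.
class ShutdownGuard {
    private final byte[] secret;

    ShutdownGuard(String secretKey) {
        this.secret = secretKey.getBytes(StandardCharsets.UTF_8);
    }

    // Constant-time comparison; a plain String.equals would leak timing.
    boolean authorize(String presentedKey) {
        return MessageDigest.isEqual(secret,
                presentedKey.getBytes(StandardCharsets.UTF_8));
    }
}
```

A real handler would wire this into the HTTP request path and only then invoke the orderly shutdown, but the authorization decision reduces to this one comparison.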
Re: [VOTE] 5.2.0 RC1
+1 Mike. On Thu, May 28, 2015 at 10:24 AM, Michael McCandless luc...@mikemccandless.com wrote: If we are going to respin I'd like to backport LUCENE-6505 too... Mike McCandless http://blog.mikemccandless.com On Thu, May 28, 2015 at 1:07 PM, Anshum Gupta ans...@anshumgupta.net wrote: Sure, I'll re-spin once you get it into the branch. Thanks for fixing this! On Thu, May 28, 2015 at 7:16 AM, Robert Muir rcm...@gmail.com wrote: I think we should respin due to https://issues.apache.org/jira/browse/LUCENE-6507. NativeFSLockFactory has race conditions, which can cause valid locks to become invalidated by another thread in some situations. We already have a test + fix but JIRA is extremely slow and the issue needs more review and testing on different operating systems. On Wed, May 27, 2015 at 4:04 PM, Anshum Gupta ans...@anshumgupta.net wrote: Please vote for the first release candidate for Lucene/Solr 5.2.0 The artifacts can be downloaded from: https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-5.2.0-RC1-rev1682085 You can run the smoke tester directly with this command: python3 -u dev-tools/scripts/smokeTestRelease.py https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-5.2.0-RC1-rev1682085 I've run the test suite a few times, the smoke tester, basic collection creation, startup, indexing, and query. Here's my +1. SUCCESS! [0:31:23.943785] P.S: I hit failure in MultiThreadedOCPTest 2 times while creating the RC, so I'm looking at what's triggering it in parallel to make sure that we're not overlooking a problem. As it has been failing on Jenkins frequently, I've created SOLR-7602 to track this. -- Anshum Gupta
[JENKINS] Lucene-Solr-5.x-Linux (32bit/jdk1.9.0-ea-b60) - Build # 12683 - Failure!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/12683/ Java: 32bit/jdk1.9.0-ea-b60 -client -XX:+UseParallelGC 1 tests failed. FAILED: org.apache.solr.cloud.BasicDistributedZkTest.test Error Message: commitWithin did not work on node: http://127.0.0.1:54181/collection1 expected:68 but was:67 Stack Trace: java.lang.AssertionError: commitWithin did not work on node: http://127.0.0.1:54181/collection1 expected:68 but was:67 at __randomizedtesting.SeedInfo.seed([23930BF6971E9EB0:ABC7342C39E2F348]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.failNotEquals(Assert.java:647) at org.junit.Assert.assertEquals(Assert.java:128) at org.junit.Assert.assertEquals(Assert.java:472) at org.apache.solr.cloud.BasicDistributedZkTest.test(BasicDistributedZkTest.java:344) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:502) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:960) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:935) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845) at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747) at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55) at
[jira] [Commented] (LUCENE-6505) NRT readers don't always reflect last commit
[ https://issues.apache.org/jira/browse/LUCENE-6505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14563407#comment-14563407 ] ASF subversion and git services commented on LUCENE-6505: - Commit 1682301 from [~mikemccand] in branch 'dev/branches/lucene_solr_5_2' [ https://svn.apache.org/r1682301 ] LUCENE-6505: NRT readers now reflect prior commit metadata NRT readers don't always reflect last commit Key: LUCENE-6505 URL: https://issues.apache.org/jira/browse/LUCENE-6505 Project: Lucene - Core Issue Type: Bug Reporter: Michael McCandless Assignee: Michael McCandless Fix For: Trunk, 5.2 Attachments: LUCENE-6505.patch Two cases here: * When I pull an NRT reader from IW, IR.getIndexCommit().getSegmentsFileName() should reflect what was last committed, but doesn't now * If I call IW.commit(), or IW.setCommitData(), but make no other changes, and then open a new NRT reader, I think it should reflect the new commit, but doesn't now
[jira] [Resolved] (LUCENE-6505) NRT readers don't always reflect last commit
[ https://issues.apache.org/jira/browse/LUCENE-6505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael McCandless resolved LUCENE-6505. Resolution: Fixed NRT readers don't always reflect last commit Key: LUCENE-6505 URL: https://issues.apache.org/jira/browse/LUCENE-6505 Project: Lucene - Core Issue Type: Bug Reporter: Michael McCandless Assignee: Michael McCandless Fix For: Trunk, 5.2 Attachments: LUCENE-6505.patch
[jira] [Resolved] (LUCENE-6510) TestContextQuery.testRandomContextQueryScoring failure
[ https://issues.apache.org/jira/browse/LUCENE-6510?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Areek Zillur resolved LUCENE-6510. -- Resolution: Fixed TestContextQuery.testRandomContextQueryScoring failure -- Key: LUCENE-6510 URL: https://issues.apache.org/jira/browse/LUCENE-6510 Project: Lucene - Core Issue Type: Bug Components: modules/spellchecker Reporter: Michael McCandless Assignee: Areek Zillur Fix For: Trunk, 5.3 {noformat} [junit4] Started J0 PID(8355@localhost). [junit4] Suite: org.apache.lucene.search.suggest.document.TestContextQuery [junit4] 2 NOTE: reproduce with: ant test -Dtestcase=TestContextQuery -Dtests.method=testRandomContextQueryScoring -Dtests.seed=F3A3A7E94AC9CB6D -Dtests.slow=true -Dtests.locale=es_ES -Dtests.timezone=Zulu -Dtests.asserts=true -Dtests.file.encoding=UTF-8 [junit4] ERROR 0.74s | TestContextQuery.testRandomContextQueryScoring [junit4] Throwable #1: java.lang.AssertionError: Expected: key:sugg_yafiszhkyq2 score:859398.0 context:evoyj6 Actual: key:sugg_mfbt11 score:841758.0 context:evoyj6 [junit4] Expected: sugg_yafiszhkyq2 [junit4] got: sugg_mfbt11 [junit4] at org.apache.lucene.search.suggest.document.TestSuggestField.assertSuggestions(TestSuggestField.java:608) [junit4] at org.apache.lucene.search.suggest.document.TestContextQuery.testRandomContextQueryScoring(TestContextQuery.java:528) [junit4] at java.lang.Thread.run(Thread.java:745)Throwable #2: java.lang.RuntimeException: MockDirectoryWrapper: cannot close: there are still open files: {_0.cfs=1} [junit4] at org.apache.lucene.store.MockDirectoryWrapper.close(MockDirectoryWrapper.java:749) [junit4] at org.apache.lucene.search.suggest.document.TestContextQuery.after(TestContextQuery.java:56) [junit4] at java.lang.Thread.run(Thread.java:745) [junit4] Caused by: java.lang.RuntimeException: unclosed IndexInput: _0.cfs [junit4] at org.apache.lucene.store.MockDirectoryWrapper.addFileHandle(MockDirectoryWrapper.java:624) [junit4] at 
org.apache.lucene.store.MockDirectoryWrapper.openInput(MockDirectoryWrapper.java:668) [junit4] at org.apache.lucene.codecs.lucene50.Lucene50CompoundReader.init(Lucene50CompoundReader.java:71) [junit4] at org.apache.lucene.codecs.lucene50.Lucene50CompoundFormat.getCompoundReader(Lucene50CompoundFormat.java:71) [junit4] at org.apache.lucene.index.SegmentCoreReaders.init(SegmentCoreReaders.java:93) [junit4] at org.apache.lucene.index.SegmentReader.init(SegmentReader.java:65) [junit4] at org.apache.lucene.index.ReadersAndUpdates.getReader(ReadersAndUpdates.java:132) [junit4] at org.apache.lucene.index.ReadersAndUpdates.getReadOnlyClone(ReadersAndUpdates.java:184) [junit4] at org.apache.lucene.index.StandardDirectoryReader.open(StandardDirectoryReader.java:99) [junit4] at org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:433) [junit4] at org.apache.lucene.index.RandomIndexWriter.getReader(RandomIndexWriter.java:342) [junit4] at org.apache.lucene.index.RandomIndexWriter.getReader(RandomIndexWriter.java:279) [junit4] at org.apache.lucene.search.suggest.document.TestContextQuery.testRandomContextQueryScoring(TestContextQuery.java:521) [junit4] ... 28 more [junit4] 2 NOTE: test params are: codec=Asserting(Lucene50), sim=RandomSimilarityProvider(queryNorm=false,coord=crazy): {suggest_field=DFR GBZ(0.3)}, locale=es_ES, timezone=Zulu [junit4] 2 NOTE: Linux 3.13.0-46-generic amd64/Oracle Corporation 1.8.0_40 (64-bit)/cpus=8,threads=1,free=388652544,total=504889344 [junit4] 2 NOTE: All tests run in this JVM: [TestContextQuery] [junit4] Completed [1/1] in 1.14s, 1 test, 1 error FAILURES! {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Reopened] (LUCENE-6510) TestContextQuery.testRandomContextQueryScoring failure
[ https://issues.apache.org/jira/browse/LUCENE-6510?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Areek Zillur reopened LUCENE-6510: -- TestContextQuery.testRandomContextQueryScoring failure -- Key: LUCENE-6510 URL: https://issues.apache.org/jira/browse/LUCENE-6510 Project: Lucene - Core Issue Type: Bug Components: modules/spellchecker Reporter: Michael McCandless Assignee: Areek Zillur Fix For: Trunk, 5.3
[jira] [Updated] (LUCENE-6481) Improve GeoPointField type to only visit high precision boundary terms
[ https://issues.apache.org/jira/browse/LUCENE-6481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nicholas Knize updated LUCENE-6481: --- Attachment: LUCENE-6481.patch Updates: * cache ranges across segments * only add ranges that are either within or cross the boundary of the bbox or polygon In exotic cases this latter fix drastically reduces the number of ranges added, since it avoids unnecessary exterior cells that only touch the boundary. The downside is that, since the random test doesn't currently use the TOLERANCE criteria, it occasionally fails due to computation error at 1e-7 precision. This can be tweaked in the next patch. Improve GeoPointField type to only visit high precision boundary terms --- Key: LUCENE-6481 URL: https://issues.apache.org/jira/browse/LUCENE-6481 Project: Lucene - Core Issue Type: Improvement Components: core/index Reporter: Nicholas Knize Attachments: LUCENE-6481.patch, LUCENE-6481.patch, LUCENE-6481.patch, LUCENE-6481.patch, LUCENE-6481.patch, LUCENE-6481.patch, LUCENE-6481.patch, LUCENE-6481_WIP.patch Current GeoPointField [LUCENE-6450 | https://issues.apache.org/jira/browse/LUCENE-6450] computes a set of ranges along the space-filling curve that represent a provided bounding box. This determines which terms to visit in the terms dictionary and which to skip. This is suboptimal for large bounding boxes as we may end up visiting all terms (which could be quite large). This incremental improvement is to improve GeoPointField to only visit high precision terms in boundary ranges and use the postings list for ranges that are completely within the target bounding box. A separate improvement is to switch over to auto-prefix and build an Automaton representing the bounding box. That can be tracked in a separate issue.
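The range-selection idea described above (use whole ranges that are fully within the query, refine only the ranges that cross the boundary, skip the rest) can be illustrated with a simplified 1-D analogue. This is a sketch of the concept, not the actual GeoPointField code; the class and enum names are illustrative.

```java
// Simplified 1-D analogue of the cell/query relation used when walking the
// space-filling curve: WITHIN cells contribute their whole postings range
// as-is, CROSSES cells must be split into higher-precision children, and
// DISJOINT cells are skipped entirely.
public class CellRelation {
    enum Relation { DISJOINT, WITHIN, CROSSES }

    static Relation relate(long cellMin, long cellMax, long qMin, long qMax) {
        if (cellMax < qMin || cellMin > qMax) {
            return Relation.DISJOINT;   // no overlap: skip this range
        }
        if (cellMin >= qMin && cellMax <= qMax) {
            return Relation.WITHIN;     // fully inside: use the whole range
        }
        return Relation.CROSSES;        // boundary cell: refine at higher precision
    }

    public static void main(String[] args) {
        System.out.println(relate(0, 3, 10, 20));   // DISJOINT
        System.out.println(relate(12, 15, 10, 20)); // WITHIN
        System.out.println(relate(8, 12, 10, 20));  // CROSSES
    }
}
```

Only the CROSSES case costs extra term visits, which is why restricting high-precision work to boundary cells cuts the number of ranges so sharply for large bounding boxes.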
[jira] [Commented] (LUCENE-6507) NativeFSLock.close() can invalidate other locks
[ https://issues.apache.org/jira/browse/LUCENE-6507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14563570#comment-14563570 ] Uwe Schindler commented on LUCENE-6507: --- +1 much better NativeFSLock.close() can invalidate other locks --- Key: LUCENE-6507 URL: https://issues.apache.org/jira/browse/LUCENE-6507 Project: Lucene - Core Issue Type: Bug Reporter: Simon Willnauer Priority: Blocker Fix For: 4.10.5, 5.2 Attachments: LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch the lock API in Lucene is super trappy, since the lock that we return from this API must first be obtained, and if we can't obtain it the lock should not be closed, since we might e.g. close the underlying channel in the NativeLock case, which releases all locks for this file on some operating systems. I think the makeLock method should try to obtain and only return a lock if we successfully obtained it. Not sure if it's possible everywhere, but we should at least make the documentation clear here.
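The obtain-then-return pattern proposed here can be sketched with plain java.nio file locks. `obtainLock` is a hypothetical helper, not the Lucene API: the caller either receives a lock that was actually acquired, or an exception; the channel is only closed on the failure path, where no held lock can be invalidated.

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class ObtainOrFail {
    // Hypothetical helper illustrating the proposed API shape: never hand
    // back an unobtained lock that the caller might later close().
    static FileLock obtainLock(Path path) throws IOException {
        FileChannel channel = FileChannel.open(path,
                StandardOpenOption.CREATE, StandardOpenOption.WRITE);
        FileLock lock = null;
        try {
            lock = channel.tryLock(); // null if another process holds the lock
        } finally {
            if (lock == null) {
                // Failure path: close the channel here, before any caller
                // sees it, so close() can never release someone else's lock.
                channel.close();
            }
        }
        if (lock == null) {
            throw new IOException("lock held elsewhere: " + path);
        }
        return lock;
    }

    public static void main(String[] args) throws IOException {
        Path p = Files.createTempFile("demo", ".lock");
        FileLock lock = obtainLock(p);
        System.out.println(lock.isValid()); // true while held
        lock.channel().close();             // releases the lock as well
        Files.deleteIfExists(p);
    }
}
```

The trap the issue describes is OS-level: on some operating systems, closing any channel on a file releases all of the process's locks on that file, so a close() on a never-obtained lock can silently invalidate a valid one.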
[jira] [Assigned] (LUCENE-6512) ToParentBlockJoinQuery fails with AIOOBE under certain circumstances
[ https://issues.apache.org/jira/browse/LUCENE-6512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Uwe Schindler reassigned LUCENE-6512: - Assignee: Uwe Schindler ToParentBlockJoinQuery fails with AIOOBE under certain circumstances Key: LUCENE-6512 URL: https://issues.apache.org/jira/browse/LUCENE-6512 Project: Lucene - Core Issue Type: Bug Components: modules/join Affects Versions: 4.10.4 Reporter: Uwe Schindler Assignee: Uwe Schindler Attachments: LUCENE-6512.patch I had a customer using BlockJoin with Solr. He executed a block join query and the following appeared in Solr's logs: {noformat} 28 May 2015 17:19:20 ERROR (SolrException.java:131) - java.lang.ArrayIndexOutOfBoundsException: -1 at org.apache.lucene.codecs.lucene40.BitVector.get(BitVector.java:149) at org.apache.lucene.search.join.ToParentBlockJoinQuery$BlockJoinScorer.nextDoc(ToParentBlockJoinQuery.java:293) at org.apache.lucene.search.Weight$DefaultBulkScorer.scoreAll(Weight.java:192) at org.apache.lucene.search.Weight$DefaultBulkScorer.score(Weight.java:163) at org.apache.lucene.search.BulkScorer.score(BulkScorer.java:35) at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:621) at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:297) at org.apache.solr.search.SolrIndexSearcher.buildAndRunCollectorChain(SolrIndexSearcher.java:209) at org.apache.solr.search.SolrIndexSearcher.getDocListNC(SolrIndexSearcher.java:1619) at org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1433) at org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:514) at org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:484) at org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:218) at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135) at org.apache.solr.core.SolrCore.execute(SolrCore.java:1976) at 
org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:777) at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:418) at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:207) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127) at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102) at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298) at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:852) at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:588) at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489) at java.lang.Thread.run(Thread.java:745) {noformat} I debugged this and found out when it happens: the last block of documents was not followed by a parent. If one of the child documents without a parent at the end of the index matches the inner query, the scorer calls nextSetBit() to find the next parent document. This returns -1. There is an assert afterwards that checks for -1, but in production code this is of course never executed. If the index has deletions, the false -1 is passed to acceptDocs and then triggers the above problem. We should change the assert to an IllegalStateException() that notifies the user when the orthogonality of parent/child blocks is broken. 
That way the user gets the information that his index is broken and contains child documents without a parent at the very end of a segment. I have seen this on 4.10.4. Maybe that's already fixed in 5.0, but I just open this here for investigation. This was clearly a problem in the index, but due to Solr's buggy implementation of parent/child documents (in contrast to Elasticsearch, you have to set the parent flag on your own - which is stupid!!!) this was not detected at indexing time. We should open an issue in Solr to fix this bad behaviour and make solr automatically add the parent
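The fix described in this issue - failing loudly instead of letting the -1 from nextSetBit() escape into acceptDocs - can be sketched with a plain java.util.BitSet standing in for the parent filter. `parentOf` is an illustrative helper, not the actual ToParentBlockJoinQuery code.

```java
import java.util.BitSet;

// Simplified model of the block-join invariant: parentBits marks parent
// documents; every child must be followed by a parent in the same segment.
public class ParentOfChild {
    /** Returns the parent doc id for a matching child, or fails loudly if
        the segment ends with child documents that have no parent. */
    static int parentOf(BitSet parentBits, int childDoc) {
        int parent = parentBits.nextSetBit(childDoc);
        if (parent == -1) {
            // Previously only an assert caught this; in production the -1
            // leaked into acceptDocs and caused the AIOOBE seen above.
            throw new IllegalStateException(
                "child document " + childDoc + " has no parent; index is broken");
        }
        return parent;
    }

    public static void main(String[] args) {
        BitSet parents = new BitSet();
        parents.set(3); // docs 0..2 are children of the parent at doc 3
        System.out.println(parentOf(parents, 1)); // 3
        // parentOf(parents, 4) would throw: children after the last parent
    }
}
```

With an assert, the `-1` check only runs when assertions are enabled (the `-ea` JVM flag), which is why the corruption went undetected in production until it surfaced as an ArrayIndexOutOfBoundsException.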
[jira] [Commented] (LUCENE-6510) TestContextQuery.testRandomContextQueryScoring failure
[ https://issues.apache.org/jira/browse/LUCENE-6510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=1456#comment-1456 ] ASF subversion and git services commented on LUCENE-6510: - Commit 1682289 from [~areek] in branch 'dev/branches/branch_5x' [ https://svn.apache.org/r1682289 ] LUCENE-6510: take path boosts into account when polling TopNSearcher queue TestContextQuery.testRandomContextQueryScoring failure -- Key: LUCENE-6510 URL: https://issues.apache.org/jira/browse/LUCENE-6510 Project: Lucene - Core Issue Type: Bug Components: modules/spellchecker Reporter: Michael McCandless Assignee: Areek Zillur Fix For: Trunk, 5.3
[jira] [Commented] (LUCENE-6510) TestContextQuery.testRandomContextQueryScoring failure
[ https://issues.apache.org/jira/browse/LUCENE-6510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14563334#comment-14563334 ] ASF subversion and git services commented on LUCENE-6510: - Commit 1682290 from [~areek] in branch 'dev/trunk' [ https://svn.apache.org/r1682290 ] LUCENE-6510: take path boosts into account when polling TopNSearcher queue TestContextQuery.testRandomContextQueryScoring failure -- Key: LUCENE-6510 URL: https://issues.apache.org/jira/browse/LUCENE-6510 Project: Lucene - Core Issue Type: Bug Components: modules/spellchecker Reporter: Michael McCandless Assignee: Areek Zillur Fix For: Trunk, 5.3
[jira] [Commented] (SOLR-6820) The sync on the VersionInfo bucket in DistributedUpdateProcesser#addDocument appears to be a large bottleneck when using replication.
[ https://issues.apache.org/jira/browse/SOLR-6820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14563336#comment-14563336 ] ASF subversion and git services commented on SOLR-6820: --- Commit 1682291 from [~thelabdude] in branch 'dev/branches/branch_5x' [ https://svn.apache.org/r1682291 ] SOLR-6820: fix numVersionBuckets name attribute in configsets The sync on the VersionInfo bucket in DistributedUpdateProcesser#addDocument appears to be a large bottleneck when using replication. - Key: SOLR-6820 URL: https://issues.apache.org/jira/browse/SOLR-6820 Project: Solr Issue Type: Sub-task Components: SolrCloud Reporter: Mark Miller Assignee: Timothy Potter Fix For: Trunk, 5.2 Attachments: SOLR-6820.patch, threads.png -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-6505) NRT readers don't always reflect last commit
[ https://issues.apache.org/jira/browse/LUCENE-6505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14563398#comment-14563398 ] ASF subversion and git services commented on LUCENE-6505: - Commit 1682299 from [~mikemccand] in branch 'dev/branches/branch_5x' [ https://svn.apache.org/r1682299 ] LUCENE-6505: NRT readers now reflect prior commit metadata NRT readers don't always reflect last commit Key: LUCENE-6505 URL: https://issues.apache.org/jira/browse/LUCENE-6505 Project: Lucene - Core Issue Type: Bug Reporter: Michael McCandless Assignee: Michael McCandless Fix For: Trunk, 5.2 Attachments: LUCENE-6505.patch Two cases here: * When I pull an NRT reader from IW, IR.getIndexCommit().getSegmentsFileName() should reflect what was last committed, but doesn't now * If I call IW.commit(), or IW.setCommitData(), but make no other changes, and then open a new NRT reader, I think it should reflect the new commit, but doesn't now -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-7602) Frequent MultiThreadedOCPTest failures on Jenkins
[ https://issues.apache.org/jira/browse/SOLR-7602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14563542#comment-14563542 ] Noble Paul commented on SOLR-7602: -- It's fine Frequent MultiThreadedOCPTest failures on Jenkins - Key: SOLR-7602 URL: https://issues.apache.org/jira/browse/SOLR-7602 Project: Solr Issue Type: Bug Reporter: Anshum Gupta Attachments: SOLR-7602.patch, SOLR-7602.patch The number of failed MultiThreadedOCPTest runs on Jenkins has gone up drastically since Apr 30, 2015. {code} REGRESSION: org.apache.solr.cloud.MultiThreadedOCPTest.test Error Message: Captured an uncaught exception in thread: Thread[id=6313, name=parallelCoreAdminExecutor-1988-thread-15, state=RUNNABLE, group=TGRP-MultiThreadedOCPTest] Stack Trace: com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught exception in thread: Thread[id=6313, name=parallelCoreAdminExecutor-1988-thread-15, state=RUNNABLE, group=TGRP-MultiThreadedOCPTest] at __randomizedtesting.SeedInfo.seed([1FD11A82D96D185B:97852558779175A3]:0) Caused by: java.lang.AssertionError: Too many closes on SolrCore at __randomizedtesting.SeedInfo.seed([1FD11A82D96D185B]:0) at org.apache.solr.core.SolrCore.close(SolrCore.java:1138) at org.apache.solr.common.util.IOUtils.closeQuietly(IOUtils.java:31) at org.apache.solr.core.CoreContainer.create(CoreContainer.java:535) at org.apache.solr.core.CoreContainer.create(CoreContainer.java:494) at org.apache.solr.handler.admin.CoreAdminHandler.handleCreateAction(CoreAdminHandler.java:598) at org.apache.solr.handler.admin.CoreAdminHandler.handleRequestInternal(CoreAdminHandler.java:212) at org.apache.solr.handler.admin.CoreAdminHandler$ParallelCoreAdminHandlerThread.run(CoreAdminHandler.java:1219) at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$1.run(ExecutorUtil.java:148) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) {code} Last failure: Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/12665/ -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
Re: [VOTE] 5.2.0 RC1
Sure, I'll re-spin once you get it into the branch. Thanks for fixing this! On Thu, May 28, 2015 at 7:16 AM, Robert Muir rcm...@gmail.com wrote: I think we should respin due to https://issues.apache.org/jira/browse/LUCENE-6507. NativeFSLockFactory has race conditions, which can cause valid locks to become invalidated by another thread in some situations. We already have a test + fix but JIRA is extremely slow and the issue needs more review and testing on different operating systems. On Wed, May 27, 2015 at 4:04 PM, Anshum Gupta ans...@anshumgupta.net wrote: Please vote for the first release candidate for Lucene/Solr 5.2.0 The artifacts can be downloaded from: https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-5.2.0-RC1-rev1682085 You can run the smoke tester directly with this command: python3 -u dev-tools/scripts/smokeTestRelease.py https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-5.2.0-RC1-rev1682085 I've run the test suite a few times, the smoke tester, basic collection creation, startup, indexing, and query. Here's my +1. SUCCESS! [0:31:23.943785] P.S: I hit failure in MultiThreadedOCPTest 2 times while creating the RC, so I'm looking at what's triggering it in parallel to make sure that we're not overlooking a problem. As it has been failing on Jenkins frequently, I've created SOLR-7602 to track this. -- Anshum Gupta - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org -- Anshum Gupta
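The LUCENE-6507 race mentioned above comes down to channel lifetime: on some operating systems, closing a file descriptor can release locks on that file held through other descriptors. A minimal JDK-only sketch of the safe pattern (this is not Lucene's NativeFSLockFactory) keeps the lock's lifetime strictly inside a single channel's lifetime:

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Minimal sketch (JDK classes only, not Lucene's NativeFSLockFactory):
// the channel backing a native lock must stay open for exactly as long
// as the lock is held, and the lock is released before the channel closes.
public class NativeLockSketch {
    /** Returns true if the lock could be acquired (and was then released). */
    public static boolean acquireOnce(Path lockFile) throws IOException {
        try (FileChannel ch = FileChannel.open(lockFile,
                StandardOpenOption.CREATE, StandardOpenOption.WRITE)) {
            FileLock lock = ch.tryLock();   // null => held by another process
            if (lock == null) {
                return false;
            }
            lock.release();                 // release before the channel closes
            return true;
        }
    }

    public static void main(String[] args) throws IOException {
        Path p = Files.createTempFile("native-lock-sketch", ".lock");
        System.out.println(acquireOnce(p));
    }
}
```

The bug class being fixed is any code path where a second channel to the same lock file is opened and closed while the first lock is held — that close can silently invalidate the first lock.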
[jira] [Commented] (SOLR-7570) Config APIs should not modify the ConfigSet
[ https://issues.apache.org/jira/browse/SOLR-7570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14563258#comment-14563258 ] Noble Paul commented on SOLR-7570: -- * Do we have a plan for backcompat? What happens to the existing configoverlay.json etc.? * What happens if I really wish to store the config changes shared between collections? It is a common use case. Config APIs should not modify the ConfigSet --- Key: SOLR-7570 URL: https://issues.apache.org/jira/browse/SOLR-7570 Project: Solr Issue Type: Improvement Reporter: Tomás Fernández Löbbe Attachments: SOLR-7570.patch Originally discussed here: http://mail-archives.apache.org/mod_mbox/lucene-dev/201505.mbox/%3CCAMJgJxSXCHxDzJs5-C-pKFDEBQD6JbgxB=-xp7u143ekmgp...@mail.gmail.com%3E The ConfigSet used to create a collection should be read-only. Changes made via any of the Config APIs should only be applied to the collection where the operation is done and not to other collections that may be using the same ConfigSet. As discussed in the dev list: When a collection is created we should have two things, an immutable part (the ConfigSet) and a mutable part (configoverlay, generated schema, etc). The ConfigSet will still be placed in ZooKeeper under /configs but the mutable part should be placed under /collections/$COLLECTION_NAME/… [~romseygeek] suggested: {quote} A nice way of doing it would be to make it part of the SolrResourceLoader interface. The ZK resource loader could check in the collection-specific zknode first, and then under configs/, and we could add a writeResource() method that writes to the collection-specific node as well. Then all config I/O goes via the resource loader, and we have a way of keeping certain parts immutable. {quote} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (LUCENE-6507) NativeFSLock.close() can invalidate other locks
[ https://issues.apache.org/jira/browse/LUCENE-6507?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Muir updated LUCENE-6507: Attachment: LUCENE-6507.patch Updated patch removing my changes to SimpleFSLockFactory.isLocked() I didn't mean to change the semantics for this _totally unnecessary method_ (unused by Lucene). Of course, no tests fail either way, and this is bogus unnecessary stuff in our locking api. It's a search engine library, not a file-locking library. IndexWriter.isLocked needs to die, like, as fast as possible, as well as Lock.isLocked. We can't even get the basics right; I don't know why we have stupid methods like this. NativeFSLock.close() can invalidate other locks --- Key: LUCENE-6507 URL: https://issues.apache.org/jira/browse/LUCENE-6507 Project: Lucene - Core Issue Type: Bug Reporter: Simon Willnauer Priority: Blocker Fix For: 4.10.5, 5.2 Attachments: LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch the lock API in Lucene is super trappy since the lock that we return from this API must first be obtained, and if we can't obtain it the lock should not be closed, since we might, e.g., close the underlying channel in the NativeLock case, which releases all locks for this file on some operating systems. I think the makeLock method should try to obtain and only return a lock if we successfully obtained it. Not sure if it's possible everywhere but we should at least make the documentation clear here. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-6820) The sync on the VersionInfo bucket in DistributedUpdateProcesser#addDocument appears to be a large bottleneck when using replication.
[ https://issues.apache.org/jira/browse/SOLR-6820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14563320#comment-14563320 ] ASF subversion and git services commented on SOLR-6820: --- Commit 1682288 from [~thelabdude] in branch 'dev/trunk' [ https://svn.apache.org/r1682288 ] SOLR-6820: fix numVersionBuckets name attribute in configsets The sync on the VersionInfo bucket in DistributedUpdateProcesser#addDocument appears to be a large bottleneck when using replication. - Key: SOLR-6820 URL: https://issues.apache.org/jira/browse/SOLR-6820 Project: Solr Issue Type: Sub-task Components: SolrCloud Reporter: Mark Miller Assignee: Timothy Potter Fix For: Trunk, 5.2 Attachments: SOLR-6820.patch, threads.png -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-7602) Frequent MultiThreadedOCPTest failures on Jenkins
[ https://issues.apache.org/jira/browse/SOLR-7602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14563376#comment-14563376 ] Anshum Gupta commented on SOLR-7602: hmmm, I'm looking at other code paths now. Plan to get this into the next 5.2 RC. Frequent MultiThreadedOCPTest failures on Jenkins - Key: SOLR-7602 URL: https://issues.apache.org/jira/browse/SOLR-7602 Project: Solr Issue Type: Bug Reporter: Anshum Gupta The number of failed MultiThreadedOCPTest runs on Jenkins has gone up drastically since Apr 30, 2015. {code} REGRESSION: org.apache.solr.cloud.MultiThreadedOCPTest.test Error Message: Captured an uncaught exception in thread: Thread[id=6313, name=parallelCoreAdminExecutor-1988-thread-15, state=RUNNABLE, group=TGRP-MultiThreadedOCPTest] Stack Trace: com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught exception in thread: Thread[id=6313, name=parallelCoreAdminExecutor-1988-thread-15, state=RUNNABLE, group=TGRP-MultiThreadedOCPTest] at __randomizedtesting.SeedInfo.seed([1FD11A82D96D185B:97852558779175A3]:0) Caused by: java.lang.AssertionError: Too many closes on SolrCore at __randomizedtesting.SeedInfo.seed([1FD11A82D96D185B]:0) at org.apache.solr.core.SolrCore.close(SolrCore.java:1138) at org.apache.solr.common.util.IOUtils.closeQuietly(IOUtils.java:31) at org.apache.solr.core.CoreContainer.create(CoreContainer.java:535) at org.apache.solr.core.CoreContainer.create(CoreContainer.java:494) at org.apache.solr.handler.admin.CoreAdminHandler.handleCreateAction(CoreAdminHandler.java:598) at org.apache.solr.handler.admin.CoreAdminHandler.handleRequestInternal(CoreAdminHandler.java:212) at org.apache.solr.handler.admin.CoreAdminHandler$ParallelCoreAdminHandlerThread.run(CoreAdminHandler.java:1219) at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$1.run(ExecutorUtil.java:148) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) {code} Last failure: Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/12665/ -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (LUCENE-6507) NativeFSLock.close() can invalidate other locks
[ https://issues.apache.org/jira/browse/LUCENE-6507?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael McCandless updated LUCENE-6507: --- Attachment: LUCENE-6507.patch bq. IMO we should deliver an exception if you do this. Good idea, I changed it to throw LockObtainFailedExc if you (stupidly) try to call .obtain twice on a single instance, and added test cases for the 3 core LockFactory impls (minus NoLockFactory). NativeFSLock.close() can invalidate other locks --- Key: LUCENE-6507 URL: https://issues.apache.org/jira/browse/LUCENE-6507 Project: Lucene - Core Issue Type: Bug Reporter: Simon Willnauer Priority: Blocker Fix For: 4.10.5, 5.2 Attachments: LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch the lock API in Lucene is super trappy since the lock that we return form this API must first be obtained and if we can't obtain it the lock should not be closed since we might ie. close the underlying channel in the NativeLock case which releases all lock for this file on some operating systems. I think the makeLock method should try to obtain and only return a lock if we successfully obtained it. Not sure if it's possible everywhere but we should at least make the documentation clear here. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
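A sketch of the guard described in the comment above — calling obtain twice on the same instance fails fast. This is a simplified, in-memory assumption, not the actual patch: Lucene's change throws LockObtainFailedException (a real class in org.apache.lucene.store); a plain IOException is used here to keep the example self-contained.

```java
import java.io.IOException;

// Simplified guard: a Lock instance may be obtained at most once
// between closes. Lucene's patch throws LockObtainFailedException
// here; a plain IOException keeps this sketch self-contained.
public class SingleObtainLock {
    private boolean obtained = false;

    public synchronized void obtain() throws IOException {
        if (obtained) {
            // A second obtain() on the same instance fails fast instead
            // of silently re-locking (or worse, double-releasing later).
            throw new IOException("this Lock instance was already obtained");
        }
        // ... acquire the underlying OS-level lock here ...
        obtained = true;
    }

    public synchronized void close() {
        // ... release the underlying OS-level lock here ...
        obtained = false;
    }
}
```

Failing fast on the second obtain turns a silent state-corruption bug into an immediate, testable exception — the test cases mentioned above for the three core LockFactory impls assert exactly this.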
[jira] [Commented] (LUCENE-6508) Simplify Directory/lock api
[ https://issues.apache.org/jira/browse/LUCENE-6508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14563469#comment-14563469 ] Michael McCandless commented on LUCENE-6508: +1 to this plan Simplify Directory/lock api --- Key: LUCENE-6508 URL: https://issues.apache.org/jira/browse/LUCENE-6508 Project: Lucene - Core Issue Type: Bug Reporter: Robert Muir See LUCENE-6507 for some background. In general it would be great if you can just acquire an immutable lock (or you get a failure) and then you close that to release it. Today the API might be too much for what is needed by IW. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
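The API shape proposed in this issue — acquire an immutable lock or fail, then close it to release — can be sketched with JDK classes only (this is an assumption about the eventual design, not the Lucene implementation): obtaining either succeeds and hands back a held lock or throws, so there is no unlocked Lock object to misuse, and close() is the only way to release.

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Hedged sketch of the acquire-or-fail proposal: obtain() never returns
// an unheld lock, and the lock is immutable once handed back.
public final class ObtainedLock implements AutoCloseable {
    private final FileChannel channel;
    private final FileLock lock;

    private ObtainedLock(FileChannel channel, FileLock lock) {
        this.channel = channel;
        this.lock = lock;
    }

    /** Acquire-or-fail: returns a held lock, or throws. */
    public static ObtainedLock obtain(Path lockFile) throws IOException {
        FileChannel ch = FileChannel.open(lockFile,
                StandardOpenOption.CREATE, StandardOpenOption.WRITE);
        FileLock lock = null;
        try {
            lock = ch.tryLock();
        } finally {
            if (lock == null) {
                ch.close();   // don't leak the channel on failure
            }
        }
        if (lock == null) {
            throw new IOException("lock is held elsewhere: " + lockFile);
        }
        return new ObtainedLock(ch, lock);
    }

    @Override
    public void close() throws IOException {
        try {
            lock.release();   // release before closing the backing channel
        } finally {
            channel.close();
        }
    }
}
```

With this shape, IndexWriter-style callers can simply write `try (ObtainedLock l = ObtainedLock.obtain(path)) { ... }`: there is no obtain/isLocked/release state machine to get wrong, which is the simplification the issue asks for.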
[jira] [Commented] (LUCENE-6510) TestContextQuery.testRandomContextQueryScoring failure
[ https://issues.apache.org/jira/browse/LUCENE-6510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14563470#comment-14563470 ] Michael McCandless commented on LUCENE-6510: Thanks [~areek]! TestContextQuery.testRandomContextQueryScoring failure -- Key: LUCENE-6510 URL: https://issues.apache.org/jira/browse/LUCENE-6510 Project: Lucene - Core Issue Type: Bug Components: modules/spellchecker Reporter: Michael McCandless Assignee: Areek Zillur Fix For: Trunk, 5.3 {noformat} [junit4] Started J0 PID(8355@localhost). [junit4] Suite: org.apache.lucene.search.suggest.document.TestContextQuery [junit4] 2 NOTE: reproduce with: ant test -Dtestcase=TestContextQuery -Dtests.method=testRandomContextQueryScoring -Dtests.seed=F3A3A7E94AC9CB6D -Dtests.slow=true -Dtests.locale=es_ES -Dtests.timezone=Zulu -Dtests.asserts=true -Dtests.file.encoding=UTF-8 [junit4] ERROR 0.74s | TestContextQuery.testRandomContextQueryScoring [junit4] Throwable #1: java.lang.AssertionError: Expected: key:sugg_yafiszhkyq2 score:859398.0 context:evoyj6 Actual: key:sugg_mfbt11 score:841758.0 context:evoyj6 [junit4] Expected: sugg_yafiszhkyq2 [junit4] got: sugg_mfbt11 [junit4] at org.apache.lucene.search.suggest.document.TestSuggestField.assertSuggestions(TestSuggestField.java:608) [junit4] at org.apache.lucene.search.suggest.document.TestContextQuery.testRandomContextQueryScoring(TestContextQuery.java:528) [junit4] at java.lang.Thread.run(Thread.java:745)Throwable #2: java.lang.RuntimeException: MockDirectoryWrapper: cannot close: there are still open files: {_0.cfs=1} [junit4] at org.apache.lucene.store.MockDirectoryWrapper.close(MockDirectoryWrapper.java:749) [junit4] at org.apache.lucene.search.suggest.document.TestContextQuery.after(TestContextQuery.java:56) [junit4] at java.lang.Thread.run(Thread.java:745) [junit4] Caused by: java.lang.RuntimeException: unclosed IndexInput: _0.cfs [junit4] at 
org.apache.lucene.store.MockDirectoryWrapper.addFileHandle(MockDirectoryWrapper.java:624) [junit4] at org.apache.lucene.store.MockDirectoryWrapper.openInput(MockDirectoryWrapper.java:668) [junit4] at org.apache.lucene.codecs.lucene50.Lucene50CompoundReader.init(Lucene50CompoundReader.java:71) [junit4] at org.apache.lucene.codecs.lucene50.Lucene50CompoundFormat.getCompoundReader(Lucene50CompoundFormat.java:71) [junit4] at org.apache.lucene.index.SegmentCoreReaders.init(SegmentCoreReaders.java:93) [junit4] at org.apache.lucene.index.SegmentReader.init(SegmentReader.java:65) [junit4] at org.apache.lucene.index.ReadersAndUpdates.getReader(ReadersAndUpdates.java:132) [junit4] at org.apache.lucene.index.ReadersAndUpdates.getReadOnlyClone(ReadersAndUpdates.java:184) [junit4] at org.apache.lucene.index.StandardDirectoryReader.open(StandardDirectoryReader.java:99) [junit4] at org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:433) [junit4] at org.apache.lucene.index.RandomIndexWriter.getReader(RandomIndexWriter.java:342) [junit4] at org.apache.lucene.index.RandomIndexWriter.getReader(RandomIndexWriter.java:279) [junit4] at org.apache.lucene.search.suggest.document.TestContextQuery.testRandomContextQueryScoring(TestContextQuery.java:521) [junit4] ... 28 more [junit4] 2 NOTE: test params are: codec=Asserting(Lucene50), sim=RandomSimilarityProvider(queryNorm=false,coord=crazy): {suggest_field=DFR GBZ(0.3)}, locale=es_ES, timezone=Zulu [junit4] 2 NOTE: Linux 3.13.0-46-generic amd64/Oracle Corporation 1.8.0_40 (64-bit)/cpus=8,threads=1,free=388652544,total=504889344 [junit4] 2 NOTE: All tests run in this JVM: [TestContextQuery] [junit4] Completed [1/1] in 1.14s, 1 test, 1 error FAILURES! {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-6507) NativeFSLock.close() can invalidate other locks
[ https://issues.apache.org/jira/browse/LUCENE-6507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14563883#comment-14563883 ] Robert Muir commented on LUCENE-6507: - Feel free to propose your use case, where there is valid code not handling the previous IOException, with some valid use case for calling obtain() on an already-obtained lock. Just test bugs. NativeFSLock.close() can invalidate other locks --- Key: LUCENE-6507 URL: https://issues.apache.org/jira/browse/LUCENE-6507 Project: Lucene - Core Issue Type: Bug Reporter: Simon Willnauer Priority: Blocker Fix For: 4.10.5, 5.2 Attachments: LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch the lock API in Lucene is super trappy since the lock that we return form this API must first be obtained and if we can't obtain it the lock should not be closed since we might ie. close the underlying channel in the NativeLock case which releases all lock for this file on some operating systems. I think the makeLock method should try to obtain and only return a lock if we successfully obtained it. Not sure if it's possible everywhere but we should at least make the documentation clear here. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-6507) NativeFSLock.close() can invalidate other locks
[ https://issues.apache.org/jira/browse/LUCENE-6507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14563885#comment-14563885 ] Mark Miller commented on LUCENE-6507: - You guys are just howling into the air...please reread and or get a clue. NativeFSLock.close() can invalidate other locks --- Key: LUCENE-6507 URL: https://issues.apache.org/jira/browse/LUCENE-6507 Project: Lucene - Core Issue Type: Bug Reporter: Simon Willnauer Priority: Blocker Fix For: 4.10.5, 5.2 Attachments: LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch, LUCENE-6507.patch the lock API in Lucene is super trappy since the lock that we return form this API must first be obtained and if we can't obtain it the lock should not be closed since we might ie. close the underlying channel in the NativeLock case which releases all lock for this file on some operating systems. I think the makeLock method should try to obtain and only return a lock if we successfully obtained it. Not sure if it's possible everywhere but we should at least make the documentation clear here. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-6487) Add WGS84 capability to geo3d support
[ https://issues.apache.org/jira/browse/LUCENE-6487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14563967#comment-14563967 ] ASF subversion and git services commented on LUCENE-6487: - Commit 1682357 from [~dsmiley] in branch 'dev/branches/lucene6487' [ https://svn.apache.org/r1682357 ] LUCENE-6487: Geo3D with WGS84 patch from Karl: fix bug in GeoPoint.getLongitude with test from https://reviews.apache.org/r/34744/diff/raw/ Add WGS84 capability to geo3d support - Key: LUCENE-6487 URL: https://issues.apache.org/jira/browse/LUCENE-6487 Project: Lucene - Core Issue Type: Improvement Components: modules/spatial Reporter: Karl Wright Attachments: LUCENE-6487.patch, LUCENE-6487.patch, LUCENE-6487.patch, LUCENE-6487.patch WGS84 compatibility has been requested for geo3d. This involves working with an ellipsoid rather than a unit sphere. The general formula for an ellipsoid is: x^2/a^2 + y^2/b^2 + z^2/c^2 = 1
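The ellipsoid formula quoted in the issue description can be sketched as a small membership check. This is an illustrative stand-alone snippet, not Lucene geo3d code; the class and method names are hypothetical, and the WGS84 semi-axis values in main are the commonly published constants (a = b ≈ 6378137.0 m, c ≈ 6356752.314245 m).

```java
// Sketch: evaluating x^2/a^2 + y^2/b^2 + z^2/c^2 for a point (x, y, z)
// against an ellipsoid with semi-axes a, b, c. A result of 1.0 means the
// point lies on the surface; < 1.0 means inside, > 1.0 outside.
public class EllipsoidCheck {

    // Evaluate the left-hand side of the ellipsoid equation.
    static double evaluate(double x, double y, double z,
                           double a, double b, double c) {
        return (x * x) / (a * a) + (y * y) / (b * b) + (z * z) / (c * c);
    }

    // On-surface test with a small tolerance for floating-point error.
    static boolean onSurface(double x, double y, double z,
                             double a, double b, double c) {
        return Math.abs(evaluate(x, y, z, a, b, c) - 1.0) < 1e-12;
    }

    public static void main(String[] args) {
        // Assumed WGS84 constants: equatorial semi-axis a = b, polar semi-axis c.
        double a = 6378137.0, c = 6356752.314245;
        // A point on the equator (x = a, y = 0, z = 0) lies on the surface.
        System.out.println(onSurface(a, 0, 0, a, a, c)); // prints true
    }
}
```

The unit sphere is the special case a = b = c = 1, which is why moving to WGS84 means replacing spherical identities throughout geo3d rather than rescaling.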
[jira] [Commented] (SOLR-7599) Remove cruft from SolrCloud tests
[ https://issues.apache.org/jira/browse/SOLR-7599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14563998#comment-14563998 ] David Smiley commented on SOLR-7599: Cleaning up crap like this is usually a thankless task, but I hereby thank you for it! Remove cruft from SolrCloud tests - Key: SOLR-7599 URL: https://issues.apache.org/jira/browse/SOLR-7599 Project: Solr Issue Type: Task Components: SolrCloud, Tests Reporter: Shalin Shekhar Mangar Assignee: Shalin Shekhar Mangar Fix For: Trunk, 5.3 Attachments: SOLR-7599.patch I see many tests which blindly have distribSetUp and distribTearDown methods setting a variety of options and system properties that aren't required anymore. This is because some base test classes have been refactored such that these options are redundant. In other cases, people have copied the structure of tests blindly instead of understanding what each parameter does. Let's try to remove the unnecessary config params from such tests.
[JENKINS] Lucene-Solr-NightlyTests-5.2 - Build # 6 - Still Failing
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.2/6/ 4 tests failed. FAILED: org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test Error Message: Captured an uncaught exception in thread: Thread[id=7854, name=collection4, state=RUNNABLE, group=TGRP-CollectionsAPIDistributedZkTest] Stack Trace: com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught exception in thread: Thread[id=7854, name=collection4, state=RUNNABLE, group=TGRP-CollectionsAPIDistributedZkTest] Caused by: java.lang.RuntimeException: org.apache.solr.client.solrj.SolrServerException: No live SolrServers available to handle this request:[http://127.0.0.1:35784, http://127.0.0.1:46980, http://127.0.0.1:50962, http://127.0.0.1:37941, http://127.0.0.1:52086] at __randomizedtesting.SeedInfo.seed([A64FFB27BB2DE866]:0) at org.apache.solr.cloud.CollectionsAPIDistributedZkTest$1CollectionThread.run(CollectionsAPIDistributedZkTest.java:887) Caused by: org.apache.solr.client.solrj.SolrServerException: No live SolrServers available to handle this request:[http://127.0.0.1:35784, http://127.0.0.1:46980, http://127.0.0.1:50962, http://127.0.0.1:37941, http://127.0.0.1:52086] at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:355) at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1086) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:856) at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:799) at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1220) at org.apache.solr.cloud.AbstractFullDistribZkTestBase.createCollection(AbstractFullDistribZkTestBase.java:1621) at org.apache.solr.cloud.AbstractFullDistribZkTestBase.createCollection(AbstractFullDistribZkTestBase.java:1642) at org.apache.solr.cloud.CollectionsAPIDistributedZkTest$1CollectionThread.run(CollectionsAPIDistributedZkTest.java:877) Caused by: 
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at http://127.0.0.1:46980: KeeperErrorCode = Session expired for /overseer/collection-queue-work/qn- at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:560) at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:235) at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:227) at org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:376) at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:328) ... 7 more FAILED: junit.framework.TestSuite.org.apache.solr.cloud.CollectionsAPIDistributedZkTest Error Message: ERROR: SolrIndexSearcher opens=497 closes=382 Stack Trace: java.lang.AssertionError: ERROR: SolrIndexSearcher opens=497 closes=382 at __randomizedtesting.SeedInfo.seed([A64FFB27BB2DE866]:0) at org.junit.Assert.fail(Assert.java:93) at org.apache.solr.SolrTestCaseJ4.endTrackingSearchers(SolrTestCaseJ4.java:496) at org.apache.solr.SolrTestCaseJ4.afterClass(SolrTestCaseJ4.java:232) at sun.reflect.GeneratedMethodAccessor48.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:799) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at
[jira] [Updated] (SOLR-7602) Frequent MultiThreadedOCPTest failures on Jenkins
[ https://issues.apache.org/jira/browse/SOLR-7602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anshum Gupta updated SOLR-7602: --- Fix Version/s: 5.2 Frequent MultiThreadedOCPTest failures on Jenkins - Key: SOLR-7602 URL: https://issues.apache.org/jira/browse/SOLR-7602 Project: Solr Issue Type: Bug Reporter: Anshum Gupta Fix For: 5.2 Attachments: SOLR-7602.patch, SOLR-7602.patch The number of failed MultiThreadedOCPTest runs on Jenkins has gone up drastically since Apr 30, 2015. {code} REGRESSION: org.apache.solr.cloud.MultiThreadedOCPTest.test Error Message: Captured an uncaught exception in thread: Thread[id=6313, name=parallelCoreAdminExecutor-1988-thread-15, state=RUNNABLE, group=TGRP-MultiThreadedOCPTest] Stack Trace: com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught exception in thread: Thread[id=6313, name=parallelCoreAdminExecutor-1988-thread-15, state=RUNNABLE, group=TGRP-MultiThreadedOCPTest] at __randomizedtesting.SeedInfo.seed([1FD11A82D96D185B:97852558779175A3]:0) Caused by: java.lang.AssertionError: Too many closes on SolrCore at __randomizedtesting.SeedInfo.seed([1FD11A82D96D185B]:0) at org.apache.solr.core.SolrCore.close(SolrCore.java:1138) at org.apache.solr.common.util.IOUtils.closeQuietly(IOUtils.java:31) at org.apache.solr.core.CoreContainer.create(CoreContainer.java:535) at org.apache.solr.core.CoreContainer.create(CoreContainer.java:494) at org.apache.solr.handler.admin.CoreAdminHandler.handleCreateAction(CoreAdminHandler.java:598) at org.apache.solr.handler.admin.CoreAdminHandler.handleRequestInternal(CoreAdminHandler.java:212) at org.apache.solr.handler.admin.CoreAdminHandler$ParallelCoreAdminHandlerThread.run(CoreAdminHandler.java:1219) at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$1.run(ExecutorUtil.java:148) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) {code} Last failure: Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/12665/
[jira] [Commented] (LUCENE-6507) NativeFSLock.close() can invalidate other locks
[ https://issues.apache.org/jira/browse/LUCENE-6507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14563882#comment-14563882 ] Uwe Schindler commented on LUCENE-6507: --- Double obtains were not really supported, and the behaviour was, mhm, undefined. So I think it's better that Robert and Mike fixed it to be consistent. Unfortunately we just missed fixing this test, but that's already fixed!
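The behavior change discussed above (a double obtain now fails loudly instead of returning false) can be modeled with a toy example. This is not Lucene source; the class, method, and exception choices here are illustrative assumptions sketching the "obtain or fail, close to release" semantics.

```java
import java.util.HashSet;
import java.util.Set;

// Toy model of the discussed lock semantics: obtainLock either succeeds or
// throws, and a second obtain on an already-held name throws rather than
// returning false. Names (ToyLockFactory, obtainLock) are hypothetical.
public class ToyLockFactory {
    private final Set<String> held = new HashSet<>();

    public ToyLock obtainLock(String name) {
        if (!held.add(name)) {
            // Double obtain: fail loudly instead of returning false.
            throw new IllegalStateException("Lock already held: " + name);
        }
        return new ToyLock(this, name);
    }

    public static final class ToyLock implements AutoCloseable {
        private final ToyLockFactory factory;
        private final String name;
        ToyLock(ToyLockFactory factory, String name) {
            this.factory = factory;
            this.name = name;
        }
        // Closing releases the lock; the name can then be obtained again.
        @Override public void close() {
            factory.held.remove(name);
        }
    }

    public static void main(String[] args) {
        ToyLockFactory dir = new ToyLockFactory();
        try (ToyLock lock = dir.obtainLock("write.lock")) {
            try {
                dir.obtainLock("write.lock"); // double obtain
            } catch (IllegalStateException e) {
                System.out.println("double obtain rejected");
            }
        }
        dir.obtainLock("write.lock").close(); // fine again after close
    }
}
```

The point of the throw-instead-of-false design is that callers cannot accidentally ignore a failed obtain and then close a lock they never held, which is exactly the trap the NativeFSLock bug exposed.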
[jira] [Updated] (LUCENE-6508) Simplify Directory/lock api
[ https://issues.apache.org/jira/browse/LUCENE-6508?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Uwe Schindler updated LUCENE-6508: -- Attachment: LUCENE-6508-deadcode1.patch First patch removing this dead code. Simplify Directory/lock api --- Key: LUCENE-6508 URL: https://issues.apache.org/jira/browse/LUCENE-6508 Project: Lucene - Core Issue Type: Bug Reporter: Robert Muir Assignee: Uwe Schindler Attachments: LUCENE-6508-deadcode1.patch See LUCENE-6507 for some background. In general it would be great if you can just acquire an immutable lock (or you get a failure) and then you close that to release it. Today the API might be too much for what is needed by IW.
[jira] [Commented] (SOLR-7576) Implement RequestHandler in Javascript
[ https://issues.apache.org/jira/browse/SOLR-7576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14564099#comment-14564099 ] David Smiley commented on SOLR-7576: This duplicates SOLR-5005, but the approach appears a little different. Perhaps you forgot about that issue? I admit I've been too busy to finish that up; not that it doesn't work, but it could use more polish and tests. Implement RequestHandler in Javascript -- Key: SOLR-7576 URL: https://issues.apache.org/jira/browse/SOLR-7576 Project: Solr Issue Type: New Feature Reporter: Noble Paul Attachments: SOLR-7576.patch Solr now supports dynamic loading (SOLR-7073) of components, and it is secured in SOLR-7126. We can extend the same functionality to JS as well. Example of creating a RequestHandler: {code:javascript} curl http://localhost:8983/solr/collection1/config -H 'Content-type:application/json' -d '{ "create-requesthandler": { "name": "jshandler", "class": "solr.JSRequestHandler", "defaults": { "js": "myreqhandlerjs", //this is the name of the blob in .system collection "version": 3, "sig": "mW1Gwtz2QazjfVdrLFHfbGwcr8xzFYgUOLu68LHqWRDvLG0uLcy1McQ+AzVmeZFBf1yLPDEHBWJb5KXr8bdbHN/PYgUB1nsr9pk4EFyD9KfJ8TqeH/ijQ9waa/vjqyiKEI9U550EtSzruLVZ32wJ7smvV0fj2YYhrUaaPzOn9g0=" } } }' {code} To make this work: * Solr should be started with {{-Denable.runtime.lib=true}} * The javascript must be loaded to the {{.system}} collection using the blob store API * Configure the requesthandler with the JS blob name and version * Sign the javascript and configure the signature if security is enabled The {{JSRequestHandler}} is implicitly defined and it can be accessed by hitting {{/js/jsname/version}}
[jira] [Commented] (LUCENE-6507) NativeFSLock.close() can invalidate other locks
[ https://issues.apache.org/jira/browse/LUCENE-6507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14563896#comment-14563896 ] Mark Miller commented on LUCENE-6507: - I'll lend a hand and spell it out. Anshum asked if I'd look at this issue as it involves hdfs and the release. I looked at it. I found that: bq. Just a change in API behavior. Previously a double obtain was returning false and now it's throwing an exception. This is true. I don't care how smart you think you are. By then, Robert had made a further commit. Beyond that, not much to see here. Chill out.
[jira] [Commented] (LUCENE-6507) NativeFSLock.close() can invalidate other locks
[ https://issues.apache.org/jira/browse/LUCENE-6507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14563894#comment-14563894 ] Robert Muir commented on LUCENE-6507: - On the contrary, I already fixed the test (and Mike had already added an explicit separate test for double-obtain for HDFS). Looks like the ones howling into the air are... the peanut gallery, not the do-ers.
[jira] [Commented] (LUCENE-6507) NativeFSLock.close() can invalidate other locks
[ https://issues.apache.org/jira/browse/LUCENE-6507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14563893#comment-14563893 ] Anshum Gupta commented on LUCENE-6507: -- I think all's sorted for now. Thanks everyone :) P.S.: I've started the 5.2 RC2 build.
Re: [jira] [Commented] (SOLR-7599) Remove cruft from SolrCloud tests
+1 On Thu, May 28, 2015 at 5:38 PM, David Smiley (JIRA) j...@apache.org wrote: [ https://issues.apache.org/jira/browse/SOLR-7599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14563998#comment-14563998 ] David Smiley commented on SOLR-7599: Cleaning up crap like this is usually a thankless task, but I hereby thank you for it!
[jira] [Commented] (LUCENE-6487) Add WGS84 capability to geo3d support
[ https://issues.apache.org/jira/browse/LUCENE-6487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14564075#comment-14564075 ] ASF subversion and git services commented on LUCENE-6487: - Commit 1682359 from [~dsmiley] in branch 'dev/branches/lucene6487' [ https://svn.apache.org/r1682359 ] LUCENE-6487: Geo3D with WGS84: randomize GeoPointTest lat-lon round-trip
[jira] [Closed] (LUCENE-6200) Highlighter sometime went wrong
[ https://issues.apache.org/jira/browse/LUCENE-6200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] David Smiley closed LUCENE-6200. Resolution: Duplicate Closing as duplicate. If it's any consolation, at least there are two other highlighters to choose from -- not that they are equal. Highlighter sometime went wrong --- Key: LUCENE-6200 URL: https://issues.apache.org/jira/browse/LUCENE-6200 Project: Lucene - Core Issue Type: Bug Components: modules/highlighter Affects Versions: 4.10.2 Reporter: thihy Labels: highlighter I have written a test case for this. I expect {{<B>游戏</B>是<B>游戏</B>}}, but get {{<B>游戏是游戏</B>}}. {code:java} public static void main(String[] args) throws IOException, InvalidTokenOffsetsException { String text = "游戏是游戏"; String query = "游戏"; CJKAnalyzer analyzer = new CJKAnalyzer(); Scorer fragmentScorer = new QueryScorer(new TermQuery(new Term("field", query))); Highlighter highlighter = new Highlighter(fragmentScorer); String fragment = highlighter.getBestFragment( analyzer.tokenStream("field", text), text); analyzer.close(); System.out.println(fragment); // prints: <B>游戏是游戏</B> } {code}
[jira] [Commented] (LUCENE-6508) Simplify Directory/lock api
[ https://issues.apache.org/jira/browse/LUCENE-6508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14563926#comment-14563926 ] Uwe Schindler commented on LUCENE-6508: --- Just for the record, so we don't forget: - the inner class oal.store.Lock.With is obsolete; it is no longer used anywhere and can be removed ASAP. It was only there from before the famous Java 7 try-with-resources was added. As Lock is Closeable now, you can do: {code:java} try (Lock lock = directory.obtainLock(name)) { // do stuff } {code} So the class is obsolete.
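Why a Lock.With-style helper becomes redundant can be sketched with a self-contained toy. This is not Lucene source: the Lock class below is a stand-in, and doWithLock is only analogous in spirit to the obsolete oal.store.Lock.With.

```java
// Toy illustration: once a lock type implements AutoCloseable,
// try-with-resources expresses the same acquire/run/release shape that a
// pre-Java-7 "execute with lock" helper had to provide by hand.
public class LockWithDemo {

    public static class Lock implements AutoCloseable {
        boolean open = true;
        @Override public void close() { open = false; }
    }

    // Pre-Java-7 style helper (analogous to the obsolete Lock.With):
    // run the body, then release the lock in finally.
    public static <T> T doWithLock(Lock lock, java.util.function.Supplier<T> body) {
        try {
            return body.get();
        } finally {
            lock.close();
        }
    }

    public static void main(String[] args) {
        // Old style: the helper guarantees the release.
        Lock l1 = new Lock();
        String r1 = doWithLock(l1, () -> "did stuff");

        // New style: try-with-resources guarantees the release.
        String r2;
        try (Lock l2 = new Lock()) {
            r2 = "did stuff";
        }
        System.out.println(r1.equals(r2) && !l1.open); // prints true
    }
}
```

Both paths release the lock even if the body throws, which is the only guarantee the helper class ever provided; the language-level form is shorter and needs no extra API surface.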