[jira] [Commented] (SOLR-5176) Chocolatey package for Windows
[ https://issues.apache.org/jira/browse/SOLR-5176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15756586#comment-15756586 ] Manimaran Chandrasekaran commented on SOLR-5176: Hi Andrew, this package with 6.3.0 is in moderation now and soon it should be available. > Chocolatey package for Windows > -- > > Key: SOLR-5176 > URL: https://issues.apache.org/jira/browse/SOLR-5176 > Project: Solr > Issue Type: Improvement > Components: Build > Environment: Chocolatey (http://chocolatey.org/) > Windows XP+ > Reporter: Andrew Pennebaker > Priority: Minor > > Could we simplify the installation process for Windows users by providing a > Chocolatey package? -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-9120) Luke NoSuchFileException
[ https://issues.apache.org/jira/browse/SOLR-9120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15756478#comment-15756478 ] Gopalakrishnan B commented on SOLR-9120: Hi Team, do we have any update on this? Thanks. > Luke NoSuchFileException > > > Key: SOLR-9120 > URL: https://issues.apache.org/jira/browse/SOLR-9120 > Project: Solr > Issue Type: Bug >Affects Versions: 6.0 >Reporter: Markus Jelsma > > On Solr 6.0, we frequently see the following errors popping up: > {code} > java.nio.file.NoSuchFileException: > /var/lib/solr/logs_shard2_replica1/data/index/segments_2c5 > at > sun.nio.fs.UnixException.translateToIOException(UnixException.java:86) > at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102) > at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107) > at > sun.nio.fs.UnixFileAttributeViews$Basic.readAttributes(UnixFileAttributeViews.java:55) > at > sun.nio.fs.UnixFileSystemProvider.readAttributes(UnixFileSystemProvider.java:144) > at > sun.nio.fs.LinuxFileSystemProvider.readAttributes(LinuxFileSystemProvider.java:99) > at java.nio.file.Files.readAttributes(Files.java:1737) > at java.nio.file.Files.size(Files.java:2332) > at org.apache.lucene.store.FSDirectory.fileLength(FSDirectory.java:243) > at > org.apache.lucene.store.NRTCachingDirectory.fileLength(NRTCachingDirectory.java:131) > at > org.apache.solr.handler.admin.LukeRequestHandler.getFileLength(LukeRequestHandler.java:597) > at > org.apache.solr.handler.admin.LukeRequestHandler.getIndexInfo(LukeRequestHandler.java:585) > at > org.apache.solr.handler.admin.LukeRequestHandler.handleRequestBody(LukeRequestHandler.java:137) > at > org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:155) > at org.apache.solr.core.SolrCore.execute(SolrCore.java:2033) > at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:652) > at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:460) > at > 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:229) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:184) > at > org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668) > at > org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581) > at > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143) > at > org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548) > at > org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226) > at > org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1160) > at > org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511) > at > org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185) > at > org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1092) > at > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141) > at > org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213) > at > org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119) > at > org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134) > at org.eclipse.jetty.server.Server.handle(Server.java:518) > at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:308) > at > org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:244) > at > org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273) > at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95) > at > org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93) > at > org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceAndRun(ExecuteProduceConsume.java:246) > at > 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:156) > at > org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:654) > at > org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:572) > at java.lang.Thread.run(Thread.java:745) > {code}
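The trace above shows LukeRequestHandler asking the Directory for the length of a segments file that a concurrent commit has already deleted. A minimal sketch of tolerating that race, with a hypothetical helper name (this is not Solr's actual fix, just an illustration of catching the exception at the stat call):

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.NoSuchFileException;
import java.nio.file.Path;
import java.nio.file.Paths;

public class SafeFileLength {
    // Length of the file, or -1 if it vanished between listing the
    // directory and stat'ing the file (the race hit in SOLR-9120 when
    // a commit deletes segments_N under the Luke handler).
    static long fileLengthOrMissing(Path p) {
        try {
            return Files.size(p);
        } catch (NoSuchFileException e) {
            return -1L; // index file was deleted by a concurrent commit/merge
        } catch (IOException e) {
            throw new UncheckedIOException(e); // real I/O errors still surface
        }
    }

    public static void main(String[] args) {
        Path missing = Paths.get("no-such-dir", "segments_2c5");
        System.out.println(fileLengthOrMissing(missing)); // prints -1
    }
}
```

The point is that `Files.size` (reached via `FSDirectory.fileLength` in the trace) throws `NoSuchFileException` rather than returning a sentinel, so any caller that enumerates index files without holding the commit lock has to handle it.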
[jira] [Updated] (SOLR-9874) CREATEALIAS should fail if target collections don't exist
[ https://issues.apache.org/jira/browse/SOLR-9874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tomás Fernández Löbbe updated SOLR-9874: Attachment: SOLR-9874.patch > CREATEALIAS should fail if target collections don't exist > - > > Key: SOLR-9874 > URL: https://issues.apache.org/jira/browse/SOLR-9874 > Project: Solr > Issue Type: Improvement > Security Level: Public (Default Security Level. Issues are Public) > Reporter: Tomás Fernández Löbbe > Assignee: Tomás Fernández Löbbe > Priority: Minor > Attachments: SOLR-9874.patch > > > As discussed > [here|http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201612.mbox/%3CCAMJgJxSoY9hovujET0V8D3ywyBf%3DrDZTz9WxZABx-wUYaO4jKg%40mail.gmail.com%3E], > we should fail requests to CREATEALIAS if the target collection doesn't > exist
[jira] [Created] (SOLR-9875) DELETE collection command should fail if there is an alias pointing to it
Tomás Fernández Löbbe created SOLR-9875: --- Summary: DELETE collection command should fail if there is an alias pointing to it Key: SOLR-9875 URL: https://issues.apache.org/jira/browse/SOLR-9875 Project: Solr Issue Type: Improvement Security Level: Public (Default Security Level. Issues are Public) Reporter: Tomás Fernández Löbbe As discussed [here|http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201612.mbox/%3CCAMJgJxSoY9hovujET0V8D3ywyBf%3DrDZTz9WxZABx-wUYaO4jKg%40mail.gmail.com%3E], Solr should fail requests to DELETE collection if it's a target of an existing alias
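The guard being proposed can be sketched in a few lines. All names here are illustrative, not Solr's collection-API code; the real implementation would consult the alias map in ZooKeeper:

```java
import java.util.List;
import java.util.Map;

public class DeleteCollectionGuard {
    // Hypothetical check: true if any alias currently routes to the collection.
    static boolean isAliasTarget(String collection, Map<String, List<String>> aliases) {
        return aliases.values().stream().anyMatch(targets -> targets.contains(collection));
    }

    static void deleteCollection(String collection, Map<String, List<String>> aliases) {
        if (isAliasTarget(collection, aliases)) {
            throw new IllegalStateException("Collection '" + collection
                + "' is the target of an alias; remove the alias first");
        }
        // ... proceed with the actual delete ...
    }

    public static void main(String[] args) {
        Map<String, List<String>> aliases = Map.of("logs", List.of("logs_2015", "logs_2016"));
        deleteCollection("users", aliases);          // allowed: no alias points here
        try {
            deleteCollection("logs_2016", aliases);  // rejected: 'logs' aliases it
        } catch (IllegalStateException expected) {
            System.out.println(expected.getMessage());
        }
    }
}
```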
[jira] [Created] (SOLR-9874) CREATEALIAS should fail if target collections don't exist
Tomás Fernández Löbbe created SOLR-9874: --- Summary: CREATEALIAS should fail if target collections don't exist Key: SOLR-9874 URL: https://issues.apache.org/jira/browse/SOLR-9874 Project: Solr Issue Type: Improvement Security Level: Public (Default Security Level. Issues are Public) Reporter: Tomás Fernández Löbbe Assignee: Tomás Fernández Löbbe Priority: Minor As discussed [here|http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201612.mbox/%3CCAMJgJxSoY9hovujET0V8D3ywyBf%3DrDZTz9WxZABx-wUYaO4jKg%40mail.gmail.com%3E], we should fail requests to CREATEALIAS if the target collection doesn't exist
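A minimal sketch of the validation this issue asks for, with hypothetical method names (not Solr's actual CREATEALIAS handler): collect any targets that don't exist and reject the request up front.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

public class CreateAliasGuard {
    // Hypothetical validation: alias targets that do not exist yet.
    static List<String> missingTargets(List<String> targets, Set<String> existing) {
        List<String> missing = new ArrayList<>();
        for (String t : targets) {
            if (!existing.contains(t)) missing.add(t);
        }
        return missing;
    }

    static void createAlias(String alias, List<String> targets, Set<String> existing) {
        List<String> missing = missingTargets(targets, existing);
        if (!missing.isEmpty()) {
            throw new IllegalArgumentException("Can't create alias '" + alias
                + "': collections " + missing + " do not exist");
        }
        // ... register the alias ...
    }

    public static void main(String[] args) {
        Set<String> existing = Set.of("logs_2016", "users");
        createAlias("logs", List.of("logs_2016"), existing);      // accepted
        try {
            createAlias("logs", List.of("logs_2017"), existing);  // rejected
        } catch (IllegalArgumentException expected) {
            System.out.println(expected.getMessage());
        }
    }
}
```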
[jira] [Resolved] (SOLR-9873) Function result is compared with itself
[ https://issues.apache.org/jira/browse/SOLR-9873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yonik Seeley resolved SOLR-9873. Resolution: Fixed Fix Version/s: 6.4 master (7.0) > Function result is compared with itself > --- > > Key: SOLR-9873 > URL: https://issues.apache.org/jira/browse/SOLR-9873 > Project: Solr > Issue Type: Bug > Security Level: Public (Default Security Level. Issues are Public) > Affects Versions: 6.3 > Reporter: AppChecker > Assignee: Yonik Seeley > Priority: Minor > Fix For: master (7.0), 6.4 > > > Hi! > In the method > [SolrTestCaseJ4.compareSolrDocument|https://github.com/apache/lucene-solr/blob/c9522a393661c8878d488ad4475ac7e2cbb9c25c/solr/test-framework/src/java/org/apache/solr/SolrTestCaseJ4.java#L1951] > {code:title=SolrTestCaseJ4.java|borderStyle=solid} > if(solrDocument1.getFieldNames().size() != > solrDocument1.getFieldNames().size()) { > return false; > } > {code} > "solrDocument1.getFieldNames().size()" is compared with itself. > Probably, it should be: > {code:title=SolrTestCaseJ4.java|borderStyle=solid} > if(solrDocument1.getFieldNames().size() != > solrDocument2.getFieldNames().size()) { > return false; > } > {code} > This possible defect was found by [static code analyzer > AppChecker|http://cnpo.ru/en/solutions/appchecker.php]
[jira] [Commented] (SOLR-9873) Function result is compared with itself
[ https://issues.apache.org/jira/browse/SOLR-9873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15755815#comment-15755815 ] ASF subversion and git services commented on SOLR-9873: --- Commit 9fafd78ddf56a1fe59b0128d813200e72581d0b0 in lucene-solr's branch refs/heads/branch_6x from [~yo...@apache.org] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=9fafd78 ] SOLR-9873: tests - fix SolrTestCaseJ4.compareSolrDocument num fields comparison
[jira] [Commented] (SOLR-9873) Function result is compared with itself
[ https://issues.apache.org/jira/browse/SOLR-9873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15755812#comment-15755812 ] ASF subversion and git services commented on SOLR-9873: --- Commit dcf202a95813d72b1fd56daa7e30cbf413b891b9 in lucene-solr's branch refs/heads/master from [~yo...@apache.org] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=dcf202a ] SOLR-9873: tests - fix SolrTestCaseJ4.compareSolrDocument num fields comparison
Re: [JENKINS] Lucene-Tests-MMAP-master - Build # 225 - Failure
Woops, I'll fix. Mike McCandless http://blog.mikemccandless.com On Fri, Dec 16, 2016 at 4:27 PM, Apache Jenkins Serverwrote: > Build: https://builds.apache.org/job/Lucene-Tests-MMAP-master/225/ > > 1 tests failed. > FAILED: junit.framework.TestSuite.org.apache.lucene.facet.TestFacetQuery > > Error Message: > Clean up static fields (in @AfterClass?) and null them, your test still has > references to classes of which the sizes cannot be measured due to security > restrictions or Java 9 module encapsulation: - private static > org.apache.lucene.index.RandomIndexWriter > org.apache.lucene.facet.TestFacetQuery.indexWriter - private static > org.apache.lucene.index.IndexReader > org.apache.lucene.facet.TestFacetQuery.indexReader > > Stack Trace: > junit.framework.AssertionFailedError: Clean up static fields (in > @AfterClass?) and null them, your test still has references to classes of > which the sizes cannot be measured due to security restrictions or Java 9 > module encapsulation: > - private static org.apache.lucene.index.RandomIndexWriter > org.apache.lucene.facet.TestFacetQuery.indexWriter > - private static org.apache.lucene.index.IndexReader > org.apache.lucene.facet.TestFacetQuery.indexReader > at __randomizedtesting.SeedInfo.seed([F8FE8A3A4746237B]:0) > at > com.carrotsearch.randomizedtesting.rules.StaticFieldsInvariantRule$1.afterAlways(StaticFieldsInvariantRule.java:146) > at > com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43) > at > com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) > at > com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) > at > org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) > at > org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) > at > 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) > at > org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) > at > com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) > at > com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) > at java.lang.Thread.run(Thread.java:745) > Caused by: java.security.AccessControlException: access denied > ("java.lang.RuntimePermission" "accessClassInPackage.sun.nio.fs") > at > java.security.AccessControlContext.checkPermission(AccessControlContext.java:472) > at > java.security.AccessController.checkPermission(AccessController.java:884) > at java.lang.SecurityManager.checkPermission(SecurityManager.java:549) > at > java.lang.SecurityManager.checkPackageAccess(SecurityManager.java:1564) > at java.lang.Class.checkPackageAccess(Class.java:2372) > at java.lang.Class.checkMemberAccess(Class.java:2351) > at java.lang.Class.getDeclaredFields(Class.java:1915) > at > com.carrotsearch.randomizedtesting.rules.RamUsageEstimator$2.run(RamUsageEstimator.java:585) > at > com.carrotsearch.randomizedtesting.rules.RamUsageEstimator$2.run(RamUsageEstimator.java:582) > at java.security.AccessController.doPrivileged(Native Method) > at > com.carrotsearch.randomizedtesting.rules.RamUsageEstimator.createCacheEntry(RamUsageEstimator.java:582) > at > com.carrotsearch.randomizedtesting.rules.RamUsageEstimator.measureSizeOf(RamUsageEstimator.java:545) > at > com.carrotsearch.randomizedtesting.rules.RamUsageEstimator.sizeOfAll(RamUsageEstimator.java:387) > at > com.carrotsearch.randomizedtesting.rules.StaticFieldsInvariantRule$1.afterAlways(StaticFieldsInvariantRule.java:129) > ... 10 more > > > > > Build Log: > [...truncated 6702 lines...] 
>[junit4] Suite: org.apache.lucene.facet.TestFacetQuery >[junit4] 2> NOTE: test params are: codec=Asserting(Lucene70): > {$facets=BlockTreeOrds(blocksize=128), > Hello=TestBloomFilteredLucenePostings(BloomFilteringPostingsFormat(Lucene50(blocksize=128)))}, > docValues:{$facets=DocValuesFormat(name=Lucene70)}, > maxPointsInLeafNode=1809, maxMBSortInHeap=7.911551045518415, > sim=RandomSimilarity(queryNorm=false): {Hello=DFR I(ne)B1, $facets=DFR > I(F)B2}, locale=sr-Latn-BA, timezone=Antarctica/DumontDUrville >[junit4] 2> NOTE: Linux 3.13.0-85-generic amd64/Oracle Corporation > 1.8.0_102 (64-bit)/cpus=4,threads=1,free=196624544,total=342360064 >[junit4] 2> NOTE: All tests run in this JVM: > [TestDirectoryTaxonomyWriter, TestDirectoryTaxonomyReader, > TestConcurrentFacetedIndexing, TestFacetQuery] >[junit4] 2> NOTE: reproduce with: ant test -Dtestcase=TestFacetQuery > -Dtests.seed=F8FE8A3A4746237B
[jira] [Updated] (LUCENE-6989) Implement MMapDirectory unmapping for coming Java 9 changes
[ https://issues.apache.org/jira/browse/LUCENE-6989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Uwe Schindler updated LUCENE-6989: -- Attachment: LUCENE-6989-v3-post-b148.patch Final patch, will commit this in the next few days. > Implement MMapDirectory unmapping for coming Java 9 changes > --- > > Key: LUCENE-6989 > URL: https://issues.apache.org/jira/browse/LUCENE-6989 > Project: Lucene - Core > Issue Type: Task > Components: core/store > Reporter: Uwe Schindler > Assignee: Uwe Schindler > Labels: Java9 > Fix For: 6.0, 6.4 > > Attachments: LUCENE-6989-disable5x.patch, > LUCENE-6989-disable5x.patch, LUCENE-6989-fixbuild148.patch, > LUCENE-6989-v2.patch, LUCENE-6989-v3-post-b148.patch, > LUCENE-6989-v3-post-b148.patch, LUCENE-6989.patch, LUCENE-6989.patch, > LUCENE-6989.patch, LUCENE-6989.patch > > > Originally, the sun.misc.Cleaner interface was declared as "critical API" in > [JEP 260|http://openjdk.java.net/jeps/260] > Unfortunately the decision was changed in favor of an officially supported > {{java.lang.ref.Cleaner}} API. A side effect of this change is to move all > existing {{sun.misc.Cleaner}} APIs into a non-exported package. This causes > our forceful unmapping to no longer work, because we can get the cleaner > instance via reflection, but trying to invoke it will throw one of the new > Jigsaw RuntimeExceptions because it is completely inaccessible. This will make > our forceful unmapping fail. There are also no changes in the garbage collector; > the problem still exists. > For more information see this [mailing list > thread|http://mail.openjdk.java.net/pipermail/core-libs-dev/2016-January/thread.html#38243]. > This commit will likely be done, making our unmapping efforts no longer > work. Alan Bateman is aware of this issue and will open a new issue at > OpenJDK to allow forceful unmapping without using the now private > sun.misc.Cleaner. The idea is to let the internal class sun.misc.Cleaner > implement the Runnable interface, so we can simply cast to Runnable and call > the run() method to unmap. The code would then work. This will lead to minor > changes in our unmapper in MMapDirectory: an instanceof check and casting if > possible. > I opened this issue to keep track and implement the changes as soon as > possible, so people will have working unmapping when Java 9 comes out. > Current Lucene versions will no longer work with Java 9.
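The "instanceof check and cast" described above can be sketched as follows. This is a simplified illustration, not the actual MMapDirectory patch: obtaining the cleaner from a MappedByteBuffer still requires reflection, which is elided here.

```java
public class CleanerInvoker {
    // Sketch of the unmapping idea: if sun.misc.Cleaner were made to
    // implement Runnable, the (otherwise inaccessible) cleaner instance
    // could be invoked through the public Runnable type without
    // triggering Jigsaw access checks.
    static boolean tryUnmap(Object cleaner) {
        if (cleaner instanceof Runnable) {
            ((Runnable) cleaner).run(); // releases the mapped region
            return true;
        }
        return false; // fall back: rely on GC to unmap eventually
    }

    public static void main(String[] args) {
        // Stand-in for the cleaner that would be fetched via reflection
        // from a MappedByteBuffer.
        Runnable fakeCleaner = () -> System.out.println("unmapped");
        System.out.println(tryUnmap(fakeCleaner));   // runs the cleaner, true
        System.out.println(tryUnmap(new Object()));  // not Runnable, false
    }
}
```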
[jira] [Commented] (SOLR-8593) Integrate Apache Calcite into the SQLHandler
[ https://issues.apache.org/jira/browse/SOLR-8593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15755661#comment-15755661 ] Julian Hyde commented on SOLR-8593: --- A list of GROUP BY fields would be fine. But it must be in a sub-class of Aggregate. Everyone else who is using Aggregate wants "Aggregate([x, y])" to be identical to "Aggregate([y, x])". > Integrate Apache Calcite into the SQLHandler > > > Key: SOLR-8593 > URL: https://issues.apache.org/jira/browse/SOLR-8593 > Project: Solr > Issue Type: Improvement > Reporter: Joel Bernstein > Assignee: Joel Bernstein > Attachments: SOLR-8593.patch, SOLR-8593.patch > > > The Presto SQL Parser was perfect for phase one of the SQLHandler. It was > nicely split off from the larger Presto project and it did everything that > was needed for the initial implementation. > Phase two of the SQL work though will require an optimizer. Here is where > Apache Calcite comes into play. It has a battle-tested cost-based optimizer > and has been integrated into Apache Drill and Hive. > This work can begin in trunk following the 6.0 release. The final query plans > will continue to be translated to Streaming API objects (TupleStreams), so > continued work on the JDBC driver should plug in nicely with the Calcite work.
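Julian's point is that Aggregate's grouping key is treated as an unordered set of fields, so a plan that needs to preserve GROUP BY field order must carry that ordering in a subclass rather than in the key itself. A small sketch of the distinction, using plain collections rather than Calcite types:

```java
import java.util.Arrays;
import java.util.SortedSet;
import java.util.TreeSet;

public class GroupSetSemantics {
    // Order-insensitive grouping key, matching what Aggregate's consumers
    // expect: [x, y] and [y, x] must compare equal. Modeled with a sorted
    // set here (Calcite itself uses a bit set of field ordinals).
    static SortedSet<String> groupKey(String... fields) {
        return new TreeSet<>(Arrays.asList(fields));
    }

    public static void main(String[] args) {
        // Set semantics: equal regardless of declaration order.
        System.out.println(groupKey("x", "y").equals(groupKey("y", "x"))); // true
        // List semantics: order-sensitive, so it can't be the shared key.
        System.out.println(Arrays.asList("x", "y").equals(Arrays.asList("y", "x"))); // false
    }
}
```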
[jira] [Commented] (SOLR-9873) Function result is compared with itself
[ https://issues.apache.org/jira/browse/SOLR-9873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15755613#comment-15755613 ] Yonik Seeley commented on SOLR-9873: Thanks! I'm running all the tests now with this change to see if there were any bad tests that passed because of this.
[jira] [Assigned] (SOLR-9873) Function result is compared with itself
[ https://issues.apache.org/jira/browse/SOLR-9873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yonik Seeley reassigned SOLR-9873: -- Assignee: Yonik Seeley
[jira] [Commented] (LUCENE-6989) Implement MMapDirectory unmapping for coming Java 9 changes
[ https://issues.apache.org/jira/browse/LUCENE-6989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15755577#comment-15755577 ] ASF subversion and git services commented on LUCENE-6989: - Commit 64c6f359949b62fe981255516ba2286c0adcc190 in lucene-solr's branch refs/heads/LUCENE-6989-v2 from [~thetaphi] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=64c6f35 ] LUCENE-6989: Comments and final cleanup
[JENKINS] Lucene-Tests-MMAP-master - Build # 225 - Failure
Build: https://builds.apache.org/job/Lucene-Tests-MMAP-master/225/ 1 tests failed. FAILED: junit.framework.TestSuite.org.apache.lucene.facet.TestFacetQuery Error Message: Clean up static fields (in @AfterClass?) and null them, your test still has references to classes of which the sizes cannot be measured due to security restrictions or Java 9 module encapsulation: - private static org.apache.lucene.index.RandomIndexWriter org.apache.lucene.facet.TestFacetQuery.indexWriter - private static org.apache.lucene.index.IndexReader org.apache.lucene.facet.TestFacetQuery.indexReader Stack Trace: junit.framework.AssertionFailedError: Clean up static fields (in @AfterClass?) and null them, your test still has references to classes of which the sizes cannot be measured due to security restrictions or Java 9 module encapsulation: - private static org.apache.lucene.index.RandomIndexWriter org.apache.lucene.facet.TestFacetQuery.indexWriter - private static org.apache.lucene.index.IndexReader org.apache.lucene.facet.TestFacetQuery.indexReader at __randomizedtesting.SeedInfo.seed([F8FE8A3A4746237B]:0) at com.carrotsearch.randomizedtesting.rules.StaticFieldsInvariantRule$1.afterAlways(StaticFieldsInvariantRule.java:146) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at java.lang.Thread.run(Thread.java:745) Caused by: java.security.AccessControlException: access denied ("java.lang.RuntimePermission" "accessClassInPackage.sun.nio.fs") at java.security.AccessControlContext.checkPermission(AccessControlContext.java:472) at java.security.AccessController.checkPermission(AccessController.java:884) at java.lang.SecurityManager.checkPermission(SecurityManager.java:549) at java.lang.SecurityManager.checkPackageAccess(SecurityManager.java:1564) at java.lang.Class.checkPackageAccess(Class.java:2372) at java.lang.Class.checkMemberAccess(Class.java:2351) at java.lang.Class.getDeclaredFields(Class.java:1915) at com.carrotsearch.randomizedtesting.rules.RamUsageEstimator$2.run(RamUsageEstimator.java:585) at com.carrotsearch.randomizedtesting.rules.RamUsageEstimator$2.run(RamUsageEstimator.java:582) at java.security.AccessController.doPrivileged(Native Method) at com.carrotsearch.randomizedtesting.rules.RamUsageEstimator.createCacheEntry(RamUsageEstimator.java:582) at com.carrotsearch.randomizedtesting.rules.RamUsageEstimator.measureSizeOf(RamUsageEstimator.java:545) at com.carrotsearch.randomizedtesting.rules.RamUsageEstimator.sizeOfAll(RamUsageEstimator.java:387) at com.carrotsearch.randomizedtesting.rules.StaticFieldsInvariantRule$1.afterAlways(StaticFieldsInvariantRule.java:129) ... 10 more Build Log: [...truncated 6702 lines...] 
[junit4] Suite: org.apache.lucene.facet.TestFacetQuery [junit4] 2> NOTE: test params are: codec=Asserting(Lucene70): {$facets=BlockTreeOrds(blocksize=128), Hello=TestBloomFilteredLucenePostings(BloomFilteringPostingsFormat(Lucene50(blocksize=128)))}, docValues:{$facets=DocValuesFormat(name=Lucene70)}, maxPointsInLeafNode=1809, maxMBSortInHeap=7.911551045518415, sim=RandomSimilarity(queryNorm=false): {Hello=DFR I(ne)B1, $facets=DFR I(F)B2}, locale=sr-Latn-BA, timezone=Antarctica/DumontDUrville [junit4] 2> NOTE: Linux 3.13.0-85-generic amd64/Oracle Corporation 1.8.0_102 (64-bit)/cpus=4,threads=1,free=196624544,total=342360064 [junit4] 2> NOTE: All tests run in this JVM: [TestDirectoryTaxonomyWriter, TestDirectoryTaxonomyReader, TestConcurrentFacetedIndexing, TestFacetQuery] [junit4] 2> NOTE: reproduce with: ant test -Dtestcase=TestFacetQuery -Dtests.seed=F8FE8A3A4746237B -Dtests.multiplier=2 -Dtests.slow=true -Dtests.directory=MMapDirectory -Dtests.locale=sr-Latn-BA -Dtests.timezone=Antarctica/DumontDUrville -Dtests.asserts=true -Dtests.file.encoding=US-ASCII [junit4] ERROR 0.00s J2 | TestFacetQuery (suite) <<< [junit4]> Throwable #1: junit.framework.AssertionFailedError: Clean up static
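The StaticFieldsInvariantRule failure above is the standard "null your statics" pattern: the suite holds writer/reader references in static fields and never clears them, so the rule tries (and, under the security manager, fails) to measure them. A minimal sketch of the fix, without the JUnit dependency; in the real test the teardown method would carry the @AfterClass annotation:

```java
public class StaticFieldCleanup {
    // Stand-ins for the RandomIndexWriter / IndexReader statics
    // flagged in the failure message.
    static Object indexWriter;
    static Object indexReader;

    static void beforeClass() {
        indexWriter = new Object();
        indexReader = new Object();
    }

    // Corresponds to the @AfterClass hook the error message asks for:
    // close the resources, then null the statics so the invariant rule
    // has nothing left to measure.
    static void afterClass() {
        indexWriter = null;
        indexReader = null;
    }

    public static void main(String[] args) {
        beforeClass();
        afterClass();
        System.out.println(indexWriter == null && indexReader == null); // true
    }
}
```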
[jira] [Commented] (LUCENE-6989) Implement MMapDirectory unmapping for coming Java 9 changes
[ https://issues.apache.org/jira/browse/LUCENE-6989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15755510#comment-15755510 ] ASF subversion and git services commented on LUCENE-6989: - Commit ffc957fdb3c21d110ab23392ed91e74cfc1f169d in lucene-solr's branch refs/heads/LUCENE-6989-v2 from [~thetaphi] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=ffc957f ] LUCENE-6989: Refactor code and add documentation > Implement MMapDirectory unmapping for coming Java 9 changes > --- > > Key: LUCENE-6989 > URL: https://issues.apache.org/jira/browse/LUCENE-6989 > Project: Lucene - Core > Issue Type: Task > Components: core/store >Reporter: Uwe Schindler >Assignee: Uwe Schindler > Labels: Java9 > Fix For: 6.0, 6.4 > > Attachments: LUCENE-6989-disable5x.patch, > LUCENE-6989-disable5x.patch, LUCENE-6989-fixbuild148.patch, > LUCENE-6989-v2.patch, LUCENE-6989-v3-post-b148.patch, LUCENE-6989.patch, > LUCENE-6989.patch, LUCENE-6989.patch, LUCENE-6989.patch > > > Originally, the sun.misc.Cleaner interface was declared as "critical API" in > [JEP 260|http://openjdk.java.net/jeps/260] > Unfortunately the decision was changed in favor of an officially supported > {{java.lang.ref.Cleaner}} API. A side effect of this change is to move all > existing {{sun.misc.Cleaner}} APIs into a non-exported package. This causes > our forceful unmapping to no longer work, because we can get the cleaner > instance via reflection, but trying to invoke it will throw one of the new > Jigsaw RuntimeExceptions because it is completely inaccessible. This will make > our forceful unmapping fail. There are also no changes in the garbage collector, > the problem still exists. > For more information see this [mailing list > thread|http://mail.openjdk.java.net/pipermail/core-libs-dev/2016-January/thread.html#38243]. > This commit will likely be done, making our unmapping efforts no longer > work. 
Alan Bateman is aware of this issue and will open a new issue at > OpenJDK to allow forceful unmapping without using the now private > sun.misc.Cleaner. The idea is to let the internal class sun.misc.Cleaner > implement the Runnable interface, so we can simply cast to Runnable and call > the run() method to unmap. The code would then work. This will lead to minor > changes in our unmapper in MMapDirectory: An instanceof check and casting if > possible. > I opened this issue to keep track and implement the changes as soon as > possible, so people will have working unmapping when Java 9 comes out. > Current Lucene versions will no longer work with Java 9. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
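The defensive pattern described in the issue (fetch the cleaner reflectively, check instanceof Runnable, cast, call run()) can be sketched as below. This is a hypothetical illustration, not Lucene's actual MMapDirectory code; on released JDKs the cleaner is either absent, inaccessible, or not a Runnable, so the method falls back to reporting that forceful unmapping is unavailable.

```java
import java.lang.reflect.Method;
import java.nio.ByteBuffer;

public class CleanerCastSketch {

  // Tries the proposed Java 9 path: look up the buffer's cleaner() method
  // reflectively and, if the returned object implements Runnable, cast it
  // and call run() to unmap. Returns false when the cleaner is absent,
  // inaccessible, or does not implement Runnable.
  static boolean tryUnmap(ByteBuffer buffer) {
    try {
      Method cleanerMethod = buffer.getClass().getMethod("cleaner");
      cleanerMethod.setAccessible(true);
      Object cleaner = cleanerMethod.invoke(buffer);
      if (cleaner instanceof Runnable) {  // the "instanceof check and casting"
        ((Runnable) cleaner).run();
        return true;
      }
    } catch (Exception e) {
      // NoSuchMethodException, IllegalAccessException, or (on Java 9+)
      // InaccessibleObjectException: fall through to "cannot unmap"
    }
    return false;
  }

  public static void main(String[] args) {
    // A heap buffer has no cleaner() method at all, so this prints false.
    System.out.println(tryUnmap(ByteBuffer.allocate(16)));
  }
}
```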
[jira] [Comment Edited] (LUCENE-6989) Implement MMapDirectory unmapping for coming Java 9 changes
[ https://issues.apache.org/jira/browse/LUCENE-6989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15755421#comment-15755421 ] Uwe Schindler edited comment on LUCENE-6989 at 12/16/16 8:31 PM: - Hi, I built a JDK image with that patch and tried to run the Lucene Java9 unmapper: Works! So it's ready! was (Author: thetaphi): Hi, I built a JDK image with that patch and tried to run the Lucene Java9 unmapper: Works: So it's ready! > Implement MMapDirectory unmapping for coming Java 9 changes -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-NightlyTests-6.x - Build # 229 - Still Unstable
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-6.x/229/ 6 tests failed. FAILED: org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest.test Error Message: There were too many update fails (911 > 60) - we expect it can happen, but shouldn't easily Stack Trace: java.lang.AssertionError: There were too many update fails (911 > 60) - we expect it can happen, but shouldn't easily at __randomizedtesting.SeedInfo.seed([564FB800CB832D4C:DE1B87DA657F40B4]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.junit.Assert.assertFalse(Assert.java:68) at org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest.test(ChaosMonkeyNothingIsSafeTest.java:218) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at
[jira] [Commented] (LUCENE-6989) Implement MMapDirectory unmapping for coming Java 9 changes
[ https://issues.apache.org/jira/browse/LUCENE-6989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15755421#comment-15755421 ] Uwe Schindler commented on LUCENE-6989: --- Hi, I built a JDK image with that patch and tried to run the Lucene Java9 unmapper: Works: So it's ready! > Implement MMapDirectory unmapping for coming Java 9 changes -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-6312) CloudSolrServer doesn't honor updatesToLeaders constructor argument
[ https://issues.apache.org/jira/browse/SOLR-6312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15755215#comment-15755215 ] Anshum Gupta commented on SOLR-6312: Is this still an open issue? Considering that the builder pattern is now the only non-deprecated way to construct the client, do we still end up with this when we use _sendDirectUpdatesToAnyShardReplica()_? > CloudSolrServer doesn't honor updatesToLeaders constructor argument > --- > > Key: SOLR-6312 > URL: https://issues.apache.org/jira/browse/SOLR-6312 > Project: Solr > Issue Type: Bug >Affects Versions: 4.9 >Reporter: Steve Davids > Fix For: 4.10 > > Attachments: SOLR-6312.patch > > > The CloudSolrServer doesn't use the updatesToLeaders property - all SolrJ > requests are being sent to the shard leaders. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
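For reference, a minimal sketch of the builder-based construction mentioned in the comment above. The ZooKeeper address and collection name are placeholders; sendDirectUpdatesToAnyShardReplica() and its counterpart sendDirectUpdatesToShardLeadersOnly() are the builder's replacements for the old updatesToLeaders constructor argument.

```java
import org.apache.solr.client.solrj.impl.CloudSolrClient;

public class CloudClientSketch {
  public static void main(String[] args) throws Exception {
    // Placeholder ZooKeeper ensemble address; adjust for your cluster.
    try (CloudSolrClient client = new CloudSolrClient.Builder()
        .withZkHost("zkhost1:2181,zkhost2:2181/solr")
        .sendDirectUpdatesToAnyShardReplica()  // vs. sendDirectUpdatesToShardLeadersOnly()
        .build()) {
      client.setDefaultCollection("mycollection");
      // Updates sent via client.add(...) may now be routed to any shard replica.
    }
  }
}
```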
[jira] [Comment Edited] (LUCENE-6989) Implement MMapDirectory unmapping for coming Java 9 changes
[ https://issues.apache.org/jira/browse/LUCENE-6989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15755158#comment-15755158 ] Uwe Schindler edited comment on LUCENE-6989 at 12/16/16 6:25 PM: - The Unsafe API addition for unmapping will appear in Java 9 build 150: https://bugs.openjdk.java.net/browse/JDK-8171377 was (Author: thetaphi): The Unmap API addition for unmapping will appear in Java 9 build 150: https://bugs.openjdk.java.net/browse/JDK-8171377 > Implement MMapDirectory unmapping for coming Java 9 changes -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-6989) Implement MMapDirectory unmapping for coming Java 9 changes
[ https://issues.apache.org/jira/browse/LUCENE-6989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15755158#comment-15755158 ] Uwe Schindler commented on LUCENE-6989: --- The Unmap API addition for unmapping will appear in Java 9 build 150: https://bugs.openjdk.java.net/browse/JDK-8171377 > Implement MMapDirectory unmapping for coming Java 9 changes -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
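The Unsafe-based unmapping that JDK-8171377 delivers in build 150 can be exercised with plain JDK code. The sketch below is an illustration, not Lucene's implementation: it maps a small temp file and unmaps it through sun.misc.Unsafe.invokeCleaner(ByteBuffer), looked up reflectively so the class still compiles on Java 8, where the method does not exist.

```java
import java.lang.reflect.Field;
import java.lang.reflect.Method;
import java.nio.ByteBuffer;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class InvokeCleanerSketch {
  public static void main(String[] args) throws Exception {
    Path tmp = Files.createTempFile("unmap-demo", ".bin");
    Files.write(tmp, new byte[]{1, 2, 3, 4});
    try (FileChannel ch = FileChannel.open(tmp, StandardOpenOption.READ)) {
      MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_ONLY, 0, 4);
      // Grab the Unsafe singleton reflectively (sun.misc is exported via
      // the jdk.unsupported module on Java 9+).
      Class<?> unsafeClass = Class.forName("sun.misc.Unsafe");
      Field theUnsafe = unsafeClass.getDeclaredField("theUnsafe");
      theUnsafe.setAccessible(true);
      Object unsafe = theUnsafe.get(null);
      try {
        // invokeCleaner(ByteBuffer) exists from Java 9 build 150 onwards.
        Method invokeCleaner = unsafeClass.getMethod("invokeCleaner", ByteBuffer.class);
        invokeCleaner.invoke(unsafe, buf);  // forcefully unmaps the buffer
        System.out.println("unmapped");
      } catch (NoSuchMethodException e) {
        System.out.println("invokeCleaner not available (pre-Java 9)");
      }
    } finally {
      Files.deleteIfExists(tmp);
    }
  }
}
```

After invokeCleaner returns, the mapping is released immediately instead of waiting for GC, which is exactly what MMapDirectory needs on index close.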
[jira] [Commented] (SOLR-6635) Cursormark should support skipping/goto functionality
[ https://issues.apache.org/jira/browse/SOLR-6635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15755112#comment-15755112 ] Michael Gibney commented on SOLR-6635: -- It's possible that a more general approach to cursorMark could answer this and many other use cases. A couple of observations here, the first two probably obvious:
1. The {{cursorMark}} param defines a relative (contextual, as opposed to absolute-offset) insertion point into a sorted list of documents
2. The {{start}} param defines an offset of returned results _relative to the list index/insertion point_ defined by the {{cursorMark}} param
3. When used in conjunction with {{cursorMark}}, in principle there's no reason why the {{start}} param must be non-negative.
The current implementation of {{cursorMark}}/{{nextCursorMark}} is stateless on the server-side, but as far as I can tell it only directly supports serial, forward-only paging. In order for the current implementation to support backward paging in a client application, state must be maintained in the client application (e.g., a stack of cursorMarks by way of which the most recent request was navigated to). If the current cursorMark implementation were tweaked to allow start/offset param to be negative, and to generate and return a totem for the last _and first_ (and possibly for each) document in a result window, this would introduce the possibility of bidirectional paging that is entirely stateless (client-side as well as server-side). It would also enable re-alignment of results, returning target totems in context, and overlapping over-requesting to allow a client application to "preview" whether it has reached the end (or beginning, for backward paging) of paged results. With some further tweaking, this approach could be extended to support arbitrary ("skip to the 'R's") or universal ("skip to the 'last' page of results") totems.
As a point of reference, I've been involved in implementing [a related approach|https://github.com/upenn-libraries/solrplugins#2-arbitrary-index-order-result-windows] (leveraging the facet component) that supports goto/paging through arbitrary windows of index-sorted terms (using slightly different parameter syntax: {{target}}, {{offset}}, {{limit}}, and an extra response field specifying the _actual_ {{target_offset}}). As [~hossman] says, "when start > 0 & cursorMark=*, it is functionally equivalent to no cursorMark being specified at all (ie: regular pagination) except that the use of the cursorMark param indicates to Solr that the client wants the nextCursorMark to be computed." cursorMark is great, and I'd love to use it; I definitely think the introduction of this skipping/goto/offset functionality (esp. with direct support for fully stateless bidirectional paging) would facilitate the backward-compatible migration of client applications to cursorMark-based implementations. > Cursormark should support skipping/goto functionality > - > > Key: SOLR-6635 > URL: https://issues.apache.org/jira/browse/SOLR-6635 > Project: Solr > Issue Type: Improvement > Components: SearchComponents - other >Reporter: Thomas Blitz > Labels: cursormark, pagination, search, solr > Attachments: SOLR-6635.patch > > > Deep pagination is possible with the cursormark. > We have discovered a need to be able to 'skip' a number of results. > Using the cursormark it should be possible to define a request with a skip > parameter, allowing the cursormark to simply skip a number of articles, kinda > like a goto, and then return results from that point in the resultset. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
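The serial, forward-only loop that the current implementation supports directly can be sketched with SolrJ as below. The base URL, core name, and sort field are placeholders; the stack illustrates the client-side state the comment above says is required to page backward today.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.params.CursorMarkParams;

public class CursorPagingSketch {
  public static void main(String[] args) throws Exception {
    // Placeholder Solr URL and core name.
    try (SolrClient client =
             new HttpSolrClient.Builder("http://localhost:8983/solr/mycore").build()) {
      // cursorMark requires a total sort that includes the uniqueKey field.
      SolrQuery q = new SolrQuery("*:*")
          .setRows(100)
          .setSort("id", SolrQuery.ORDER.asc);
      Deque<String> visited = new ArrayDeque<>();  // client-side state for "back"
      String mark = CursorMarkParams.CURSOR_MARK_START;  // "*"
      while (true) {
        q.set(CursorMarkParams.CURSOR_MARK_PARAM, mark);
        QueryResponse rsp = client.query(q);
        // ... process rsp.getResults() ...
        String next = rsp.getNextCursorMark();
        if (mark.equals(next)) {
          break;  // Solr echoes the same mark back once results are exhausted
        }
        visited.push(mark);  // popping this stack is the only way to page backward
        mark = next;
      }
    }
  }
}
```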
[jira] [Updated] (LUCENE-7588) A parallel DrillSideways implementation
[ https://issues.apache.org/jira/browse/LUCENE-7588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Emmanuel Keller updated LUCENE-7588: Attachment: LUCENE-7588.patch New patch with the following changes:
- Fixes copyright and indentation issues
- FacetCollectorManager is no longer public.
- MultiCollectorManager moved to the right package: org.apache.lucene.search
- Added more Javadoc
> A parallel DrillSideways implementation > --- > > Key: LUCENE-7588 > URL: https://issues.apache.org/jira/browse/LUCENE-7588 > Project: Lucene - Core > Issue Type: Improvement >Affects Versions: master (7.0), 6.3.1 >Reporter: Emmanuel Keller >Priority: Minor > Labels: facet, faceting > Fix For: master (7.0), 6.3.1 > > Attachments: LUCENE-7588.patch > > > Currently DrillSideways implementation is based on the single threaded > IndexSearcher.search(Query query, Collector results). > On large document set, the single threaded collection can be really slow. > The ParallelDrillSideways implementation could: > 1. Use the CollectorManager based method IndexSearcher.search(Query query, > CollectorManager collectorManager) to get the benefits of multithreading on > index segments, > 2. Compute each DrillSideway subquery on a single thread. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-7588) A parallel DrillSideways implementation
[ https://issues.apache.org/jira/browse/LUCENE-7588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15755069#comment-15755069 ] Emmanuel Keller commented on LUCENE-7588: - bq. Can you add a minimal javadocs to ParallelDrillSideways, and include @lucene.experimental? Done. bq. Can you fix the indent to 2 spaces, and change your IDE to not use wildcard imports? (Most of the new classes seem to do so, but at least one didn't). Or we can fix this up before pushing... Done. bq. Should CallableCollector be renamed to CallableCollectorManager? True, done. bq. I assume you're using this for your QWAZR search server built on lucene (https://github.com/qwazr/QWAZR)? Thank you for giving back! With pleasure. I think there are a few more contributions to come... bq. There are quite a few new abstractions here, MultiCollectorManager, FacetsCollectorManager; must they be public? Can you explain what they do? MultiCollectorManager does for CollectorManager what MultiCollector does for Collector. It wraps a set of CollectorManagers as if they were a single one. {quote} It seems like this change opens up concurrency in 2 ways; the first way is it uses the IndexSearcher.search API that takes a CollectorManager such that if you had created that IndexSearcher with an executor, you get concurrency across the segments in the index. In general I'm not a huge fan of this concurrency since you are at the whim of how the segments are structured, and, confusingly, running forceMerge(1) on your index removes all concurrency. But it's better than nothing: progress not perfection! {quote} I agree. That's a first step. {quote} The second way is that the new ParallelDrillSideways takes its own executor and then runs the N DrillDown queries concurrently (to compute the sideways counts), which is very different from the current doc-at-a-time computation. Have you compared the performance, using a single thread? ... 
I'm curious how "doc at a time" vs "query at a time" (which is also Solr's approach) compare. But, still, the fact that this "query at a time" approach enables concurrency is a big win. {quote} I am working on providing a benchmark. What is the good practice for Lucene? Is it okay to provide a benchmark as a test case? {quote} I wonder if we could absorb ParallelDrillSideways under DrillSideways such that if you pass an executor it uses the concurrent implementation? It's really an implementation/execution detail I think? Similar to how IndexSearcher takes an optional executor. {quote} I agree. I think that is the way it should be. I didn't want to be too intrusive. > A parallel DrillSideways implementation -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
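The first concurrency path discussed above (segment-level parallelism through an executor-backed IndexSearcher, which internally uses the CollectorManager-based search) already exists in released Lucene and can be sketched as follows. The index path is a placeholder, and this is an illustration rather than the patch's ParallelDrillSideways code.

```java
import java.nio.file.Paths;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.MatchAllDocsQuery;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.store.FSDirectory;

public class ConcurrentSearchSketch {
  public static void main(String[] args) throws Exception {
    ExecutorService pool = Executors.newFixedThreadPool(4);
    try (DirectoryReader reader =
             DirectoryReader.open(FSDirectory.open(Paths.get(args[0])))) {
      // Passing an executor makes search() fan out across index segments
      // via the CollectorManager-based code path; note the caveat above
      // that forceMerge(1) collapses the index to one segment and thereby
      // removes this concurrency.
      IndexSearcher searcher = new IndexSearcher(reader, pool);
      TopDocs top = searcher.search(new MatchAllDocsQuery(), 10);
      System.out.println(top.totalHits);
    } finally {
      pool.shutdown();
    }
  }
}
```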
[jira] [Updated] (LUCENE-7588) A parallel DrillSideways implementation
[ https://issues.apache.org/jira/browse/LUCENE-7588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Emmanuel Keller updated LUCENE-7588: Attachment: (was: LUCENE-7588.patch) > A parallel DrillSideways implementation -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Created] (SOLR-9873) Function result is compared with itself
AppChecker created SOLR-9873: Summary: Function result is compared with itself Key: SOLR-9873 URL: https://issues.apache.org/jira/browse/SOLR-9873 Project: Solr Issue Type: Bug Security Level: Public (Default Security Level. Issues are Public) Affects Versions: 6.3 Reporter: AppChecker Priority: Minor Hi! In the method [SolrTestCaseJ4.compareSolrDocument|https://github.com/apache/lucene-solr/blob/c9522a393661c8878d488ad4475ac7e2cbb9c25c/solr/test-framework/src/java/org/apache/solr/SolrTestCaseJ4.java#L1951] {code:title=SolrTestCaseJ4.java|borderStyle=solid} if(solrDocument1.getFieldNames().size() != solrDocument1.getFieldNames().size()) { return false; } {code} "solrDocument1.getFieldNames().size()" is compared with itself. Probably, it should be: {code:title=SolrTestCaseJ4.java|borderStyle=solid} if(solrDocument1.getFieldNames().size() != solrDocument2.getFieldNames().size()) { return false; } {code} This possible defect was found by [static code analyzer AppChecker|http://cnpo.ru/en/solutions/appchecker.php] -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Assigned] (SOLR-7115) UpdateLog can miss closing transaction log objects.
[ https://issues.apache.org/jira/browse/SOLR-7115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yonik Seeley reassigned SOLR-7115: -- Assignee: Yonik Seeley > UpdateLog can miss closing transaction log objects. > --- > > Key: SOLR-7115 > URL: https://issues.apache.org/jira/browse/SOLR-7115 > Project: Solr > Issue Type: Bug >Reporter: Mark Miller >Assignee: Yonik Seeley > Fix For: 6.x, master (7.0) > > Attachments: SOLR-7115-LargeVolumeEmbeddedTest-fail.txt, > SOLR-7115.patch, SOLR-7115.patch, tests-failures-7115.txt > > > I've seen this happen on YourKit and in various tests - especially since > adding resource release tracking to the log objects. Now I've got a test that > catches it in SOLR-7113. > It seems that in precommit, if prevTlog is not null, we need to close it > because we are going to overwrite prevTlog with a new log. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-7115) UpdateLog can miss closing transaction log objects.
[ https://issues.apache.org/jira/browse/SOLR-7115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15755035#comment-15755035 ] Yonik Seeley commented on SOLR-7115: I don't know if the patch to DUH2 is needed for other reasons, but now that SOLR-9712 is committed, I'll adapt and try out the patch from this issue. > UpdateLog can miss closing transaction log objects. > --- > > Key: SOLR-7115 > URL: https://issues.apache.org/jira/browse/SOLR-7115 > Project: Solr > Issue Type: Bug >Reporter: Mark Miller > Fix For: 6.x, master (7.0) > > Attachments: SOLR-7115-LargeVolumeEmbeddedTest-fail.txt, > SOLR-7115.patch, SOLR-7115.patch, tests-failures-7115.txt > > > I've seen this happen on YourKit and in various tests - especially since > adding resource release tracking to the log objects. Now I've got a test that > catches it in SOLR-7113. > It seems that in precommit, if prevTlog is not null, we need to close it > because we are going to overwrite prevTlog with a new log. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
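The close-before-overwrite pattern described in the issue can be sketched as follows (illustrative names only; this is not the actual UpdateLog code). Overwriting `prevTlog` without closing it first leaks the old log object:

```java
import java.io.Closeable;
import java.util.concurrent.atomic.AtomicInteger;

// Minimal sketch of the leak described in SOLR-7115: if prevTlog is
// overwritten without being closed, the previous log is never released.
public class TlogSketch {
    static final AtomicInteger openLogs = new AtomicInteger();

    static class TransactionLog implements Closeable {
        TransactionLog() { openLogs.incrementAndGet(); }
        @Override public void close() { openLogs.decrementAndGet(); }
    }

    TransactionLog prevTlog;

    // Leaky version: simply overwrites the reference; the old log stays open.
    void preCommitLeaky() {
        prevTlog = new TransactionLog();
    }

    // Fixed version: close the previous log before replacing it with a new one.
    void preCommitFixed() {
        if (prevTlog != null) {
            prevTlog.close();
        }
        prevTlog = new TransactionLog();
    }
}
```

Resource release tracking (as mentioned above) catches exactly this: the leaky variant leaves one open log per extra call, the fixed one never holds more than a single open log per instance.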
[jira] [Comment Edited] (SOLR-1490) URLDataSource should be able to handle HTTP authentication
[ https://issues.apache.org/jira/browse/SOLR-1490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15754983#comment-15754983 ] Chantal Ackermann edited comment on SOLR-1490 at 12/16/16 5:22 PM: --- It would be nice if the user info from the URL could be used as done in this code snippet: {code:java} URL url = new URL("http://user:p...@domain.com/url;); URLConnection urlConnection = url.openConnection(); if (url.getUserInfo() != null) { String basicAuth = "Basic " + new String(new Base64().encode(url.getUserInfo().getBytes())); urlConnection.setRequestProperty("Authorization", basicAuth); } InputStream inputStream = urlConnection.getInputStream(); {code} taken from http://stackoverflow.com/a/13122190/621690 was (Author: chantal): It would be nice if the user info from the URL could be used as done in this code snippet: {code:java} URL url = new URL("http://user:p...@domain.com/url;); URLConnection urlConnection = url.openConnection(); if (url.getUserInfo() != null) { String basicAuth = "Basic " + new String(new Base64().encode(url.getUserInfo().getBytes())); urlConnection.setRequestProperty("Authorization", basicAuth); } {code} taken from http://stackoverflow.com/a/13122190/621690 InputStream inputStream = urlConnection.getInputStream(); > URLDataSource should be able to handle HTTP authentication > -- > > Key: SOLR-1490 > URL: https://issues.apache.org/jira/browse/SOLR-1490 > Project: Solr > Issue Type: Improvement > Components: contrib - DataImportHandler >Reporter: Adam Foltzer >Assignee: Noble Paul > Attachments: SOLR-1490.patch > > > Right now, there seems to be no way to provide HTTP authentication > (username/password) to the URLDataSource. This makes any password-protected > data sources inaccessible for indexing. I would try and add support myself, > but with all things security-related, I'm fearful of shooting myself in the > foot with systems I don't fully understand. Thanks for your time/feedback! 
-- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-9836) Add more graceful recovery steps when failing to create SolrCore
[ https://issues.apache.org/jira/browse/SOLR-9836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15755004#comment-15755004 ] Mike Drob commented on SOLR-9836: - bq. This should already be Lucene's behavior. I assume if it's not falling back it's because there is no previous segments file to fall back to. I didn't see Lucene doing this. Or at least, I didn't see Solr leverage Lucene to do this. Both through manual inspection of the code and through testing via {{MissingSegmentRecoveryTest::testRollback}} in my patch. > Add more graceful recovery steps when failing to create SolrCore > > > Key: SOLR-9836 > URL: https://issues.apache.org/jira/browse/SOLR-9836 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: SolrCloud >Reporter: Mike Drob > Attachments: SOLR-9836.patch, SOLR-9836.patch > > > I have seen several cases where there is a zero-length segments_n file. We > haven't identified the root cause of these issues (possibly a poorly timed > crash during replication?) but if there is another node available then Solr > should be able to recover from this situation. Currently, we log and give up > on loading that core, leaving the user to manually intervene. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
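The fallback idea under discussion (trying an older commit point when the newest segments_N file is empty or missing) can be sketched as below. This is purely illustrative: Lucene's own SegmentInfos machinery handles commit points, and the generation suffix is base-36 encoded (e.g. "segments_2c5").

```java
import java.io.File;
import java.util.Arrays;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;

// Illustrative sketch: enumerate segments_N files in an index directory,
// drop zero-length (truncated) ones, and order the rest newest-first so an
// older, intact commit point can be tried as a recovery candidate.
public class SegmentsFallback {
    // Lucene encodes the generation N in base 36: "segments_2c5" -> 3029.
    static long generation(String name) {
        return Long.parseLong(name.substring("segments_".length()), 36);
    }

    static List<String> candidates(File indexDir) {
        File[] files = indexDir.listFiles(
            (dir, name) -> name.startsWith("segments_"));
        if (files == null) {
            return Collections.emptyList();
        }
        return Arrays.stream(files)
            .filter(f -> f.length() > 0)  // skip zero-length segments files
            .sorted(Comparator.comparingLong(
                (File f) -> generation(f.getName())).reversed())
            .map(File::getName)
            .collect(Collectors.toList());
    }
}
```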
[jira] [Commented] (SOLR-9712) Saner default for maxWarmingSearchers
[ https://issues.apache.org/jira/browse/SOLR-9712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15754990#comment-15754990 ] Yonik Seeley commented on SOLR-9712: Nope... I hadn't realized we had a good reproducible test for that. Looks like that test expects to hit an exception though, so it would need to be tweaked to pass now. > Saner default for maxWarmingSearchers > - > > Key: SOLR-9712 > URL: https://issues.apache.org/jira/browse/SOLR-9712 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: search >Reporter: Shalin Shekhar Mangar >Assignee: Yonik Seeley > Fix For: master (7.0), 6.4 > > Attachments: SOLR-9712.patch, SOLR-9712.patch, SOLR-9712.patch > > > As noted in SOLR-9710, the default for maxWarmingSearchers is > Integer.MAX_VALUE which is just crazy. Let's have a saner default. Today we > log a performance warning when the number of on deck searchers goes over 1. > What if we had the default as 1 that expert users can increase if needed? -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-1490) URLDataSource should be able to handle HTTP authentication
[ https://issues.apache.org/jira/browse/SOLR-1490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15754983#comment-15754983 ] Chantal Ackermann commented on SOLR-1490: - It would be nice if the user info from the URL could be used as done in this code snippet: {code:java} URL url = new URL("http://user:p...@domain.com/url"); URLConnection urlConnection = url.openConnection(); if (url.getUserInfo() != null) { String basicAuth = "Basic " + new String(new Base64().encode(url.getUserInfo().getBytes())); urlConnection.setRequestProperty("Authorization", basicAuth); } {code} taken from http://stackoverflow.com/a/13122190/621690 InputStream inputStream = urlConnection.getInputStream(); > URLDataSource should be able to handle HTTP authentication > -- > > Key: SOLR-1490 > URL: https://issues.apache.org/jira/browse/SOLR-1490 > Project: Solr > Issue Type: Improvement > Components: contrib - DataImportHandler >Reporter: Adam Foltzer >Assignee: Noble Paul > Attachments: SOLR-1490.patch > > > Right now, there seems to be no way to provide HTTP authentication > (username/password) to the URLDataSource. This makes any password-protected > data sources inaccessible for indexing. I would try and add support myself, > but with all things security-related, I'm fearful of shooting myself in the > foot with systems I don't fully understand. Thanks for your time/feedback! -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
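The same idea can be written against the JDK alone, since Java 8 ships java.util.Base64 (the snippet above uses commons-codec). The URL and names below are placeholders, not anything from URLDataSource:

```java
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Sketch: build a Basic auth header from the user-info part of a URL,
// if present, using only JDK classes (java.util.Base64, Java 8+).
public class UrlAuthSketch {
    static String basicAuthHeader(URL url) {
        String userInfo = url.getUserInfo();  // e.g. "user:password", or null
        if (userInfo == null) {
            return null;  // no credentials embedded in the URL
        }
        return "Basic " + Base64.getEncoder()
                .encodeToString(userInfo.getBytes(StandardCharsets.UTF_8));
    }
}
```

The returned value would be set via `urlConnection.setRequestProperty("Authorization", header)` exactly as in the snippet above.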
[JENKINS-EA] Lucene-Solr-6.x-Linux (64bit/jdk-9-ea+147) - Build # 2436 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/2436/ Java: 64bit/jdk-9-ea+147 -XX:+UseCompressedOops -XX:+UseParallelGC 1 tests failed. FAILED: org.apache.solr.cloud.TestSolrCloudWithDelegationTokens.testDelegationTokenCancelFail Error Message: expected:<200> but was:<404> Stack Trace: java.lang.AssertionError: expected:<200> but was:<404> at __randomizedtesting.SeedInfo.seed([FF652194F0E6FB5:674967339F947D59]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.failNotEquals(Assert.java:647) at org.junit.Assert.assertEquals(Assert.java:128) at org.junit.Assert.assertEquals(Assert.java:472) at org.junit.Assert.assertEquals(Assert.java:456) at org.apache.solr.cloud.TestSolrCloudWithDelegationTokens.cancelDelegationToken(TestSolrCloudWithDelegationTokens.java:141) at org.apache.solr.cloud.TestSolrCloudWithDelegationTokens.testDelegationTokenCancelFail(TestSolrCloudWithDelegationTokens.java:305) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:538) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at
[jira] [Commented] (SOLR-9712) Saner default for maxWarmingSearchers
[ https://issues.apache.org/jira/browse/SOLR-9712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15754927#comment-15754927 ] Mikhail Khludnev commented on SOLR-9712: @yonik have you tried the test from SOLR-7115 with it? > Saner default for maxWarmingSearchers > - > > Key: SOLR-9712 > URL: https://issues.apache.org/jira/browse/SOLR-9712 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: search >Reporter: Shalin Shekhar Mangar >Assignee: Yonik Seeley > Fix For: master (7.0), 6.4 > > Attachments: SOLR-9712.patch, SOLR-9712.patch, SOLR-9712.patch > > > As noted in SOLR-9710, the default for maxWarmingSearchers is > Integer.MAX_VALUE which is just crazy. Let's have a saner default. Today we > log a performance warning when the number of on deck searchers goes over 1. > What if we had the default as 1 that expert users can increase if needed? -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Resolved] (SOLR-9712) Saner default for maxWarmingSearchers
[ https://issues.apache.org/jira/browse/SOLR-9712?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yonik Seeley resolved SOLR-9712. Resolution: Fixed Assignee: Yonik Seeley > Saner default for maxWarmingSearchers > - > > Key: SOLR-9712 > URL: https://issues.apache.org/jira/browse/SOLR-9712 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: search >Reporter: Shalin Shekhar Mangar >Assignee: Yonik Seeley > Fix For: master (7.0), 6.4 > > Attachments: SOLR-9712.patch, SOLR-9712.patch, SOLR-9712.patch > > > As noted in SOLR-9710, the default for maxWarmingSearchers is > Integer.MAX_VALUE which is just crazy. Let's have a saner default. Today we > log a performance warning when the number of on deck searchers goes over 1. > What if we had the default as 1 that expert users can increase if needed? -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-9712) Saner default for maxWarmingSearchers
[ https://issues.apache.org/jira/browse/SOLR-9712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15754925#comment-15754925 ] ASF subversion and git services commented on SOLR-9712: --- Commit 0f4c5f0a732cb0df3a213d05dca8b7c477728154 in lucene-solr's branch refs/heads/branch_6x from [~yo...@apache.org] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=0f4c5f0 ] SOLR-9712: block when maxWarmingSearchers is exceeded instead of throwing exception, default to 1, remove from most configs > Saner default for maxWarmingSearchers > - > > Key: SOLR-9712 > URL: https://issues.apache.org/jira/browse/SOLR-9712 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: search >Reporter: Shalin Shekhar Mangar > Fix For: master (7.0), 6.4 > > Attachments: SOLR-9712.patch, SOLR-9712.patch, SOLR-9712.patch > > > As noted in SOLR-9710, the default for maxWarmingSearchers is > Integer.MAX_VALUE which is just crazy. Let's have a saner default. Today we > log a performance warning when the number of on deck searchers goes over 1. > What if we had the default as 1 that expert users can increase if needed? -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-9712) Saner default for maxWarmingSearchers
[ https://issues.apache.org/jira/browse/SOLR-9712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15754917#comment-15754917 ] ASF subversion and git services commented on SOLR-9712: --- Commit c9522a393661c8878d488ad4475ac7e2cbb9c25c in lucene-solr's branch refs/heads/master from [~yo...@apache.org] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=c9522a3 ] SOLR-9712: block when maxWarmingSearchers is exceeded instead of throwing exception, default to 1, remove from most configs > Saner default for maxWarmingSearchers > - > > Key: SOLR-9712 > URL: https://issues.apache.org/jira/browse/SOLR-9712 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: search >Reporter: Shalin Shekhar Mangar > Fix For: master (7.0), 6.4 > > Attachments: SOLR-9712.patch, SOLR-9712.patch, SOLR-9712.patch > > > As noted in SOLR-9710, the default for maxWarmingSearchers is > Integer.MAX_VALUE which is just crazy. Let's have a saner default. Today we > log a performance warning when the number of on deck searchers goes over 1. > What if we had the default as 1 that expert users can increase if needed? -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
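The behavior change in the commit message (block when maxWarmingSearchers is exceeded instead of throwing) can be sketched with a semaphore. This is an illustration of the idea only, not Solr's actual implementation:

```java
import java.util.concurrent.Semaphore;

// Illustrative sketch of SOLR-9712's behavior change: cap concurrent
// warming searchers, and make callers wait for a slot rather than fail.
public class WarmingGate {
    private final Semaphore slots;

    WarmingGate(int maxWarmingSearchers) {
        this.slots = new Semaphore(maxWarmingSearchers);
    }

    // Old behavior: fail fast when no warming slot is available.
    void acquireOrThrow() {
        if (!slots.tryAcquire()) {
            throw new IllegalStateException("maxWarmingSearchers exceeded");
        }
    }

    // New behavior: block until a warming slot frees up.
    void acquireBlocking() throws InterruptedException {
        slots.acquire();
    }

    // Called when a warming searcher finishes registering.
    void release() {
        slots.release();
    }
}
```

With a default of 1, a second commit that would previously have hit the "exceeded limit of maxWarmingSearchers" exception now simply waits for the first warming searcher to register.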
[JENKINS] Lucene-Solr-master-Windows (64bit/jdk1.8.0_112) - Build # 6293 - Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/6293/ Java: 64bit/jdk1.8.0_112 -XX:+UseCompressedOops -XX:+UseG1GC 1 tests failed. FAILED: org.apache.solr.cloud.PeerSyncReplicationTest.test Error Message: expected:<204> but was:<181> Stack Trace: java.lang.AssertionError: expected:<204> but was:<181> at __randomizedtesting.SeedInfo.seed([591233F6A04B871D:D1460C2C0EB7EAE5]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.failNotEquals(Assert.java:647) at org.junit.Assert.assertEquals(Assert.java:128) at org.junit.Assert.assertEquals(Assert.java:472) at org.junit.Assert.assertEquals(Assert.java:456) at org.apache.solr.cloud.PeerSyncReplicationTest.bringUpDeadNodeAndEnsureNoReplication(PeerSyncReplicationTest.java:280) at org.apache.solr.cloud.PeerSyncReplicationTest.test(PeerSyncReplicationTest.java:157) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) 
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at
[jira] [Closed] (SOLR-9862) Cannot start Solr on Solaris/Super Cluster
[ https://issues.apache.org/jira/browse/SOLR-9862?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Moenieb closed SOLR-9862. - Resolution: Workaround > Cannot start Solr on Solaris/Super Cluster > -- > > Key: SOLR-9862 > URL: https://issues.apache.org/jira/browse/SOLR-9862 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Affects Versions: 6.3 >Reporter: Moenieb > Labels: beginner, newbie, security > > Solr: 6.3 > OS: Solaris > JAVA: 1.8.0_111 > Hardware: Oracle Super Cluster > When i start Solr, I get a message that does not allow Solr to startup, See > cmd output below > P.S: I am BRAND new to Solr. I installed and actually played with the > previous version on exactly the same environment and had no issues > root@XXX:/u02/solr# java -version > java version "1.8.0_111" > Java(TM) SE Runtime Environment (build 1.8.0_111-b14) > Java HotSpot(TM) 64-Bit Server VM (build 25.111-b14, mixed mode) > root@XXX:/u02/solr# echo $JAVA_HOME > /u03/software/jdk1.8.0_111 > root@XXX:/u02/solr# echo $PATH > /u03/software/jdk1.8.0_111/bin:/usr/sbin:/usr/bin > root@XXX:/u02/solr# bin/solr start > awk: can't open /version/ {print $2} > Your current version of Java is too old to run this version of Solr > We found version , using command '/u03/software/jdk1.8.0_111/bin/java' > Please install latest version of Java 8 or set JAVA_HOME properly. > Debug information: > JAVA_HOME: /u03/software/jdk1.8.0_111 > Active Path: > /u03/software/jdk1.8.0_111/bin:/usr/sbin:/usr/bin > root@XXX:/u02/solr# -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-9805) Use metrics-jvm library to instrument jvm internals
[ https://issues.apache.org/jira/browse/SOLR-9805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15754833#comment-15754833 ] ASF subversion and git services commented on SOLR-9805: --- Commit 54e35102fe0d18f8a14b3cbd1d368c5d47cfb706 in lucene-solr's branch refs/heads/feature/metrics from [~shalinmangar] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=54e3510 ] SOLR-9805: Added sha, notice and license files for metrics-jvm library > Use metrics-jvm library to instrument jvm internals > --- > > Key: SOLR-9805 > URL: https://issues.apache.org/jira/browse/SOLR-9805 > Project: Solr > Issue Type: New Feature > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Shalin Shekhar Mangar >Assignee: Shalin Shekhar Mangar > Fix For: master (7.0), 6.4 > > Attachments: SOLR-9805.patch > > Original Estimate: 24h > Remaining Estimate: 24h > > See http://metrics.dropwizard.io/3.1.0/manual/jvm/ -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-9812) Implement a /admin/metrics API
[ https://issues.apache.org/jira/browse/SOLR-9812?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15754836#comment-15754836 ] ASF subversion and git services commented on SOLR-9812: --- Commit 5a17c1b5c56195eebc45c19452a4ec92e5d742fb in lucene-solr's branch refs/heads/feature/metrics from [~shalinmangar] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=5a17c1b ] SOLR-9812: Added entry to CHANGES.txt > Implement a /admin/metrics API > -- > > Key: SOLR-9812 > URL: https://issues.apache.org/jira/browse/SOLR-9812 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Shalin Shekhar Mangar >Assignee: Shalin Shekhar Mangar > Fix For: master (7.0), 6.4 > > Attachments: SOLR-9812.patch, SOLR-9812.patch > > Original Estimate: 168h > Remaining Estimate: 168h > > We added a bare bones metrics API in SOLR-9788 but due to limitations with > the metrics servlet supplied by the metrics library, it can show statistics > from only one metric registry. SOLR-4735 has added a hierarchy of metric > registries and the /admin/metrics API should support showing all of them as > well as be able to filter metrics from a given registry name. > In this issue we will implement the improved /admin/metrics API. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-4735) Improve Solr metrics reporting
[ https://issues.apache.org/jira/browse/SOLR-4735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15754834#comment-15754834 ] ASF subversion and git services commented on SOLR-4735: --- Commit 5f0637cc8569768ac9ce2a38cef5405163a974c0 in lucene-solr's branch refs/heads/feature/metrics from [~shalinmangar] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=5f0637c ] SOLR-4735: Use method in java.util.Objects instead of the forbidden methods in Guava's Preconditions class > Improve Solr metrics reporting > -- > > Key: SOLR-4735 > URL: https://issues.apache.org/jira/browse/SOLR-4735 > Project: Solr > Issue Type: Improvement > Components: metrics >Reporter: Alan Woodward >Assignee: Andrzej Bialecki >Priority: Minor > Attachments: SOLR-4735.patch, SOLR-4735.patch, SOLR-4735.patch, > SOLR-4735.patch, SOLR-4735.patch, screenshot-2.png > > > Following on from a discussion on the mailing list: > http://search-lucene.com/m/IO0EI1qdyJF1/codahale=Solr+metrics+in+Codahale+metrics+and+Graphite+ > It would be good to make Solr play more nicely with existing devops > monitoring systems, such as Graphite or Ganglia. Stats monitoring at the > moment is poll-only, either via JMX or through the admin stats page. I'd > like to refactor things a bit to make this more pluggable. > This patch is a start. It adds a new interface, InstrumentedBean, which > extends SolrInfoMBean to return a > [[Metrics|http://metrics.codahale.com/manual/core/]] MetricRegistry, and a > couple of MetricReporters (which basically just duplicate the JMX and admin > page reporting that's there at the moment, but which should be more > extensible). The patch includes a change to RequestHandlerBase showing how > this could work. The idea would be to eventually replace the getStatistics() > call on SolrInfoMBean with this instead. > The next step would be to allow more MetricReporters to be defined in > solrconfig.xml. 
The Metrics library comes with ganglia and graphite > reporting modules, and we can add contrib plugins for both of those. > There's some more general cleanup that could be done around SolrInfoMBean > (we've got two plugin handlers at /mbeans and /plugins that basically do the > same thing, and the beans themselves have some weirdly inconsistent data on > them - getVersion() returns different things for different impls, and > getSource() seems pretty useless), but maybe that's for another issue. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-9812) Implement a /admin/metrics API
[ https://issues.apache.org/jira/browse/SOLR-9812?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15754835#comment-15754835 ] ASF subversion and git services commented on SOLR-9812: --- Commit aa9b02bb16afe2af8c2437ffab46f4a09bda684e in lucene-solr's branch refs/heads/feature/metrics from [~shalinmangar] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=aa9b02b ] SOLR-9812: Added a new /admin/metrics API to return all metrics collected by Solr via API > Implement a /admin/metrics API > -- > > Key: SOLR-9812 > URL: https://issues.apache.org/jira/browse/SOLR-9812 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Shalin Shekhar Mangar >Assignee: Shalin Shekhar Mangar > Fix For: master (7.0), 6.4 > > Attachments: SOLR-9812.patch, SOLR-9812.patch > > Original Estimate: 168h > Remaining Estimate: 168h > > We added a bare bones metrics API in SOLR-9788 but due to limitations with > the metrics servlet supplied by the metrics library, it can show statistics > from only one metric registry. SOLR-4735 has added a hierarchy of metric > registries and the /admin/metrics API should support showing all of them as > well as be able to filter metrics from a given registry name. > In this issue we will implement the improved /admin/metrics API. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-9812) Implement a /admin/metrics API
[ https://issues.apache.org/jira/browse/SOLR-9812?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shalin Shekhar Mangar updated SOLR-9812: Attachment: SOLR-9812.patch Patch passes precommit and all tests. > Implement a /admin/metrics API > -- > > Key: SOLR-9812 > URL: https://issues.apache.org/jira/browse/SOLR-9812 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Shalin Shekhar Mangar >Assignee: Shalin Shekhar Mangar > Fix For: master (7.0), 6.4 > > Attachments: SOLR-9812.patch, SOLR-9812.patch > > Original Estimate: 168h > Remaining Estimate: 168h > > We added a bare bones metrics API in SOLR-9788 but due to limitations with > the metrics servlet supplied by the metrics library, it can show statistics > from only one metric registry. SOLR-4735 has added a hierarchy of metric > registries and the /admin/metrics API should support showing all of them as > well as be able to filter metrics from a given registry name. > In this issue we will implement the improved /admin/metrics API. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_112) - Build # 18543 - Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/18543/ Java: 32bit/jdk1.8.0_112 -client -XX:+UseConcMarkSweepGC 1 tests failed. FAILED: org.apache.solr.security.PKIAuthenticationIntegrationTest.testPkiAuth Error Message: There are still nodes recoverying - waited for 10 seconds Stack Trace: java.lang.AssertionError: There are still nodes recoverying - waited for 10 seconds at __randomizedtesting.SeedInfo.seed([BBB0D8F724242CF:3B0594DE92F4C56E]:0) at org.junit.Assert.fail(Assert.java:93) at org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:184) at org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForRecoveriesToFinish(AbstractFullDistribZkTestBase.java:862) at org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForThingsToLevelOut(AbstractFullDistribZkTestBase.java:1418) at org.apache.solr.security.PKIAuthenticationIntegrationTest.testPkiAuth(PKIAuthenticationIntegrationTest.java:50) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960) at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at
[jira] [Resolved] (LUCENE-7587) New FacetQuery and MultiFacetQuery
[ https://issues.apache.org/jira/browse/LUCENE-7587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael McCandless resolved LUCENE-7587. Resolution: Fixed Fix Version/s: (was: 6.3.1) 6.4 Thanks [~ekeller]! > New FacetQuery and MultiFacetQuery > -- > > Key: LUCENE-7587 > URL: https://issues.apache.org/jira/browse/LUCENE-7587 > Project: Lucene - Core > Issue Type: Improvement >Affects Versions: master (7.0), 6.3.1 >Reporter: Emmanuel Keller >Priority: Minor > Labels: facet, faceting > Fix For: master (7.0), 6.4 > > Attachments: LUCENE-7587.patch > > > This patch introduces two convenient queries: FacetQuery and MultiFacetQuery. > It can be useful to be able to filter a complex query on one or many facet > value. > - FacetQuery acts as a TermQuery on a FacetField. > - MultiFacetQuery acts as a MultiTermQuery on a FacetField.
[jira] [Commented] (LUCENE-7587) New FacetQuery and MultiFacetQuery
[ https://issues.apache.org/jira/browse/LUCENE-7587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15754621#comment-15754621 ] ASF subversion and git services commented on LUCENE-7587: - Commit a11cdd2fd8ca17e8a2e4f78431d347c58dd36353 in lucene-solr's branch refs/heads/branch_6x from Mike McCandless [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=a11cdd2 ] LUCENE-7587: add helper FacetQuery and MultiFacetQuery classes to simplify drill down implementation > New FacetQuery and MultiFacetQuery > -- > > Key: LUCENE-7587 > URL: https://issues.apache.org/jira/browse/LUCENE-7587 > Project: Lucene - Core > Issue Type: Improvement >Affects Versions: master (7.0), 6.3.1 >Reporter: Emmanuel Keller >Priority: Minor > Labels: facet, faceting > Fix For: master (7.0), 6.3.1 > > Attachments: LUCENE-7587.patch > > > This patch introduces two convenient queries: FacetQuery and MultiFacetQuery. > It can be useful to be able to filter a complex query on one or many facet > value. > - FacetQuery acts as a TermQuery on a FacetField. > - MultiFacetQuery acts as a MultiTermQuery on a FacetField.
[jira] [Commented] (LUCENE-7587) New FacetQuery and MultiFacetQuery
[ https://issues.apache.org/jira/browse/LUCENE-7587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15754617#comment-15754617 ] ASF subversion and git services commented on LUCENE-7587: - Commit 835296f20a17c12c66b4f043074c94e3ddd5c2b5 in lucene-solr's branch refs/heads/master from Mike McCandless [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=835296f ] LUCENE-7587: add helper FacetQuery and MultiFacetQuery classes to simplify drill down implementation > New FacetQuery and MultiFacetQuery > -- > > Key: LUCENE-7587 > URL: https://issues.apache.org/jira/browse/LUCENE-7587 > Project: Lucene - Core > Issue Type: Improvement >Affects Versions: master (7.0), 6.3.1 >Reporter: Emmanuel Keller >Priority: Minor > Labels: facet, faceting > Fix For: master (7.0), 6.3.1 > > Attachments: LUCENE-7587.patch > > > This patch introduces two convenient queries: FacetQuery and MultiFacetQuery. > It can be useful to be able to filter a complex query on one or many facet > value. > - FacetQuery acts as a TermQuery on a FacetField. > - MultiFacetQuery acts as a MultiTermQuery on a FacetField.
[JENKINS] Lucene-Solr-6.x-Solaris (64bit/jdk1.8.0) - Build # 558 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Solaris/558/ Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseSerialGC 1 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.cloud.hdfs.HdfsRecoveryZkTest Error Message: ObjectTracker found 1 object(s) that were not released!!! [HdfsTransactionLog] org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException at org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:43) at org.apache.solr.update.HdfsTransactionLog.(HdfsTransactionLog.java:130) at org.apache.solr.update.HdfsUpdateLog.init(HdfsUpdateLog.java:202) at org.apache.solr.update.UpdateHandler.(UpdateHandler.java:137) at org.apache.solr.update.UpdateHandler.(UpdateHandler.java:94) at org.apache.solr.update.DirectUpdateHandler2.(DirectUpdateHandler2.java:102) at sun.reflect.GeneratedConstructorAccessor160.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.solr.core.SolrCore.createInstance(SolrCore.java:704) at org.apache.solr.core.SolrCore.createUpdateHandler(SolrCore.java:766) at org.apache.solr.core.SolrCore.initUpdateHandler(SolrCore.java:1005) at org.apache.solr.core.SolrCore.(SolrCore.java:870) at org.apache.solr.core.SolrCore.(SolrCore.java:774) at org.apache.solr.core.CoreContainer.create(CoreContainer.java:842) at org.apache.solr.core.CoreContainer.lambda$load$0(CoreContainer.java:498) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) Stack Trace: java.lang.AssertionError: ObjectTracker found 1 object(s) that were 
not released!!! [HdfsTransactionLog] org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException at org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:43) at org.apache.solr.update.HdfsTransactionLog.(HdfsTransactionLog.java:130) at org.apache.solr.update.HdfsUpdateLog.init(HdfsUpdateLog.java:202) at org.apache.solr.update.UpdateHandler.(UpdateHandler.java:137) at org.apache.solr.update.UpdateHandler.(UpdateHandler.java:94) at org.apache.solr.update.DirectUpdateHandler2.(DirectUpdateHandler2.java:102) at sun.reflect.GeneratedConstructorAccessor160.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.solr.core.SolrCore.createInstance(SolrCore.java:704) at org.apache.solr.core.SolrCore.createUpdateHandler(SolrCore.java:766) at org.apache.solr.core.SolrCore.initUpdateHandler(SolrCore.java:1005) at org.apache.solr.core.SolrCore.(SolrCore.java:870) at org.apache.solr.core.SolrCore.(SolrCore.java:774) at org.apache.solr.core.CoreContainer.create(CoreContainer.java:842) at org.apache.solr.core.CoreContainer.lambda$load$0(CoreContainer.java:498) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) at __randomizedtesting.SeedInfo.seed([53EFA0D1A06992C1]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.junit.Assert.assertNull(Assert.java:551) at org.apache.solr.SolrTestCaseJ4.teardownTestCases(SolrTestCaseJ4.java:266) at sun.reflect.GeneratedMethodAccessor37.invoke(Unknown Source) at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:870) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at
[jira] [Commented] (LUCENE-7589) Prevent outliers from raising the number of bits of everyone with numeric doc values
[ https://issues.apache.org/jira/browse/LUCENE-7589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15754558#comment-15754558 ] Adrien Grand commented on LUCENE-7589: -- Like Mike predicted, this helped the NYC taxi bench a bit. Disk usage for the dropoff datetime field went from 194MB to 166MB: http://people.apache.org/~mikemccand/lucenebench/sparseResults.html#index_size_by_field > Prevent outliers from raising the number of bits of everyone with numeric doc > values > > > Key: LUCENE-7589 > URL: https://issues.apache.org/jira/browse/LUCENE-7589 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Adrien Grand >Assignee: Adrien Grand >Priority: Minor > Fix For: master (7.0) > > Attachments: LUCENE-7589.patch > > > Today we encode entire segments with a single number of bits per value. It > was done this way because it was faster, but it also means a single outlier > can significantly increase the space requirements. I think we should have > protection against that.
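The space saving discussed in LUCENE-7589 comes from not letting one large value dictate the bits-per-value for a whole segment. The following is a minimal illustrative sketch of that idea, not Lucene's actual doc-values encoder: the block size and values are invented for the example, and real packed-ints encoding has per-block headers this ignores.

```python
# Illustrative sketch (NOT Lucene's encoder): a single outlier inflates a
# segment-wide fixed bits-per-value encoding, while per-block encoding
# confines the damage to the block containing the outlier.

def bits_required(v):
    # Minimum bits needed to represent a non-negative integer (at least 1).
    return max(1, v.bit_length())

def packed_size_bits(values):
    # Whole run encoded with one bits-per-value: the largest value wins.
    return len(values) * max(bits_required(v) for v in values)

def blocked_size_bits(values, block=128):
    # Each block picks its own bits-per-value independently.
    total = 0
    for i in range(0, len(values), block):
        total += packed_size_bits(values[i:i + block])
    return total

values = [7] * 1024 + [2**40]      # many small values plus one outlier
whole = packed_size_bits(values)    # every value stored with 41 bits
blocked = blocked_size_bits(values) # only the outlier's block pays 41 bits
print(whole, blocked)
```

Under this toy model the segment-wide encoding spends 41 bits on all 1025 values, while the blocked encoding spends 3 bits on the small values and 41 only on the outlier, roughly the effect seen in the taxi-bench numbers above.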
[JENKINS] Lucene-Solr-6.x-Linux (32bit/jdk1.8.0_112) - Build # 2435 - Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/2435/ Java: 32bit/jdk1.8.0_112 -server -XX:+UseSerialGC 1 tests failed. FAILED: org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.test Error Message: Expected 2 of 3 replicas to be active but only found 1; [core_node2:{"core":"c8n_1x3_lf_shard1_replica3","base_url":"http://127.0.0.1:41352","node_name":"127.0.0.1:41352_","state":"active","leader":"true"}]; clusterState: DocCollection(c8n_1x3_lf//collections/c8n_1x3_lf/state.json/17)={ "replicationFactor":"3", "shards":{"shard1":{ "range":"8000-7fff", "state":"active", "replicas":{ "core_node1":{ "state":"down", "base_url":"http://127.0.0.1:36443;, "core":"c8n_1x3_lf_shard1_replica1", "node_name":"127.0.0.1:36443_"}, "core_node2":{ "core":"c8n_1x3_lf_shard1_replica3", "base_url":"http://127.0.0.1:41352;, "node_name":"127.0.0.1:41352_", "state":"active", "leader":"true"}, "core_node3":{ "core":"c8n_1x3_lf_shard1_replica2", "base_url":"http://127.0.0.1:37726;, "node_name":"127.0.0.1:37726_", "state":"down", "router":{"name":"compositeId"}, "maxShardsPerNode":"1", "autoAddReplicas":"false"} Stack Trace: java.lang.AssertionError: Expected 2 of 3 replicas to be active but only found 1; [core_node2:{"core":"c8n_1x3_lf_shard1_replica3","base_url":"http://127.0.0.1:41352","node_name":"127.0.0.1:41352_","state":"active","leader":"true"}]; clusterState: DocCollection(c8n_1x3_lf//collections/c8n_1x3_lf/state.json/17)={ "replicationFactor":"3", "shards":{"shard1":{ "range":"8000-7fff", "state":"active", "replicas":{ "core_node1":{ "state":"down", "base_url":"http://127.0.0.1:36443;, "core":"c8n_1x3_lf_shard1_replica1", "node_name":"127.0.0.1:36443_"}, "core_node2":{ "core":"c8n_1x3_lf_shard1_replica3", "base_url":"http://127.0.0.1:41352;, "node_name":"127.0.0.1:41352_", "state":"active", "leader":"true"}, "core_node3":{ "core":"c8n_1x3_lf_shard1_replica2", "base_url":"http://127.0.0.1:37726;, "node_name":"127.0.0.1:37726_", "state":"down", 
"router":{"name":"compositeId"}, "maxShardsPerNode":"1", "autoAddReplicas":"false"} at __randomizedtesting.SeedInfo.seed([4252C772B8A8F31F:CA06F8A816549EE7]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.testRf3WithLeaderFailover(LeaderFailoverAfterPartitionTest.java:168) at org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.test(LeaderFailoverAfterPartitionTest.java:55) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811) at
[jira] [Commented] (LUCENE-7587) New FacetQuery and MultiFacetQuery
[ https://issues.apache.org/jira/browse/LUCENE-7587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15754482#comment-15754482 ] Michael McCandless commented on LUCENE-7587: Thanks [~ekeller], this looks great, I'll push shortly. > New FacetQuery and MultiFacetQuery > -- > > Key: LUCENE-7587 > URL: https://issues.apache.org/jira/browse/LUCENE-7587 > Project: Lucene - Core > Issue Type: Improvement >Affects Versions: master (7.0), 6.3.1 >Reporter: Emmanuel Keller >Priority: Minor > Labels: facet, faceting > Fix For: master (7.0), 6.3.1 > > Attachments: LUCENE-7587.patch > > > This patch introduces two convenient queries: FacetQuery and MultiFacetQuery. > It can be useful to be able to filter a complex query on one or many facet > value. > - FacetQuery acts as a TermQuery on a FacetField. > - MultiFacetQuery acts as a MultiTermQuery on a FacetField.
[jira] [Commented] (SOLR-8593) Integrate Apache Calcite into the SQLHandler
[ https://issues.apache.org/jira/browse/SOLR-8593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15754460#comment-15754460 ] Joel Bernstein commented on SOLR-8593: -- The criteria for switching between facet and MapReduce would be cardinality. So a planner rule that is based on the SQL structure won't work in this scenario. I'm thinking the easiest approach might be to add a List of GROUP BY fields to the Aggregate class. Or possibly to add ordering information to the GroupSet BitSet. > Integrate Apache Calcite into the SQLHandler > > > Key: SOLR-8593 > URL: https://issues.apache.org/jira/browse/SOLR-8593 > Project: Solr > Issue Type: Improvement >Reporter: Joel Bernstein >Assignee: Joel Bernstein > Attachments: SOLR-8593.patch, SOLR-8593.patch > > >The Presto SQL Parser was perfect for phase one of the SQLHandler. It was > nicely split off from the larger Presto project and it did everything that > was needed for the initial implementation. > Phase two of the SQL work though will require an optimizer. Here is where > Apache Calcite comes into play. It has a battle tested cost based optimizer > and has been integrated into Apache Drill and Hive. > This work can begin in trunk following the 6.0 release. The final query plans > will continue to be translated to Streaming API objects (TupleStreams), so > continued work on the JDBC driver should plug in nicely with the Calcite work.
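The comment above argues that facet-vs-MapReduce selection should hinge on GROUP BY cardinality rather than SQL shape. As a purely hypothetical sketch of that decision rule (the threshold, field statistics, and function names below are invented for illustration and are not Solr or Calcite code):

```python
# Hypothetical plan-selection sketch: use an in-memory faceted aggregation
# only when the estimated combined cardinality of the GROUP BY fields is
# small; otherwise fall back to a MapReduce-style aggregation.

FACET_CARDINALITY_LIMIT = 100_000  # invented tuning knob, not a real Solr setting

def choose_group_by_strategy(group_by_fields, field_cardinality):
    """Return 'facet' or 'mapreduce' from a crude product-of-cardinalities
    estimate; unknown fields pessimistically force the MapReduce path."""
    estimate = 1
    for field in group_by_fields:
        estimate *= field_cardinality.get(field, FACET_CARDINALITY_LIMIT + 1)
    return "facet" if estimate <= FACET_CARDINALITY_LIMIT else "mapreduce"

# Invented per-field distinct-value counts for illustration.
stats = {"year": 20, "country": 200, "user_id": 50_000_000}
print(choose_group_by_strategy(["year", "country"], stats))  # low cardinality
print(choose_group_by_strategy(["user_id"], stats))          # high cardinality
```

This also shows why a rule keyed only on SQL structure fails: the two queries above have the same GROUP BY shape, yet only the cardinality estimate separates them.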
[JENKINS] Lucene-Solr-NightlyTests-master - Build # 1183 - Still Unstable
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1183/ 9 tests failed. FAILED: org.apache.solr.cloud.CdcrBootstrapTest.testConvertClusterToCdcrAndBootstrap Error Message: Document mismatch on target after sync expected:<1> but was:<0> Stack Trace: java.lang.AssertionError: Document mismatch on target after sync expected:<1> but was:<0> at __randomizedtesting.SeedInfo.seed([4578C3859E28A3B6:92AFECF22A773BF1]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.failNotEquals(Assert.java:647) at org.junit.Assert.assertEquals(Assert.java:128) at org.junit.Assert.assertEquals(Assert.java:472) at org.apache.solr.cloud.CdcrBootstrapTest.testConvertClusterToCdcrAndBootstrap(CdcrBootstrapTest.java:134) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) 
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at java.lang.Thread.run(Thread.java:745) FAILED: org.apache.solr.cloud.CdcrReplicationDistributedZkTest.testResilienceWithDeleteByQueryOnTarget Error Message:
[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 1015 - Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1015/ Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseG1GC 1 tests failed. FAILED: org.apache.solr.update.AutoCommitTest.testMaxTime Error Message: Exception during query Stack Trace: java.lang.RuntimeException: Exception during query at __randomizedtesting.SeedInfo.seed([A2458486BB3F2FAB:38B1F96425A5B397]:0) at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:818) at org.apache.solr.update.AutoCommitTest.testMaxTime(AutoCommitTest.java:270) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at java.lang.Thread.run(Thread.java:745) Caused by: java.lang.RuntimeException: REQUEST FAILED: xpath=//result[@numFound=1] xml response was: 00 request was:q=id:529&qt=standard&start=0&rows=20&version=2.2 at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:811) ... 40 more Build Log: [...truncated 11641 lines...] [junit4] Suite: org.apache.solr.update.AutoCommitTest
[jira] [Updated] (LUCENE-7587) New FacetQuery and MultiFacetQuery
[ https://issues.apache.org/jira/browse/LUCENE-7587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Emmanuel Keller updated LUCENE-7587: Attachment: LUCENE-7587.patch The new patch includes the Javadoc. Thanks [~mikemccand] for your suggestions. > New FacetQuery and MultiFacetQuery > -- > > Key: LUCENE-7587 > URL: https://issues.apache.org/jira/browse/LUCENE-7587 > Project: Lucene - Core > Issue Type: Improvement >Affects Versions: master (7.0), 6.3.1 >Reporter: Emmanuel Keller >Priority: Minor > Labels: facet, faceting > Fix For: master (7.0), 6.3.1 > > Attachments: LUCENE-7587.patch > > > This patch introduces two convenient queries: FacetQuery and MultiFacetQuery. > It can be useful to be able to filter a complex query on one or many facet > value. > - FacetQuery acts as a TermQuery on a FacetField. > - MultiFacetQuery acts as a MultiTermQuery on a FacetField.
[jira] [Updated] (LUCENE-7587) New FacetQuery and MultiFacetQuery
[ https://issues.apache.org/jira/browse/LUCENE-7587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Emmanuel Keller updated LUCENE-7587: Attachment: (was: LUCENE-7587.patch) > New FacetQuery and MultiFacetQuery > -- > > Key: LUCENE-7587 > URL: https://issues.apache.org/jira/browse/LUCENE-7587 > Project: Lucene - Core > Issue Type: Improvement >Affects Versions: master (7.0), 6.3.1 >Reporter: Emmanuel Keller >Priority: Minor > Labels: facet, faceting > Fix For: master (7.0), 6.3.1 -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-9812) Implement a /admin/metrics API
[ https://issues.apache.org/jira/browse/SOLR-9812?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shalin Shekhar Mangar updated SOLR-9812: Attachment: SOLR-9812.patch Patch which adds a new MetricsHandler. The /admin/metrics API supports the following parameters:
# group=all,jvm,jetty,node,core -- multiple 'group' parameters can be specified; comma-separated values are also accepted (default is 'all')
# type=all,counter,gauge,histogram,meter,timer -- similar to the 'group' parameter, both multiple 'type' parameters and comma-separated values can be specified
> Implement a /admin/metrics API > -- > > Key: SOLR-9812 > URL: https://issues.apache.org/jira/browse/SOLR-9812 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Shalin Shekhar Mangar >Assignee: Shalin Shekhar Mangar > Fix For: master (7.0), 6.4 > > Attachments: SOLR-9812.patch > > Original Estimate: 168h > Remaining Estimate: 168h > > We added a bare-bones metrics API in SOLR-9788 but due to limitations with > the metrics servlet supplied by the metrics library, it can show statistics > from only one metric registry. SOLR-4735 has added a hierarchy of metric > registries and the /admin/metrics API should support showing all of them as > well as be able to filter metrics from a given registry name. > In this issue we will implement the improved /admin/metrics API. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
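As a quick illustration of how the two parameters combine, here is a minimal sketch. It is plain string handling only; the host, port, and helper name are made up for the example and this is not SolrJ API:

```java
import java.util.Arrays;
import java.util.List;

public class MetricsUrlDemo {
    // Both 'group' and 'type' accept comma-separated values per the patch notes.
    static String metricsUrl(String base, List<String> groups, List<String> types) {
        return base + "/admin/metrics?group=" + String.join(",", groups)
                    + "&type=" + String.join(",", types);
    }

    public static void main(String[] args) {
        // e.g. only JVM and Jetty metrics, restricted to counters and timers
        System.out.println(metricsUrl("http://localhost:8983/solr",
                Arrays.asList("jvm", "jetty"), Arrays.asList("counter", "timer")));
    }
}
```

The same selection could equally be expressed with repeated parameters (group=jvm&group=jetty), which the handler is described as accepting too.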
[jira] [Created] (SOLR-9872) raf.setLength(0) in transactionLog is unreachable
Cao Manh Dat created SOLR-9872: -- Summary: raf.setLength(0) in transactionLog is unreachable Key: SOLR-9872 URL: https://issues.apache.org/jira/browse/SOLR-9872 Project: Solr Issue Type: Bug Security Level: Public (Default Security Level. Issues are Public) Reporter: Cao Manh Dat While looking at {{TransactionLog.java}} I found these lines of code in the constructor:
{code}
if (start > 0) {
  log.warn("New transaction log already exists:" + tlogFile + " size=" + raf.length());
  return;
}
if (start > 0) {
  raf.setLength(0);
}
addGlobalStrings(globalStrings);
{code}
It seems we can never reach {{raf.setLength(0)}}? -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
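The effect is easy to see in a stripped-down reproduction of that control flow. Class and method names below are hypothetical, not the real TransactionLog:

```java
public class DeadBranchDemo {
    static boolean truncated = false;

    static void openLog(long start) {
        if (start > 0) {
            // log.warn("New transaction log already exists: ...") in the real code
            return;
        }
        if (start > 0) {       // dead branch: start <= 0 on every path reaching here
            truncated = true;  // stands in for raf.setLength(0)
        }
    }

    public static void main(String[] args) {
        openLog(100);  // takes the warn-and-return branch
        openLog(0);    // skips both branches
        System.out.println(truncated);
    }
}
```

No input can make the second branch run, which matches the reporter's observation that raf.setLength(0) is unreachable.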
[jira] [Commented] (LUCENE-7579) Sorting on flushed segment
[ https://issues.apache.org/jira/browse/LUCENE-7579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15754026#comment-15754026 ] Adrien Grand commented on LUCENE-7579: -- +1 > Sorting on flushed segment > -- > > Key: LUCENE-7579 > URL: https://issues.apache.org/jira/browse/LUCENE-7579 > Project: Lucene - Core > Issue Type: Bug >Reporter: Ferenczi Jim > > Today flushed segments built by an index writer with an index sort specified > are not sorted. The merge is responsible for sorting these segments, > potentially with others that are already sorted (resulting from another > merge). > I'd like to investigate the cost of sorting the segment directly during the > flush. This could make the merge faster since there are some cheap > optimizations that can be done only if all segments to be merged are sorted. > For instance the merge of the points could use the bulk merge instead of > rebuilding the points from scratch. > I made a small prototype which sorts the segment on flush here: > https://github.com/apache/lucene-solr/compare/master...jimczi:flush_sort > The idea is simple: for points, norms, docvalues and terms I use the > SortingLeafReader implementation to translate the values that we have in RAM > into a sorted enumeration for the writers. > For stored fields I use a two-pass scheme where the documents are first > written to disk unsorted and then copied to another file with the correct > sorting. > I use the same stored field format for the two steps and just remove > the file produced by the first pass at the end of the process. > This prototype has no implementation for index sorting that uses term vectors > yet. I'll add this later if the tests are good enough. > Speaking of testing, I tried this branch with [~mikemccand]'s benchmark scripts > and compared master with index sorting against my branch with index sorting > on flush. I tried with sparsetaxis and wikipedia and the first results are > weird. When I use the SerialScheduler and only one thread to write the docs, > index sorting on flush is slower. But when I use two threads the sorting on > flush is much faster, even with the SerialScheduler. I'll continue to run the > tests in order to be able to share something more meaningful. > The tests are passing except one about concurrent DV updates. I don't know > this part at all, so I did not fix the test yet. I don't even know if we can > make it work with index sorting ;). > [~mikemccand] I would love to have your feedback about the prototype. Could > you please take a look? I am sure there are plenty of bugs, ... but I think > it's a good start to evaluate the feasibility of this feature. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
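The two-pass stored-fields scheme described above can be modeled in a few lines. This is a toy sketch with made-up document values; the real prototype remaps docids via the index sort's doc map rather than sorting strings:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

public class TwoPassFlushDemo {
    public static void main(String[] args) {
        // pass 1: stored documents land on disk in arrival (unsorted) order
        List<String> firstPass = Arrays.asList("cherry", "apple", "banana");

        // compute the docid remapping implied by the index sort
        Integer[] oldDocIds = {0, 1, 2};
        Arrays.sort(oldDocIds, Comparator.comparing(firstPass::get));

        // pass 2: copy documents into the final file in sorted order,
        // then the pass-1 file would be deleted
        List<String> secondPass = new ArrayList<>();
        for (int oldId : oldDocIds) {
            secondPass.add(firstPass.get(oldId));
        }
        System.out.println(secondPass);
    }
}
```

The design choice being debated in the comments below is what format the pass-1 ("firstPass") buffer should use: the codec's own compressed stored-fields format, or a temporary format with cheaper random access.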
[jira] [Commented] (LUCENE-7579) Sorting on flushed segment
[ https://issues.apache.org/jira/browse/LUCENE-7579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15753993#comment-15753993 ] Ferenczi Jim commented on LUCENE-7579: -- This new API may be a premature optimization that should not be part of this change. What about removing the API and rolling back to a non-optimized copy that "visits" each doc and copies it the way the StoredFieldsReader does? That way the function would be private on the StoredFieldsConsumer. We can still add the optimization you're describing later, but it could be confusing if the writes of the index writer are not compressed the same way as the other writes for stored fields? > Sorting on flushed segment > -- > > Key: LUCENE-7579 > URL: https://issues.apache.org/jira/browse/LUCENE-7579 > Project: Lucene - Core > Issue Type: Bug >Reporter: Ferenczi Jim -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-7579) Sorting on flushed segment
[ https://issues.apache.org/jira/browse/LUCENE-7579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15753957#comment-15753957 ] Adrien Grand commented on LUCENE-7579: -- bq. I am not happy that I had to add this new public API in the StoredFieldsReader but it's the only way to make this optimized for the compressing case. I was thinking about it too, and I suspect the optimization does not bring much when blocks contain multiple documents (i.e. small docs), since I would expect the bottleneck to be that sorting with the stored fields format keeps decompressing 16KB blocks for every single document. Maybe we should not try to reuse the codec's stored fields format for the temporary stored fields, and rather do the buffering in memory or on disk with a custom format that has faster random access? I would expect it to be faster in many cases, and it would allow us to get rid of this new API. > Sorting on flushed segment > -- > > Key: LUCENE-7579 > URL: https://issues.apache.org/jira/browse/LUCENE-7579 > Project: Lucene - Core > Issue Type: Bug >Reporter: Ferenczi Jim -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-7579) Sorting on flushed segment
[ https://issues.apache.org/jira/browse/LUCENE-7579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15753913#comment-15753913 ] Ferenczi Jim commented on LUCENE-7579: -- {quote} CompressingStoredFieldsWriter.sort should always have a CompressingStoredFieldsReader as an input, since the codec cannot change in the middle of the flush, so I think we should be able to skip the instanceof check? {quote} That's true for the only call we make to this new API, but since it's public it could be called with a different fields reader in another use case? I am not happy that I had to add this new public API in the StoredFieldsReader but it's the only way to make this optimized for the compressing case. {quote} It would personally help me to have comments eg. in MergeState.maybeSortReaders that the indexSort==null case may only happen for bwc reasons. Maybe we should also assert that if index sorting is configured, then the non-sorted segments can only have 6.2 or 6.3 as a version {quote} Agreed, I'll add an assert for the non-sorted case. I'll also add a comment to make it clear that indexSort==null is handled for BWC reasons in maybeSortReaders. Thanks for having a look [~jpountz] > Sorting on flushed segment > -- > > Key: LUCENE-7579 > URL: https://issues.apache.org/jira/browse/LUCENE-7579 > Project: Lucene - Core > Issue Type: Bug >Reporter: Ferenczi Jim -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Created] (SOLR-9871) Java-level deadlock on SolrCore.getSearcher
tomlewlit created SOLR-9871: --- Summary: Java-level deadlock on SolrCore.getSearcher Key: SOLR-9871 URL: https://issues.apache.org/jira/browse/SOLR-9871 Project: Solr Issue Type: Bug Security Level: Public (Default Security Level. Issues are Public) Components: SolrCloud Affects Versions: 6.2.1 Reporter: tomlewlit I have a SolrCloud setup with 2 nodes, 4 shards, 2 replicas, ~200mln documents. Sometimes one of the nodes hangs, jstack tells about java-level deadlock: {noformat} Found one Java-level deadlock: = "qtp1543727556-8467": waiting to lock monitor 0x7f5b6c9d8378 (object 0x00050dc96730, a java.lang.Object), which is held by "searcherExecutor-8-thread-1-processing-n:172.19.123.3:18984_solr x:mail_shard1_replica2 s:shard1 c:mail r:core_node8" "searcherExecutor-8-thread-1-processing-n:172.19.123.3:18984_solr x:mail_shard1_replica2 s:shard1 c:mail r:core_node8": waiting to lock monitor 0x7f5aac41cd38 (object 0x00050ded63e8, a org.apache.solr.update.SolrIndexWriter), which is held by "commitScheduler-21-thread-1" "commitScheduler-21-thread-1": waiting to lock monitor 0x7f5b6c9d8378 (object 0x00050dc96730, a java.lang.Object), which is held by "searcherExecutor-8-thread-1-processing-n:172.19.123.3:18984_solr x:mail_shard1_replica2 s:shard1 c:mail r:core_node8" Java stack information for the threads listed above: === "qtp1543727556-8467": at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1749) - waiting to lock <0x00050dc96730> (a java.lang.Object) at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1552) at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1487) at org.apache.solr.request.SolrQueryRequestBase.getSearcher(SolrQueryRequestBase.java:115) at org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:308) at org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:295) at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:154) at 
org.apache.solr.core.SolrCore.execute(SolrCore.java:2089) at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:652) at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:459) at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:257) at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:208) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668) at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581) at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143) at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548) at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226) at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1160) at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511) at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185) at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1092) at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141) at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213) at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119) at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134) at org.eclipse.jetty.server.Server.handle(Server.java:518) at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:308) at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:244) at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273) at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95) at org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93) at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceAndRun(ExecuteProduceConsume.java:246) at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:156) at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:654) at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:572) at java.lang.Thread.run(Thread.java:745) "searcherExecutor-8-thread-1-processing-n:172.19.123.3:18984_solr x:mail_shard1_replica2 s:shard1 c:mail
[jira] [Commented] (LUCENE-7595) RAMUsageTester in test-framework and static field checker no longer works with Java 9
[ https://issues.apache.org/jira/browse/LUCENE-7595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15753834#comment-15753834 ] Dawid Weiss commented on LUCENE-7595: - I agree certain classes could be approximated (like String, Lists, etc.). bq. Disallow any static field in tests that is not final (constant) and points to an Object except: Strings and native (wrapper) types. The check could be less strict -- we could fail if the value of such a field is non-null after the test and permit nullified reference fields. > RAMUsageTester in test-framework and static field checker no longer works > with Java 9 > - > > Key: LUCENE-7595 > URL: https://issues.apache.org/jira/browse/LUCENE-7595 > Project: Lucene - Core > Issue Type: Bug > Components: general/test >Reporter: Uwe Schindler >Assignee: Uwe Schindler > Labels: Java9 > > Lucene/Solr tests have a special rule that records memory usage in static > fields before and after test, so we can detect memory leaks. This check dives > into JDK classes (like java.lang.String) to detect their size. As Java 9 > build 148 completely forbids setAccessible on any runtime class, we have to > change or disable this check: > - As a first step I will only add the rule to LTC if we are on Java 8 > - As a second step we might investigate how to improve this > [~rcmuir] had some ideas for the 2nd point: > - Don't dive into classes from JDK modules and instead "estimate" the size > for some special cases (like Strings) > - Disallow any static field in tests that is not final (constant) and points > to an Object except: Strings and native (wrapper) types. > In addition we also have RAMUsageTester, which has similar problems and is > used to compare Lucene's estimates of > Codec/IndexWriter/IndexReader memory usage with reality. We should simply > disable those tests. 
-- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
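The relaxed check Dawid describes (permit nullified references, Strings and primitive wrappers; flag any other non-null non-final static field) can be sketched with plain reflection on the test class itself, which Java 9 still allows. Class and field names here are hypothetical:

```java
import java.lang.reflect.Field;
import java.lang.reflect.Modifier;
import java.util.ArrayList;
import java.util.List;

public class StaticLeakCheckDemo {
    static Object leaked = new Object();       // violation: reference left behind
    static Object cleaned = null;              // ok: nullified after the test
    static String name = "fine";               // ok: Strings are exempt
    static final Object CONST = new Object();  // ok: final (constant)

    static List<String> violations(Class<?> clazz) throws IllegalAccessException {
        List<String> bad = new ArrayList<>();
        for (Field f : clazz.getDeclaredFields()) {
            int mods = f.getModifiers();
            if (!Modifier.isStatic(mods) || Modifier.isFinal(mods)) continue;
            f.setAccessible(true); // own class, so no module restrictions apply
            Object value = f.get(null);
            // permit null, Strings and primitive wrappers; flag everything else
            if (value == null || value instanceof String
                    || value instanceof Number || value instanceof Boolean
                    || value instanceof Character) continue;
            bad.add(f.getName());
        }
        return bad;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(violations(StaticLeakCheckDemo.class));
    }
}
```

Unlike the current rule, this needs no setAccessible calls on JDK classes, only on fields of the test class under inspection.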
[jira] [Updated] (LUCENE-7596) Update Groovy to 2.4.8 in build system
[ https://issues.apache.org/jira/browse/LUCENE-7596?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Uwe Schindler updated LUCENE-7596: -- Labels: Java9 (was: ) > Update Groovy to 2.4.8 in build system > -- > > Key: LUCENE-7596 > URL: https://issues.apache.org/jira/browse/LUCENE-7596 > Project: Lucene - Core > Issue Type: Bug > Components: general/build >Reporter: Uwe Schindler >Assignee: Uwe Schindler > Labels: Java9 > > The current version of Groovy used by several Ant components is incompatible > with Java 9 build 148+. We need to update to 2.4.8 once it is released: > http://mail.openjdk.java.net/pipermail/jigsaw-dev/2016-December/010474.html -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-6989) Implement MMapDirectory unmapping for coming Java 9 changes
[ https://issues.apache.org/jira/browse/LUCENE-6989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15753826#comment-15753826 ] Uwe Schindler commented on LUCENE-6989: --- I opened LUCENE-7596. > Implement MMapDirectory unmapping for coming Java 9 changes > --- > > Key: LUCENE-6989 > URL: https://issues.apache.org/jira/browse/LUCENE-6989 > Project: Lucene - Core > Issue Type: Task > Components: core/store >Reporter: Uwe Schindler >Assignee: Uwe Schindler > Labels: Java9 > Fix For: 6.0, 6.4 > > Attachments: LUCENE-6989-disable5x.patch, > LUCENE-6989-disable5x.patch, LUCENE-6989-fixbuild148.patch, > LUCENE-6989-v2.patch, LUCENE-6989-v3-post-b148.patch, LUCENE-6989.patch, > LUCENE-6989.patch, LUCENE-6989.patch, LUCENE-6989.patch > > > Originally, the sun.misc.Cleaner interface was declared as "critical API" in > [JEP 260|http://openjdk.java.net/jeps/260] > Unfortunately the decision was changed in favor of an officially supported > {{java.lang.ref.Cleaner}} API. A side effect of this change is to move all > existing {{sun.misc.Cleaner}} APIs into a non-exported package. This causes > our forceful unmapping to no longer work, because we can get the cleaner > instance via reflection, but trying to invoke it will throw one of the new > Jigsaw RuntimeExceptions because it is completely inaccessible. This will make > our forceful unmapping fail. There are also no changes in the garbage collector; > the problem still exists. > For more information see this [mailing list > thread|http://mail.openjdk.java.net/pipermail/core-libs-dev/2016-January/thread.html#38243]. > This commit will likely be done, making our unmapping efforts no longer > work. Alan Bateman is aware of this issue and will open a new issue at > OpenJDK to allow forceful unmapping without using the now private > sun.misc.Cleaner. The idea is to let the internal class sun.misc.Cleaner > implement the Runnable interface, so we can simply cast to Runnable and call > the run() method to unmap. 
The code would then work. This will lead to minor > changes in our unmapper in MMapDirectory: An instanceof check and casting if > possible. > I opened this issue to keep track and implement the changes as soon as > possible, so people will have working unmapping when java 9 comes out. > Current Lucene versions will no longer work with Java 9. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Created] (LUCENE-7596) Update Groovy to 2.4.8 in build system
Uwe Schindler created LUCENE-7596: - Summary: Update Groovy to 2.4.8 in build system Key: LUCENE-7596 URL: https://issues.apache.org/jira/browse/LUCENE-7596 Project: Lucene - Core Issue Type: Bug Components: general/build Reporter: Uwe Schindler Assignee: Uwe Schindler The current version of Groovy used by several Ant components is incompatible with Java 9 build 148+. We need to update to 2.4.8 once it is released: http://mail.openjdk.java.net/pipermail/jigsaw-dev/2016-December/010474.html -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-7508) [smartcn] tokens are not correctly created if text length > 1024
[ https://issues.apache.org/jira/browse/LUCENE-7508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15753817#comment-15753817 ] peina commented on LUCENE-7508: --- glad to know my previous fix gave you at least some hint :) > [smartcn] tokens are not correctly created if text length > 1024 > > > Key: LUCENE-7508 > URL: https://issues.apache.org/jira/browse/LUCENE-7508 > Project: Lucene - Core > Issue Type: Bug > Components: modules/analysis >Affects Versions: 6.2.1 > Environment: Mac OS X 10.10 >Reporter: peina > Labels: chinese, tokenization > Attachments: lucene-7508-test.patch, lucene-7508.patch > > > If text length is > 1024, HMMChineseTokenizer failed to split sentences > correctly. > Test Sample: > public static void main(String[] args) throws IOException{ > Analyzer analyzer = new SmartChineseAnalyzer(); /* will load stopwords */ > //String sentence = > "“七八个物管工作人员对我一个文弱书生拳打脚踢,我极力躲避时还被追打。”前天,微信网友爆料称,一名50多岁的江西教师在昆明被物管群殴,手指骨折,向网友求助。教师为何会被物管殴打?事情的真相又是如何?昨天,记者来到圣世一品小区,通过调查了解,事情的起因源于这名教师在小区里帮女儿散发汗蒸馆广告单,被物管保安发现后,引发冲突。对于群殴教师的说法,该小区物管保安队长称:“保安在追的过程中,确实有拉扯,但并没有殴打教师,至于手指骨折是他自己摔伤的。”爆料江西教师在昆明被物管殴打记者注意到,消息于8月27日发出,爆料者称,自己是江西宜丰崇文中学的一名中年教师黄敏。暑假期间来昆明的女儿家度假。他女儿在昆明与人合伙开了一家汗蒸馆,7月30日开业。8月9日下午6点30分许,他到昆明东二环圣世一品小区为女儿的汗蒸馆散发宣传小广告。小区物管前来制止,他就停止发放行为。黄敏称,小区物管保安人员要求他收回散发出去的广告单,他就去收了。物管要求他到办公室里去接受处理,他也配合了。让他没有想到的是,在处理的过程中,七八个年轻的物管人员突然对他拳打脚踢,他极力躲避时还被追着打,而且这一切,是在小区物管领导的注视下发生的。黄敏说,被打后,他立即报了警。除身上多处软组织挫伤外,伤得最严重的是右手大拇指粉碎性骨折,一掌骨骨折。他到云南省第三人民医院住了7天院,医生说无法手术,只能用夹板固定,也不吃药,待其自然修复,至少要3个月以上,右手大拇指还有可能伤残。为证明自己的说法,黄敏还拿出了官渡区公安分局菊花派出所出具的伤情鉴定委托书。他的伤情被鉴定为轻伤二级。说法帮女儿发宣传小广告教师在小区里被殴打昨日,记者者拨通了黄敏的电话。他说,当时他看见该小区的大门没有关,也没有保安值班。于是,他就进到了小区里帮女儿的汗蒸馆发广告单。在楼栋值班的保安没有阻止的前提下,他乘电梯来到了楼上,为了不影响住户,他将名片放在了房门的把手上。被保安发现时,他才发了四五十张。保安问他干什么?他回答,家里开了汗蒸馆,来宣传一下。两名保安叫他不要发了,并要求他到物管办公室等待领导处理。交谈中,由于对方一直在说方言,黄敏只能听清楚的一句话是,物管叫他去收回小广告。他当即同意了,准备去收。这时,小区的七八名工作人员就殴打了他,其中有穿保安服装的,也有身着便衣的。让他气愤的是,他试图逃跑躲起来,依然被追着殴打。黄敏说,女儿将他被打又维权无门的遭遇发到了微信上,希望找到相关视频和照片,还原事件真相。。"; > String sentence = > 
"“七八个物管工作人员对我一个文弱书生拳打脚踢,我极力躲避时还被追打。”前天,微信网友爆料称,一名50多岁的江西教师在昆明被物管群殴,手指骨折,向网友求助。教师为何会被物管殴打?事情的真相又是如何?昨天,记者来到圣世一品小区,通过调查了解,事情的起因源于这名教师在小区里帮女儿散发汗蒸馆广告单,被物管保安发现后,引发冲突。对于群殴教师的说法,该小区物管保安队长称:“保安在追的过程中,确实有拉扯,但并没有殴打教师,至于手指骨折是他自己摔伤的。”爆料江西教师在昆明被物管殴打记者注意到,消息于8月27日发出,爆料者称,自己是江西宜丰崇文中学的一名中年教师黄敏。暑假期间来昆明的女儿家度假。他女儿在昆明与人合伙开了一家汗蒸馆,7月30日开业。8月9日下午6点30分许,他到昆明东二环圣世一品小区为女儿的汗蒸馆散发宣传小广告。小区物管前来制止,他就停止发放行为。黄敏称,小区物管保安人员要求他收回散发出去的广告单,他就去收了。物管要求他到办公室里去接受处理,他也配合了。让他没有想到的是,在处理的过程中,七八个年轻的物管人员突然对他拳打脚踢,他极力躲避时还被追着打,而且这一切,是在小区物管领导的注视下发生的。黄敏说,被打后,他立即报了警。除身上多处软组织挫伤外,伤得最严重的是右手大拇指粉碎性骨折,一掌骨骨折。他到云南省第三人民医院住了7天院,医生说无法手术,只能用夹板固定,也不吃药,待其自然修复,至少要3个月以上,右手大拇指还有可能伤残。为证明自己的说法,黄敏还拿出了官渡区公安分局菊花派出所出具的伤情鉴定委托书。他的伤情被鉴定为轻伤二级。说法帮女儿发宣传小广告教师在小区里被殴打昨日,��者拨通了黄敏的电话。他说,当时他看见该小区的大门没有关,也没有保安值班。于是,他就进到了小区里帮女儿的汗蒸馆发广告单。在楼栋值班的保安没有阻止的前提下,他乘电梯来到了楼上,为了不影响住户,他将名片放在了房门的把手上。被保安发现时,他才发了四五十张。保安问他干什么?他回答,家里开了汗蒸馆,来宣传一下。两名保安叫他不要发了,并要求他到物管办公室等待领导处理。交谈中,由于对方一直在说方言,黄敏只能听清楚的一句话是,物管叫他去收回小广告。他当即同意了,准备去收。这时,小区的七八名工作人员就殴打了他,其中有穿保安服装的,也有身着便衣的。让他气愤的是,他试图逃跑躲起来,依然被追着殴打。黄敏说,女儿将他被打又维权无门的遭遇发到了微信上,希望找到相关视频和照片,还原事件真相"; > System.out.println(sentence.length()); >// String sentence = "女儿将他被打又维权无门的遭遇发到了微信上,希望找到相关视频和照片,还原事件真相。"; > TokenStream tokens = analyzer.tokenStream("dummyfield", sentence); > tokens.reset(); > CharTermAttribute termAttr = (CharTermAttribute) > tokens.getAttribute(CharTermAttribute.class); > while (tokens.incrementToken()) { > // System.out.println(termAttr.toString()); > } > > analyzer.close(); > } > The text length in above sample is 1027, with this sample, the sentences are > like this: > . > Sentence:黄敏说,女儿将他被打又维权无门的遭遇发到了微信上,希望找到相关视频和照片,还原事 > Sentence:件真相 > The last 3 characters are detected as an individual sentence, so 还原事件真相 is > tokenized as 还原|事|件|真相. 
when the correct tokens should be 还原|事件|真相。
> Overriding the isSafeEnd method in HMMChineseTokenizer fixes this issue by considering ',' or '。' a safe end of text:
> {code}
> public class HMMChineseTokenizer extends SegmentingTokenizerBase {
>
>   /** For sentence tokenization, these are the unambiguous break positions. */
>   protected boolean isSafeEnd(char ch) {
>     switch (ch) {
>       case 0x000D:
>       case 0x000A:
>       case 0x0085:
>       case 0x2028:
>       case 0x2029:
>+      case '。':
>+      case ',':
>         return true;
>       default:
>         return false;
>     }
>   }
> }
> {code}
-- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
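The break-character logic proposed in the patch above can be sketched as a standalone class (a minimal sketch: SafeEndSketch and lastSafeBreak are hypothetical names for illustration, not the actual Lucene API; only the switch mirrors the patch):

```java
public class SafeEndSketch {
    // Mirrors the proposed isSafeEnd override: ASCII line breaks, NEL/LS/PS,
    // plus the ideographic full stop and fullwidth comma added by the patch.
    static boolean isSafeEnd(char ch) {
        switch (ch) {
            case 0x000D: // CR
            case 0x000A: // LF
            case 0x0085: // NEL
            case 0x2028: // line separator
            case 0x2029: // paragraph separator
            case '。':   // U+3002, added by the patch
            case ',':   // U+FF0C, added by the patch
                return true;
            default:
                return false;
        }
    }

    // Hypothetical helper showing why safe ends matter: find the last safe
    // break at or before limit, so a >1024-char buffer can be cut without
    // splitting a sentence (the bug's symptom) mid-token.
    static int lastSafeBreak(String text, int limit) {
        for (int i = Math.min(limit, text.length()) - 1; i >= 0; i--) {
            if (isSafeEnd(text.charAt(i))) return i + 1;
        }
        return -1; // no safe break found
    }

    public static void main(String[] args) {
        System.out.println(isSafeEnd('。'));                 // true
        System.out.println(isSafeEnd('还'));                 // false
        System.out.println(lastSafeBreak("还原事件真相。尾部", 7)); // 7
    }
}
```

With the two extra cases, a Chinese clause boundary counts as a safe end, so the tokenizer's internal buffer is never cut inside 还原事件真相.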
[jira] [Commented] (LUCENE-6989) Implement MMapDirectory unmapping for coming Java 9 changes
[ https://issues.apache.org/jira/browse/LUCENE-6989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15753818#comment-15753818 ] Uwe Schindler commented on LUCENE-6989: --- I opened LUCENE-7595 to investigate. > Implement MMapDirectory unmapping for coming Java 9 changes > --- > > Key: LUCENE-6989 > URL: https://issues.apache.org/jira/browse/LUCENE-6989 > Project: Lucene - Core > Issue Type: Task > Components: core/store >Reporter: Uwe Schindler >Assignee: Uwe Schindler > Labels: Java9 > Fix For: 6.0, 6.4 > > Attachments: LUCENE-6989-disable5x.patch, > LUCENE-6989-disable5x.patch, LUCENE-6989-fixbuild148.patch, > LUCENE-6989-v2.patch, LUCENE-6989-v3-post-b148.patch, LUCENE-6989.patch, > LUCENE-6989.patch, LUCENE-6989.patch, LUCENE-6989.patch > > > Originally, the sun.misc.Cleaner interface was declared as "critical API" in > [JEP 260|http://openjdk.java.net/jeps/260]. > Unfortunately the decision was changed in favor of an officially supported > {{java.lang.ref.Cleaner}} API. A side effect of this change is to move all > existing {{sun.misc.Cleaner}} APIs into a non-exported package. This causes > our forceful unmapping to no longer work: we can still get the cleaner > instance via reflection, but trying to invoke it will throw one of the new > Jigsaw RuntimeExceptions because it is completely inaccessible. There are also no changes in the garbage collector; > the problem still exists. > For more information see this [mailing list > thread|http://mail.openjdk.java.net/pipermail/core-libs-dev/2016-January/thread.html#38243]. > This commit will likely be done, making our unmapping efforts no longer > work. Alan Bateman is aware of this issue and will open a new issue at > OpenJDK to allow forceful unmapping without using the now private > sun.misc.Cleaner.
The idea is to let the internal class sun.misc.Cleaner > implement the Runnable interface, so we can simply cast to Runnable and call > the run() method to unmap. The code would then work. This will lead to minor > changes in our unmapper in MMapDirectory: an instanceof check and a cast if > possible. > I opened this issue to keep track and implement the changes as soon as > possible, so people will have working unmapping when Java 9 comes out. > Current Lucene versions will no longer work with Java 9.
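The instanceof-check-and-cast approach described above can be sketched generically (a minimal sketch: the UnmapSketch class and tryUnmap helper are hypothetical, and a plain Runnable stands in for the reflectively obtained sun.misc.Cleaner, which is not accessible in a self-contained example):

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class UnmapSketch {
    // If sun.misc.Cleaner implements Runnable, the unmapper no longer needs
    // setAccessible on a JDK-internal class: it casts the cleaner object to
    // the public Runnable type and invokes run() to trigger the unmapping.
    static boolean tryUnmap(Object cleaner) {
        if (cleaner instanceof Runnable) {
            ((Runnable) cleaner).run(); // performs the unmap
            return true;
        }
        return false; // older JDK: would fall back to reflective invocation (omitted)
    }

    public static void main(String[] args) {
        AtomicBoolean unmapped = new AtomicBoolean(false);
        Runnable fakeCleaner = () -> unmapped.set(true); // stand-in for the real cleaner
        System.out.println(tryUnmap(fakeCleaner)); // true
        System.out.println(unmapped.get());        // true
        System.out.println(tryUnmap(new Object())); // false
    }
}
```

The point of the design is that the cast goes through a public, exported type (Runnable), so no Jigsaw access check is violated even though the cleaner's own class stays in a non-exported package.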
[jira] [Updated] (LUCENE-7595) RAMUsageTester in test-framework and static field checker no longer works with Java 9
[ https://issues.apache.org/jira/browse/LUCENE-7595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Uwe Schindler updated LUCENE-7595: -- Description: Lucene/Solr tests have a special rule that records memory usage in static fields before and after each test, so we can detect memory leaks. This check dives into JDK classes (like java.lang.String) to detect their size. As Java 9 build 148 completely forbids setAccessible on any runtime class, we have to change or disable this check: - As a first step I will only add the rule to LTC if we are on Java 8 - As a second step we might investigate how to improve this [~rcmuir] had some ideas for the second point: - Don't dive into classes from JDK modules and instead "estimate" the size for some special cases (like Strings) - Disallow any static field in tests that is not final (constant) and points to an Object, except Strings and native (wrapper) types. In addition we also have RAMUsageTester, which has similar problems and is used to compare Lucene's estimations of Codec/IndexWriter/IndexReader memory usage with reality. We should simply disable those tests. was: Lucene/Solr tests have a special rule that records memory usage in static fields before and after each test, so we can detect memory leaks. This check dives into JDK classes (like java.lang.String) to detect their size. As Java 9 build 148 completely forbids setAccessible on any runtime class, we have to change or disable this check: - As a first step I will only add the rule to LTC if we are on Java 8 - As a second step we might investigate how to improve this [~rcmuir] had some ideas for the second point: - Don't dive into classes from JDK modules and instead "estimate" the size for some special cases (like Strings) - Disallow any static field in tests that is not final (constant) and points to an Object, except Strings and native (wrapper) types.
> RAMUsageTester in test-framework and static field checker no longer works > with Java 9 > - > > Key: LUCENE-7595 > URL: https://issues.apache.org/jira/browse/LUCENE-7595 > Project: Lucene - Core > Issue Type: Bug > Components: general/test >Reporter: Uwe Schindler >Assignee: Uwe Schindler > Labels: Java9 > > Lucene/Solr tests have a special rule that records memory usage in static > fields before and after each test, so we can detect memory leaks. This check dives > into JDK classes (like java.lang.String) to detect their size. As Java 9 > build 148 completely forbids setAccessible on any runtime class, we have to > change or disable this check: > - As a first step I will only add the rule to LTC if we are on Java 8 > - As a second step we might investigate how to improve this > [~rcmuir] had some ideas for the second point: > - Don't dive into classes from JDK modules and instead "estimate" the size > for some special cases (like Strings) > - Disallow any static field in tests that is not final (constant) and points > to an Object, except Strings and native (wrapper) types. > In addition we also have RAMUsageTester, which has similar problems and is > used to compare Lucene's estimations of > Codec/IndexWriter/IndexReader memory usage with reality. We should simply > disable those tests.
[jira] [Updated] (LUCENE-7595) RAMUsageTester in test-framework and static field checker no longer works with Java 9
[ https://issues.apache.org/jira/browse/LUCENE-7595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Uwe Schindler updated LUCENE-7595: -- Labels: Java9 (was: ) > RAMUsageTester in test-framework and static field checker no longer works > with Java 9 > - > > Key: LUCENE-7595 > URL: https://issues.apache.org/jira/browse/LUCENE-7595 > Project: Lucene - Core > Issue Type: Bug > Components: general/test >Reporter: Uwe Schindler >Assignee: Uwe Schindler > Labels: Java9 > > Lucene/Solr tests have a special rule that records memory usage in static > fields before and after each test, so we can detect memory leaks. This check dives > into JDK classes (like java.lang.String) to detect their size. As Java 9 > build 148 completely forbids setAccessible on any runtime class, we have to > change or disable this check: > - As a first step I will only add the rule to LTC if we are on Java 8 > - As a second step we might investigate how to improve this > [~rcmuir] had some ideas for the second point: > - Don't dive into classes from JDK modules and instead "estimate" the size > for some special cases (like Strings) > - Disallow any static field in tests that is not final (constant) and points > to an Object, except Strings and native (wrapper) types.
[jira] [Created] (LUCENE-7595) RAMUsageTester in test-framework and static field checker no longer works with Java 9
Uwe Schindler created LUCENE-7595: - Summary: RAMUsageTester in test-framework and static field checker no longer works with Java 9 Key: LUCENE-7595 URL: https://issues.apache.org/jira/browse/LUCENE-7595 Project: Lucene - Core Issue Type: Bug Components: general/test Reporter: Uwe Schindler Assignee: Uwe Schindler Lucene/Solr tests have a special rule that records memory usage in static fields before and after each test, so we can detect memory leaks. This check dives into JDK classes (like java.lang.String) to detect their size. As Java 9 build 148 completely forbids setAccessible on any runtime class, we have to change or disable this check: - As a first step I will only add the rule to LTC if we are on Java 8 - As a second step we might investigate how to improve this [~rcmuir] had some ideas for the second point: - Don't dive into classes from JDK modules and instead "estimate" the size for some special cases (like Strings) - Disallow any static field in tests that is not final (constant) and points to an Object, except Strings and native (wrapper) types.
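The second of [~rcmuir]'s ideas, restricting which static fields a test may hold, can be sketched without touching JDK internals at all (a minimal sketch: StaticFieldCheckSketch, suspiciousStaticFields, and ExampleTest are hypothetical names for illustration, not the actual Lucene test-framework rule):

```java
import java.lang.reflect.Field;
import java.lang.reflect.Modifier;
import java.util.ArrayList;
import java.util.List;

public class StaticFieldCheckSketch {
    // Flag non-final static reference fields of a test class, the kind the
    // rule snapshots before/after a test to detect leaks. Only the test
    // class's own declared fields are inspected, so no setAccessible call
    // into JDK module classes is needed (the proposed Java 9 strategy).
    static List<String> suspiciousStaticFields(Class<?> clazz) {
        List<String> suspects = new ArrayList<>();
        for (Field f : clazz.getDeclaredFields()) {
            int mods = f.getModifiers();
            if (!Modifier.isStatic(mods)) continue;
            Class<?> type = f.getType();
            // Allowed per the idea above: constants, primitives, and Strings,
            // whose sizes can be estimated without reflecting into the JDK.
            if (Modifier.isFinal(mods) || type.isPrimitive() || type == String.class) continue;
            suspects.add(f.getName());
        }
        return suspects;
    }

    // Hypothetical test class with one leak-prone field.
    static class ExampleTest {
        static final String CONSTANT = "ok";   // allowed: final constant
        static int counter;                    // allowed: primitive
        static Object leaked = new Object();   // flagged: mutable static reference
    }

    public static void main(String[] args) {
        System.out.println(suspiciousStaticFields(ExampleTest.class)); // [leaked]
    }
}
```

Disallowing such fields outright sidesteps the size measurement entirely: if no mutable static references survive a test, there is nothing left to weigh with RAMUsageTester.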