[jira] [Commented] (SOLR-7955) Auto create .system collection on first request if it does not exist
[ https://issues.apache.org/jira/browse/SOLR-7955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15849597#comment-15849597 ]

ASF subversion and git services commented on SOLR-7955:
--------------------------------------------------------

Commit 3e44928f490c34666107e9bd6393020be160865f in lucene-solr's branch refs/heads/master from [~noble.paul]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=3e44928 ]

SOLR-7955: making it easy to commit to branch_6x

> Auto create .system collection on first request if it does not exist
> --------------------------------------------------------------------
>
>                 Key: SOLR-7955
>                 URL: https://issues.apache.org/jira/browse/SOLR-7955
>             Project: Solr
>          Issue Type: Improvement
>            Reporter: Jan Høydahl
>            Assignee: Noble Paul
>         Attachments: SOLR-7955.patch, SOLR-7955.patch
>
>
> Why should a user need to create the {{.system}} collection manually? It
> would simplify instructions related to BLOB store if user could assume it is
> always there.

--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS-EA] Lucene-Solr-6.4-Linux (64bit/jdk-9-ea+153) - Build # 92 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.4-Linux/92/
Java: 64bit/jdk-9-ea+153 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.handler.admin.MBeansHandlerTest.testDiff

Error Message:
expected: but was:

Stack Trace:
org.junit.ComparisonFailure: expected: but was:
        at __randomizedtesting.SeedInfo.seed([FEC3EB58FB8AA63B:3BD52FC3EB3C9E5B]:0)
        at org.junit.Assert.assertEquals(Assert.java:125)
        at org.junit.Assert.assertEquals(Assert.java:147)
        at org.apache.solr.handler.admin.MBeansHandlerTest.testDiff(MBeansHandlerTest.java:63)
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.base/java.lang.reflect.Method.invoke(Method.java:543)
        at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
        at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
        at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
        at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
        at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
        at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
        at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
        at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
        at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
        at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
        at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
        at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
        at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
        at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
        at java.base/java.lang.Thread.run(Thread.java:844)

Build Log:
[...truncated 1670 lines...]
   [junit4] JVM J2: stderr was not empty, see: /home/jenkins/workspace/Lucene-Solr-6.4-Linux/lucene/build/core/test/temp/junit4-J2-20170202_062435_143.syserr
   [junit4] >>> JVM J2 emitted unexpected output (verbatim)
   [junit4] Java HotSpot(TM) 64-Bit Server VM warning: Option
[JENKINS-EA] Lucene-Solr-6.x-Linux (32bit/jdk-9-ea+153) - Build # 2782 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/2782/
Java: 32bit/jdk-9-ea+153 -server -XX:+UseSerialGC

2 tests failed.
FAILED:  org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.test

Error Message:
Expected 2 of 3 replicas to be active but only found 1; [core_node3:{"core":"c8n_1x3_lf_shard1_replica3","base_url":"http://127.0.0.1:33963","node_name":"127.0.0.1:33963_","state":"active","leader":"true"}]; clusterState: DocCollection(c8n_1x3_lf//clusterstate.json/34)={
  "replicationFactor":"3",
  "shards":{"shard1":{
      "range":"80000000-7fffffff",
      "state":"active",
      "replicas":{
        "core_node1":{
          "core":"c8n_1x3_lf_shard1_replica1",
          "base_url":"http://127.0.0.1:33079",
          "node_name":"127.0.0.1:33079_",
          "state":"down"},
        "core_node2":{
          "state":"down",
          "base_url":"http://127.0.0.1:32962",
          "core":"c8n_1x3_lf_shard1_replica2",
          "node_name":"127.0.0.1:32962_"},
        "core_node3":{
          "core":"c8n_1x3_lf_shard1_replica3",
          "base_url":"http://127.0.0.1:33963",
          "node_name":"127.0.0.1:33963_",
          "state":"active",
          "leader":"true"}}}},
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false"}

Stack Trace:
java.lang.AssertionError: Expected 2 of 3 replicas to be active but only found 1; [core_node3:{"core":"c8n_1x3_lf_shard1_replica3","base_url":"http://127.0.0.1:33963","node_name":"127.0.0.1:33963_","state":"active","leader":"true"}]; clusterState: DocCollection(c8n_1x3_lf//clusterstate.json/34)={
  "replicationFactor":"3",
  "shards":{"shard1":{
      "range":"80000000-7fffffff",
      "state":"active",
      "replicas":{
        "core_node1":{
          "core":"c8n_1x3_lf_shard1_replica1",
          "base_url":"http://127.0.0.1:33079",
          "node_name":"127.0.0.1:33079_",
          "state":"down"},
        "core_node2":{
          "state":"down",
          "base_url":"http://127.0.0.1:32962",
          "core":"c8n_1x3_lf_shard1_replica2",
          "node_name":"127.0.0.1:32962_"},
        "core_node3":{
          "core":"c8n_1x3_lf_shard1_replica3",
          "base_url":"http://127.0.0.1:33963",
          "node_name":"127.0.0.1:33963_",
          "state":"active",
          "leader":"true"}}}},
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false"}
        at __randomizedtesting.SeedInfo.seed([BAAC66F7AAB169E2:32F8592D044D041A]:0)
        at org.junit.Assert.fail(Assert.java:93)
        at org.junit.Assert.assertTrue(Assert.java:43)
        at org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.testRf3WithLeaderFailover(LeaderFailoverAfterPartitionTest.java:168)
        at org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.test(LeaderFailoverAfterPartitionTest.java:55)
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.base/java.lang.reflect.Method.invoke(Method.java:543)
        at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
        at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
        at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
        at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
        at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
        at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
        at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
        at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
        at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
        at
[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 1112 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1112/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseParallelGC

3 tests failed.
FAILED:  org.apache.solr.core.TestLazyCores.testNoCommit

Error Message:
Exception during query

Stack Trace:
java.lang.RuntimeException: Exception during query
        at __randomizedtesting.SeedInfo.seed([EBC9B0A6EFBEE1A3:34A9117724998206]:0)
        at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:857)
        at org.apache.solr.core.TestLazyCores.check10(TestLazyCores.java:794)
        at org.apache.solr.core.TestLazyCores.testNoCommit(TestLazyCores.java:776)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
        at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
        at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
        at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
        at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
        at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
        at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
        at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
        at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
        at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
        at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
        at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
        at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
        at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
        at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: REQUEST FAILED: xpath=//result[@numFound='10']
        xml response was: 0 0 *:*
        request was:q=*:*
        at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:850)
        ... 41 more

FAILED:
[JENKINS-EA] Lucene-Solr-master-Linux (32bit/jdk-9-ea+153) - Build # 18889 - Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/18889/
Java: 32bit/jdk-9-ea+153 -client -XX:+UseParallelGC

2 tests failed.
FAILED:  org.apache.solr.handler.admin.TestApiFramework.testFramework

Error Message:


Stack Trace:
java.lang.ExceptionInInitializerError
        at __randomizedtesting.SeedInfo.seed([70B7FB1B273DF1D2:67C1313C21E91DEF]:0)
        at net.sf.cglib.core.KeyFactory$Generator.generateClass(KeyFactory.java:166)
        at net.sf.cglib.core.DefaultGeneratorStrategy.generate(DefaultGeneratorStrategy.java:25)
        at net.sf.cglib.core.AbstractClassGenerator.create(AbstractClassGenerator.java:216)
        at net.sf.cglib.core.KeyFactory$Generator.create(KeyFactory.java:144)
        at net.sf.cglib.core.KeyFactory.create(KeyFactory.java:116)
        at net.sf.cglib.core.KeyFactory.create(KeyFactory.java:108)
        at net.sf.cglib.core.KeyFactory.create(KeyFactory.java:104)
        at net.sf.cglib.proxy.Enhancer.<clinit>(Enhancer.java:69)
        at org.easymock.internal.ClassProxyFactory.createEnhancer(ClassProxyFactory.java:259)
        at org.easymock.internal.ClassProxyFactory.createProxy(ClassProxyFactory.java:174)
        at org.easymock.internal.MocksControl.createMock(MocksControl.java:60)
        at org.easymock.EasyMock.createMock(EasyMock.java:104)
        at org.apache.solr.handler.admin.TestCoreAdminApis.getCoreContainerMock(TestCoreAdminApis.java:76)
        at org.apache.solr.handler.admin.TestApiFramework.testFramework(TestApiFramework.java:59)
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.base/java.lang.reflect.Method.invoke(Method.java:543)
        at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
        at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
        at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
        at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
        at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
        at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
        at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
        at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
        at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
        at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
        at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
        at
[jira] [Commented] (SOLR-7955) Auto create .system collection on first request if it does not exist
[ https://issues.apache.org/jira/browse/SOLR-7955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15849435#comment-15849435 ]

ASF subversion and git services commented on SOLR-7955:
--------------------------------------------------------

Commit e200b8a2a418cdb145acb51d1181b1b60362a926 in lucene-solr's branch refs/heads/master from [~noble.paul]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=e200b8a ]

SOLR-7955: Auto create .system collection on first request if it does not exist

> Auto create .system collection on first request if it does not exist
> --------------------------------------------------------------------
>
>                 Key: SOLR-7955
>                 URL: https://issues.apache.org/jira/browse/SOLR-7955
>             Project: Solr
>          Issue Type: Improvement
>            Reporter: Jan Høydahl
>            Assignee: Noble Paul
>         Attachments: SOLR-7955.patch, SOLR-7955.patch
>
>
> Why should a user need to create the {{.system}} collection manually? It
> would simplify instructions related to BLOB store if user could assume it is
> always there.
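The commit above gives the {{.system}} collection create-on-first-request semantics. The core pattern, lazy creation guarded against concurrent callers, can be sketched outside Solr; the class and method names below (`SystemCollectionRegistry`, `ensureCollection`) are illustrative stand-ins, not Solr's actual API:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hedged sketch of "auto create on first request": the registry and its
// string "config" are hypothetical; Solr would issue a real collection-create
// call where the lambda runs.
public class SystemCollectionRegistry {
    private final Map<String, String> collections = new ConcurrentHashMap<>();

    // Returns the collection's config, creating it lazily on first access.
    // computeIfAbsent runs the creation at most once even under concurrency.
    public String ensureCollection(String name) {
        return collections.computeIfAbsent(name, n -> "created:" + n);
    }

    public boolean exists(String name) {
        return collections.containsKey(name);
    }

    public static void main(String[] args) {
        SystemCollectionRegistry reg = new SystemCollectionRegistry();
        System.out.println(reg.exists(".system"));         // false before first use
        System.out.println(reg.ensureCollection(".system"));
        System.out.println(reg.exists(".system"));         // true afterwards
    }
}
```

The point of routing creation through `computeIfAbsent` is that two requests racing on the first use of `.system` cannot both create it, which is the same concern a distributed create-on-first-request has to address (there via the Collections API and ZooKeeper, not a local map).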
[jira] [Commented] (SOLR-8593) Integrate Apache Calcite into the SQLHandler
[ https://issues.apache.org/jira/browse/SOLR-8593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15849392#comment-15849392 ]

Joel Bernstein commented on SOLR-8593:
--------------------------------------

Yes, the LogicalSort appears to be created, because SolrSortRule.convert does get called. But the resulting query plan doesn't include the SolrSort; the planner appears to choose not to include it.

> Integrate Apache Calcite into the SQLHandler
> --------------------------------------------
>
>                 Key: SOLR-8593
>                 URL: https://issues.apache.org/jira/browse/SOLR-8593
>             Project: Solr
>          Issue Type: Improvement
>          Components: Parallel SQL
>            Reporter: Joel Bernstein
>            Assignee: Joel Bernstein
>             Fix For: 6.5, master (7.0)
>
>         Attachments: SOLR-8593.patch, SOLR-8593.patch, SOLR-8593.patch
>
>
> The Presto SQL Parser was perfect for phase one of the SQLHandler. It was
> nicely split off from the larger Presto project and it did everything that
> was needed for the initial implementation.
> Phase two of the SQL work though will require an optimizer. Here is where
> Apache Calcite comes into play. It has a battle tested cost based optimizer
> and has been integrated into Apache Drill and Hive.
> This work can begin in trunk following the 6.0 release. The final query plans
> will continue to be translated to Streaming API objects (TupleStreams), so
> continued work on the JDBC driver should plug in nicely with the Calcite work.
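The exchange above concerns a LIMIT-without-ORDER-BY query: a Sort node may carry a fetch (LIMIT) and offset even when its collation is empty, and a planner that judges removability by collation alone will drop the LIMIT. The sketch below models that semantics with a hypothetical `SortNode`; these are not Calcite's actual classes, just an illustration of the property under discussion:

```java
import java.util.Collections;
import java.util.List;

// Hypothetical model of a relational Sort node that carries both an
// ordering (collation) and optional LIMIT/OFFSET values, as in the
// LogicalSort/SolrSort discussion. Not Calcite's real API.
public class SortNode {
    final List<String> collation; // empty for "LIMIT without ORDER BY"
    final Integer fetch;          // null when there is no LIMIT
    final Integer offset;         // null when there is no OFFSET

    SortNode(List<String> collation, Integer fetch, Integer offset) {
        this.collation = collation;
        this.fetch = fetch;
        this.offset = offset;
    }

    // A Sort is a no-op only when it neither orders nor limits rows.
    // Checking only the collation would wrongly classify a pure-LIMIT
    // sort as removable, which matches the symptom described above.
    boolean isRemovable() {
        return collation.isEmpty() && fetch == null && offset == null;
    }

    public static void main(String[] args) {
        SortNode pureLimit = new SortNode(Collections.emptyList(), 10, null);
        SortNode noOp = new SortNode(Collections.emptyList(), null, null);
        System.out.println(pureLimit.isRemovable()); // false: the LIMIT must survive
        System.out.println(noOp.isRemovable());      // true: safe to drop
    }
}
```

If the planner in question treats an empty-collation sort as equivalent to its input regardless of fetch/offset, that would explain a SolrSort being created by the convert rule yet missing from the final plan.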
[jira] [Commented] (SOLR-8593) Integrate Apache Calcite into the SQLHandler
[ https://issues.apache.org/jira/browse/SOLR-8593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15849344#comment-15849344 ]

Julian Hyde commented on SOLR-8593:
-----------------------------------

In the case where there is LIMIT but no ORDER BY, is a LogicalSort created? (There should be.) Is a SolrSort created, and is its offset field set (there should be)? If so, how/why does the SolrSort get dropped? (Does the planner find that it is equivalent to something cheaper? It shouldn't.)

> Integrate Apache Calcite into the SQLHandler
> --------------------------------------------
>
>                 Key: SOLR-8593
>                 URL: https://issues.apache.org/jira/browse/SOLR-8593
>             Project: Solr
>          Issue Type: Improvement
>          Components: Parallel SQL
>            Reporter: Joel Bernstein
>            Assignee: Joel Bernstein
>             Fix For: 6.5, master (7.0)
>
>         Attachments: SOLR-8593.patch, SOLR-8593.patch, SOLR-8593.patch
>
>
> The Presto SQL Parser was perfect for phase one of the SQLHandler. It was
> nicely split off from the larger Presto project and it did everything that
> was needed for the initial implementation.
> Phase two of the SQL work though will require an optimizer. Here is where
> Apache Calcite comes into play. It has a battle tested cost based optimizer
> and has been integrated into Apache Drill and Hive.
> This work can begin in trunk following the 6.0 release. The final query plans
> will continue to be translated to Streaming API objects (TupleStreams), so
> continued work on the JDBC driver should plug in nicely with the Calcite work.
[JENKINS] Lucene-Solr-6.4-Linux (32bit/jdk1.8.0_121) - Build # 91 - Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.4-Linux/91/
Java: 32bit/jdk1.8.0_121 -server -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.handler.admin.MBeansHandlerTest.testDiff

Error Message:
expected: but was:

Stack Trace:
org.junit.ComparisonFailure: expected: but was:
        at __randomizedtesting.SeedInfo.seed([E201B7308FDBA6D2:271773AB9F6D9EB2]:0)
        at org.junit.Assert.assertEquals(Assert.java:125)
        at org.junit.Assert.assertEquals(Assert.java:147)
        at org.apache.solr.handler.admin.MBeansHandlerTest.testDiff(MBeansHandlerTest.java:63)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
        at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
        at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
        at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
        at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
        at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
        at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
        at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
        at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
        at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
        at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
        at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
        at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
        at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
        at java.lang.Thread.run(Thread.java:745)

Build Log:
[...truncated 12041 lines...]
   [junit4] Suite: org.apache.solr.handler.admin.MBeansHandlerTest
   [junit4]   2> Creating dataDir: /home/jenkins/workspace/Lucene-Solr-6.4-Linux/solr/build/solr-core/test/J0/temp/solr.handler.admin.MBeansHandlerTest_E201B7308FDBA6D2-001/init-core-data-001
   [junit4]   2>
[JENKINS] Lucene-Solr-NightlyTests-6.4 - Build # 14 - Still Unstable
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-6.4/14/

4 tests failed.
FAILED:  org.apache.solr.cloud.TestSolrCloudWithDelegationTokens.testDelegationTokenRenew

Error Message:
expected:<200> but was:<403>

Stack Trace:
java.lang.AssertionError: expected:<200> but was:<403>
        at __randomizedtesting.SeedInfo.seed([88455E6C1D83F875:BFDEAA72254F25D1]:0)
        at org.junit.Assert.fail(Assert.java:93)
        at org.junit.Assert.failNotEquals(Assert.java:647)
        at org.junit.Assert.assertEquals(Assert.java:128)
        at org.junit.Assert.assertEquals(Assert.java:472)
        at org.junit.Assert.assertEquals(Assert.java:456)
        at org.apache.solr.cloud.TestSolrCloudWithDelegationTokens.renewDelegationToken(TestSolrCloudWithDelegationTokens.java:131)
        at org.apache.solr.cloud.TestSolrCloudWithDelegationTokens.verifyDelegationTokenRenew(TestSolrCloudWithDelegationTokens.java:316)
        at org.apache.solr.cloud.TestSolrCloudWithDelegationTokens.testDelegationTokenRenew(TestSolrCloudWithDelegationTokens.java:333)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
        at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
        at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
        at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
        at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
        at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
        at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
        at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
        at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
        at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
        at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
        at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
        at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
        at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at
[jira] [Updated] (SOLR-10011) Refactor PointField & TrieField to share common code
[ https://issues.apache.org/jira/browse/SOLR-10011?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tomás Fernández Löbbe updated SOLR-10011: - Attachment: SOLR-10011.patch Sorry for the delay, here is the patch. I replaced all the uses of {{FieldType.getNumericType()}} with {{FieldType.getNumberType()}}. Note that since NumberType has a DATE value, TrieDateField.getNumberType() returns DATE (instead of LONG, like TrieDateField.getNumericType()). Let me know what you think [~ichattopadhyaya] > Refactor PointField & TrieField to share common code > > > Key: SOLR-10011 > URL: https://issues.apache.org/jira/browse/SOLR-10011 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Ishan Chattopadhyaya >Assignee: Ishan Chattopadhyaya > Attachments: SOLR-10011.patch, SOLR-10011.patch, SOLR-10011.patch, > SOLR-10011.patch, SOLR-10011.patch > > > We should eliminate PointTypes and TrieTypes enum to have a common enum for > both. That would enable us to share a lot of code between the two field types. > In the process, fix the bug: > PointFields with indexed=false, docValues=true seem to be using > (Int|Double|Float|Long)Point.newRangeQuery() for performing exact matches and > range queries. However, they should instead be using DocValues based range > query. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
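[Editor's note] The DATE distinction Tomás mentions can be sketched in a few lines. This is an illustrative stand-in, not Solr's actual source: a shared number-type enum that carries a distinct DATE value, so a date field type can report DATE even though dates are stored as longs internally.

```java
// Illustrative sketch only: a unified number-type enum shared by Trie and
// Point field types, as described in the SOLR-10011 patch discussion.
public class NumberTypeSketch {
    enum NumberType { INTEGER, LONG, FLOAT, DOUBLE, DATE }

    // Stand-in for TrieDateField.getNumberType(): the shared enum lets a
    // date field say DATE explicitly rather than its underlying LONG.
    static NumberType trieDateNumberType() {
        return NumberType.DATE;
    }

    public static void main(String[] args) {
        System.out.println(trieDateNumberType()); // prints DATE
    }
}
```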
[jira] [Updated] (LUCENE-7673) Add MultiValued[Int/Long/Float/Double]FieldSource for SortedNumericDocValues
[ https://issues.apache.org/jira/browse/LUCENE-7673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tomás Fernández Löbbe updated LUCENE-7673: -- Attachment: LUCENE-7673.patch > Add MultiValued[Int/Long/Float/Double]FieldSource for SortedNumericDocValues > > > Key: LUCENE-7673 > URL: https://issues.apache.org/jira/browse/LUCENE-7673 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Tomás Fernández Löbbe >Assignee: Tomás Fernández Löbbe > Attachments: LUCENE-7673.patch > > > Right now \[Int/Long/Float/Double\]FieldSource can give a {{ValueSource}} > view of a {{NumericDocValues}} field. This Jira is to add > MultiValued\[Int/Long/Float/Double\]FieldSource that given a > {{SortedNumericSelector.Type}} can give a {{ValueSource}} view of a > {{SortedNumericDocValues}} field > I considered instead of adding new classes an optional selector parameter to > the existing \[Int/Long/Float/Double\]FieldSource, but I think adding > different classes makes a cleaner API and it’s clear that for MultiValued* > case, the selector is a mandatory parameter.
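[Editor's note] To make the LUCENE-7673 proposal concrete, here is a self-contained sketch of the selection step that a SortedNumericSelector-style type performs when a ValueSource presents a multi-valued field as single-valued. The class and method names are illustrative, not Lucene's actual API; only MIN and MAX selection are shown.

```java
import java.util.Arrays;

// Illustrative sketch only: reduces one document's multi-valued numeric
// field to a single value, the way a selector type lets a ValueSource
// view SortedNumericDocValues as if they were single-valued.
public class SelectorSketch {
    enum Type { MIN, MAX }

    static long select(long[] docValues, Type type) {
        // SortedNumericDocValues expose a document's values in ascending
        // order; sorting a copy here simulates that guarantee.
        long[] sorted = docValues.clone();
        Arrays.sort(sorted);
        return type == Type.MIN ? sorted[0] : sorted[sorted.length - 1];
    }

    public static void main(String[] args) {
        long[] values = {42L, 7L, 19L};
        System.out.println(select(values, Type.MIN)); // prints 7
        System.out.println(select(values, Type.MAX)); // prints 42
    }
}
```

This also illustrates why the selector is naturally a mandatory parameter for the MultiValued* classes: without a Type there is no single value to hand back.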
[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 3812 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/3812/ Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseParallelGC 4 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.core.TestLazyCores Error Message: 1 thread leaked from SUITE scope at org.apache.solr.core.TestLazyCores: 1) Thread[id=15420, name=searcherExecutor-6277-thread-1, state=WAITING, group=TGRP-TestLazyCores] at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) Stack Trace: com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE scope at org.apache.solr.core.TestLazyCores: 1) Thread[id=15420, name=searcherExecutor-6277-thread-1, state=WAITING, group=TGRP-TestLazyCores] at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) at __randomizedtesting.SeedInfo.seed([98E153C0F551FBC3]:0) FAILED: junit.framework.TestSuite.org.apache.solr.core.TestLazyCores Error Message: There are still zombie threads 
that couldn't be terminated:1) Thread[id=15420, name=searcherExecutor-6277-thread-1, state=WAITING, group=TGRP-TestLazyCores] at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) Stack Trace: com.carrotsearch.randomizedtesting.ThreadLeakError: There are still zombie threads that couldn't be terminated: 1) Thread[id=15420, name=searcherExecutor-6277-thread-1, state=WAITING, group=TGRP-TestLazyCores] at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) at __randomizedtesting.SeedInfo.seed([98E153C0F551FBC3]:0) FAILED: org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.test Error Message: Could not find collection:collection2 Stack Trace: java.lang.AssertionError: Could not find collection:collection2 at __randomizedtesting.SeedInfo.seed([98E153C0F551FBC3:10B56C1A5BAD963B]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at 
org.junit.Assert.assertNotNull(Assert.java:526) at org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:159) at org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:144) at org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:139) at org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForRecoveriesToFinish(AbstractFullDistribZkTestBase.java:856) at
[jira] [Created] (LUCENE-7673) Add MultiValued[Int/Long/Float/Double]FieldSource for SortedNumericDocValues
Tomás Fernández Löbbe created LUCENE-7673: - Summary: Add MultiValued[Int/Long/Float/Double]FieldSource for SortedNumericDocValues Key: LUCENE-7673 URL: https://issues.apache.org/jira/browse/LUCENE-7673 Project: Lucene - Core Issue Type: Improvement Reporter: Tomás Fernández Löbbe Assignee: Tomás Fernández Löbbe Right now \[Int/Long/Float/Double\]FieldSource can give a {{ValueSource}} view of a {{NumericDocValues}} field. This Jira is to add MultiValued\[Int/Long/Float/Double\]FieldSource that given a {{SortedNumericSelector.Type}} can give a {{ValueSource}} view of a {{SortedNumericDocValues}} field I considered instead of adding new classes an optional selector parameter to the existing \[Int/Long/Float/Double\]FieldSource, but I think adding different classes makes a cleaner API and it’s clear that for MultiValued* case, the selector is a mandatory parameter.
[JENKINS] Lucene-Solr-6.x-Linux (64bit/jdk1.8.0_121) - Build # 2781 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/2781/ Java: 64bit/jdk1.8.0_121 -XX:+UseCompressedOops -XX:+UseParallelGC 1 tests failed. FAILED: org.apache.solr.handler.admin.MBeansHandlerTest.testDiff Error Message: expected: but was: Stack Trace: org.junit.ComparisonFailure: expected: but was: at __randomizedtesting.SeedInfo.seed([A284E32B9CBC9FA2:679227B08C0AA7C2]:0) at org.junit.Assert.assertEquals(Assert.java:125) at org.junit.Assert.assertEquals(Assert.java:147) at org.apache.solr.handler.admin.MBeansHandlerTest.testDiff(MBeansHandlerTest.java:63) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.lang.Thread.run(Thread.java:745) Build Log: [...truncated 12137 lines...] [junit4] Suite: org.apache.solr.handler.admin.MBeansHandlerTest [junit4] 2> Creating dataDir: /home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/build/solr-core/test/J0/temp/solr.handler.admin.MBeansHandlerTest_A284E32B9CBC9FA2-001/init-core-data-001
[jira] [Commented] (SOLR-10087) StreamHandler should be able to use runtimeLib jars
[ https://issues.apache.org/jira/browse/SOLR-10087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15849228#comment-15849228 ] Dennis Gove commented on SOLR-10087: I think this is a good addition. > StreamHandler should be able to use runtimeLib jars > --- > > Key: SOLR-10087 > URL: https://issues.apache.org/jira/browse/SOLR-10087 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Kevin Risden >Assignee: Kevin Risden >Priority: Minor > Attachments: SOLR-10087.patch > > > StreamHandler currently can't use jars loaded via the runtimeLib and Blob > Store API. This is because the StreamHandler uses core.getResourceLoader() > instead of core.getMemClassLoader() for loading classes. > An example of this working with the fix is here: > https://github.com/risdenk/solr_custom_streaming_expressions > Steps: > {code} > # Inspired by > https://cwiki.apache.org/confluence/display/solr/Adding+Custom+Plugins+in+SolrCloud+Mode > # Start Solr with enabling Blob Store > ./bin/solr start -c -f -a "-Denable.runtime.lib=true" > # Create test collection > ./bin/solr create -c test > # Create .system collection > curl 'http://localhost:8983/solr/admin/collections?action=CREATE=.system' > # Build custom streaming expression jar > (cd custom-streaming-expression && mvn clean package) > # Upload jar to .system collection using Blob Store API > (https://cwiki.apache.org/confluence/display/solr/Blob+Store+API) > curl -X POST -H 'Content-Type: application/octet-stream' --data-binary > @custom-streaming-expression/target/custom-streaming-expression-1.0-SNAPSHOT.jar > 'http://localhost:8983/solr/.system/blob/test' > # List all blobs that are stored > curl 'http://localhost:8983/solr/.system/blob?omitHeader=true' > # Add the jar to the runtime lib > curl 'http://localhost:8983/solr/test/config' -H > 'Content-type:application/json' -d '{ >"add-runtimelib": { "name":"test", "version":1 } > }' > # Create 
custom streaming expression using work from SOLR-9103 > # Patch from SOLR-10087 is required for StreamHandler to load the runtimeLib > jar > curl 'http://localhost:8983/solr/test/config' -H > 'Content-type:application/json' -d '{ > "create-expressible": { > "name": "customstreamingexpression", > "class": "com.test.solr.CustomStreamingExpression", > "runtimeLib": true > } > }' > # Test the custom streaming expression > curl 'http://localhost:8983/solr/test/stream?expr=customstreamingexpression()' > {code}
[jira] [Commented] (SOLR-10087) StreamHandler should be able to use runtimeLib jars
[ https://issues.apache.org/jira/browse/SOLR-10087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15849224#comment-15849224 ] Noble Paul commented on SOLR-10087: --- Totally agree. This ticket should limit its scope to the description, and we should close it. > StreamHandler should be able to use runtimeLib jars > --- > > Key: SOLR-10087 > URL: https://issues.apache.org/jira/browse/SOLR-10087 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Kevin Risden >Assignee: Kevin Risden >Priority: Minor > Attachments: SOLR-10087.patch > > > StreamHandler currently can't use jars loaded via the runtimeLib and Blob > Store API. This is because the StreamHandler uses core.getResourceLoader() > instead of core.getMemClassLoader() for loading classes. > An example of this working with the fix is here: > https://github.com/risdenk/solr_custom_streaming_expressions > Steps: > {code} > # Inspired by > https://cwiki.apache.org/confluence/display/solr/Adding+Custom+Plugins+in+SolrCloud+Mode > # Start Solr with enabling Blob Store > ./bin/solr start -c -f -a "-Denable.runtime.lib=true" > # Create test collection > ./bin/solr create -c test > # Create .system collection > curl 'http://localhost:8983/solr/admin/collections?action=CREATE=.system' > # Build custom streaming expression jar > (cd custom-streaming-expression && mvn clean package) > # Upload jar to .system collection using Blob Store API > (https://cwiki.apache.org/confluence/display/solr/Blob+Store+API) > curl -X POST -H 'Content-Type: application/octet-stream' --data-binary > @custom-streaming-expression/target/custom-streaming-expression-1.0-SNAPSHOT.jar > 'http://localhost:8983/solr/.system/blob/test' > # List all blobs that are stored > curl 'http://localhost:8983/solr/.system/blob?omitHeader=true' > # Add the jar to the runtime lib > curl 'http://localhost:8983/solr/test/config' -H > 'Content-type:application/json' -d '{ 
>"add-runtimelib": { "name":"test", "version":1 } > }' > # Create custom streaming expression using work from SOLR-9103 > # Patch from SOLR-10087 is required for StreamHandler to load the runtimeLib > jar > curl 'http://localhost:8983/solr/test/config' -H > 'Content-type:application/json' -d '{ > "create-expressible": { > "name": "customstreamingexpression", > "class": "com.test.solr.CustomStreamingExpression", > "runtimeLib": true > } > }' > # Test the custom streaming expression > curl 'http://localhost:8983/solr/test/stream?expr=customstreamingexpression()' > {code}
[jira] [Commented] (SOLR-10087) StreamHandler should be able to use runtimeLib jars
[ https://issues.apache.org/jira/browse/SOLR-10087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15849218#comment-15849218 ] Kevin Risden commented on SOLR-10087: - I think the current steps aren't that bad. We can debate changing the process in a different JIRA. This was opened as a one-line change that enables jars to be loaded. Without the change, jars can't be loaded at all from the blob store. > StreamHandler should be able to use runtimeLib jars > --- > > Key: SOLR-10087 > URL: https://issues.apache.org/jira/browse/SOLR-10087 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Kevin Risden >Assignee: Kevin Risden >Priority: Minor > Attachments: SOLR-10087.patch > > > StreamHandler currently can't use jars loaded via the runtimeLib and Blob > Store API. This is because the StreamHandler uses core.getResourceLoader() > instead of core.getMemClassLoader() for loading classes. > An example of this working with the fix is here: > https://github.com/risdenk/solr_custom_streaming_expressions > Steps: > {code} > # Inspired by > https://cwiki.apache.org/confluence/display/solr/Adding+Custom+Plugins+in+SolrCloud+Mode > # Start Solr with enabling Blob Store > ./bin/solr start -c -f -a "-Denable.runtime.lib=true" > # Create test collection > ./bin/solr create -c test > # Create .system collection > curl 'http://localhost:8983/solr/admin/collections?action=CREATE=.system' > # Build custom streaming expression jar > (cd custom-streaming-expression && mvn clean package) > # Upload jar to .system collection using Blob Store API > (https://cwiki.apache.org/confluence/display/solr/Blob+Store+API) > curl -X POST -H 'Content-Type: application/octet-stream' --data-binary > @custom-streaming-expression/target/custom-streaming-expression-1.0-SNAPSHOT.jar > 'http://localhost:8983/solr/.system/blob/test' > # List all blobs that are stored > curl 
'http://localhost:8983/solr/.system/blob?omitHeader=true' > # Add the jar to the runtime lib > curl 'http://localhost:8983/solr/test/config' -H > 'Content-type:application/json' -d '{ >"add-runtimelib": { "name":"test", "version":1 } > }' > # Create custom streaming expression using work from SOLR-9103 > # Patch from SOLR-10087 is required for StreamHandler to load the runtimeLib > jar > curl 'http://localhost:8983/solr/test/config' -H > 'Content-type:application/json' -d '{ > "create-expressible": { > "name": "customstreamingexpression", > "class": "com.test.solr.CustomStreamingExpression", > "runtimeLib": true > } > }' > # Test the custom streaming expression > curl 'http://localhost:8983/solr/test/stream?expr=customstreamingexpression()' > {code}
[jira] [Commented] (SOLR-10086) Add Streaming Expression for Kafka Streams
[ https://issues.apache.org/jira/browse/SOLR-10086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15849209#comment-15849209 ] Kevin Risden commented on SOLR-10086: - Here is an example of what I was referring to, using the blob store and registering a custom Kafka streaming expression: https://github.com/risdenk/solr_custom_streaming_expressions/tree/kafka The streaming expression is at least a start. It hard-codes the Kafka configs, but those would be easy to make customizable in the streaming expression. > Add Streaming Expression for Kafka Streams > -- > > Key: SOLR-10086 > URL: https://issues.apache.org/jira/browse/SOLR-10086 > Project: Solr > Issue Type: New Feature > Security Level: Public(Default Security Level. Issues are Public) > Components: SolrJ >Reporter: Susheel Kumar >Priority: Minor > > This is being asked to have SolrCloud pull data from a Kafka topic periodically > using the DataImport Handler. > Adding streaming expression support to pull data from Kafka would be a good > feature to have.
[JENKINS-EA] Lucene-Solr-master-Linux (32bit/jdk-9-ea+153) - Build # 18888 - Failure!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/1/ Java: 32bit/jdk-9-ea+153 -client -XX:+UseParallelGC 3 tests failed. FAILED: org.apache.solr.handler.TestReqParamsAPI.test Error Message: Could not get expected value 'A val' for path 'response/params/x/a' full output: { "responseHeader":{ "status":0, "QTime":0}, "response":{"znodeVersion":-1}}, from server: http://127.0.0.1:33174/solr/collection1_shard1_replica2 Stack Trace: java.lang.AssertionError: Could not get expected value 'A val' for path 'response/params/x/a' full output: { "responseHeader":{ "status":0, "QTime":0}, "response":{"znodeVersion":-1}}, from server: http://127.0.0.1:33174/solr/collection1_shard1_replica2 at __randomizedtesting.SeedInfo.seed([772B2293F725D94E:FF7F1D4959D9B4B6]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.apache.solr.core.TestSolrConfigHandler.testForResponseElement(TestSolrConfigHandler.java:556) at org.apache.solr.handler.TestReqParamsAPI.testReqParams(TestReqParamsAPI.java:110) at org.apache.solr.handler.TestReqParamsAPI.test(TestReqParamsAPI.java:69) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:543) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
[jira] [Updated] (SOLR-7955) Auto create .system collection on first request if it does not exist
[ https://issues.apache.org/jira/browse/SOLR-7955?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Noble Paul updated SOLR-7955: - Attachment: SOLR-7955.patch I'm planning to commit this as soon as I add a test. > Auto create .system collection on first request if it does not exist > > > Key: SOLR-7955 > URL: https://issues.apache.org/jira/browse/SOLR-7955 > Project: Solr > Issue Type: Improvement >Reporter: Jan Høydahl >Assignee: Noble Paul > Attachments: SOLR-7955.patch, SOLR-7955.patch > > > Why should a user need to create the {{.system}} collection manually? It > would simplify instructions related to BLOB store if the user could assume it is > always there.
[jira] [Comment Edited] (SOLR-8593) Integrate Apache Calcite into the SQLHandler
[ https://issues.apache.org/jira/browse/SOLR-8593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15849160#comment-15849160 ] Joel Bernstein edited comment on SOLR-8593 at 2/2/17 12:16 AM: --- Hi, [~julianhyde], The exact problem I'm seeing is that the SolrSort is not included in the query plan unless an ORDER BY is used in the query. With the ORDER BY our tree looks like this: org.apache.solr.handler.sql.SolrSort org.apache.solr.handler.sql.SolrProject org.apache.solr.handler.sql.SolrTableScan Without the ORDER BY our tree looks like this: org.apache.solr.handler.sql.SolrProject org.apache.solr.handler.sql.SolrTableScan was (Author: joel.bernstein): Hi, [~julianhyde], The exact problem I'm seeing is that the SolrSort is not included in the query plan unless an ORDER BY is used in the query. With the ORDER BY our tree looks like this: org.apache.solr.handler.sql.SolrSort org.apache.solr.handler.sql.SolrProject org.apache.solr.handler.sql.SolrTableScan Without the ORDER BY our tree looks like this: org.apache.solr.handler.sql.SolrProject org.apache.solr.handler.sql.SolrTableScan > Integrate Apache Calcite into the SQLHandler > > > Key: SOLR-8593 > URL: https://issues.apache.org/jira/browse/SOLR-8593 > Project: Solr > Issue Type: Improvement > Components: Parallel SQL >Reporter: Joel Bernstein >Assignee: Joel Bernstein > Fix For: 6.5, master (7.0) > > Attachments: SOLR-8593.patch, SOLR-8593.patch, SOLR-8593.patch > > >The Presto SQL Parser was perfect for phase one of the SQLHandler. It was > nicely split off from the larger Presto project and it did everything that > was needed for the initial implementation. > Phase two of the SQL work though will require an optimizer. Here is where > Apache Calcite comes into play. It has a battle tested cost based optimizer > and has been integrated into Apache Drill and Hive. > This work can begin in trunk following the 6.0 release. 
The final query plans > will continue to be translated to Streaming API objects (TupleStreams), so > continued work on the JDBC driver should plug in nicely with the Calcite work.
[jira] [Commented] (SOLR-8593) Integrate Apache Calcite into the SQLHandler
[ https://issues.apache.org/jira/browse/SOLR-8593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15849154#comment-15849154 ] Joel Bernstein commented on SOLR-8593:
---
I've been reviewing CALCITE-1235, which deals with pushing down the LIMIT in the Cassandra adapter. The problem reported there is:

bq. To clarify, in the currently committed version of the adapter, pushing down sort + limit does not work because CassandraSortRule only ever seems to see a Sort without a fetch.

I'm seeing a slightly different problem: the SolrSort isn't in the logical parse tree at all when the ORDER BY clause is removed. Without the SolrSort, we don't have a way of getting the limit.

The way this is handled in CALCITE-1235 is to create a rule that fires on EnumerableLimit. That approach seems to have worked, since the ticket was committed. If we don't see another approach, we'll have to go that route as well.
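For illustration, the CALCITE-1235-style approach can be sketched with a toy plan rewriter. The names below (Node, Limit, SolrLimit, etc.) are hypothetical stand-ins, not real Calcite or Solr classes; the point is only that the rule matches the limit node itself, so the fetch is captured even when the plan contains no sort:

```java
public class LimitRuleSketch {
    // Toy plan nodes (hypothetical names - not real Calcite classes).
    sealed interface Node permits Scan, Project, Limit, SolrLimit {}
    record Scan(String table) implements Node {}
    record Project(Node input) implements Node {}
    record Limit(Node input, int fetch) implements Node {}     // planner-produced limit
    record SolrLimit(Node input, int fetch) implements Node {} // pushed-down form

    // The "rule": fire on the limit node rather than on a Sort, so the
    // fetch survives even when no ORDER BY (and hence no Sort) is present.
    static Node apply(Node n) {
        if (n instanceof Limit l) return new SolrLimit(apply(l.input()), l.fetch());
        if (n instanceof Project p) return new Project(apply(p.input()));
        return n; // leaves (e.g. Scan) pass through unchanged
    }

    public static void main(String[] args) {
        // select a from b limit 10 - no ORDER BY, so no Sort node in the tree
        Node plan = new Limit(new Project(new Scan("b")), 10);
        System.out.println(apply(plan)); // tree is now rooted at SolrLimit with fetch=10
    }
}
```

This mirrors the shape of the Cassandra fix only conceptually; the real rule would be registered with the planner and match EnumerableLimit instances.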
[jira] [Comment Edited] (SOLR-8593) Integrate Apache Calcite into the SQLHandler
[ https://issues.apache.org/jira/browse/SOLR-8593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15849145#comment-15849145 ] Julian Hyde edited comment on SOLR-8593 at 2/2/17 12:05 AM:
---
Not sure I understand. The query {{select a from b limit 10}} will have a {{Sort}} whose key has zero fields but which has fetch = 10. The {{Sort}} will be translated to a {{SolrSort}} with similar attributes. The sort is trivial - that is, you don't need to do any work to sort on 0 fields - but you do need to apply the limit. If you see a {{SolrSort}} with empty keys, don't drop it, but maybe convert it into a {{SolrLimit}} if you have such a thing.

You may be wondering why we combine sort and limit into the same operator. Remember that relational data sets are inherently unordered, so we have to do both at the same time. Sort with an empty key has reasonable semantics, just as - I hope you agree - Aggregate with an empty key (e.g. {{select count(*) from emp}}, which is equivalent to {{select count(*) from emp group by ()}}) is a reasonable generalization of Aggregate.
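The point above about {{Sort}} with an empty key can be sketched with a tiny self-contained model (plain Java, not the actual Calcite Sort class): sorting on zero keys is a no-op, but the fetch still limits the rows:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class SortSketch {
    // Toy model of a Sort node: an optional collation key plus an optional
    // fetch. With key == null (zero sort fields) the sort is trivial, but
    // the fetch - the LIMIT - must still be applied.
    static <T> List<T> applySort(List<T> rows, Comparator<T> key, Integer fetch) {
        List<T> out = new ArrayList<>(rows);
        if (key != null) {
            out.sort(key);               // ORDER BY present: actually sort
        }
        if (fetch != null && fetch < out.size()) {
            out = out.subList(0, fetch); // LIMIT n
        }
        return out;
    }

    public static void main(String[] args) {
        List<Integer> rows = List.of(5, 3, 9, 1);
        // "select a from b limit 2": Sort with empty key, fetch = 2
        System.out.println(applySort(rows, null, 2));                      // [5, 3]
        // "select a from b order by a limit 2": non-empty key, fetch = 2
        System.out.println(applySort(rows, Comparator.naturalOrder(), 2)); // [1, 3]
    }
}
```

Dropping a zero-key Sort here would silently drop the limit, which is exactly the failure mode being discussed.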
[jira] [Commented] (SOLR-10087) StreamHandler should be able to use runtimeLib jars
[ https://issues.apache.org/jira/browse/SOLR-10087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15849080#comment-15849080 ] Noble Paul commented on SOLR-10087:
---
Thanks [~risdenk]. I guess we should cut down the steps even more. I plan to implement SOLR-7955 so that the step to create the {{.system}} collection goes away. Could we also have a well-known package, such as {{solr.streamingexp.customexpr}}, automatically map to an expressible called {{customexpr}}? Then the user could effectively eliminate that step as well.

> StreamHandler should be able to use runtimeLib jars
> ---
>
> Key: SOLR-10087
> URL: https://issues.apache.org/jira/browse/SOLR-10087
> Project: Solr
> Issue Type: Improvement
> Security Level: Public (Default Security Level. Issues are Public)
> Reporter: Kevin Risden
> Assignee: Kevin Risden
> Priority: Minor
> Attachments: SOLR-10087.patch
>
>
> StreamHandler currently can't use jars provided via the runtimeLib and Blob Store API. This is because the StreamHandler uses core.getResourceLoader() instead of core.getMemClassLoader() for loading classes.
> An example of this working with the fix is here:
> https://github.com/risdenk/solr_custom_streaming_expressions
> Steps:
> {code}
> # Inspired by https://cwiki.apache.org/confluence/display/solr/Adding+Custom+Plugins+in+SolrCloud+Mode
> # Start Solr with the Blob Store enabled
> ./bin/solr start -c -f -a "-Denable.runtime.lib=true"
> # Create test collection
> ./bin/solr create -c test
> # Create .system collection
> curl 'http://localhost:8983/solr/admin/collections?action=CREATE&name=.system'
> # Build custom streaming expression jar
> (cd custom-streaming-expression && mvn clean package)
> # Upload jar to .system collection using Blob Store API (https://cwiki.apache.org/confluence/display/solr/Blob+Store+API)
> curl -X POST -H 'Content-Type: application/octet-stream' --data-binary @custom-streaming-expression/target/custom-streaming-expression-1.0-SNAPSHOT.jar 'http://localhost:8983/solr/.system/blob/test'
> # List all blobs that are stored
> curl 'http://localhost:8983/solr/.system/blob?omitHeader=true'
> # Add the jar to the runtime lib
> curl 'http://localhost:8983/solr/test/config' -H 'Content-type:application/json' -d '{
>    "add-runtimelib": { "name":"test", "version":1 }
> }'
> # Create custom streaming expression using work from SOLR-9103
> # Patch from SOLR-10087 is required for StreamHandler to load the runtimeLib jar
> curl 'http://localhost:8983/solr/test/config' -H 'Content-type:application/json' -d '{
>   "create-expressible": {
>     "name": "customstreamingexpression",
>     "class": "com.test.solr.CustomStreamingExpression",
>     "runtimeLib": true
>   }
> }'
> # Test the custom streaming expression
> curl 'http://localhost:8983/solr/test/stream?expr=customstreamingexpression()'
> {code}
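The loader distinction behind this bug can be illustrated with a toy model. The sets below are hypothetical stand-ins for classpaths - this is not the real SolrResourceLoader API - but they show why resolving classes through the core's resource loader never finds classes that exist only in blob-store runtimeLib jars:

```java
import java.util.Set;

public class LoaderSketch {
    // Toy stand-ins for the two loaders: each "loader" is simply the set of
    // class names it can see. The resource loader sees only the node's
    // classpath; the memory class loader additionally sees runtimeLib jars
    // uploaded to the .system blob store.
    static final Set<String> RESOURCE_LOADER =
        Set.of("org.apache.solr.client.solrj.io.stream.CloudSolrStream");
    static final Set<String> MEM_CLASS_LOADER =
        Set.of("org.apache.solr.client.solrj.io.stream.CloudSolrStream",
               "com.test.solr.CustomStreamingExpression");

    // Stands in for Class.forName(name, true, loader).
    static boolean canLoad(Set<String> loader, String className) {
        return loader.contains(className);
    }

    public static void main(String[] args) {
        String custom = "com.test.solr.CustomStreamingExpression";
        // Before the patch: runtimeLib classes are invisible (ClassNotFoundException).
        System.out.println(canLoad(RESOURCE_LOADER, custom));  // false
        // With the patch, StreamHandler resolves through the memory class loader.
        System.out.println(canLoad(MEM_CLASS_LOADER, custom)); // true
    }
}
```

The actual patch is the one-loader swap the description names: StreamHandler should resolve expressibles via core.getMemClassLoader() rather than core.getResourceLoader().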
[jira] [Assigned] (SOLR-7955) Auto create .system collection on first request if it does not exist
[ https://issues.apache.org/jira/browse/SOLR-7955?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Noble Paul reassigned SOLR-7955:
---
Assignee: Noble Paul

> Auto create .system collection on first request if it does not exist
> ---
>
> Key: SOLR-7955
> URL: https://issues.apache.org/jira/browse/SOLR-7955
> Project: Solr
> Issue Type: Improvement
> Reporter: Jan Høydahl
> Assignee: Noble Paul
> Attachments: SOLR-7955.patch
>
>
> Why should a user need to create the {{.system}} collection manually? It would simplify instructions related to the BLOB store if the user could assume it is always there.
[JENKINS] Lucene-Solr-master-Windows (32bit/jdk1.8.0_121) - Build # 6377 - Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/6377/ Java: 32bit/jdk1.8.0_121 -client -XX:+UseParallelGC 1 tests failed. FAILED: org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI Error Message: Error from server at https://127.0.0.1:51363/solr/awhollynewcollection_0_shard1_replica1: ClusterState says we are the leader (https://127.0.0.1:51363/solr/awhollynewcollection_0_shard1_replica1), but locally we don't think so. Request came from null Stack Trace: org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Error from server at https://127.0.0.1:51363/solr/awhollynewcollection_0_shard1_replica1: ClusterState says we are the leader (https://127.0.0.1:51363/solr/awhollynewcollection_0_shard1_replica1), but locally we don't think so. Request came from null at __randomizedtesting.SeedInfo.seed([225D523D7C7E1B30:6A2826897A4D34A5]:0) at org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:803) at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1238) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1109) at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:1042) at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:160) at org.apache.solr.client.solrj.request.UpdateRequest.commit(UpdateRequest.java:232) at org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI(CollectionsAPIDistributedZkTest.java:516) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713) at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at
[jira] [Updated] (SOLR-10087) StreamHandler should be able to use runtimeLib jars
[ https://issues.apache.org/jira/browse/SOLR-10087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kevin Risden updated SOLR-10087:
---
Description:
StreamHandler currently can't use jars provided via the runtimeLib and Blob Store API. This is because the StreamHandler uses core.getResourceLoader() instead of core.getMemClassLoader() for loading classes.

An example of this working with the fix is here:
https://github.com/risdenk/solr_custom_streaming_expressions

Steps:
{code}
# Inspired by https://cwiki.apache.org/confluence/display/solr/Adding+Custom+Plugins+in+SolrCloud+Mode
# Start Solr with the Blob Store enabled
./bin/solr start -c -f -a "-Denable.runtime.lib=true"
# Create test collection
./bin/solr create -c test
# Create .system collection
curl 'http://localhost:8983/solr/admin/collections?action=CREATE&name=.system'
# Build custom streaming expression jar
(cd custom-streaming-expression && mvn clean package)
# Upload jar to .system collection using Blob Store API (https://cwiki.apache.org/confluence/display/solr/Blob+Store+API)
curl -X POST -H 'Content-Type: application/octet-stream' --data-binary @custom-streaming-expression/target/custom-streaming-expression-1.0-SNAPSHOT.jar 'http://localhost:8983/solr/.system/blob/test'
# List all blobs that are stored
curl 'http://localhost:8983/solr/.system/blob?omitHeader=true'
# Add the jar to the runtime lib
curl 'http://localhost:8983/solr/test/config' -H 'Content-type:application/json' -d '{
   "add-runtimelib": { "name":"test", "version":1 }
}'
# Create custom streaming expression using work from SOLR-9103
# Patch from SOLR-10087 is required for StreamHandler to load the runtimeLib jar
curl 'http://localhost:8983/solr/test/config' -H 'Content-type:application/json' -d '{
  "create-expressible": {
    "name": "customstreamingexpression",
    "class": "com.test.solr.CustomStreamingExpression",
    "runtimeLib": true
  }
}'
# Test the custom streaming expression
curl 'http://localhost:8983/solr/test/stream?expr=customstreamingexpression()'
{code}
[jira] [Commented] (SOLR-10087) StreamHandler should be able to use runtimeLib jars
[ https://issues.apache.org/jira/browse/SOLR-10087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15849050#comment-15849050 ] Kevin Risden commented on SOLR-10087:
---
[~noble.paul] I added to the issue description the steps for registering a custom streaming expression from a runtimeLib jar.
[jira] [Updated] (SOLR-10087) StreamHandler should be able to use runtimeLib jars
[ https://issues.apache.org/jira/browse/SOLR-10087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kevin Risden updated SOLR-10087: Description: StreamHandler currently can't uses jars that via the runtimeLib and Blob Store api. This is because the StreamHandler uses core.getResourceLoader() instead of core.getMemClassLoader() for loading classes. An example of this working with the fix is here: https://github.com/risdenk/solr_custom_streaming_expressions Steps: {code} # Inspired by https://cwiki.apache.org/confluence/display/solr/Adding+Custom+Plugins+in+SolrCloud+Mode # Start Solr with enabling Blob Store ./bin/solr start -c -f -a "-Denable.runtime.lib=true" # Create test collection ./bin/solr create -c test # Create .system collection curl 'http://localhost:8983/solr/admin/collections?action=CREATE=.system' # Build custom streaming expression jar (cd custom-streaming-expression && mvn clean package) # Upload jar to .system collection using Blob Store API (https://cwiki.apache.org/confluence/display/solr/Blob+Store+API) curl -X POST -H 'Content-Type: application/octet-stream' --data-binary @custom-streaming-expression/target/custom-streaming-expression-1.0-SNAPSHOT.jar 'http://localhost:8983/solr/.system/blob/test' # List all blobs that are stored curl 'http://localhost:8983/solr/.system/blob?omitHeader=true' # Add the jar to the runtime lib curl 'http://localhost:8983/solr/test/config' -H 'Content-type:application/json' -d '{ "add-runtimelib": { "name":"test", "version":1 } }' # Create custom streaming expression using work from 9103 # Patch from SOLR-10087 is required for StreamHandler to load the runtimeLib jar curl 'http://localhost:8983/solr/test/config' -H 'Content-type:application/json' -d '{ "create-expressible": { "name": "customstreamingexpression", "class": "com.test.solr.CustomStreamingExpression", "runtimeLib": true } }' # Test the custom streaming expression curl 
'http://localhost:8983/solr/test/stream?expr=customstreamingexpression()' {code} was: StreamHandler currently can't uses jars that via the runtimeLib and Blob Store api. This is because the StreamHandler uses core.getResourceLoader() instead of core.getMemClassLoader() for loading classes. An example of this working with the fix is here: https://github.com/risdenk/solr_custom_streaming_expressions Steps: {code} ./bin/solr start -c -f -a "-Denable.runtime.lib=true" # Create test collection ./bin/solr create -c test # Create .system collection curl 'http://localhost:8983/solr/admin/collections?action=CREATE=.system' # build custom streaming expression (cd custom-streaming-expression && mvn clean package) # Upload jar to .system collection using Blob Store API (https://cwiki.apache.org/confluence/display/solr/Blob+Store+API) curl -X POST -H 'Content-Type: application/octet-stream' --data-binary @custom-streaming-expression/target/custom-streaming-expression-1.0-SNAPSHOT.jar 'http://localhost:8983/solr/.system/blob/test' # List all blobs that are stored curl 'http://localhost:8983/solr/.system/blob?omitHeader=true' # Add the jar to the runtime lib curl 'http://localhost:8983/solr/test/config' -H 'Content-type:application/json' -d '{ "add-runtimelib": { "name":"test", "version":1 } }' # Create custom streaming expression using work from 9103 # Patch from SOLR-10087 is required for StreamHandler to load the runtimeLib jar curl 'http://localhost:8983/solr/test/config' -H 'Content-type:application/json' -d '{ "create-expressible": { "name": "customstreamingexpression", "class": "com.test.solr.CustomStreamingExpression", "runtimeLib": true } }' # Test the custom streaming expression curl 'http://localhost:8983/solr/test/stream?expr=customstreamingexpression()' {code} > StreamHandler should be able to use runtimeLib jars > --- > > Key: SOLR-10087 > URL: https://issues.apache.org/jira/browse/SOLR-10087 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default 
Security Level. Issues are Public) >Reporter: Kevin Risden >Assignee: Kevin Risden >Priority: Minor > Attachments: SOLR-10087.patch > > > StreamHandler currently can't use jars added via the runtimeLib and Blob > Store API. This is because the StreamHandler uses core.getResourceLoader() > instead of core.getMemClassLoader() for loading classes. > An example of this working with the fix is here: > https://github.com/risdenk/solr_custom_streaming_expressions > Steps: > {code} > # Inspired by > https://cwiki.apache.org/confluence/display/solr/Adding+Custom+Plugins+in+SolrCloud+Mode > # Start Solr with the Blob Store enabled > ./bin/solr start -c -f -a "-Denable.runtime.lib=true" > # Create test collection > ./bin/solr create -c test > # Create .system collection > curl
[jira] [Updated] (SOLR-10087) StreamHandler should be able to use runtimeLib jars
[ https://issues.apache.org/jira/browse/SOLR-10087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kevin Risden updated SOLR-10087: Description: StreamHandler currently can't use jars added via the runtimeLib and Blob Store API. This is because the StreamHandler uses core.getResourceLoader() instead of core.getMemClassLoader() for loading classes. An example of this working with the fix is here: https://github.com/risdenk/solr_custom_streaming_expressions Steps: {code} ./bin/solr start -c -f -a "-Denable.runtime.lib=true" # Create test collection ./bin/solr create -c test # Create .system collection curl 'http://localhost:8983/solr/admin/collections?action=CREATE&name=.system' # Build the custom streaming expression (cd custom-streaming-expression && mvn clean package) # Upload jar to .system collection using the Blob Store API (https://cwiki.apache.org/confluence/display/solr/Blob+Store+API) curl -X POST -H 'Content-Type: application/octet-stream' --data-binary @custom-streaming-expression/target/custom-streaming-expression-1.0-SNAPSHOT.jar 'http://localhost:8983/solr/.system/blob/test' # List all blobs that are stored curl 'http://localhost:8983/solr/.system/blob?omitHeader=true' # Add the jar to the runtime lib curl 'http://localhost:8983/solr/test/config' -H 'Content-type:application/json' -d '{ "add-runtimelib": { "name":"test", "version":1 } }' # Create custom streaming expression using work from SOLR-9103 # Patch from SOLR-10087 is required for StreamHandler to load the runtimeLib jar curl 'http://localhost:8983/solr/test/config' -H 'Content-type:application/json' -d '{ "create-expressible": { "name": "customstreamingexpression", "class": "com.test.solr.CustomStreamingExpression", "runtimeLib": true } }' # Test the custom streaming expression curl 'http://localhost:8983/solr/test/stream?expr=customstreamingexpression()' {code} was: StreamHandler currently can't use jars added via the runtimeLib and Blob Store API.
This is because the StreamHandler uses core.getResourceLoader() instead of core.getMemClassLoader() for loading classes. An example of this working with the fix is here: https://github.com/risdenk/solr_custom_streaming_expressions > StreamHandler should be able to use runtimeLib jars > --- > > Key: SOLR-10087 > URL: https://issues.apache.org/jira/browse/SOLR-10087 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Kevin Risden >Assignee: Kevin Risden >Priority: Minor > Attachments: SOLR-10087.patch > > > StreamHandler currently can't use jars added via the runtimeLib and Blob > Store API. This is because the StreamHandler uses core.getResourceLoader() > instead of core.getMemClassLoader() for loading classes. > An example of this working with the fix is here: > https://github.com/risdenk/solr_custom_streaming_expressions > Steps: > {code} > ./bin/solr start -c -f -a "-Denable.runtime.lib=true" > # Create test collection > ./bin/solr create -c test > # Create .system collection > curl 'http://localhost:8983/solr/admin/collections?action=CREATE&name=.system' > # Build the custom streaming expression > (cd custom-streaming-expression && mvn clean package) > # Upload jar to .system collection using the Blob Store API > (https://cwiki.apache.org/confluence/display/solr/Blob+Store+API) > curl -X POST -H 'Content-Type: application/octet-stream' --data-binary > @custom-streaming-expression/target/custom-streaming-expression-1.0-SNAPSHOT.jar > 'http://localhost:8983/solr/.system/blob/test' > # List all blobs that are stored > curl 'http://localhost:8983/solr/.system/blob?omitHeader=true' > # Add the jar to the runtime lib > curl 'http://localhost:8983/solr/test/config' -H > 'Content-type:application/json' -d '{ >"add-runtimelib": { "name":"test", "version":1 } > }' > # Create custom streaming expression using work from SOLR-9103 > # Patch from SOLR-10087 is required for StreamHandler to load the runtimeLib > jar > curl
'http://localhost:8983/solr/test/config' -H > 'Content-type:application/json' -d '{ > "create-expressible": { > "name": "customstreamingexpression", > "class": "com.test.solr.CustomStreamingExpression", > "runtimeLib": true > } > }' > # Test the custom streaming expression > curl 'http://localhost:8983/solr/test/stream?expr=customstreamingexpression()' > {code} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
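[Editor's note] The `com.test.solr.CustomStreamingExpression` class registered in the steps above lives in the linked GitHub repo. For readers who do not want to chase the link, a custom streaming expression against the Solr 6.x SolrJ streaming API looks roughly like the following. This is a minimal hedged sketch, not the repo's actual code: the class name and the emitted field are illustrative, and depending on the exact 6.x version `TupleStream` also requires a `toExplanation(StreamFactory)` override, omitted here.

```java
package com.test.solr;

import java.io.IOException;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import org.apache.solr.client.solrj.io.Tuple;
import org.apache.solr.client.solrj.io.comp.StreamComparator;
import org.apache.solr.client.solrj.io.stream.StreamContext;
import org.apache.solr.client.solrj.io.stream.TupleStream;
import org.apache.solr.client.solrj.io.stream.expr.Expressible;
import org.apache.solr.client.solrj.io.stream.expr.StreamExpression;
import org.apache.solr.client.solrj.io.stream.expr.StreamExpressionParameter;
import org.apache.solr.client.solrj.io.stream.expr.StreamFactory;

// Minimal custom expression: emits a single tuple, then the EOF sentinel.
public class CustomStreamingExpression extends TupleStream implements Expressible {

  private boolean sent;

  // StreamFactory instantiates a registered expressible via this constructor.
  public CustomStreamingExpression(StreamExpression expression, StreamFactory factory) {
  }

  @Override
  public void setStreamContext(StreamContext context) {
  }

  @Override
  public List<TupleStream> children() {
    return new ArrayList<>(); // no upstream sources
  }

  @Override
  public void open() throws IOException {
    sent = false;
  }

  @Override
  public void close() throws IOException {
  }

  @Override
  public Tuple read() throws IOException {
    Map<String, Object> fields = new HashMap<>();
    if (!sent) {
      sent = true;
      fields.put("msg", "loaded from a runtimeLib jar"); // illustrative payload
    } else {
      fields.put("EOF", true); // sentinel tuple that terminates the stream
    }
    return new Tuple(fields);
  }

  @Override
  public StreamComparator getStreamSort() {
    return null; // unsorted stream
  }

  @Override
  public StreamExpressionParameter toExpression(StreamFactory factory) throws IOException {
    return new StreamExpression(factory.getFunctionName(this.getClass()));
  }
}
```

Once the jar containing a class like this is uploaded and registered with `create-expressible` as shown above, `/stream?expr=customstreamingexpression()` should return the single tuple followed by EOF.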
Re: [VOTE] Release Lucene/Solr 6.4.1 RC1
https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/2780/ Looks like the same failure is happening on Jenkins. Kevin Risden On Wed, Feb 1, 2017 at 4:27 PM, Dawid Weiss wrote: > This test failed for me during the first smoketester run: > >[junit4] FAILURE 0.05s J1 | MBeansHandlerTest.testDiff <<< >[junit4]> Throwable #1: org.junit.ComparisonFailure: > expected: but was: Delta: 1> >[junit4]>at > __randomizedtesting.SeedInfo.seed([8B5954CB0E2B8501:4E4F90501E9DBD61]:0) >[junit4]>at > org.apache.solr.handler.admin.MBeansHandlerTest.testDiff(MBeansHandlerTest.java:63) > > Repro line (didn't try): ant test -Dtestcase=MBeansHandlerTest > -Dtests.method=testDiff -Dtests.seed=8B5954CB0E2B8501 -Dtests.locale=th -Dtests.timezone=US/Arizona > -Dtests.asserts=true -Dtests.file.encoding=UTF-8 > > The second time it was ok, so could be something intermittent. > > SUCCESS! [0:58:03.206234] > > Dawid > > - > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org > For additional commands, e-mail: dev-h...@lucene.apache.org > >
[JENKINS] Lucene-Solr-6.x-Linux (32bit/jdk1.8.0_121) - Build # 2780 - Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/2780/ Java: 32bit/jdk1.8.0_121 -server -XX:+UseG1GC 1 tests failed. FAILED: org.apache.solr.handler.admin.MBeansHandlerTest.testDiff Error Message: expected: but was: Stack Trace: org.junit.ComparisonFailure: expected: but was: at __randomizedtesting.SeedInfo.seed([F6E5347C2400D2FB:33F3F0E734B6EA9B]:0) at org.junit.Assert.assertEquals(Assert.java:125) at org.junit.Assert.assertEquals(Assert.java:147) at org.apache.solr.handler.admin.MBeansHandlerTest.testDiff(MBeansHandlerTest.java:63) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.lang.Thread.run(Thread.java:745) Build Log: [...truncated 11335 lines...] [junit4] Suite: org.apache.solr.handler.admin.MBeansHandlerTest [junit4] 2> Creating dataDir: /home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/build/solr-core/test/J0/temp/solr.handler.admin.MBeansHandlerTest_F6E5347C2400D2FB-001/init-core-data-001 [junit4] 2> 498864
[jira] [Comment Edited] (SOLR-10087) StreamHandler should be able to use runtimeLib jars
[ https://issues.apache.org/jira/browse/SOLR-10087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15848959#comment-15848959 ] Noble Paul edited comment on SOLR-10087 at 2/1/17 9:30 PM: --- I believe all components should be dynamically loadable. [~risdenk], please explain (in description) how users can register a streaming expression plugin and use it if the class is loaded from a runtime jar [~dpgove] We have taken extreme caution in protecting the system while using libraries posted from outside: 1) the user has to enable it explicitly from the command line 2) the user can optionally choose to enforce that the binaries are signed by a certain private key was (Author: noble.paul): I believe all components should be dynamically loadable. [~risdenk] , please explain (in description) how users can register a streaming expression plugin and use it [~dpgove] We have taken extreme caution in protecting the system while using libraries posted from outside 1) user has to enable it explicitly from command line 2) user can optionally choose to enforce that the binaries are signed by a certain private key > StreamHandler should be able to use runtimeLib jars > --- > > Key: SOLR-10087 > URL: https://issues.apache.org/jira/browse/SOLR-10087 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Kevin Risden >Assignee: Kevin Risden >Priority: Minor > Attachments: SOLR-10087.patch > > > StreamHandler currently can't use jars added via the runtimeLib and Blob > Store API. This is because the StreamHandler uses core.getResourceLoader() > instead of core.getMemClassLoader() for loading classes. > An example of this working with the fix is here: > https://github.com/risdenk/solr_custom_streaming_expressions -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-10087) StreamHandler should be able to use runtimeLib jars
[ https://issues.apache.org/jira/browse/SOLR-10087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15848959#comment-15848959 ] Noble Paul commented on SOLR-10087: --- I believe all components should be dynamically loadable. [~risdenk], please explain (in description) how users can register a streaming expression plugin and use it [~dpgove] We have taken extreme caution in protecting the system while using libraries posted from outside: 1) the user has to enable it explicitly from the command line 2) the user can optionally choose to enforce that the binaries are signed by a certain private key > StreamHandler should be able to use runtimeLib jars > --- > > Key: SOLR-10087 > URL: https://issues.apache.org/jira/browse/SOLR-10087 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Kevin Risden >Assignee: Kevin Risden >Priority: Minor > Attachments: SOLR-10087.patch > > > StreamHandler currently can't use jars added via the runtimeLib and Blob > Store API. This is because the StreamHandler uses core.getResourceLoader() > instead of core.getMemClassLoader() for loading classes. > An example of this working with the fix is here: > https://github.com/risdenk/solr_custom_streaming_expressions -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
Re: [VOTE] Release Lucene/Solr 6.4.1 RC1
This test failed for me during the first smoketester run: [junit4] FAILURE 0.05s J1 | MBeansHandlerTest.testDiff <<< [junit4]> Throwable #1: org.junit.ComparisonFailure: expected: but was: [junit4]>at __randomizedtesting.SeedInfo.seed([8B5954CB0E2B8501:4E4F90501E9DBD61]:0) [junit4]>at org.apache.solr.handler.admin.MBeansHandlerTest.testDiff(MBeansHandlerTest.java:63) Repro line (didn't try): ant test -Dtestcase=MBeansHandlerTest -Dtests.method=testDiff -Dtests.seed=8B5954CB0E2B8501 -Dtests.locale=th -Dtests.timezone=US/Arizona -Dtests.asserts=true -Dtests.file.encoding=UTF-8 The second time it was ok, so could be something intermittent. SUCCESS! [0:58:03.206234] Dawid - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-6.x-Solaris (64bit/jdk1.8.0) - Build # 648 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Solaris/648/ Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseParallelGC 1 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.cloud.hdfs.HdfsRecoveryZkTest Error Message: ObjectTracker found 1 object(s) that were not released!!! [HdfsTransactionLog] org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException at org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:43) at org.apache.solr.update.HdfsTransactionLog.(HdfsTransactionLog.java:130) at org.apache.solr.update.HdfsUpdateLog.init(HdfsUpdateLog.java:202) at org.apache.solr.update.UpdateHandler.(UpdateHandler.java:137) at org.apache.solr.update.UpdateHandler.(UpdateHandler.java:94) at org.apache.solr.update.DirectUpdateHandler2.(DirectUpdateHandler2.java:102) at sun.reflect.GeneratedConstructorAccessor159.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.solr.core.SolrCore.createInstance(SolrCore.java:737) at org.apache.solr.core.SolrCore.createUpdateHandler(SolrCore.java:799) at org.apache.solr.core.SolrCore.initUpdateHandler(SolrCore.java:1049) at org.apache.solr.core.SolrCore.(SolrCore.java:914) at org.apache.solr.core.SolrCore.(SolrCore.java:807) at org.apache.solr.core.CoreContainer.create(CoreContainer.java:904) at org.apache.solr.core.CoreContainer.lambda$load$3(CoreContainer.java:555) at com.codahale.metrics.InstrumentedExecutorService$InstrumentedCallable.call(InstrumentedExecutorService.java:197) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at 
java.lang.Thread.run(Thread.java:745) Stack Trace: java.lang.AssertionError: ObjectTracker found 1 object(s) that were not released!!! [HdfsTransactionLog] org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException at org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:43) at org.apache.solr.update.HdfsTransactionLog.(HdfsTransactionLog.java:130) at org.apache.solr.update.HdfsUpdateLog.init(HdfsUpdateLog.java:202) at org.apache.solr.update.UpdateHandler.(UpdateHandler.java:137) at org.apache.solr.update.UpdateHandler.(UpdateHandler.java:94) at org.apache.solr.update.DirectUpdateHandler2.(DirectUpdateHandler2.java:102) at sun.reflect.GeneratedConstructorAccessor159.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.solr.core.SolrCore.createInstance(SolrCore.java:737) at org.apache.solr.core.SolrCore.createUpdateHandler(SolrCore.java:799) at org.apache.solr.core.SolrCore.initUpdateHandler(SolrCore.java:1049) at org.apache.solr.core.SolrCore.(SolrCore.java:914) at org.apache.solr.core.SolrCore.(SolrCore.java:807) at org.apache.solr.core.CoreContainer.create(CoreContainer.java:904) at org.apache.solr.core.CoreContainer.lambda$load$3(CoreContainer.java:555) at com.codahale.metrics.InstrumentedExecutorService$InstrumentedCallable.call(InstrumentedExecutorService.java:197) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) at __randomizedtesting.SeedInfo.seed([A2547973D35AAD87]:0) at org.junit.Assert.fail(Assert.java:93) at 
org.junit.Assert.assertTrue(Assert.java:43) at org.junit.Assert.assertNull(Assert.java:551) at org.apache.solr.SolrTestCaseJ4.teardownTestCases(SolrTestCaseJ4.java:269) at sun.reflect.GeneratedMethodAccessor25.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:870) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at
[jira] [Commented] (SOLR-10087) StreamHandler should be able to use runtimeLib jars
[ https://issues.apache.org/jira/browse/SOLR-10087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15848924#comment-15848924 ] Kevin Risden commented on SOLR-10087: - The ability to add binaries is protected by a system property and has to be explicitly enabled. Additionally, the jars can be signed if required. https://cwiki.apache.org/confluence/display/solr/Adding+Custom+Plugins+in+SolrCloud+Mode > StreamHandler should be able to use runtimeLib jars > --- > > Key: SOLR-10087 > URL: https://issues.apache.org/jira/browse/SOLR-10087 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Kevin Risden >Assignee: Kevin Risden >Priority: Minor > Attachments: SOLR-10087.patch > > > StreamHandler currently can't use jars added via the runtimeLib and Blob > Store API. This is because the StreamHandler uses core.getResourceLoader() > instead of core.getMemClassLoader() for loading classes. > An example of this working with the fix is here: > https://github.com/risdenk/solr_custom_streaming_expressions -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-10087) StreamHandler should be able to use runtimeLib jars
[ https://issues.apache.org/jira/browse/SOLR-10087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15848918#comment-15848918 ] Dennis Gove commented on SOLR-10087: I'm hesitant about the ability to post binaries into Solr - I feel like it's a huge security concern. That said, this feature isn't adding the ability to post binaries, just the ability to make use of posted binaries in streaming. If the ability to post binaries is something that is already considered safe and/or the benefits outweigh the drawbacks, then I don't see any reason why streaming shouldn't support it. > StreamHandler should be able to use runtimeLib jars > --- > > Key: SOLR-10087 > URL: https://issues.apache.org/jira/browse/SOLR-10087 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Kevin Risden >Assignee: Kevin Risden >Priority: Minor > Attachments: SOLR-10087.patch > > > StreamHandler currently can't use jars added via the runtimeLib and Blob > Store API. This is because the StreamHandler uses core.getResourceLoader() > instead of core.getMemClassLoader() for loading classes. > An example of this working with the fix is here: > https://github.com/risdenk/solr_custom_streaming_expressions -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-10086) Add Streaming Expression for Kafka Streams
[ https://issues.apache.org/jira/browse/SOLR-10086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15848909#comment-15848909 ] Kevin Risden commented on SOLR-10086: - So I'm not 100% sure that adding the Kafka dependencies to Solr would be the right approach. I was chatting with [~joel.bernstein] earlier and was wondering if it's possible to load custom streaming expressions. They can be registered (see SOLR-9103) in solrconfig.xml as long as the jar is on the classpath. Another option that I just tried out was registering jars via the blob store and then registering the custom streaming expression. I put together an example of this here: https://github.com/risdenk/solr_custom_streaming_expressions This currently requires a change to the StreamHandler class. See SOLR-10087 > Add Streaming Expression for Kafka Streams > -- > > Key: SOLR-10086 > URL: https://issues.apache.org/jira/browse/SOLR-10086 > Project: Solr > Issue Type: New Feature > Security Level: Public(Default Security Level. Issues are Public) > Components: SolrJ >Reporter: Susheel Kumar >Priority: Minor > > This is being asked to have SolrCloud pull data from a Kafka topic periodically > using the DataImport Handler. > Adding streaming expression support to pull data from Kafka would be a good > feature to have. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-10087) StreamHandler should be able to use runtimeLib jars
[ https://issues.apache.org/jira/browse/SOLR-10087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15848908#comment-15848908 ] Kevin Risden commented on SOLR-10087: - [~noble.paul] [~caomanhdat] [~dpgove] [~joel.bernstein] - Thoughts on this? > StreamHandler should be able to use runtimeLib jars > --- > > Key: SOLR-10087 > URL: https://issues.apache.org/jira/browse/SOLR-10087 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Kevin Risden >Assignee: Kevin Risden >Priority: Minor > Attachments: SOLR-10087.patch > > > StreamHandler currently can't use jars added via the runtimeLib and Blob > Store API. This is because the StreamHandler uses core.getResourceLoader() > instead of core.getMemClassLoader() for loading classes. > An example of this working with the fix is here: > https://github.com/risdenk/solr_custom_streaming_expressions -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-10087) StreamHandler should be able to use runtimeLib jars
[ https://issues.apache.org/jira/browse/SOLR-10087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kevin Risden updated SOLR-10087: Attachment: SOLR-10087.patch > StreamHandler should be able to use runtimeLib jars > --- > > Key: SOLR-10087 > URL: https://issues.apache.org/jira/browse/SOLR-10087 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Kevin Risden >Assignee: Kevin Risden >Priority: Minor > Attachments: SOLR-10087.patch > > > StreamHandler currently can't use jars added via the runtimeLib and Blob > Store API. This is because the StreamHandler uses core.getResourceLoader() > instead of core.getMemClassLoader() for loading classes. > An example of this working with the fix is here: > https://github.com/risdenk/solr_custom_streaming_expressions -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Created] (SOLR-10087) StreamHandler should be able to use runtimeLib jars
Kevin Risden created SOLR-10087: --- Summary: StreamHandler should be able to use runtimeLib jars Key: SOLR-10087 URL: https://issues.apache.org/jira/browse/SOLR-10087 Project: Solr Issue Type: Improvement Security Level: Public (Default Security Level. Issues are Public) Reporter: Kevin Risden Assignee: Kevin Risden Priority: Minor StreamHandler currently can't use jars added via the runtimeLib and Blob Store API. This is because the StreamHandler uses core.getResourceLoader() instead of core.getMemClassLoader() for loading classes. An example of this working with the fix is here: https://github.com/risdenk/solr_custom_streaming_expressions -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
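[Editor's note] The description above pins the bug to which classloader StreamHandler uses, so the shape of the fix is worth spelling out. The following is a hedged sketch of that kind of change, not the attached SOLR-10087.patch itself; the exact `findClass` helper and call site inside StreamHandler may differ by version.

```java
// Sketch only -- the attached SOLR-10087.patch is authoritative.
//
// Before: expressible classes are resolved against the core's resource
// loader, which cannot see jars registered via the runtimeLib/Blob Store API:
//
//   Class<? extends Expressible> clazz =
//       core.getResourceLoader().findClass(expressibleClassName, Expressible.class);
//
// After: classes are resolved against the memory classloader, which is
// backed by the blobs registered through "add-runtimelib":
//
//   Class<? extends Expressible> clazz =
//       core.getMemClassLoader().findClass(expressibleClassName, Expressible.class);
```

The point of the sketch is that both loaders expose class lookup, but only the memory classloader is wired to the blob store, so only it can see jars uploaded at runtime.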
[jira] [Comment Edited] (SOLR-8593) Integrate Apache Calcite into the SQLHandler
[ https://issues.apache.org/jira/browse/SOLR-8593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15848856#comment-15848856 ] Joel Bernstein edited comment on SOLR-8593 at 2/1/17 8:16 PM: -- We've run into a bug with pushing down of the SQL LIMIT clause. Currently we have code that handles the limit in the SolrSort class which extends org.apache.calcite.rel.core.Sort. The issue is that the SolrSort rule is only executed if an ORDER BY clause is included. So if LIMIT is used without an ORDER BY our code does not see the LIMIT. So limit works in this scenario: *select a from b order by a limit 10* but not this scenario: *select a from b limit 10* [~julianhyde], any ideas on how to resolve this? Here is the code where we create the rule: https://github.com/apache/lucene-solr/blob/jira/solr-8593/solr/core/src/java/org/apache/solr/handler/sql/SolrRules.java#L184 was (Author: joel.bernstein): We've run into a bug with pushing down of the SQL LIMIT clause. Currently we have code that handles the limit in the SolrSort class which extends org.apache.calcite.rel.core.Sort. The issue is that the SolrSort rule is only executed if an ORDER BY clause is included. So if LIMIT is used without an ORDER BY our code does not see the LIMIT. So limit works in this scenario: *select a from b order by a limit 10* but not this scenario: *select a from b limit 10* [~julianhyde], any ideas on how to resolve this? > Integrate Apache Calcite into the SQLHandler > > > Key: SOLR-8593 > URL: https://issues.apache.org/jira/browse/SOLR-8593 > Project: Solr > Issue Type: Improvement > Components: Parallel SQL >Reporter: Joel Bernstein >Assignee: Joel Bernstein > Fix For: 6.5, master (7.0) > > Attachments: SOLR-8593.patch, SOLR-8593.patch, SOLR-8593.patch > > >The Presto SQL Parser was perfect for phase one of the SQLHandler. It was > nicely split off from the larger Presto project and it did everything that > was needed for the initial implementation.
> Phase two of the SQL work though will require an optimizer. Here is where > Apache Calcite comes into play. It has a battle tested cost based optimizer > and has been integrated into Apache Drill and Hive. > This work can begin in trunk following the 6.0 release. The final query plans > will continue to be translated to Streaming API objects (TupleStreams), so > continued work on the JDBC driver should plug in nicely with the Calcite work. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Comment Edited] (SOLR-8593) Integrate Apache Calcite into the SQLHandler
[ https://issues.apache.org/jira/browse/SOLR-8593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15848856#comment-15848856 ] Joel Bernstein edited comment on SOLR-8593 at 2/1/17 7:51 PM: -- We've run into a bug with pushing down of the SQL LIMIT clause. Currently we have code that handles the limit in the SolrSort class which extends org.apache.calcite.rel.core.Sort. The issue is that the SolrSort rule is only executed if an ORDER BY clause is included. So if LIMIT is used without an ORDER BY our code does not see the LIMIT. So limit works in this scenario: *select a from b order by a limit 10* but not this scenario: *select a from b limit 10* [~julianhyde], any ideas on how to resolve this? was (Author: joel.bernstein): We've run into a bug with pushing down of the SQL LIMIT clause. Currently we have code that handles the limit in the SolrSort class which extends org.apache.calcite.rel.core.Sort. This issue is that the SolrSort rule is only executed if an ORDER BY clause is included. So if LIMIT is used without an ORDER BY our code does not see the LIMIT. So limit works in this scenario: *select a from b order by a limit 10* but not this scenario: *select a from b limit 10* [~julianhyde], any ideas on how to resolve this? > Integrate Apache Calcite into the SQLHandler > > > Key: SOLR-8593 > URL: https://issues.apache.org/jira/browse/SOLR-8593 > Project: Solr > Issue Type: Improvement > Components: Parallel SQL >Reporter: Joel Bernstein >Assignee: Joel Bernstein > Fix For: 6.5, master (7.0) > > Attachments: SOLR-8593.patch, SOLR-8593.patch, SOLR-8593.patch > > >The Presto SQL Parser was perfect for phase one of the SQLHandler. It was > nicely split off from the larger Presto project and it did everything that > was needed for the initial implementation. > Phase two of the SQL work though will require an optimizer. Here is where > Apache Calcite comes into play. 
It has a battle tested cost based optimizer > and has been integrated into Apache Drill and Hive. > This work can begin in trunk following the 6.0 release. The final query plans > will continue to be translated to Streaming API objects (TupleStreams), so > continued work on the JDBC driver should plug in nicely with the Calcite work. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
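As context for readers, the shape of the bug can be modeled in a few lines. This is NOT Solr's or Calcite's actual code, only a hypothetical sketch: in Calcite a LogicalSort node carries both the ORDER BY collation and the LIMIT (the "fetch"), so a converter rule that requires a non-empty collation never fires for *select a from b limit 10*, whose Sort has an empty collation but a non-null fetch.

```java
import java.util.Collections;
import java.util.List;

// Hypothetical model of the reported bug -- names are illustrative,
// not Solr's actual SolrSort rule.
public class LimitPushdownSketch {

    static class Sort {
        final List<String> collationKeys; // ORDER BY columns (empty if absent)
        final Integer fetch;              // LIMIT value (null if absent)

        Sort(List<String> collationKeys, Integer fetch) {
            this.collationKeys = collationKeys;
            this.fetch = fetch;
        }
    }

    // The reported behavior: the rule only matches sorts that have sort keys,
    // so a bare LIMIT (empty collation) is never seen.
    static boolean oldRuleMatches(Sort s) {
        return !s.collationKeys.isEmpty();
    }

    // One possible fix: also match key-less sorts that carry a fetch,
    // so the LIMIT can still be pushed down.
    static boolean fixedRuleMatches(Sort s) {
        return !s.collationKeys.isEmpty() || s.fetch != null;
    }

    public static void main(String[] args) {
        Sort orderByLimit = new Sort(Collections.singletonList("a"), 10); // order by a limit 10
        Sort bareLimit = new Sort(Collections.<String>emptyList(), 10);   // limit 10 only

        System.out.println("old rule, order by + limit: " + oldRuleMatches(orderByLimit));
        System.out.println("old rule, bare limit: " + oldRuleMatches(bareLimit));
        System.out.println("fixed rule, bare limit: " + fixedRuleMatches(bareLimit));
    }
}
```

The sketch only illustrates why widening the rule's match condition (or adding a second rule for collation-free sorts) would let the LIMIT be pushed down in both scenarios.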
[jira] [Commented] (SOLR-8593) Integrate Apache Calcite into the SQLHandler
[ https://issues.apache.org/jira/browse/SOLR-8593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15848856#comment-15848856 ] Joel Bernstein commented on SOLR-8593: -- We've run into a bug with pushing down the SQL LIMIT clause. Currently we have code that handles the limit in the SolrSort class, which extends org.apache.calcite.rel.core.Sort. The issue is that the SolrSort rule is only executed if an ORDER BY clause is included, so if LIMIT is used without an ORDER BY our code does not see the LIMIT. So limit works in this scenario: *select a from b order by a limit 10* but not in this scenario: *select a from b limit 10* [~julianhyde], any ideas on how to resolve this? > Integrate Apache Calcite into the SQLHandler > > > Key: SOLR-8593 > URL: https://issues.apache.org/jira/browse/SOLR-8593 > Project: Solr > Issue Type: Improvement > Components: Parallel SQL >Reporter: Joel Bernstein >Assignee: Joel Bernstein > Fix For: 6.5, master (7.0) > > Attachments: SOLR-8593.patch, SOLR-8593.patch, SOLR-8593.patch > > >The Presto SQL Parser was perfect for phase one of the SQLHandler. It was > nicely split off from the larger Presto project and it did everything that > was needed for the initial implementation. > Phase two of the SQL work though will require an optimizer. Here is where > Apache Calcite comes into play. It has a battle tested cost based optimizer > and has been integrated into Apache Drill and Hive. > This work can begin in trunk following the 6.0 release. The final query plans > will continue to be translated to Streaming API objects (TupleStreams), so > continued work on the JDBC driver should plug in nicely with the Calcite work. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-10077) TestManagedFeatureStore extends LuceneTestCase, but has no tests and just hosts a static method.
[ https://issues.apache.org/jira/browse/SOLR-10077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Christine Poerschke updated SOLR-10077: --- Attachment: SOLR-10077.patch Attached patch proposes to merge TestFeatureStore and TestFeatureLtrScoringModel into TestManagedFeatureStore. Extra context info as to the reason for the ticket here: * the TestManagedFeatureStore.java file name follows the TestMyClass.java or MyClassTest.java naming convention (and the class extends LuceneTestCase) * running {{ant test -Dtestcase=TestManagedFeatureStore}} however fails since there are no tests: bq. BUILD FAILED ... Not even a single test was executed (a typo in the filter pattern maybe?). ... > TestManagedFeatureStore extends LuceneTestCase, but has no tests and just > hosts a static method. > > > Key: SOLR-10077 > URL: https://issues.apache.org/jira/browse/SOLR-10077 > Project: Solr > Issue Type: Test > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Mark Miller >Assignee: Christine Poerschke >Priority: Minor > Attachments: SOLR-10077.patch > > > We should probably just put this static method somewhere else? -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 1111 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris// Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseSerialGC 1 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation Error Message: 2 threads leaked from SUITE scope at org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation: 1) Thread[id=8143, name=jetty-launcher-1642-thread-1-EventThread, state=TIMED_WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation] at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037) at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328) at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277) at org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323) at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105) at org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288) at org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279) at org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:41) at org.apache.curator.framework.recipes.shared.SharedValue.readValue(SharedValue.java:244) at org.apache.curator.framework.recipes.shared.SharedValue.access$100(SharedValue.java:44) at org.apache.curator.framework.recipes.shared.SharedValue$1.process(SharedValue.java:61) at org.apache.curator.framework.imps.NamespaceWatcher.process(NamespaceWatcher.java:67) at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:522) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:498) 2) Thread[id=8145, name=jetty-launcher-1642-thread-2-EventThread, state=TIMED_WAITING, 
group=TGRP-TestSolrCloudWithSecureImpersonation] at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037) at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328) at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277) at org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323) at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105) at org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288) at org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279) at org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:41) at org.apache.curator.framework.recipes.shared.SharedValue.readValue(SharedValue.java:244) at org.apache.curator.framework.recipes.shared.SharedValue.access$100(SharedValue.java:44) at org.apache.curator.framework.recipes.shared.SharedValue$1.process(SharedValue.java:61) at org.apache.curator.framework.imps.NamespaceWatcher.process(NamespaceWatcher.java:67) at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:522) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:498) Stack Trace: com.carrotsearch.randomizedtesting.ThreadLeakError: 2 threads leaked from SUITE scope at org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation: 1) Thread[id=8143, name=jetty-launcher-1642-thread-1-EventThread, state=TIMED_WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation] at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037) at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328) at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277) at org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323) at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105) at org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288) at org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279) at
[JENKINS] Lucene-Solr-SmokeRelease-6.4 - Build # 12 - Still Failing
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-6.4/12/ No tests ran. Build Log: [...truncated 42102 lines...] ERROR: Connection was broken: java.io.IOException: Unexpected EOF at hudson.remoting.ChunkedInputStream.readUntilBreak(ChunkedInputStream.java:99) at hudson.remoting.ChunkedCommandTransport.readBlock(ChunkedCommandTransport.java:39) at hudson.remoting.AbstractSynchronousByteArrayCommandTransport.read(AbstractSynchronousByteArrayCommandTransport.java:34) at hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:59) Build step 'Invoke Ant' marked build as failure ERROR: lucene is offline; cannot locate JDK 1.8 (latest) Email was triggered for: Failure - Any Sending email for trigger: Failure - Any ERROR: lucene is offline; cannot locate JDK 1.8 (latest) ERROR: lucene is offline; cannot locate JDK 1.8 (latest) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-10023) Improve single unit test run time with ant.
[ https://issues.apache.org/jira/browse/SOLR-10023?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15848802#comment-15848802 ] Mark Miller commented on SOLR-10023: +1 on this target, +1 on new name. There is still a great little speed bump for short tests in omitting compile even when dropping into the module, and it would be great to not have to hack it. > Improve single unit test run time with ant. > --- > > Key: SOLR-10023 > URL: https://issues.apache.org/jira/browse/SOLR-10023 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: Tests >Reporter: Mark Miller > Attachments: SOLR-10023-add-test-only-target.patch, > SOLR-10023-add-test-only-target.patch, stdout.tar.gz > > > It seems to take 2 minutes and 45 seconds to run a single test with the > latest build design, and the test itself is only 4 seconds. I've noticed this > for a long time, and it seems to be because ant is running through a billion > targets first. > I haven't checked yet, so maybe it's a Solr-specific issue? I'll check with > Lucene and move this issue if necessary. > There is hopefully something we can do to improve this though. At least we > should try and get some sharp minds to take a first / second look. If I did not > use an IDE so much to run tests, this would drive me nuts. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-10038) Spatial Intersect Very Slow For Large Polygon and Large Index
[ https://issues.apache.org/jira/browse/SOLR-10038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15848760#comment-15848760 ] David Smiley commented on SOLR-10038: - I'm glad to be of help, Samur. Spatial4j is what parses the WKT string. Assuming JTS is used as well, it constructs a JTS MultiPolygon. However, Spatial4j can natively handle collections of shapes, which is probably much faster. Set {{useJtsMulti}} to false and I bet this will improve performance a lot (defined here https://locationtech.github.io/spatial4j/apidocs/org/locationtech/spatial4j/context/jts/JtsSpatialContextFactory.html ). > Spatial Intersect Very Slow For Large Polygon and Large Index > - > > Key: SOLR-10038 > URL: https://issues.apache.org/jira/browse/SOLR-10038 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: spatial >Affects Versions: 6.4 > Environment: Linux Ubuntu + Solr 6.4.0 >Reporter: samur araujo >Assignee: David Smiley > Labels: spatialsearch > > Hi all, I have indexed all of the geonames points (lat/long) with JTS > enabled, and I am trying to return all points (geonameids) within a certain > polygon (e.g. the Netherlands country polygon). This query takes 3 minutes to > return only 10,000 points. I am using only solr intersect. no facets. no > extra filtering. > Is there any configuration that could bring such a query down to less than 300 > ms? -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
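For readers wondering how the suggested setting is wired in: Solr passes field-type attributes through to the spatial context factory, so the change can be sketched as a schema.xml fragment like the one below. This is an untested sketch; the attribute pass-through and the {{useJtsMulti}} option should be verified against the Spatial4j version in use.

```xml
<!-- Sketch: disable JTS multi-shape handling so multi-polygon WKT is parsed
     into Spatial4j's native ShapeCollection instead of a JTS MultiPolygon. -->
<fieldType name="location_rpt"
           class="solr.SpatialRecursivePrefixTreeFieldType"
           spatialContextFactory="org.locationtech.spatial4j.context.jts.JtsSpatialContextFactory"
           useJtsMulti="false"
           geo="true"/>
```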
[jira] [Created] (SOLR-10086) Add Streaming Expression for Kafka Streams
Susheel Kumar created SOLR-10086: Summary: Add Streaming Expression for Kafka Streams Key: SOLR-10086 URL: https://issues.apache.org/jira/browse/SOLR-10086 Project: Solr Issue Type: New Feature Security Level: Public (Default Security Level. Issues are Public) Components: SolrJ Reporter: Susheel Kumar Priority: Minor There have been asks to have SolrCloud pull data from a Kafka topic periodically using the DataImportHandler. Adding streaming expression support to pull data from Kafka would be a good feature to have. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
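One hypothetical shape such a streaming expression could take, combining a Kafka source with the existing {{update()}} decorator. The {{kafka()}} source and all of its parameters are invented here for illustration; nothing like it exists in Solr yet:

```
update(collection1,
       batchSize=500,
       kafka(topic="logs",
             bootstrapServers="localhost:9092",
             groupId="solr-indexer"))
```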
[jira] [Updated] (SOLR-9963) Add Calcite Avatica handler to Solr
[ https://issues.apache.org/jira/browse/SOLR-9963?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kevin Risden updated SOLR-9963: --- Attachment: SOLR-9963.patch > Add Calcite Avatica handler to Solr > --- > > Key: SOLR-9963 > URL: https://issues.apache.org/jira/browse/SOLR-9963 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: Parallel SQL >Reporter: Kevin Risden >Assignee: Kevin Risden > Attachments: SOLR-9963.patch, SOLR-9963.patch, SOLR-9963.patch > > > Calcite Avatica has an http endpoint which allows Avatica drivers to connect > to the server. This can be wired in as a handler to Solr. This would allow > Solr to be used by any Avatica JDBC/ODBC driver. This depends on the Calcite > work from SOLR-8593. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-8593) Integrate Apache Calcite into the SQLHandler
[ https://issues.apache.org/jira/browse/SOLR-8593?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kevin Risden updated SOLR-8593: --- Fix Version/s: master (7.0) 6.5 > Integrate Apache Calcite into the SQLHandler > > > Key: SOLR-8593 > URL: https://issues.apache.org/jira/browse/SOLR-8593 > Project: Solr > Issue Type: Improvement > Components: Parallel SQL >Reporter: Joel Bernstein >Assignee: Joel Bernstein > Fix For: 6.5, master (7.0) > > Attachments: SOLR-8593.patch, SOLR-8593.patch, SOLR-8593.patch > > >The Presto SQL Parser was perfect for phase one of the SQLHandler. It was > nicely split off from the larger Presto project and it did everything that > was needed for the initial implementation. > Phase two of the SQL work though will require an optimizer. Here is where > Apache Calcite comes into play. It has a battle tested cost based optimizer > and has been integrated into Apache Drill and Hive. > This work can begin in trunk following the 6.0 release. The final query plans > will continue to be translated to Streaming API objects (TupleStreams), so > continued work on the JDBC driver should plug in nicely with the Calcite work. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-8593) Integrate Apache Calcite into the SQLHandler
[ https://issues.apache.org/jira/browse/SOLR-8593?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kevin Risden updated SOLR-8593: --- Component/s: Parallel SQL > Integrate Apache Calcite into the SQLHandler > > > Key: SOLR-8593 > URL: https://issues.apache.org/jira/browse/SOLR-8593 > Project: Solr > Issue Type: Improvement > Components: Parallel SQL >Reporter: Joel Bernstein >Assignee: Joel Bernstein > Fix For: 6.5, master (7.0) > > Attachments: SOLR-8593.patch, SOLR-8593.patch, SOLR-8593.patch > > >The Presto SQL Parser was perfect for phase one of the SQLHandler. It was > nicely split off from the larger Presto project and it did everything that > was needed for the initial implementation. > Phase two of the SQL work though will require an optimizer. Here is where > Apache Calcite comes into play. It has a battle tested cost based optimizer > and has been integrated into Apache Drill and Hive. > This work can begin in trunk following the 6.0 release. The final query plans > will continue to be translated to Streaming API objects (TupleStreams), so > continued work on the JDBC driver should plug in nicely with the Calcite work. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-8593) Integrate Apache Calcite into the SQLHandler
[ https://issues.apache.org/jira/browse/SOLR-8593?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kevin Risden updated SOLR-8593: --- Attachment: SOLR-8593.patch For reference here is the latest patch. > Integrate Apache Calcite into the SQLHandler > > > Key: SOLR-8593 > URL: https://issues.apache.org/jira/browse/SOLR-8593 > Project: Solr > Issue Type: Improvement >Reporter: Joel Bernstein >Assignee: Joel Bernstein > Attachments: SOLR-8593.patch, SOLR-8593.patch, SOLR-8593.patch > > >The Presto SQL Parser was perfect for phase one of the SQLHandler. It was > nicely split off from the larger Presto project and it did everything that > was needed for the initial implementation. > Phase two of the SQL work though will require an optimizer. Here is where > Apache Calcite comes into play. It has a battle tested cost based optimizer > and has been integrated into Apache Drill and Hive. > This work can begin in trunk following the 6.0 release. The final query plans > will continue to be translated to Streaming API objects (TupleStreams), so > continued work on the JDBC driver should plug in nicely with the Calcite work. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-10053) TestSolrCloudWithDelegationTokens failures
[ https://issues.apache.org/jira/browse/SOLR-10053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15848673#comment-15848673 ] ASF subversion and git services commented on SOLR-10053: Commit 1a3942aade0a0bc3638d492af4e253bd6a625dc5 in lucene-solr's branch refs/heads/branch_6x from [~ichattopadhyaya] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=1a3942a ] SOLR-10053: Disabling failing delegation token tests due to HADOOP-14044 > TestSolrCloudWithDelegationTokens failures > -- > > Key: SOLR-10053 > URL: https://issues.apache.org/jira/browse/SOLR-10053 > Project: Solr > Issue Type: Test > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Ishan Chattopadhyaya >Assignee: Ishan Chattopadhyaya > Attachments: fail.log, stdout, stdout, stdout > > > The TestSolrCloudWithDelegationTokens tests fail often at Jenkins. I have > been so far unable to reproduce them using the failing seeds. However, > beasting these tests seem to cause failures (once after about 10-12 runs). > Latest Jenkins failure: > https://builds.apache.org/job/Lucene-Solr-NightlyTests-6.4/12/ > It wasn't apparent what caused these failures. To cut down the noise on > Jenkins, I propose that we disable the test with @AwaitsFix (or bad apple) > annotation and continue to debug and fix this test. > WDYT, [~markrmil...@gmail.com]? -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-10053) TestSolrCloudWithDelegationTokens failures
[ https://issues.apache.org/jira/browse/SOLR-10053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15848671#comment-15848671 ] ASF subversion and git services commented on SOLR-10053: Commit 0d52cb9cead290b299b00eb64d43e52c52ccec54 in lucene-solr's branch refs/heads/master from [~ichattopadhyaya] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=0d52cb9 ] SOLR-10053: Disabling failing delegation token tests due to HADOOP-14044 > TestSolrCloudWithDelegationTokens failures > -- > > Key: SOLR-10053 > URL: https://issues.apache.org/jira/browse/SOLR-10053 > Project: Solr > Issue Type: Test > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Ishan Chattopadhyaya >Assignee: Ishan Chattopadhyaya > Attachments: fail.log, stdout, stdout, stdout > > > The TestSolrCloudWithDelegationTokens tests fail often at Jenkins. I have > been so far unable to reproduce them using the failing seeds. However, > beasting these tests seem to cause failures (once after about 10-12 runs). > Latest Jenkins failure: > https://builds.apache.org/job/Lucene-Solr-NightlyTests-6.4/12/ > It wasn't apparent what caused these failures. To cut down the noise on > Jenkins, I propose that we disable the test with @AwaitsFix (or bad apple) > annotation and continue to debug and fix this test. > WDYT, [~markrmil...@gmail.com]? -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-8593) Integrate Apache Calcite into the SQLHandler
[ https://issues.apache.org/jira/browse/SOLR-8593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15848669#comment-15848669 ] Kevin Risden commented on SOLR-8593: I pushed a fix for the testJDBCMethods error above. [~joel.bernstein] - Sounds good let me know when you are happy with your testing. > Integrate Apache Calcite into the SQLHandler > > > Key: SOLR-8593 > URL: https://issues.apache.org/jira/browse/SOLR-8593 > Project: Solr > Issue Type: Improvement >Reporter: Joel Bernstein >Assignee: Joel Bernstein > Attachments: SOLR-8593.patch, SOLR-8593.patch > > >The Presto SQL Parser was perfect for phase one of the SQLHandler. It was > nicely split off from the larger Presto project and it did everything that > was needed for the initial implementation. > Phase two of the SQL work though will require an optimizer. Here is where > Apache Calcite comes into play. It has a battle tested cost based optimizer > and has been integrated into Apache Drill and Hive. > This work can begin in trunk following the 6.0 release. The final query plans > will continue to be translated to Streaming API objects (TupleStreams), so > continued work on the JDBC driver should plug in nicely with the Calcite work. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-10053) TestSolrCloudWithDelegationTokens failures
[ https://issues.apache.org/jira/browse/SOLR-10053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15848659#comment-15848659 ] Ishan Chattopadhyaya commented on SOLR-10053: - Thanks [~hgadre] for looking into it. I'll disable the test for now. > TestSolrCloudWithDelegationTokens failures > -- > > Key: SOLR-10053 > URL: https://issues.apache.org/jira/browse/SOLR-10053 > Project: Solr > Issue Type: Test > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Ishan Chattopadhyaya >Assignee: Ishan Chattopadhyaya > Attachments: fail.log, stdout, stdout, stdout > > > The TestSolrCloudWithDelegationTokens tests fail often at Jenkins. I have > been so far unable to reproduce them using the failing seeds. However, > beasting these tests seem to cause failures (once after about 10-12 runs). > Latest Jenkins failure: > https://builds.apache.org/job/Lucene-Solr-NightlyTests-6.4/12/ > It wasn't apparent what caused these failures. To cut down the noise on > Jenkins, I propose that we disable the test with @AwaitsFix (or bad apple) > annotation and continue to debug and fix this test. > WDYT, [~markrmil...@gmail.com]? -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Assigned] (SOLR-10053) TestSolrCloudWithDelegationTokens failures
[ https://issues.apache.org/jira/browse/SOLR-10053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ishan Chattopadhyaya reassigned SOLR-10053: --- Assignee: Ishan Chattopadhyaya > TestSolrCloudWithDelegationTokens failures > -- > > Key: SOLR-10053 > URL: https://issues.apache.org/jira/browse/SOLR-10053 > Project: Solr > Issue Type: Test > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Ishan Chattopadhyaya >Assignee: Ishan Chattopadhyaya > Attachments: fail.log, stdout, stdout, stdout > > > The TestSolrCloudWithDelegationTokens tests fail often at Jenkins. I have > been so far unable to reproduce them using the failing seeds. However, > beasting these tests seem to cause failures (once after about 10-12 runs). > Latest Jenkins failure: > https://builds.apache.org/job/Lucene-Solr-NightlyTests-6.4/12/ > It wasn't apparent what caused these failures. To cut down the noise on > Jenkins, I propose that we disable the test with @AwaitsFix (or bad apple) > annotation and continue to debug and fix this test. > WDYT, [~markrmil...@gmail.com]? -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-10079) TestInPlaceUpdatesDistrib failure
[ https://issues.apache.org/jira/browse/SOLR-10079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ishan Chattopadhyaya updated SOLR-10079: Attachment: SOLR-10079.patch Seems like there were segment merges during the commit just preceding the assert, and hence the docids changed. Here's a patch switching the merge policy factory for the test from LogDocMergePolicy to NoMergePolicyFactory. The affected seed passes, and about 500 rounds of beasting on my laptop seems to pass. > TestInPlaceUpdatesDistrib failure > - > > Key: SOLR-10079 > URL: https://issues.apache.org/jira/browse/SOLR-10079 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Steve Rowe >Assignee: Ishan Chattopadhyaya > Attachments: SOLR-10079.patch > > > From [https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/18881/], > reproduces for me: > {noformat} > Checking out Revision d8d61ff61d1d798f5e3853ef66bc485d0d403f18 > (refs/remotes/origin/master) > [...] 
>[junit4] 2> NOTE: reproduce with: ant test > -Dtestcase=TestInPlaceUpdatesDistrib -Dtests.method=test > -Dtests.seed=E1BB56269B8215B0 -Dtests.multiplier=3 -Dtests.slow=true > -Dtests.locale=sr-Latn-RS -Dtests.timezone=America/Grand_Turk > -Dtests.asserts=true -Dtests.file.encoding=UTF-8 >[junit4] FAILURE 77.7s J2 | TestInPlaceUpdatesDistrib.test <<< >[junit4]> Throwable #1: java.lang.AssertionError: Earlier: [79, 79, > 79], now: [78, 78, 78] >[junit4]> at > __randomizedtesting.SeedInfo.seed([E1BB56269B8215B0:69EF69FC357E7848]:0) >[junit4]> at > org.apache.solr.update.TestInPlaceUpdatesDistrib.ensureRtgWorksWithPartialUpdatesTest(TestInPlaceUpdatesDistrib.java:425) >[junit4]> at > org.apache.solr.update.TestInPlaceUpdatesDistrib.test(TestInPlaceUpdatesDistrib.java:142) >[junit4]> at > java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) >[junit4]> at > java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) >[junit4]> at > java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) >[junit4]> at > java.base/java.lang.reflect.Method.invoke(Method.java:543) >[junit4]> at > org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985) >[junit4]> at > org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960) >[junit4]> at java.base/java.lang.Thread.run(Thread.java:844) > [...] 
>[junit4] 2> NOTE: test params are: codec=Asserting(Lucene70): > {id_i=PostingsFormat(name=LuceneFixedGap), title_s=FSTOrd50, > id=PostingsFormat(name=Asserting), > id_field_copy_that_does_not_support_in_place_update_s=FSTOrd50}, > docValues:{inplace_updatable_float=DocValuesFormat(name=Asserting), > id_i=DocValuesFormat(name=Direct), _version_=DocValuesFormat(name=Asserting), > title_s=DocValuesFormat(name=Lucene70), id=DocValuesFormat(name=Lucene70), > id_field_copy_that_does_not_support_in_place_update_s=DocValuesFormat(name=Lucene70), > inplace_updatable_int_with_default=DocValuesFormat(name=Asserting), > inplace_updatable_int=DocValuesFormat(name=Direct), > inplace_updatable_float_with_default=DocValuesFormat(name=Direct)}, > maxPointsInLeafNode=1342, maxMBSortInHeap=6.368734895089348, > sim=RandomSimilarity(queryNorm=true): {}, locale=sr-Latn-RS, > timezone=America/Grand_Turk >[junit4] 2> NOTE: Linux 4.4.0-53-generic i386/Oracle Corporation 9-ea > (32-bit)/cpus=12,threads=1,free=107734480,total=518979584 > {noformat} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
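The patch described above swaps the test's merge policy to NoMergePolicyFactory so background segment merges cannot reshuffle internal Lucene docids between commits. As a sketch only (the actual patch wires this through the test's merge-policy system property, not a config file), the equivalent solrconfig.xml setting would look roughly like:

```xml
<!-- Sketch: disable background segment merges so internal Lucene docids
     stay stable across commits. Illustrative only; the real test selects
     the factory via a system property rather than editing solrconfig.xml. -->
<indexConfig>
  <mergePolicyFactory class="org.apache.solr.index.NoMergePolicyFactory"/>
</indexConfig>
```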
[jira] [Commented] (SOLR-8593) Integrate Apache Calcite into the SQLHandler
[ https://issues.apache.org/jira/browse/SOLR-8593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15848649#comment-15848649 ] Joel Bernstein commented on SOLR-8593: -- Ok [~risdenk], it sounds like you've got a good plan for the merge and commit. Why don't you do the actual merge and commit when we're ready. I'll check back in tomorrow with results from the manual testing. I haven't seen any test failures related to the Calcite work yet. > Integrate Apache Calcite into the SQLHandler > > > Key: SOLR-8593 > URL: https://issues.apache.org/jira/browse/SOLR-8593 > Project: Solr > Issue Type: Improvement >Reporter: Joel Bernstein >Assignee: Joel Bernstein > Attachments: SOLR-8593.patch, SOLR-8593.patch > > >The Presto SQL Parser was perfect for phase one of the SQLHandler. It was > nicely split off from the larger Presto project and it did everything that > was needed for the initial implementation. > Phase two of the SQL work though will require an optimizer. Here is where > Apache Calcite comes into play. It has a battle tested cost based optimizer > and has been integrated into Apache Drill and Hive. > This work can begin in trunk following the 6.0 release. The final query plans > will continue to be translated to Streaming API objects (TupleStreams), so > continued work on the JDBC driver should plug in nicely with the Calcite work. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-8593) Integrate Apache Calcite into the SQLHandler
[ https://issues.apache.org/jira/browse/SOLR-8593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15848648#comment-15848648 ] Kevin Risden commented on SOLR-8593: I need to fix this error: {quote} [beaster]> Throwable #1: org.junit.ComparisonFailure: expected:<[127.0.0.1:55994/solr]> but was:<[metadata]> [beaster]> at __randomizedtesting.SeedInfo.seed([987917FE113810C1:3B8CFDE8F2CD36A]:0) [beaster]> at org.junit.Assert.assertEquals(Assert.java:125) [beaster]> at org.junit.Assert.assertEquals(Assert.java:147) [beaster]> at org.apache.solr.client.solrj.io.sql.JdbcTest.testJDBCMethods(JdbcTest.java:507) {quote}
[VOTE] Release Lucene/Solr 6.4.1 RC1
Please vote for release candidate 1 for Lucene/Solr 6.4.1. The artifacts can be downloaded from: https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-6.4.1-RC1-rev72f75b2503fa0aa4f0aff76d439874feb923bb0e/ You can run the smoke tester directly with this command: python3 -u dev-tools/scripts/smokeTestRelease.py \ https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-6.4.1-RC1-rev72f75b2503fa0aa4f0aff76d439874feb923bb0e/ Here's my +1 SUCCESS! [0:34:51.203607]
[jira] [Commented] (SOLR-10038) Spatial Intersect Very Slow For Large Polygon and Large Index
[ https://issues.apache.org/jira/browse/SOLR-10038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15848635#comment-15848635 ] samur araujo commented on SOLR-10038: - David, thank you for the last answer. It is clear now. I observed that for a very large multipolygon (10MB) the query never finishes, but when I split the multipolygon into many polygons and issue a query for each polygon, it is really fast. Does Solr support a very large multipolygon as a query parameter? Does it internally split the multipolygon into polygons? Could you clarify this point? > Spatial Intersect Very Slow For Large Polygon and Large Index > - > > Key: SOLR-10038 > URL: https://issues.apache.org/jira/browse/SOLR-10038 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: spatial >Affects Versions: 6.4 > Environment: Linux Ubuntu + Solr 6.4.0 >Reporter: samur araujo >Assignee: David Smiley > Labels: spatialsearch > > Hi all, I have indexed the entire geonames points (lat/long) with JTS > enabled, and I am trying to return all points (geonameids) within a certain > polygon (e.g. the Netherlands country polygon). This query takes 3 minutes to > return only 10,000 points. I am using only Solr intersect: no facets, no > extra filtering. > Is there any configuration that could speed up such a query to less than 300 > ms?
[jira] [Commented] (SOLR-8593) Integrate Apache Calcite into the SQLHandler
[ https://issues.apache.org/jira/browse/SOLR-8593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15848628#comment-15848628 ] Kevin Risden commented on SOLR-8593: We shouldn't see the issue that [~julianhyde] raised since we are only using avatica-core. I want to take a look at the testDriverMetadata test where I see it fail once in a while due to the order of the localhost and metadata schemas being returned. I don't have a preference on who does the merge/commit to master. We should be able to do a squash merge of the branch back to master. I updated the branch once more to get the latest master changes.
[jira] [Commented] (SOLR-10085) Streaming Expressions result-set fields not in order
[ https://issues.apache.org/jira/browse/SOLR-10085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15848623#comment-15848623 ] Kevin Risden commented on SOLR-10085: - One thought I had but haven't tested: does the fl parameter allow specifying the order of fields? Is it possible that fl would work with the streaming expression? > Streaming Expressions result-set fields not in order > > > Key: SOLR-10085 > URL: https://issues.apache.org/jira/browse/SOLR-10085 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: faceting >Affects Versions: 6.3 > Environment: Windows 8.1, Java 8 >Reporter: Yeo Zheng Lin > Labels: json, streaming > > I'm trying out the Streaming Expressions in Solr 6.3.0. > Currently, I'm facing the issue of not being able to get the fields in the > result-set to be displayed in the same order as what I put in the query. > For example, when I execute this query: > http://localhost:8983/solr/collection1/stream?expr=facet(collection1, > q="*:*", > buckets="id,cost,quantity", > bucketSorts="cost desc", > bucketSizeLimit=100, > sum(cost), > sum(quantity), > min(cost), > min(quantity), > max(cost), > max(quantity), > avg(cost), > avg(quantity), > count(*))=true > I get the following in the result-set. >{ > "result-set":{"docs":[ > { > "min(quantity)":12.21, > "avg(quantity)":12.21, > "sum(cost)":256.33, > "max(cost)":256.33, > "count(*)":1, > "min(cost)":256.33, > "cost":256.33, > "avg(cost)":256.33, > "quantity":12.21, > "id":"01", > "sum(quantity)":12.21, > "max(quantity)":12.21}, > { > "EOF":true, > "RESPONSE_TIME":359}]}} > The fields are displayed randomly all over the place, instead of the order > sum, min, max, avg as in the query. This may cause confusion to users who look > at the output.
Possible improvement: display the fields in the result-set > in the same order as the query.
[jira] [Commented] (SOLR-10085) Streaming Expressions result-set fields not in order
[ https://issues.apache.org/jira/browse/SOLR-10085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15848622#comment-15848622 ] Kevin Risden commented on SOLR-10085: - I tried to go down that road of LinkedHashMap for the JDBC driver, but it just seemed not right. You can replace the HashMap implementation behind the scenes, but that doesn't guarantee that fields will be inserted in the right order. For the JDBC driver I ended up returning a list of fields and the order they should be in from the select. They are presented correctly on the JDBC side when requested by name or position.
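Kevin's approach above, carrying the selected field order out-of-band instead of relying on map iteration order, can be illustrated with the minimal sketch below (class and field names are hypothetical, not the actual driver code):

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class OrderedColumns {
    public static void main(String[] args) {
        // Tuple values arrive in a plain HashMap, which has no defined order.
        Map<String, Object> tuple = new HashMap<>();
        tuple.put("avg(cost)", 256.33);
        tuple.put("sum(cost)", 256.33);
        tuple.put("id", "01");

        // A separate list records the order the SELECT asked for.
        List<String> selectOrder = Arrays.asList("id", "sum(cost)", "avg(cost)");

        // Walking the list restores the requested column order, no matter
        // how the map happens to store its entries.
        StringBuilder row = new StringBuilder();
        for (String col : selectOrder) {
            if (row.length() > 0) row.append(", ");
            row.append(col).append('=').append(tuple.get(col));
        }
        System.out.println(row);  // id=01, sum(cost)=256.33, avg(cost)=256.33
    }
}
```

This is why the JDBC side can present columns correctly by name or position even though the underlying tuple map is unordered.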
[jira] [Resolved] (SOLR-10054) Core swapping doesn't work with new metrics changes in place
[ https://issues.apache.org/jira/browse/SOLR-10054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrzej Bialecki resolved SOLR-10054. -- Resolution: Fixed > Core swapping doesn't work with new metrics changes in place > > > Key: SOLR-10054 > URL: https://issues.apache.org/jira/browse/SOLR-10054 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Affects Versions: master (7.0), 6.4.0 >Reporter: Shawn Heisey >Assignee: Andrzej Bialecki > Fix For: 6.4.1, master (7.0) > > Attachments: SOLR-10054.patch, SOLR-10054.patch, solr64coreswap1.png, > solr64coreswap2.png, solr64coreswap3.png > > > The new 6.4.0 version includes some significant changes having to do with > metrics. These changes have broken core swapping. Will attach some > screenshots. > For the screenshots that I will attach, I started Solr directly from the > 6.4.0 download on Windows 7 (bin\solr start). Then I created a "foo" core > and a "bar" core, each from a different configset, using the bin\solr command. > * Screenshot 1: you can see the two cores in CoreAdmin. > * Screenshot 2: Attempting to swap the cores, an error message appears about > a metric already existing for the ping handler. > * Screenshot 3: Clicking somewhere else and then back to CoreAdmin shows > that both cores have the same name -- bar. > * If Solr is stopped and then started back up, the admin UI looks like > screenshot 1 again -- the change that caused two cores with the same name > only took place within the running Solr and did not update core.properties > files. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-10067) The Nightly test HdfsBasicDistributedZkTest appears to be too fragile.
[ https://issues.apache.org/jira/browse/SOLR-10067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15848615#comment-15848615 ] ASF subversion and git services commented on SOLR-10067: Commit bbc455de195c83d9f807980b510fa46018f33b1b in lucene-solr's branch refs/heads/master from markrmiller [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=bbc455d ] SOLR-10067: use one datanode for this test to reduce resource usage a bit. > The Nightly test HdfsBasicDistributedZkTest appears to be too fragile. > -- > > Key: SOLR-10067 > URL: https://issues.apache.org/jira/browse/SOLR-10067 > Project: Solr > Issue Type: Test > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Mark Miller >Assignee: Mark Miller > > HdfsBasicDistributedZkTest 50.00% screwy 30.00 523.5 @Nightly -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-10054) Core swapping doesn't work with new metrics changes in place
[ https://issues.apache.org/jira/browse/SOLR-10054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15848616#comment-15848616 ] ASF subversion and git services commented on SOLR-10054: Commit bef725aeefea0ba34bdf9c74b8e67376377e8983 in lucene-solr's branch refs/heads/master from [~ab] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=bef725a ] SOLR-10054: Core swapping doesn't work with new metrics changes in place.
[JENKINS] Lucene-Solr-Tests-6.4 - Build # 16 - Unstable
Build: https://builds.apache.org/job/Lucene-Solr-Tests-6.4/16/ 1 tests failed. FAILED: org.apache.solr.handler.admin.MBeansHandlerTest.testDiff Error Message: expected: but was: Stack Trace: org.junit.ComparisonFailure: expected: but was: at __randomizedtesting.SeedInfo.seed([C8742392D49774FA:D62E709C4214C9A]:0) at org.junit.Assert.assertEquals(Assert.java:125) at org.junit.Assert.assertEquals(Assert.java:147) at org.apache.solr.handler.admin.MBeansHandlerTest.testDiff(MBeansHandlerTest.java:63) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at java.lang.Thread.run(Thread.java:745) Build Log: [...truncated 12011 lines...] [junit4] Suite: org.apache.solr.handler.admin.MBeansHandlerTest [junit4] 2> Creating dataDir: /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-6.4/solr/build/solr-core/test/J2/temp/solr.handler.admin.MBeansHandlerTest_C8742392D49774FA-001/init-core-data-001 [junit4] 2> 1273743 INFO
[jira] [Comment Edited] (SOLR-10085) Streaming Expressions result-set fields not in order
[ https://issues.apache.org/jira/browse/SOLR-10085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15848576#comment-15848576 ] Joel Bernstein edited comment on SOLR-10085 at 2/1/17 4:23 PM: --- I'm wondering if we can enforce field ordering by using a LinkedHashMap in the Tuples returned by the SelectStream. But we would also probably need to use a LinkedHashMap in the SolrStream as well to maintain the order on parallel requests. I believe [~risdenk] did some work on this at one point for the JDBC driver, but it appears that it wasn't committed. was (Author: joel.bernstein): I'm wondering if we can enforce field ordering by using a LinkedHashMap in the Tuples returned by the SelectStream. But we would also probably need to use a LinkedHashMap in the SolrStream as well to maintain the order on parallel requests. I believe [~risdenk] did some work on this at one point for JDBC drivers, but it appears that it wasn't committed.
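The LinkedHashMap idea discussed in this thread hinges on its insertion-order iteration guarantee. A minimal stand-alone illustration (not Solr code):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class InsertionOrderDemo {
    public static void main(String[] args) {
        // LinkedHashMap iterates in insertion order, so a tuple built on it
        // would serialize its fields in the order they were added.
        Map<String, Double> tuple = new LinkedHashMap<>();
        tuple.put("sum(cost)", 256.33);
        tuple.put("min(cost)", 256.33);
        tuple.put("max(cost)", 256.33);
        tuple.put("avg(cost)", 256.33);
        System.out.println(String.join(", ", tuple.keySet()));
        // -> sum(cost), min(cost), max(cost), avg(cost)
        // A plain HashMap makes no ordering guarantee, which is why the
        // result-set fields come back in an arbitrary order today.
    }
}
```

As Kevin notes, though, swapping the map implementation only helps if every stage inserts fields in the desired order in the first place.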
[jira] [Commented] (SOLR-10038) Spatial Intersect Very Slow For Large Polygon and Large Index
[ https://issues.apache.org/jira/browse/SOLR-10038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15848578#comment-15848578 ] David Smiley commented on SOLR-10038: - You're setting the levels explicitly, but I don't recommend doing that -- set it in terms of the largest distance you're willing to accept as error (maxDistErr). Both prefixTrees will then determine which level to use to satisfy that requirement.
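David's advice above amounts to configuring the spatial field in terms of acceptable error rather than fixed tree levels. A rough schema.xml sketch, with illustrative values that are not from the reporter's setup:

```xml
<!-- Sketch: let Solr derive the prefix-tree depth from maxDistErr
     (the largest acceptable error, in the field's distanceUnits)
     instead of setting "levels" by hand. Values are illustrative. -->
<fieldType name="location_rpt"
           class="solr.SpatialRecursivePrefixTreeFieldType"
           spatialContextFactory="org.locationtech.spatial4j.context.jts.JtsSpatialContextFactory"
           geo="true"
           maxDistErr="0.001"
           distErrPct="0.025"
           distanceUnits="kilometers"/>
```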
[jira] [Commented] (SOLR-10085) Streaming Expressions result-set fields not in order
[ https://issues.apache.org/jira/browse/SOLR-10085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15848576#comment-15848576 ] Joel Bernstein commented on SOLR-10085: --- I'm wondering if we can enforce field ordering by using a LinkedHashMap in the Tuples returned by the SelectStream. But we would also probably need to use a LinkedHashMap in the SolrStream as well to maintain the order on parallel requests. I believe [~risdenk] did some work on this at one point for JDBC drivers, but it appears that it wasn't committed.
[jira] [Commented] (SOLR-10085) Streaming Expressions result-set fields not in order
[ https://issues.apache.org/jira/browse/SOLR-10085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15848571#comment-15848571 ] Dennis Gove commented on SOLR-10085: There's no guarantee of field order in the tuple. Fields can be added to the tuple at any point in the stream (consider a select after a rollup after a facet after a search). There's no way to know that the user wants a particular order of the fields in the tuple. Beyond that, order of fields in json is not guaranteed. So while the stream handler could try to re-order the fields in the tuple, there is no guarantee that the serializer (or the client's deserializer) will honor that. bq. From RFC 7159 -The JavaScript Object Notation (JSON) Data Interchange Format: An object is an unordered collection of zero or more name/value pairs, where a name is a string and a value is a string, number, boolean, null, object, or array. > Streaming Expressions result-set fields not in order > > > Key: SOLR-10085 > URL: https://issues.apache.org/jira/browse/SOLR-10085 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: faceting >Affects Versions: 6.3 > Environment: Windows 8.1, Java 8 >Reporter: Yeo Zheng Lin > Labels: json, streaming > > I'm trying out the Streaming Expressions in Solr 6.3.0. > Currently, I'm facing the issue of not being able to get the fields in the > result-set to be displayed in the same order as what I put in the query. > For example, when I execute this query: > http://localhost:8983/solr/collection1/stream?expr=facet(collection1, > q="*:*", > buckets="id,cost,quantity", > bucketSorts="cost desc", > bucketSizeLimit=100, > sum(cost), > sum(quantity), > min(cost), > min(quantity), > max(cost), > max(quantity), > avg(cost), > avg(quantity), > count(*))=true > I get the following in the result-set. 
>{ > "result-set":{"docs":[ > { > "min(quantity)":12.21, > "avg(quantity)":12.21, > "sum(cost)":256.33, > "max(cost)":256.33, > "count(*)":1, > "min(cost)":256.33, > "cost":256.33, > "avg(cost)":256.33, > "quantity":12.21, > "id":"01", > "sum(quantity)":12.21, > "max(quantity)":12.21}, > { > "EOF":true, > "RESPONSE_TIME":359}]}} > The fields are displayed randomly all over the place, instead of the order > sum, min, max, avg as in the query. This may cause confusion to users who look > at the output. A possible improvement would be to display the fields in the result-set > in the same order as in the query. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
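Dennis's point that JSON field order carries no meaning can be demonstrated with any standard parser. This is a plain-Python illustration (not Solr code): two serializations of the same tuple, differing only in field order, parse to equal objects, so a client should read fields by name rather than by position.

```python
import json

# Per RFC 7159 a JSON object is an unordered collection of name/value
# pairs, so these two strings denote the same object even though the
# textual field order differs.
doc_a = '{"sum(cost)": 256.33, "min(cost)": 256.33, "id": "01"}'
doc_b = '{"id": "01", "min(cost)": 256.33, "sum(cost)": 256.33}'

a = json.loads(doc_a)
b = json.loads(doc_b)

assert a == b          # equal despite different textual order

# Accessing a field by name is order-independent.
print(a["sum(cost)"])
```

This is why re-ordering fields in the stream handler would give no guarantee: the serializer or the client's deserializer is free to discard the order.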
[jira] [Commented] (SOLR-10035) Admin UI cannot find dataimport handlers
[ https://issues.apache.org/jira/browse/SOLR-10035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15848562#comment-15848562 ] ASF subversion and git services commented on SOLR-10035: Commit f51a38fd4cd5f98da3a26df55970d662227b633a in lucene-solr's branch refs/heads/branch_6x from [~ab] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=f51a38f ] SOLR-10035 Admin UI cannot find dataimport handlers. > Admin UI cannot find dataimport handlers > > > Key: SOLR-10035 > URL: https://issues.apache.org/jira/browse/SOLR-10035 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: UI >Affects Versions: 6.4 >Reporter: Shawn Heisey >Assignee: Andrzej Bialecki > Labels: regression > Fix For: 6.4.1, master (7.0) > > Attachments: screenshot-1.png, SOLR-10035.patch, SOLR-10035.patch > > > The 6.4.0 version of Solr has a problem with the Dataimport tab in the admin > UI. It will say "Sorry, no dataimport-handler defined" when trying to access > that tab. > The root cause of the problem is a change in the /admin/mbeans handler, by > SOLR-9947. The section of the output where defined dataimport handlers are > listed was changed from QUERYHANDLER to QUERY. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
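Until the fix is deployed, a client parsing /admin/mbeans output can tolerate both the pre-6.4 and 6.4 layouts by checking both category keys. This sketch is illustrative only — the helper name and the trimmed response payloads are hypothetical, not code from the patch:

```python
# SOLR-9947 renamed the mbeans category from QUERYHANDLER to QUERY in
# 6.4, which is what broke the Dataimport tab. A version-tolerant
# client can look under both names. Function and payloads are
# hypothetical illustrations.
def find_dataimport_handlers(mbeans):
    category = mbeans.get("QUERY") or mbeans.get("QUERYHANDLER") or {}
    return sorted(
        name for name, info in category.items()
        if info.get("class", "").endswith("DataImportHandler")
    )

# Trimmed-down shapes of the 6.3-style and 6.4-style responses.
pre_64 = {"QUERYHANDLER": {"/dataimport": {
    "class": "org.apache.solr.handler.dataimport.DataImportHandler"}}}
post_64 = {"QUERY": {"/dataimport": {
    "class": "org.apache.solr.handler.dataimport.DataImportHandler"}}}

print(find_dataimport_handlers(pre_64))   # ['/dataimport']
print(find_dataimport_handlers(post_64))  # ['/dataimport']
```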
[jira] [Commented] (SOLR-10054) Core swapping doesn't work with new metrics changes in place
[ https://issues.apache.org/jira/browse/SOLR-10054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15848561#comment-15848561 ] ASF subversion and git services commented on SOLR-10054: Commit 8299378eab3282e4dcb14b92645a4f1d214f13cc in lucene-solr's branch refs/heads/branch_6x from [~ab] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=8299378 ] SOLR-10054: Core swapping doesn't work with new metrics changes in place. > Core swapping doesn't work with new metrics changes in place > > > Key: SOLR-10054 > URL: https://issues.apache.org/jira/browse/SOLR-10054 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Affects Versions: master (7.0), 6.4.0 >Reporter: Shawn Heisey >Assignee: Andrzej Bialecki > Fix For: 6.4.1, master (7.0) > > Attachments: SOLR-10054.patch, SOLR-10054.patch, solr64coreswap1.png, > solr64coreswap2.png, solr64coreswap3.png > > > The new 6.4.0 version includes some significant changes having to do with > metrics. These changes have broken core swapping. Will attach some > screenshots. > For the screenshots that I will attach, I started Solr directly from the > 6.4.0 download on Windows 7 (bin\solr start). Then I created a "foo" core > and a "bar" core, each from a different configset, using the bin\solr command. > * Screenshot 1: you can see the two cores in CoreAdmin. > * Screenshot 2: Attempting to swap the cores, an error message appears about > a metric already existing for the ping handler. > * Screenshot 3: Clicking somewhere else and then back to CoreAdmin shows > that both cores have the same name -- bar. > * If Solr is stopped and then started back up, the admin UI looks like > screenshot 1 again -- the change that caused two cores with the same name > only took place within the running Solr and did not update core.properties > files. 
-- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Created] (SOLR-10085) Streaming Expressions result-set fields not in order
Yeo Zheng Lin created SOLR-10085: Summary: Streaming Expressions result-set fields not in order Key: SOLR-10085 URL: https://issues.apache.org/jira/browse/SOLR-10085 Project: Solr Issue Type: Improvement Security Level: Public (Default Security Level. Issues are Public) Components: faceting Affects Versions: 6.3 Environment: Windows 8.1, Java 8 Reporter: Yeo Zheng Lin I'm trying out the Streaming Expressions in Solr 6.3.0. Currently, I'm facing the issue of not being able to get the fields in the result-set to be displayed in the same order as what I put in the query. For example, when I execute this query: http://localhost:8983/solr/collection1/stream?expr=facet(collection1, q="*:*", buckets="id,cost,quantity", bucketSorts="cost desc", bucketSizeLimit=100, sum(cost), sum(quantity), min(cost), min(quantity), max(cost), max(quantity), avg(cost), avg(quantity), count(*))=true I get the following in the result-set. { "result-set":{"docs":[ { "min(quantity)":12.21, "avg(quantity)":12.21, "sum(cost)":256.33, "max(cost)":256.33, "count(*)":1, "min(cost)":256.33, "cost":256.33, "avg(cost)":256.33, "quantity":12.21, "id":"01", "sum(quantity)":12.21, "max(quantity)":12.21}, { "EOF":true, "RESPONSE_TIME":359}]}} The fields are displayed randomly all over the place, instead of the order sum, min, max, avg as in the query. This may cause confusion to users who look at the output. A possible improvement would be to display the fields in the result-set in the same order as in the query. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 3811 - Still unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/3811/ Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseSerialGC 7 tests failed. FAILED: org.apache.solr.cloud.CustomCollectionTest.testCustomCollectionsAPI Error Message: Could not find collection : implicitcoll Stack Trace: org.apache.solr.common.SolrException: Could not find collection : implicitcoll at __randomizedtesting.SeedInfo.seed([AEBD1B8D455DE6D3:C45C95E678C750AB]:0) at org.apache.solr.common.cloud.ClusterState.getCollection(ClusterState.java:194) at org.apache.solr.cloud.SolrCloudTestCase.getCollectionState(SolrCloudTestCase.java:245) at org.apache.solr.cloud.CustomCollectionTest.testCustomCollectionsAPI(CustomCollectionTest.java:68) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.lang.Thread.run(Thread.java:745) FAILED: junit.framework.TestSuite.org.apache.solr.cloud.MigrateRouteKeyTest Error Message: java.io.IOException: Couldn't instantiate
[jira] [Commented] (SOLR-10084) Exception on Size Limit for Large Polygon
[ https://issues.apache.org/jira/browse/SOLR-10084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15848555#comment-15848555 ] Erick Erickson commented on SOLR-10084: --- Please raise issues like this on the user's list, many more people will see it and you'll likely get help much more quickly. If it's determined that this is a new problem with Solr code, _then_ you should raise a JIRA. In this case the limit you're seeing is likely configurable at the _container_ level, i.e. Jetty, and is not specific to Solr. > Exception on Size Limit for Large Polygon > - > > Key: SOLR-10084 > URL: https://issues.apache.org/jira/browse/SOLR-10084 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: spatial > Environment: ubuntu >Reporter: samur araujo > Labels: spatial, spatialsearch > Fix For: 6.4.1 > > > Large polygons are truncated by Solr (or Jetty). > Note that the Greece polygon is 11 MB in EWKT format. > Sending this polygon in the Solr query leads to an error because the polygon is > truncated. > I think this should be documented, or the default limit for the parameter value > should be configured with a higher limit. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Comment Edited] (SOLR-8593) Integrate Apache Calcite into the SQLHandler
[ https://issues.apache.org/jira/browse/SOLR-8593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15848522#comment-15848522 ] Joel Bernstein edited comment on SOLR-8593 at 2/1/17 3:36 PM: -- I pulled the most recent work on this branch and precommit is passing. I'll be doing manual testing today and also running the full test suite. It seems like we are getting very close to committing this. I haven't seen the protobuf issue that [~julianhyde] mentioned above. But I'll keep testing and see if anything pops up. [~risdenk], any preference on who does the merge/commit to master on this? was (Author: joel.bernstein): I pulled the most recent work on this branch and precommit is passing. I'll be doing manual testing today and also running the full test suite. It seems like we are getting very close to committing this. I haven't seen the protobuf issue that [~julianhyde] mentioned above. But I'll keep testing and see if anything pops up. [~risdenk], any preference on who does the commit on this? > Integrate Apache Calcite into the SQLHandler > > > Key: SOLR-8593 > URL: https://issues.apache.org/jira/browse/SOLR-8593 > Project: Solr > Issue Type: Improvement >Reporter: Joel Bernstein >Assignee: Joel Bernstein > Attachments: SOLR-8593.patch, SOLR-8593.patch > > >The Presto SQL Parser was perfect for phase one of the SQLHandler. It was > nicely split off from the larger Presto project and it did everything that > was needed for the initial implementation. > Phase two of the SQL work though will require an optimizer. Here is where > Apache Calcite comes into play. It has a battle tested cost based optimizer > and has been integrated into Apache Drill and Hive. > This work can begin in trunk following the 6.0 release. The final query plans > will continue to be translated to Streaming API objects (TupleStreams), so > continued work on the JDBC driver should plug in nicely with the Calcite work. 
-- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Comment Edited] (SOLR-8593) Integrate Apache Calcite into the SQLHandler
[ https://issues.apache.org/jira/browse/SOLR-8593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15848522#comment-15848522 ] Joel Bernstein edited comment on SOLR-8593 at 2/1/17 3:35 PM: -- I pulled the most recent work on this branch and precommit is passing. I'll be doing manual testing today and also running the full test suite. It seems like we are getting very close to committing this. I haven't seen the protobuf issue that [~julianhyde] mentioned above. But I'll keep testing and see if anything pops up. [~risdenk], any preference on who does the commit on this? was (Author: joel.bernstein): I pulled the most recent work on this branch and precommit is passing. I'll be doing manual testing today and also running the full test suite. It seems like we getting very close to committing this. I haven't seen the issue that [~julianhyde] mentioned above. But I'll keep testing and see if anything pops up. [~risdenk], any preference on who does the commit on this? > Integrate Apache Calcite into the SQLHandler > > > Key: SOLR-8593 > URL: https://issues.apache.org/jira/browse/SOLR-8593 > Project: Solr > Issue Type: Improvement >Reporter: Joel Bernstein >Assignee: Joel Bernstein > Attachments: SOLR-8593.patch, SOLR-8593.patch > > >The Presto SQL Parser was perfect for phase one of the SQLHandler. It was > nicely split off from the larger Presto project and it did everything that > was needed for the initial implementation. > Phase two of the SQL work though will require an optimizer. Here is where > Apache Calcite comes into play. It has a battle tested cost based optimizer > and has been integrated into Apache Drill and Hive. > This work can begin in trunk following the 6.0 release. The final query plans > will continue to be translated to Streaming API objects (TupleStreams), so > continued work on the JDBC driver should plug in nicely with the Calcite work. 
-- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-8593) Integrate Apache Calcite into the SQLHandler
[ https://issues.apache.org/jira/browse/SOLR-8593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15848522#comment-15848522 ] Joel Bernstein commented on SOLR-8593: -- I pulled the most recent work on this branch and precommit is passing. I'll be doing manual testing today and also running the full test suite. It seems like we are getting very close to committing this. I haven't seen the issue that [~julianhyde] mentioned above. But I'll keep testing and see if anything pops up. [~risdenk], any preference on who does the commit on this? > Integrate Apache Calcite into the SQLHandler > > > Key: SOLR-8593 > URL: https://issues.apache.org/jira/browse/SOLR-8593 > Project: Solr > Issue Type: Improvement >Reporter: Joel Bernstein >Assignee: Joel Bernstein > Attachments: SOLR-8593.patch, SOLR-8593.patch > > >The Presto SQL Parser was perfect for phase one of the SQLHandler. It was > nicely split off from the larger Presto project and it did everything that > was needed for the initial implementation. > Phase two of the SQL work though will require an optimizer. Here is where > Apache Calcite comes into play. It has a battle tested cost based optimizer > and has been integrated into Apache Drill and Hive. > This work can begin in trunk following the 6.0 release. The final query plans > will continue to be translated to Streaming API objects (TupleStreams), so > continued work on the JDBC driver should plug in nicely with the Calcite work. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
Re: VOTE: Apache Solr Ref Guide for 6.4
+1. I reviewed the changes to the Streaming Expressions docs. They look good. Joel Bernstein http://joelsolr.blogspot.com/ On Tue, Jan 31, 2017 at 11:40 PM, David Smiley wrote: > +1 from me. I reviewed the Highlighting section specifically and found a > small error (fixed in Confluence just now) but not enough to warrant a > respin IMO. > > On Tue, Jan 31, 2017 at 10:44 AM Cassandra Targett > wrote: > >> Please vote to release the Apache Solr Ref Guide for Solr 6.4. >> >> Artifacts can be found at: >> https://dist.apache.org/repos/dist/dev/lucene/solr/ref-guide/apache-solr-ref-guide-6.4-RC0/ >> >> $ cat apache-solr-ref-guide-6.4-RC0/apache-solr-ref-guide-6.4.pdf.sha1 >> 2e5f27c1aae36fde5717dd01a4495c5e299c9407 apache-solr-ref-guide-6.4.pdf >> >> I'm not a huge fan of releasing with issues found at the last minute >> such as the one Erick E filed last night (SOLR-10061, about issues >> with the CoreAdmin API docs), but in this case I have a bunch of other >> stuff to do this week & next at the day job, I don't know the >> CoreAdmin API that well, and when SOLR-8029 (v2 API) is backported to >> 6.x, we will likely revamp ALL of the API pages, including that one. >> >> So, that said, here's +1 from me. >> >> Cassandra >> >> - >> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org >> For additional commands, e-mail: dev-h...@lucene.apache.org >> >> -- > Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker > LinkedIn: http://linkedin.com/in/davidwsmiley | Book: http://www.solrenterprisesearchserver.com >
[jira] [Commented] (SOLR-10032) Create report to assess Solr test quality at a commit point.
[ https://issues.apache.org/jira/browse/SOLR-10032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15848458#comment-15848458 ] Mark Miller commented on SOLR-10032: In some cases, because the machine I have been using has no swap space, RAM usage could be an issue in the first report. I'll be adjusting and reducing load in the next run. > Create report to assess Solr test quality at a commit point. > > > Key: SOLR-10032 > URL: https://issues.apache.org/jira/browse/SOLR-10032 > Project: Solr > Issue Type: Task > Security Level: Public(Default Security Level. Issues are Public) > Components: Tests >Reporter: Mark Miller >Assignee: Mark Miller > Attachments: Lucene-Solr Master Test Beast Results > 01-24-2017-9899cbd031dc3fc37a384b1f9e2b379e90a9a3a6 Level Medium- Running 30 > iterations, 12 at a time .pdf > > > We have many Jenkins instances blasting tests, some official, some policeman, > I and others have or had their own, and the email trail proves the power of > the Jenkins cluster to find test fails. > However, I still have a very hard time with some basic questions: > what tests are flakey right now? which test fails actually affect devs most? > did I break it? was that test already flakey? is that test still flakey? what > are our worst tests right now? is that test getting better or worse? > We really need a way to see exactly what tests are the problem, not because > of OS or environmental issues, but more basic test quality issues. Which > tests are flakey and how flakey are they at any point in time. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-10084) Exception on Size Limit for Large Polygon
[ https://issues.apache.org/jira/browse/SOLR-10084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15848446#comment-15848446 ] samur araujo commented on SOLR-10084: - {'error'=>{'metadata'=>['error-class','org.apache.solr.common.SolrException','root-error-class','org.apache.solr.common.SolrException'],'msg'=>'application/x-www-form-urlencoded content length (2430705 bytes) exceeds upload limit of 2048 KB','code'=>400}} > Exception on Size Limit for Large Polygon > - > > Key: SOLR-10084 > URL: https://issues.apache.org/jira/browse/SOLR-10084 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: spatial > Environment: ubuntu >Reporter: samur araujo > Labels: spatial, spatialsearch > Fix For: 6.4.1 > > > Large polygons are truncated by Solr (or Jetty). > Note that the Greece polygon is 11 MB in EWKT format. > Sending this polygon in the Solr query leads to an error because the polygon is > truncated. > I think this should be documented, or the default limit for the parameter value > should be configured with a higher limit. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
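The "upload limit of 2048 KB" in the error above matches Solr's default form-data parsing limit in solrconfig.xml, so raising it is a configuration change rather than a code change. A sketch along these lines — the values shown are illustrative, and the attribute names should be verified against the stock solrconfig.xml for the version in use:

```xml
<!-- solrconfig.xml: raise the POST form-data limit so that a very
     large parameter value (e.g. an 11 MB polygon in EWKT) is not
     rejected. Values are illustrative, not recommendations. -->
<requestDispatcher>
  <requestParsers enableRemoteStreaming="false"
                  multipartUploadLimitInKB="20480"
                  formdataUploadLimitInKB="20480"/>
</requestDispatcher>
```

This is consistent with Erick's comment that the limit is not hard-coded in Solr's spatial code; if the request instead hits Jetty's own form-content cap, that is a separate container-level setting.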
[jira] [Created] (SOLR-10084) Exception on Size Limit for Large Polygon
samur araujo created SOLR-10084: --- Summary: Exception on Size Limit for Large Polygon Key: SOLR-10084 URL: https://issues.apache.org/jira/browse/SOLR-10084 Project: Solr Issue Type: Bug Security Level: Public (Default Security Level. Issues are Public) Components: spatial Environment: ubuntu Reporter: samur araujo Fix For: 6.4.1 Large polygons are truncated by Solr (or Jetty). Note that the Greece polygon is 11 MB in EWKT format. Sending this polygon in the Solr query leads to an error because the polygon is truncated. I think this should be documented, or the default limit for the parameter value should be configured with a higher limit. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (LUCENE-7638) Optimize graph query produced by QueryBuilder
[ https://issues.apache.org/jira/browse/LUCENE-7638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jim Ferenczi updated LUCENE-7638: - Attachment: LUCENE-7638.patch I pushed a new patch that changes how we build boolean graph queries with multi-term synonyms. It first finds the articulation points of the graph and builds a boolean query for each point. The articulation points (or cut vertices) are computed using the algorithm described in: https://en.wikipedia.org/wiki/Biconnected_component This means that each time we find a state where side paths of different lengths start, we generate all paths that start at this state and end at the next articulation point. If {quote}QueryBuilder#autoGenerateMultiTermSynonymsPhraseQuery{quote} is set to true, a phrase query is generated for each path, otherwise a boolean query. [~mattweber] [~mikemccand] can you take a look? > Optimize graph query produced by QueryBuilder > - > > Key: LUCENE-7638 > URL: https://issues.apache.org/jira/browse/LUCENE-7638 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Jim Ferenczi > Attachments: LUCENE-7638.patch, LUCENE-7638.patch > > > The QueryBuilder creates a graph query when the underlying TokenStream > contains tokens with PositionLengthAttribute greater than 1. > These TokenStreams are in fact graphs (lattices, to be more precise) where > synonyms can span multiple terms. > Currently the graph query is built by visiting all the paths of the graph > TokenStream. For instance if you have a synonym like "ny, new york" and you > search for "new york city", the query builder would produce two paths: > "new york city", "ny city" > This can quickly explode when the number of multi-term synonyms increases. > The query "ny ny" for instance would produce 4 paths and so on. > For boolean queries with should or must clauses it should be more efficient > to build a boolean query that merges all the intersections in the graph.
So > instead of "new york city", "ny city" we could produce: > "+((+new +york) ny) +city" > The attached patch is a proposal to do that instead of the all-paths solution. > The patch transforms multi-term synonyms into a graph query for each > intersection in the graph. This is not done in this patch but we could also > create a specialized query that gives equivalent scores to multi-term > synonyms like the SynonymQuery does for single-term synonyms. > For phrase queries this patch does not change the current behavior but we could > also use the new method to create an optimized graph SpanQuery. > [~mattweber] I think this patch could optimize a lot of cases where multiple > multi-term synonyms are present in a single request. Could you take a look? -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
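The articulation-point computation the patch builds on (per the Wikipedia link above) is the classic DFS low-link algorithm. Here is a generic sketch — it assumes an undirected graph with vertices numbered 0..n-1, given as an adjacency map, and does not reproduce the token-graph specifics of the Lucene patch:

```python
# Hopcroft-Tarjan articulation points: a vertex u is a cut vertex if
# some DFS child's subtree has no back edge reaching above u.
def articulation_points(adj):
    n = len(adj)
    disc = [-1] * n      # DFS discovery time of each vertex
    low = [0] * n        # lowest discovery time reachable via back edges
    cut = set()
    timer = [0]

    def dfs(u, parent):
        disc[u] = low[u] = timer[0]
        timer[0] += 1
        children = 0
        for v in adj[u]:
            if v == parent:
                continue
            if disc[v] == -1:
                children += 1
                dfs(v, u)
                low[u] = min(low[u], low[v])
                # u separates v's subtree from the rest of the graph
                if parent != -1 and low[v] >= disc[u]:
                    cut.add(u)
            else:
                low[u] = min(low[u], disc[v])
        # the DFS root is a cut vertex iff it has 2+ children
        if parent == -1 and children > 1:
            cut.add(u)

    for s in range(n):
        if disc[s] == -1:
            dfs(s, -1)
    return cut

# Token graph for "new york city" with synonym "ny" spanning "new york":
# states 0-(new)-1-(york)-2-(city)-3, plus edge 0-2 for "ny".
graph = {0: [1, 2], 1: [0, 2], 2: [1, 0, 3], 3: [2]}
print(articulation_points(graph))  # → {2}
```

In this small example state 2 is the only articulation point, which matches the intuition in the issue: the side paths "new york" and "ny" both end there, so queries are composed per articulation point rather than per full path.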
[jira] [Commented] (SOLR-10070) The test HdfsDirectoryFactoryTest appears to be unreliable.
[ https://issues.apache.org/jira/browse/SOLR-10070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15848430#comment-15848430 ] Mark Miller commented on SOLR-10070: This may have been a dns issue or overload. I reran the test for 30 iterations, 8 (instead of 12) at a time and all passed. Previous failures: {noformat} [junit4] ERROR 0.00s | HdfsDirectoryFactoryTest (suite) <<< [junit4]> Throwable #1: java.net.UnknownHostException: test-beast-1.host.com: test-beast-1.host.com: Temporary failure in name resolution [junit4]>at __randomizedtesting.SeedInfo.seed([D7D981A36E41DDE3]:0) [junit4]>at java.net.InetAddress.getLocalHost(InetAddress.java:1505) [junit4]>at org.apache.hadoop.security.SecurityUtil.getLocalHostName(SecurityUtil.java:190) [junit4]>at org.apache.hadoop.security.SecurityUtil.login(SecurityUtil.java:210) [junit4]>at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2255) [junit4]>at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1449) [junit4]>at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:832) [junit4]>at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:742) [junit4]>at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:612) [junit4]>at org.apache.solr.cloud.hdfs.HdfsTestUtil.setupClass(HdfsTestUtil.java:99) [junit4]>at org.apache.solr.cloud.hdfs.HdfsTestUtil.setupClass(HdfsTestUtil.java:65) [junit4]>at org.apache.solr.core.HdfsDirectoryFactoryTest.setupClass(HdfsDirectoryFactoryTest.java:56) [junit4]>at java.lang.Thread.run(Thread.java:745) [junit4]> Caused by: java.net.UnknownHostException: test-beast-1.host.com: Temporary failure in name resolution [junit4]>at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method) [junit4]>at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:928) [junit4]>at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1323) [junit4]>at 
java.net.InetAddress.getLocalHost(InetAddress.java:1500) {noformat} > The test HdfsDirectoryFactoryTest appears to be unreliable. > --- > > Key: SOLR-10070 > URL: https://issues.apache.org/jira/browse/SOLR-10070 > Project: Solr > Issue Type: Test > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Mark Miller >Priority: Critical > > HdfsDirectoryFactoryTest 30.00% unreliable 30.00 63.43 -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-10065) The Nightly test ConcurrentDeleteAndCreateCollectionTest appears to be too fragile.
[ https://issues.apache.org/jira/browse/SOLR-10065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15848399#comment-15848399 ] ASF subversion and git services commented on SOLR-10065: Commit bc02b0f70d342df00718edf6749ccd707183ffd6 in lucene-solr's branch refs/heads/master from markrmiller [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=bc02b0f ] SOLR-10065: The Nightly test ConcurrentDeleteAndCreateCollectionTest appears to be too fragile. > The Nightly test ConcurrentDeleteAndCreateCollectionTest appears to be too > fragile. > --- > > Key: SOLR-10065 > URL: https://issues.apache.org/jira/browse/SOLR-10065 > Project: Solr > Issue Type: Test > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Mark Miller >Assignee: Mark Miller > > ConcurrentDeleteAndCreateCollectionTest 100.00% mission-failed 30.00 134.34 > @Nightly -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-10032) Create report to assess Solr test quality at a commit point.
[ https://issues.apache.org/jira/browse/SOLR-10032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15848398#comment-15848398 ] ASF subversion and git services commented on SOLR-10032: Commit 730df22e40cdfb51dd466d44332631fa8fa87f42 in lucene-solr's branch refs/heads/master from markrmiller [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=730df22 ] SOLR-10032: Ignore tests that run no test methods. > Create report to assess Solr test quality at a commit point. > > > Key: SOLR-10032 > URL: https://issues.apache.org/jira/browse/SOLR-10032 > Project: Solr > Issue Type: Task > Security Level: Public(Default Security Level. Issues are Public) > Components: Tests >Reporter: Mark Miller >Assignee: Mark Miller > Attachments: Lucene-Solr Master Test Beast Results > 01-24-2017-9899cbd031dc3fc37a384b1f9e2b379e90a9a3a6 Level Medium- Running 30 > iterations, 12 at a time .pdf > > > We have many Jenkins instances blasting tests, some official, some policeman, > I and others have or had their own, and the email trail proves the power of > the Jenkins cluster to find test fails. > However, I still have a very hard time with some basic questions: > what tests are flakey right now? which test fails actually affect devs most? > did I break it? was that test already flakey? is that test still flakey? what > are our worst tests right now? is that test getting better or worse? > We really need a way to see exactly what tests are the problem, not because > of OS or environmental issues, but more basic test quality issues. Which > tests are flakey and how flakey are they at any point in time. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Comment Edited] (LUCENE-7580) Spans tree scoring
[ https://issues.apache.org/jira/browse/LUCENE-7580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15720073#comment-15720073 ]

Paul Elschot edited comment on LUCENE-7580 at 2/1/17 1:36 PM:
--------------------------------------------------------------

Some related issues, thanks for these discussions:
LUCENE-533
LUCENE-2878
LUCENE-2879
LUCENE-2880
LUCENE-6226
LUCENE-6371
LUCENE-6466
LUCENE-7398

Some related web pages:
http://www.gossamer-threads.com/lists/lucene/java-user/33902 March 2006.
http://www.gossamer-threads.com/lists/lucene/java-user/53027 September 2007, suggests to: "recurse the spans tree to compose a score based on the type of subqueries (near, and, or, not) and what matched."
http://www.gossamer-threads.com/lists/lucene/java-user/60103 April 2008.
http://www.flax.co.uk/blog/2016/04/26/can-make-contribution-apache-solr-core-development/ see point 4.

How to use BM25: http://opensourceconnections.com/blog/2015/10/16/bm25-the-next-generation-of-lucene-relevation/

was (Author: paul.elsc...@xs4all.nl):
Some related issues, thanks for these discussions:
LUCENE-533
LUCENE-2878
LUCENE-2879
LUCENE-2880
LUCENE-6371
LUCENE-6466
LUCENE-7398

Some related web pages:
http://www.gossamer-threads.com/lists/lucene/java-user/33902 March 2006.
http://www.gossamer-threads.com/lists/lucene/java-user/53027 September 2007, suggests to: "recurse the spans tree to compose a score based on the type of subqueries (near, and, or, not) and what matched."
http://www.gossamer-threads.com/lists/lucene/java-user/60103 April 2008.
http://www.flax.co.uk/blog/2016/04/26/can-make-contribution-apache-solr-core-development/ see point 4.

How to use BM25: http://opensourceconnections.com/blog/2015/10/16/bm25-the-next-generation-of-lucene-relevation/

> Spans tree scoring
> ------------------
>
>                 Key: LUCENE-7580
>                 URL: https://issues.apache.org/jira/browse/LUCENE-7580
>             Project: Lucene - Core
>          Issue Type: Improvement
>          Components: core/search
>    Affects Versions: master (7.0)
>            Reporter: Paul Elschot
>            Priority: Minor
>             Fix For: 6.x
>
>         Attachments: LUCENE-7580.patch, LUCENE-7580.patch, LUCENE-7580.patch, LUCENE-7580.patch
>
> Recurse the spans tree to compose a score based on the type of subqueries and
> what matched
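The idea proposed in this issue — recurse the tree of span subqueries and combine child scores according to each node's type (near, and, or, not) — can be illustrated with a small standalone sketch. This is purely hypothetical: the class names, the slop-based damping, and the max-based disjunction are illustrative choices, not Lucene's actual spans API or the attached patch.

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical node types for a tree of span subqueries.
abstract class SpanNode {
    abstract double score();
}

// Leaf: a single matching term with a precomputed term score.
class TermSpan extends SpanNode {
    final double termScore;
    TermSpan(double termScore) { this.termScore = termScore; }
    @Override double score() { return termScore; }
}

// "Near" node: sums child scores, damped by slop so tighter matches score higher.
class NearSpan extends SpanNode {
    final List<SpanNode> children;
    final int slop; // allowed distance between the child matches
    NearSpan(List<SpanNode> children, int slop) {
        this.children = children;
        this.slop = slop;
    }
    @Override double score() {
        double sum = 0;
        for (SpanNode c : children) sum += c.score();
        return sum / (1 + slop);
    }
}

// "Or" node: a disjunction scores as its best-matching alternative.
class OrSpan extends SpanNode {
    final List<SpanNode> children;
    OrSpan(List<SpanNode> children) { this.children = children; }
    @Override double score() {
        double max = 0;
        for (SpanNode c : children) max = Math.max(max, c.score());
        return max;
    }
}
```

For example, a near-query over two terms with scores 2.0 and 1.0 and slop 1 would score (2.0 + 1.0) / (1 + 1) = 1.5 under this sketch; the real patch composes scores per subquery type in the same recursive spirit.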
[jira] [Commented] (LUCENE-7651) Javadocs build fails with Java 8 update 121
[ https://issues.apache.org/jira/browse/LUCENE-7651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15848332#comment-15848332 ]

Uwe Schindler commented on LUCENE-7651:
---------------------------------------

BTW, the release notes of Java 8u121 were updated to mention this change: http://www.oracle.com/technetwork/java/javase/8u121-relnotes-3315208.html

It still breaks our previous releases, but we can tell people that it's caused by Oracle, not us.

> Javadocs build fails with Java 8 update 121
> -------------------------------------------
>
>                 Key: LUCENE-7651
>                 URL: https://issues.apache.org/jira/browse/LUCENE-7651
>             Project: Lucene - Core
>          Issue Type: Bug
>          Components: general/javadocs
>    Affects Versions: 6.4
>         Environment: Java 8 update 121
>            Reporter: Uwe Schindler
>            Assignee: Uwe Schindler
>            Priority: Critical
>              Labels: Java8
>             Fix For: 6.x, master (7.0), 6.5, 6.4.1
>
>         Attachments: LUCENE-7651.patch, LUCENE-7651.patch, LUCENE-7651.patch, LUCENE-7651.patch
>
> Oracle released the recent Java 8 security update (u121). The Jenkins builds
> fail with the following error while building the Javadocs:
> {noformat}
> [javadoc] Constructing Javadoc information...
> [javadoc] javadoc: error - Argument for -bottom contains JavaScript.
> [javadoc] Use --allow-script-in-comments to allow use of JavaScript.
> [javadoc] 1 error
> {noformat}
> This is caused by the JavaScript added to pretty-print code examples. We load
> this in the page footer "{{}}" parameter.
> Surely, it will be possible to simply add the mentioned argument, but this
> will break builds with earlier Java 8 versions.
> This is nowhere documented; I haven't seen any documentation about this flag
> anywhere, so I assume this is a bug in Java. They can't change or add command
> line parameters in minor updates of Java 8. I will ask on the OpenJDK mailing
> lists whether this is a bug (maybe accidentally backported from Java 9).
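The compatibility problem described above — `--allow-script-in-comments` is required on 8u121+ but rejected by earlier JDKs — can be worked around in an Ant build by passing the flag conditionally. The fragment below is a sketch only, not Lucene's actual build file: the property name is invented, and the version regex (matching `1.8.0_0` through `1.8.0_120`) assumes the flag first appeared in 8u121.

```xml
<!-- Sketch: set the extra javadoc argument only on JDKs that understand it.
     Matches 1.8.0_0 .. 1.8.0_120 (flag unsupported); anything else gets the flag. -->
<condition property="javadoc.allow.script"
           value="--allow-script-in-comments"
           else="">
  <not>
    <matches string="${java.runtime.version}"
             pattern="^1\.8\.0_(\d{1,2}|1[01]\d|120)\b"/>
  </not>
</condition>

<target name="javadocs">
  <!-- additionalparam receives either the flag or an empty string -->
  <javadoc destdir="build/docs" additionalparam="${javadoc.allow.script}">
    <fileset dir="src" includes="**/*.java"/>
  </javadoc>
</target>
```

An empty `additionalparam` is harmless on older JDKs, so the same build file works before and after the 8u121 behavior change.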